Summary: Meta, led by CEO Mark Zuckerberg, is investing billions in Nvidia’s H100 graphics cards to build a massive compute infrastructure for AI research and projects. By the end of 2024, Meta aims to have 350,000 of these GPUs, with total expenditures potentially reaching $9 billion. This move is part of Meta’s focus on developing artificial general intelligence (AGI), putting it in competition with firms like OpenAI and Google’s DeepMind. The company’s AI and computing investments are a key part of its 2024 budget, with AI singled out as its largest investment area.
The real winners are the chipmakers.
Gold rush you say?
Shovels for sale!
Get your shovels here! Can’t strike it rich without a shovel!
I feel like a pretty big winner too. Meta has been quite generous about releasing AI-related code and models under open licenses; I wouldn’t be running LLMs locally on my computer without the stuff they’ve been putting out. And I didn’t have to pay them a penny for it.
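For anyone curious, a minimal sketch of what running one of those Meta-released models locally can look like, assuming the Hugging Face transformers library and an example Llama checkpoint (the model ID and hardware setup here are assumptions, not the only way to do it):

```python
# Minimal local-inference sketch. Assumes `transformers` (plus `accelerate`
# for device_map="auto") is installed and that you have access to a Meta
# Llama checkpoint; the model ID below is an example, not a recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # example/assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Why are H100s so sought after?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```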
Subsidized by boomers everywhere looking at ads on Facebook lol. Same with the Quest gear and VR development.
Who isn’t at this point? Feels like every player in AI is buying thousands of Nvidia enterprise cards.
The equivalent of 600k H100s seems pretty extreme though. IDK how many OpenAI has access to, but it’s estimated they “only” used 25k to train GPT-4. OpenAI has, in the past, claimed that the diminishing returns from just scaling their model past GPT-4’s size probably aren’t worth it. So maybe Meta is planning to experiment with new ANN architectures, or planning mass deployment of models?
The estimated training time for GPT-4 is 90 days though.
Assuming you could scale that linearly with the amount of hardware, 600k GPUs would get it down to roughly 3.75 days. From four times a year to twice a week.
If you’re scrambling to get ahead of the competition, being able to iterate that quickly could very much be worth the money.
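For what it’s worth, here’s the back-of-the-envelope math behind that, using the thread’s rough numbers and assuming perfectly linear scaling (which real distributed training never achieves):

```python
# Back-of-the-envelope only: assumes perfect linear scaling and uses the
# rough estimates mentioned in this thread, not confirmed figures.
gpt4_days = 90          # estimated GPT-4 training time
gpt4_gpus = 25_000      # estimated GPUs used to train GPT-4
meta_gpus = 600_000     # Meta's claimed H100-equivalent fleet by end of 2024

scaled_days = gpt4_days * gpt4_gpus / meta_gpus
print(f"~{scaled_days:.2f} days per GPT-4-scale run")  # ~3.75 days
print(f"~{7 / scaled_days:.1f} runs per week")          # ~1.9, i.e. roughly two
```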
Or they just have too much money.
Which will be solved by them spending it.
Would that be diminishing returns on quality, or training speed?
If I could tweak a model and test it in an hour vs 4 hours, that could really speed up development time?
Quality. Yeah, using the extra compute to increase speed of development iterations would be a benefit. They could train a bunch of models in parallel and either pick the best model to use or use them all as an ensemble or something.
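As a toy illustration only (nothing to do with Meta’s actual setup), “train several in parallel, then pick the best or ensemble them” might look something like this with tiny stand-in models:

```python
# Toy sketch: train several small models with different seeds, then either
# pick the best one on a held-out set or average their scores as an ensemble.
# Real runs would each get their own GPUs; this just shows the selection logic.
import numpy as np

rng = np.random.default_rng(0)

# Fake binary-classification data standing in for a real training set.
X = rng.normal(size=(1000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w + rng.normal(scale=0.5, size=1000) > 0).astype(float)
X_train, y_train, X_val, y_val = X[:800], y[:800], X[800:], y[800:]

def train_model(seed, steps=500, lr=0.1):
    """A tiny logistic-regression 'model' trained from a random init."""
    r = np.random.default_rng(seed)
    w = r.normal(scale=0.01, size=X_train.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X_train @ w)))
        w -= lr * X_train.T @ (p - y_train) / len(y_train)
    return w

def val_accuracy(w):
    return np.mean(((X_val @ w) > 0) == y_val)

candidates = [train_model(seed) for seed in range(4)]

best = max(candidates, key=val_accuracy)                       # option 1: pick the best run
ensemble_scores = np.mean([X_val @ w for w in candidates], axis=0)
ensemble_acc = np.mean((ensemble_scores > 0) == y_val)         # option 2: ensemble them

print("best single model:", val_accuracy(best), "ensemble:", ensemble_acc)
```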
My guess is that the main reason for all the GPUs is that they’re going to offer hosting and training infrastructure for everyone. That would align with the strategy of releasing models as “open” and then trying to entice people into their cloud ecosystem. Or maybe they really are trying to achieve AGI, as they state in the article. I don’t really know of any ML architectures that would allow for AGI though (besides the theoretical, incomputable AIXI).
well Zuck has a lot of users he has to create bullshit for to keep them emotionally engaged and distracted
This is great! I thought there would be a chip-led recession. Sorry homeless people, but you’re gonna have to wait another generation to try and get online to maybe buy a house someday far far away… and also some day far far away if you get my drift.
After all he needs a good AI bot to teach him to be “more human” because humans are starting to suspect
I really hope they fail hard and end up putting these devices on the consumer second-hand market, because the V100s, while now affordable and flooding the market, are too out of date.
Meta is the source of most of the open source LLM AI scene. They’re contributing tons to the field and I wish them well at it.
Only other game in town really.
I’ve heard Mistral released some good models.
total expenditures potentially reaching $9 billion
I imagine they negotiated quite the discount in that.
They signed up for spam email so they could get a coupon code.
Agreed. There’s a volume discount, and then there’s the “Facebook data center with the energy consumption of a small country” volume discount.
Just like the Metaverse…this won’t have legs.