Edit: To clarify:
Is it even possible, financially speaking, to keep adding storage? I mean, advertisements don’t even make a lot of money; is the indefinite growth of server storage even sustainable?
Or will they do what Twitch does with old content and just delete it?
Storage is cheap, especially at the corporate scale.
Make two simplifying assumptions: pretend that Google is paying consumer prices for storage, and pretend that Google doesn’t need to worry about data redundancy. In truth Google will pay a lot less than consumer prices, but they’ll also need more than 1 byte of storage for each byte of data they have, so for the sake of envelope math we can just pretend they cancel out.
Western Digital sells a 22TB HDD for $400. Seagate has a 20TB HDD for $310. I don’t like Seagate but I do like round numbers, so for simplicity we’ll call it $300 for 20TB. This works out to $15/TB. According to wikipedia, Youtube had just under $29b of revenue in 2021. If youtube spent just $100m of that (0.34%), they’d be able to buy 333,333 of those hard drives. In a single year. That’s 333,333 x 20TB = 6,666,666 TB of storage, also known as roughly 6.7^note 1^ exabytes.
That’s a lot of storage. A quick search tells me that youtube’s compression for 4k/25fps is 45Mbps, which is about 5.5 megabytes/s. That’s roughly 38,000 years of 4k video content. All paid for with 0.34% of youtube’s annual revenue.
Note 1: Note that I am using SI units here. If you want to use 1024^n^ for data sizes, then the SI prefixes aren’t correct. It’d be about 5.8 exbibytes instead.
EDIT: I initially did the price wrong, fixed now.
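For anyone who wants to rerun the envelope math, here’s a quick sketch. The drive price, the budget, and the 45Mbps 4k bitrate are the same assumptions as above; nothing here is an official figure:

```python
# Envelope math only: how much 4k video could $100m of consumer hard drives hold?
# Assumptions from above: $300 per 20TB drive, ~45Mbps for 4k/25fps, no redundancy.

DRIVE_PRICE_USD = 300
DRIVE_CAPACITY_TB = 20
BUDGET_USD = 100_000_000        # ~0.34% of ~$29b annual revenue
BITRATE_4K_MBPS = 45

drives = BUDGET_USD // DRIVE_PRICE_USD
total_tb = drives * DRIVE_CAPACITY_TB
total_bytes = total_tb * 10**12                 # SI terabytes

bytes_per_second = BITRATE_4K_MBPS * 10**6 / 8  # ~5.6 MB/s
years_of_video = total_bytes / bytes_per_second / (365 * 24 * 3600)

print(f"{drives:,} drives -> {total_tb:,} TB (~{total_tb / 10**6:.1f} EB)")
print(f"~{years_of_video:,.0f} years of 4k video")
```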
I wouldn’t assume Google pays less for storage. They need to pay for land use in many countries, power usage, redundancy, and the staff that manages all of it.
They also need powerful servers with fast caching storage and a lot of RAM, and they need to pay for the bandwidth.
As far as I know, they save multiple copies of each video in all resolutions they serve. So an 8K video will also have 4K + 1440p + 1080p + 720p + 480p + 240p + 144p copies, possibly also 60fps and 30fps versions for some of them, and HDR versions as well.
You have to add all that to the cost per TB. Finally, there is the question of how much additional storage they need per year: 100 PB/yr? Presumably also increasing every year?
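For a rough sense of scale on that yearly figure, here’s a sketch. The upload rate is the commonly quoted ~500 hours per minute; the average stored bitrate across all copies is purely my own guess:

```python
# Rough estimate of new storage needed per year.
# Assumptions: ~500 hours of video uploaded per minute (commonly quoted figure)
# and ~8 Mbps average stored bitrate across all copies of a video (a guess).
HOURS_UPLOADED_PER_MINUTE = 500
AVG_STORED_BITRATE_MBPS = 8

video_seconds_per_year = HOURS_UPLOADED_PER_MINUTE * 3600 * 60 * 24 * 365
bytes_per_year = video_seconds_per_year * AVG_STORED_BITRATE_MBPS * 10**6 / 8

print(f"~{bytes_per_year / 10**18:.1f} EB of new storage per year")
```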
I wasn’t calculating server costs, just raw storage. Google is not buying hard drives at retail prices. I wouldn’t be surprised if they’re paying as little as 50% of the retail price to buy at volume.
All of what you say is true, but the purpose was a back-of-the-envelope estimate showing that the cost of storage is not a real limiting factor for a company like youtube. My point was to answer the question.
With the level of compression youtube uses, the combined storage cost of everything below 4k is substantially lower than the 4k copy by itself: for back-of-the-envelope purposes we can just ignore those resolutions. A rough comparison is sketched below.
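The per-resolution bitrates in this sketch are my own rough assumptions for typical served streams, not official youtube figures; the point is only the relative sizes:

```python
# Rough comparison: storage cost of all sub-4k copies vs the 4k copy alone.
# Bitrates below are assumed/illustrative, not official youtube numbers.
assumed_bitrate_mbps = {
    "4k": 45,      # figure used in the estimate above
    "1440p": 9,
    "1080p": 4.5,
    "720p": 2.5,
    "480p": 1.2,
    "360p": 0.7,
    "240p": 0.4,
    "144p": 0.2,
}

below_4k = sum(v for res, v in assumed_bitrate_mbps.items() if res != "4k")
ratio = below_4k / assumed_bitrate_mbps["4k"]
print(f"everything below 4k: ~{below_4k:.1f} Mbps vs 4k alone: 45 Mbps (~{ratio:.0%} extra)")
```

Not nothing, but well within the slack already left by the volume-discount assumption.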
Do you absolutely know they’re storing those qualities individually? It’s perfectly plausible that they do on-the-fly transcoding.
what is that note notation?
It’s a superscript. You can see it in the comment editor options. It’s:
^text^
which renders as a superscripted text. You can also check a comment’s source by clicking on the icon that looks like a dog-eared piece of paper at the bottom of it.
ah, it must be my client not rendering it
Open in your browser to see: https://lemmy.world/comment/2847886
I get an invalid response
Ah, the Home Instance button for lemmy.world comments is broken. Try lemmy.ml instead: https://lemmy.ml/comment/3143665
You can use footnotes now[1].
They are neat and don’t look too bad even if the renderer doesn’t support them.
https://github.blog/changelog/2021-09-30-footnotes-now-supported-in-markdown-fields/ ↩︎
I know you are saying Google doesn’t have to worry about redundancy to simplify the math, but I think that makes it completely useless.
Redundancy is not just about having another copy in case of data loss; more importantly for enterprises, redundancy allows for more throughput. If each video were on a single hard drive the site would not be able to function, as even the fastest multi-actuator hard drive can only do 524 MB/s in a perfect vacuum.
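To put a rough number on the throughput point, assuming the 524 MB/s figure and the ~45 Mbps 4k bitrate used earlier in the thread (and generously ignoring seek overhead):

```python
# Upper bound on concurrent 4k streams a single drive could serve.
# Assumes 524 MB/s sequential throughput and a 45 Mbps stream; ignores seeks,
# which would make the real number much lower with many concurrent readers.
DRIVE_THROUGHPUT_MBIT = 524 * 8   # 524 MB/s expressed in megabits per second
STREAM_BITRATE_MBIT = 45

print(f"at most ~{DRIVE_THROUGHPUT_MBIT // STREAM_BITRATE_MBIT} concurrent 4k streams per drive")
```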
It’s useless for answering a question that wasn’t asked, sure. But I didn’t pretend to answer that question. What it is useful for is answering the topic question. You know, the whole damn point?
How big a factor off do you think the estimate is? You think they need three drives of redundancy each? Ten? Chances are they’re paying half (or less) of retail pricing for storage drives. The estimate of what they could get with $100m was also 6.7 EB, a mind-boggling amount of storage. I wouldn’t be surprised if they’re using on the order of 1 EB/year in needed storage. There’s also a lot more room in their budget than 0.34%.
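As a sanity check on that, here’s the same envelope math with an assumed replication factor and that assumed 1 EB/year of new storage; both are guesses, not known figures:

```python
# Sensitivity check: yearly storage spend as a fraction of revenue, even with
# heavy redundancy. All inputs are rough assumptions from this thread.
PRICE_PER_TB_USD = 15          # the $300 / 20TB retail figure from above
NEW_STORAGE_EB_PER_YEAR = 1    # guessed yearly need
REPLICATION_FACTOR = 3         # assume every byte is stored three times
ANNUAL_REVENUE_USD = 29e9      # ~2021 youtube revenue

yearly_cost = NEW_STORAGE_EB_PER_YEAR * 10**6 * REPLICATION_FACTOR * PRICE_PER_TB_USD
print(f"~${yearly_cost / 10**6:.0f}m per year, {yearly_cost / ANNUAL_REVENUE_USD:.2%} of revenue")
```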
The point is to get a quick and simple estimate showing that Google really will have no problem acquiring sufficient storage. If you want a very accurate estimate of their costs you’d need data that we don’t have, and I wasn’t aiming for that. I made that clear right from the beginning.
The most popular videos are all going to be kept in RAM; they don’t read them off disk with every single view request. If you wanted a comment going over the finer details of server architecture, you shouldn’t have looked at the one saying it was doing back-of-the-envelope math on storage costs only, eh?