In a similar vein, why can we not use the technology of RAM to prolong the life-cycle of an SSD?

  • Lemvi@lemmy.sdf.org · ↑75 ↓7 · 1 year ago

    Writing to an SSD damages it slightly; however, data saved to an SSD is persistent, meaning it isn’t lost when the SSD doesn’t get any power. Writing to RAM doesn’t damage it, and it is also quicker. However, data saved in RAM is not persistent: it is lost as soon as the RAM is disconnected from a power source. Also, RAM is a lot more expensive than SSD storage.

    RAM is already used to avoid writing to (or reading from) the SSD or HDD whenever possible; the concept is called “caching”.
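
    A toy sketch of that idea (hypothetical numbers and structure, nothing like a real OS page cache): repeated updates to the same block land in a RAM buffer, and only the final contents get written to the “disk” when the cache is flushed, so a thousand updates cost one device write.

    ```c
    /* Toy write-back cache: repeated updates to the same block hit RAM,
     * and only the final contents are written to the "disk" on flush. */
    #include <stdio.h>
    #include <string.h>

    #define BLOCKS 4
    #define BLOCK_SIZE 16

    static char disk[BLOCKS][BLOCK_SIZE];   /* pretend SSD */
    static char cache[BLOCKS][BLOCK_SIZE];  /* RAM copy of each block */
    static int dirty[BLOCKS];               /* which blocks changed since last flush */
    static int device_writes = 0;

    void cached_write(int block, const char *data) {
        strncpy(cache[block], data, BLOCK_SIZE - 1);
        dirty[block] = 1;                   /* cheap: touches RAM only */
    }

    void flush(void) {
        for (int b = 0; b < BLOCKS; b++) {
            if (dirty[b]) {
                memcpy(disk[b], cache[b], BLOCK_SIZE);
                dirty[b] = 0;
                device_writes++;            /* expensive: this is what wears the SSD */
            }
        }
    }

    int main(void) {
        for (int i = 0; i < 1000; i++)
            cached_write(0, "counter value");   /* 1000 updates in RAM... */
        flush();
        printf("device writes: %d\n", device_writes);  /* ...one SSD write */
        return 0;
    }
    ```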

    • grahamsz@kbin.social · ↑37 ↓3 · 1 year ago

      Even if it’s powered, RAM will lose its data within something on the order of a tenth of a second. RAM doesn’t just require power, it requires that your computer constantly read and rewrite it - so every 64 ms your computer has to read every gigabyte of RAM and write it back.
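
      Rough arithmetic for what that schedule looks like, assuming the commonly quoted DDR4-style figures (a 64 ms retention window and 8192 refresh commands per window); treat the numbers as ballpark, not a spec quote.

      ```c
      #include <stdio.h>

      int main(void) {
          double window_ms = 64.0;   /* whole array must be refreshed within this window */
          int refresh_cmds = 8192;   /* refresh commands issued per window (typical figure) */

          double interval_us = window_ms * 1000.0 / refresh_cmds;
          printf("one refresh command every %.2f microseconds\n", interval_us);
          printf("that's %.0f refresh commands per second\n",
                 refresh_cmds / (window_ms / 1000.0));
          return 0;
      }
      ```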

      • Julian@lemm.ee · ↑16 · 1 year ago

        Doesn’t the RAM do that itself? Otherwise reading/writing all that data would waste tons of time for the CPU.

        • grahamsz@kbin.social · ↑36 · 1 year ago

          Yes - it’s been the job of the DRAM controller for almost the entire history of computing. But that’s still a part of the computer, and if it stops working then your RAM will go blank in a fraction of a second.

        • thepianistfroggollum@lemmynsfw.com · ↑7 · 1 year ago

          It’s been a very long time since my computer engineering course, and we didn’t cover this topic specifically, but I highly doubt it’s a full dump and reload. What likely happens is each RAM address has a TTL flag or some way for the CPU to know when to rewrite the data, and it does it as needed.

          Plus, the bus between the CPU and RAM is ridiculously fast. Your PC could dump and reload all of its RAM in the time it takes you to blink. And, with multiple cores, the task can be allocated to a single core or divided up among all of them.

          • al177@lemmy.sdf.org · ↑3 · 1 year ago

            Modern RAM just needs to be told to refresh; the device itself will go through the refreshing process. But the whole array needs to be refreshed, as there’s no LRU scheme to tell which bank or row was last accessed.

            Starting with DDR3 it’s not so easy. Density is so high that reading or writing one row affects cells in adjacent rows. Partial target row refresh (PTRR) counters this: any access of a row is followed by a refresh of adjacent rows. Flaws in this process in early DDR3 controllers were at the heart of rowhammer exploits, where repeated accesses to a memory location could work out what’s stored in physically adjacent memory, even if it’s not privileged. IIRC DDR4 pulled the PTRR process into the RAM’s built-in refresh circuitry, so it’s transparent to the memory controller.
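
            A toy model of that neighbour-refresh idea (the threshold is made up, and real controllers are nothing like this simple): count activations per row and, when one row is hammered past the threshold, top up the charge in its physical neighbours before it can be disturbed.

            ```c
            /* Toy "refresh the neighbours of a hammered row" model.
             * The threshold is purely illustrative. */
            #include <stdio.h>

            #define ROWS 16
            #define HAMMER_THRESHOLD 1000

            static long activations[ROWS];

            void refresh_row(int r) {
                printf("extra refresh of row %d\n", r);
            }

            void activate(int row) {
                if (++activations[row] % HAMMER_THRESHOLD == 0) {
                    if (row > 0)        refresh_row(row - 1);   /* protect neighbours */
                    if (row < ROWS - 1) refresh_row(row + 1);
                }
            }

            int main(void) {
                for (long i = 0; i < 3000; i++)
                    activate(7);   /* simulate a rowhammer-style access pattern */
                return 0;
            }
            ```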

          • PeterPoopshit@lemmy.world · ↑2 · 1 year ago

            At least on older x86 motherboards, there used to be a DRAM refresh interrupt. It would trigger every 15 or so microseconds and tell the DRAM controller to do a bus hold request and then refresh the next row of RAM. This bus hold request means the CPU can’t access the RAM while this happens (it can still run stuff from the cache), but at least you aren’t wasting as much CPU time on DRAM refresh this way.
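
            Back-of-envelope numbers for that scheme, using roughly the oft-cited figures for the original IBM PC (a ~15 µs timer tick refreshing one row, and DRAMs that needed 128 rows refreshed within 2 ms); the exact values are approximate, so treat them as illustrative.

            ```c
            #include <stdio.h>

            int main(void) {
                double tick_us = 15.1;   /* timer-triggered refresh, one row per tick (approx.) */
                int rows = 128;          /* rows that had to be refreshed on those DRAMs */

                printf("full refresh pass: %.2f ms\n", tick_us * rows / 1000.0);
                printf("(the requirement was 2 ms, so this just squeaks in)\n");
                return 0;
            }
            ```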

        • grahamsz@kbin.social · ↑6 · 1 year ago

          Some very early systems did do it at the kernel level, but yeah, you are correct. Though I’d also consider the DRAM chips to be part of the computer, and DRAM refresh makes up a good part of your phone’s battery consumption at standby.

        • al177@lemmy.sdf.org · ↑10 · 1 year ago

          If you ever have the chance to use an old Apple II computer, run a text mode program, wait til the owner is looking in the other direction and turn the power off and back on quickly.

          For about a second, before you hear the loud BOOP and the screen clears, you’ll see whatever was on the screen just before you powered it off. But a few characters will be corrupted. Try it again and wait half a second longer than before. More characters will be corrupted.

          For that brief second you’re looking at the contents of the video RAM, then the ROM (Apple called what we call BIOS now “ROM”) clears the contents and puts up the familiar text banner. The longer the power stays off, the more the contents of those RAM cells decay, and any bit flip will show up as a different character at the corresponding location on the screen.

          On a side note, there was an article in the early '80s in Circuit Cellar by Steve Ciarcia showing how you could make a rudimentary digital camera by prying the top off a DRAM chip (some were ceramic with metal lids, or just metal cans) and adding a CCTV camera lens at the right distance. Light can deplete the charge in DRAM cells even faster, and by writing all 1s to the memory, exposing it to light, and reading back the contents, you could get a black and white image of whatever’s shining on the chip.

        • rickdgray@lemmy.world · ↑4 · 1 year ago

          Dynamic RAM tracks bits by using a capacitor for each bit. A cap’s charge bleeds out, so you have to top it off again every so often. The way you do that is to just write the same data back again, so it reads and rewrites the same data to itself every refresh. The counterpart to this is static RAM, which does not use a capacitor and is just a clever arrangement of transistors; no refresh needed. It’s not typically used commercially except under special requirements, though, as it needs several transistors per bit and is significantly more expensive. So the refresh strategy is the better choice for consumer hardware, and DRAM has been dominant for decades.
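
          A toy simulation of that refresh loop (the leak rate and threshold are made up, not real cell physics): every tick the charge leaks a bit, a refresh rewrites whatever still reads as a 1, and once the refresh stops the data is gone within a few ticks.

          ```c
          /* Toy DRAM model: charge leaks each tick; refresh rewrites what it reads. */
          #include <stdio.h>

          #define CELLS 8
          #define LEAK 0.80       /* fraction of charge left after one tick (made up) */
          #define THRESHOLD 0.5   /* above this we read a 1, below a 0 */

          static double charge[CELLS] = {1, 0, 1, 1, 0, 1, 0, 1};  /* stored bits */

          void tick(int do_refresh) {
              for (int i = 0; i < CELLS; i++) {
                  charge[i] *= LEAK;                        /* capacitor leaks */
                  if (do_refresh && charge[i] > THRESHOLD)
                      charge[i] = 1.0;                      /* read as 1 -> write back full charge */
              }
          }

          void dump(const char *label) {
              printf("%-16s", label);
              for (int i = 0; i < CELLS; i++)
                  putchar(charge[i] > THRESHOLD ? '1' : '0');
              putchar('\n');
          }

          int main(void) {
              dump("stored:");
              for (int t = 0; t < 10; t++) tick(1);   /* refreshed: data survives */
              dump("with refresh:");
              for (int t = 0; t < 10; t++) tick(0);   /* refresh (power) lost */
              dump("no refresh:");
              return 0;
          }
          ```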

      • CaptPretentious@lemmy.world · ↑4 · 1 year ago

        If I remember right, the decay of information in RAM is slower than that. This is an old memory, but I recall someone on TechTV talking about how you could, if fast enough, remove a module from one machine and put it in another, and if done right, potentially get the information off it.

        • MaxHardwood@lemmy.ca · ↑3 · 1 year ago

          It’s possible, and can be done at home. You need to literally freeze the RAM very quickly (typically with CO2) and transfer it to the new system. Then you dump the contents of the stick and hopefully find an encryption key.

        • grahamsz@kbin.social · ↑1 · 1 year ago

          From what I’ve read it’s temperature dependent, and at room temperature some DRAM cells might take as long as 10 seconds to decay. The 64 ms refresh is a super conservative call, because it’s really bad when random bits go missing from memory. The decay is faster at high temperatures, and some DRAM controllers might actually adjust the refresh rate based on temperature.
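
          A rough illustration of that margin, using the often-quoted rule of thumb that DRAM retention roughly halves for every ~10 °C of temperature rise; both the 10-second room-temperature figure and the halving rule are assumptions here, not datasheet values.

          ```c
          #include <stdio.h>

          int main(void) {
              double retention_s = 10.0;   /* assumed weakest-cell retention at 25 C */
              double refresh_s   = 0.064;  /* the standard 64 ms refresh window */

              for (int temp = 25; temp <= 95; temp += 10) {
                  printf("%3d C: ~%6.3f s retention, %4.0fx margin over 64 ms\n",
                         temp, retention_s, retention_s / refresh_s);
                  retention_s /= 2.0;      /* rule of thumb: halves per ~10 C */
              }
              return 0;
          }
          ```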

          • fiah@discuss.tchncs.de · ↑1 · 1 year ago

            It is temperature dependent; if you change the refresh timing in the BIOS to the most aggressive value that still works at a given temperature, you can easily make your PC crash by heating the RAM up a bit (for example, by removing a fan).

      • Rikudou_Sage@lemmings.world · ↑1 · 1 year ago

        Note that when you freeze the RAM a lot, it will hold the data for up to several seconds (if I remember correctly). This is used in hacking: you can get the contents of the RAM after the computer has been shut down.

  • PupBiru@kbin.social · ↑41 · 1 year ago

    so people have said that it’s to do with volatile (it forgets) and persistent/non-volatile (it remembers), but i think the crux of your question is a little more nuanced: WHY does the mechanism to “remember” a 1 or a 0 get damaged with SSDs and not with RAM?

    now, i’m no expert here but i think i have a basic understanding and i’ve pieced some bits of research together!

    (edit: it should be noted that what i’ve described here as simply “RAM” is actually SRAM, but modern computers mostly use DRAM, which is different: it uses a capacitor instead of a couple of transistors, but the fundamental idea is the same)

    RAM is very simple: for the most part, it’s just a few transistors - they’re basically little switches that work just with electrical current… they can be arranged so that transistors connect to another transistor, so that they’re both telling each other to be “on” (this is SUPER simplified, but kinda think of the electricity being stuck in a loop: it just goes round and round between the transistors, and that’s “on” or 1)… transistors are very reliable! their chemistry doesn’t degrade over time (note though that because electricity doesn’t actually go around in an infinite loop, if the “loop” stops getting power to replenish it, it resets to 0, which is what makes it volatile!)

    SSDs though store their 1s and 0s more in chemicals… think of your SSD like a bunch of little boxes with water in them, and you read the 1s and 0s based on how clear the water is… you add sand to make a 1, and you filter out the sand to reset it to a 0! the more often you do that though, the dirtier the water gets until you can’t tell if it’s just dirty water or if it has sand in it (actually you add electrons to the gates in an SSD, which changes the cell’s resistance, and you read based on that, but at some point the electrons just keep “sticking” in the cell so the resistance doesn’t change as much as we’d like)

    • CanadaPlus@lemmy.sdf.org · ↑5 · 1 year ago

      SSDs typically use flash memory, as I understand it. I’d leave the sand out and say it’s like a tank you fill up with (more or less depending on the data) water. After a while the tank mechanically wears out and starts to leak. Flash memory very much is like a tank filled with electricity and then plugged, and it does start to leak as the insulating oxides degrade.

    • kvothelu@lemmy.world · ↑3 · 1 year ago

      oh I get it. since transistors can’t hold a charge when the machine is off, data in RAM goes away. and since SSDs store data chemically, power status doesn’t affect the storage. nice

  • duckythescientist@lemmy.sdf.org · ↑27 · 1 year ago

    Under the hood, RAM and SSDs use very different structures to store memory. SSDs use “flash memory”. To store a bit in flash, the SSD uses a larger than usual voltage to inject and trap electrons into part of the memory cell. This is a stressful process for the silicon, so there’s a limited number of times it can be done. The benefit is that the injected electrons stay put for decades.

    RAM (specifically DRAM) stores charge in tiny capacitors. These don’t take anything special to charge up, unlike flash. However, they are very leaky and will lose their charge (and therefore the memory) in a handful of seconds. RAM chips actually read and rewrite their memory many times per second to make up for the leakiness. Because of this, RAM needs to always be powered to keep its memory. This makes DRAM unreasonable for SSDs.

    There are a couple other types of memory, but they have different power and space trade-offs. One example is SRAM. It needs power to keep its memory, but it doesn’t need to constantly refresh, so it doesn’t take much power. It can be rewritten indefinitely. However, it takes up more space on a chip than DRAM or Flash, so it’s much more expensive per byte.

    • yesdogishere@kbin.social · ↑6 ↓5 · 1 year ago

      do not worry. superconductivity at room temps is only 2 yrs away. With superconductive SSDs, these will last FOREVER. Like a nightmare marriage.

  • al177@lemmy.sdf.org · ↑15 · 1 year ago

    RAM storage cells are tiny, fast, and don’t degrade over time. Each cell can store an electrical charge that distinguishes a 0 from a 1. However, they forget their contents whenever they’re read or if they’re not topped up occasionally. RAM needs to be refreshed by writing back the contents after a cell is read, and by making sure that every cell in the chip is periodically read and written back. Without constant power for that refresh, RAM forgets its contents.

    Flash is similar to RAM in that it also stores data as a charge. To make the contents last without power and survive being read, each cell is tied to the gate of a MOSFET transistor. The MOSFET is like an amplifier in that it takes the small charge at the input and controls a larger signal without depleting the input charge. Think of the cell as a light switch: the switch stays in the same position no matter how much electricity passes through. The part that stores the charge and makes the flash MOSFET different from a normal MOSFET used in other electronics is called the floating gate. It is made of layers of insulators and conductors just a few atoms thick.

    The catch is in programming and erasing the cell. To make flash cells last, you need to eliminate any possible conductive path in or out of the charge-storing part of the cell. Programming a flash cell to a 0 (flash is a “1” in its erased state; the N in NOR and NAND flash stands for “not”, i.e. negative logic) requires pushing a high voltage onto the MOSFET, causing it to conduct just enough to push some electrons into the MOSFET’s floating gate. Erasing requires an even higher voltage applied in a different way to drain the charge out. Both processes take advantage of a normally undesirable behaviour of MOSFETs called breakdown, where a high enough voltage causes them to conduct in ways they wouldn’t in normal operation.

    Those high voltages, particularly the erase voltage, cause permanent wear on the floating gate and MOSFET, causing the charge to leak faster than normal. Even flash that isn’t written or erased often isn’t perfect, and a programmed cell will degrade over many years from a 0 back to a 1. There is a whole science to counteracting flash wear and the inevitable errors, which I won’t go into here. The SSD controller chip is responsible for managing wear and data integrity, which is why you sometimes hear about SSDs that could lose your data if you don’t have bug-fixed firmware.
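
    A toy sketch of one of those counter-measures, wear levelling (grossly simplified; a real controller also deals with ECC, bad-block remapping and garbage collection): the same logical block is rewritten over and over, but each write goes to the least-worn physical block and a mapping table tracks where the data currently lives.

    ```c
    /* Toy wear levelling: logical block 0 is rewritten many times, but the
     * writes are spread across physical blocks via a remapping table. */
    #include <stdio.h>

    #define PHYS_BLOCKS 8

    static int erase_count[PHYS_BLOCKS];   /* wear per physical block */
    static int map_of_logical0 = 0;        /* where logical block 0 lives right now */

    void write_logical0(void) {
        /* pick the least-worn physical block for the new copy */
        int best = 0;
        for (int b = 1; b < PHYS_BLOCKS; b++)
            if (erase_count[b] < erase_count[best])
                best = b;
        erase_count[best]++;                /* one program/erase cycle on that block */
        map_of_logical0 = best;
    }

    int main(void) {
        for (int i = 0; i < 800; i++)
            write_logical0();

        for (int b = 0; b < PHYS_BLOCKS; b++)
            printf("physical block %d: %d erase cycles\n", b, erase_count[b]);
        printf("logical block 0 currently maps to physical block %d\n",
               map_of_logical0);
        return 0;
    }
    ```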

    There are other technologies available that could replace flash. MRAM stores data magnetically and is immune to wear from writing, not unlike the core memory used in computers from the ’50s through the ’70s. FeRAM is similar to MRAM but less dense. You already have FeRAM in your car’s dash to store the constantly updating mileage, as it’s immune to damage from that constantly updating number and from automotive temperature extremes. Phase-change memory stores data by heating tiny chunks of a crystalline material to get it to change its structure; Intel used this in its now-defunct 3D XPoint memory. Memristors are another option, taking advantage of a relatively new kind of electronic component.

    All of these newer technologies beat flash on longevity, and many are close to RAM in performance. However, none can yet be made as dense as RAM or as cheaply as flash. Memristors and MRAM are both frontrunners for replacing both RAM and flash, but it’s only fairly recently that fabs started offering the processing steps needed to make these in high-density devices.

  • warhammercasey@lemmy.world · ↑9 · 1 year ago

    They both use entirely different technologies to store data. RAM is basically just a transistor and a capacitor used to store each bit. This makes it extremely fast to access, but it requires a constant supply of power or that capacitor will just discharge. There really isn’t much that can wear out in a capacitor and a transistor, so they have long lifespans.

    SSDs use NAND flash. Basically they trap some electrons in an insulated section (the gate of a floating gate MOSFET) and to read that they measure the electric field caused by those electrons. This wears out because sometimes electrons may unintentionally quantum tunnel into the insulated section and become permanently trapped there. And once enough electrons have become permanently trapped there, you can no longer distinguish between different values.

    You can’t use RAM technology in SSDs because it’s volatile: when power is removed, all data gets wiped. It’s also much less dense than NAND flash. 1 TB SSDs are pretty easy to find, but when was the last time you saw a 1 TB RAM stick at a reasonable price?

      • mindbleach@lemmy.world · ↑2 · 1 year ago

        Any solid-state media you can access is almost certainly NAND. There’s a second kind of flash memory called NOR, but it’s gradually disappearing; I think it’s relegated to EEPROMs and similar embedded uses. The number of applications where its advantages matter is outweighed by the seventeen-bajillion-dollar market for higher-capacity NAND. All the research money and foundry tech are going toward the one that’ll let them sell 1 TB SSDs for $20.

  • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world · ↑9 ↓1 · 1 year ago

    In RAM you’re just trapping current in a loop inside a logic circuit, and the state that you get, since it is stable until it’s reset, is just a memorized bit. You’re not changing anything physical there.

    See: https://en.wikipedia.org/wiki/Memory_cell_(computing) (not an ELI5 though)

    Wikipedia animation of a basic logical circuit that allows you to do that using NOR gates:

    An animation of an SR latch, constructed from a pair of cross-coupled NOR gates.
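
    A minimal simulation of that cross-coupled NOR latch: each gate’s output feeds the other’s input, so after S or R is pulsed the pair holds the bit on its own, but only for as long as the circuit keeps being powered and evaluated, which is the volatile part.

    ```c
    /* Cross-coupled NOR SR latch: Q = NOR(R, Qbar), Qbar = NOR(S, Q). */
    #include <stdio.h>

    static int Q = 0, Qbar = 1;   /* the latch's state lives in this feedback */

    void step(int S, int R) {
        /* iterate a few times so the feedback loop settles */
        for (int i = 0; i < 4; i++) {
            int q    = !(R || Qbar);
            int qbar = !(S || Q);
            Q = q;
            Qbar = qbar;
        }
        printf("S=%d R=%d -> Q=%d\n", S, R, Q);
    }

    int main(void) {
        step(1, 0);   /* set   -> Q = 1 */
        step(0, 0);   /* hold  -> Q stays 1 with no inputs asserted */
        step(0, 1);   /* reset -> Q = 0 */
        step(0, 0);   /* hold  -> Q stays 0 */
        return 0;
    }
    ```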

    • Jajcus@kbin.social · ↑3 · 1 year ago

      That is static RAM (SRAM). Most RAM in computers is DRAM, which works a bit differently and is much cheaper and denser, but more difficult to operate.

  • mindbleach@lemmy.world · ↑5 · 1 year ago

    Dynamic RAM is a bucket with a hole in it. Genuinely, that is the model that makes it so cheap.

    Static RAM is the proper way to do memory: half a dozen transistors form each bistable flip-flop, so there are two input wires and one output wire, and the output wire is either high or low depending on which input wire was used most recently. Static RAM will maintain its state using comically low power. Static RAM runs on the idea of electricity. It’s how cartridge games from the ’90s had save files: there’s a button-cell battery that was enough to power some kilobits of memory for an entire decade. But because static RAM uses so many gates, it takes up a lot of silicon, and it is hideously expensive, to this day.
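
    Rough numbers behind that battery trick, assuming a typical ~220 mAh coin cell and an SRAM data-retention current on the order of a microamp; both figures are assumptions, but they show why a decade of save files is plausible.

    ```c
    #include <stdio.h>

    int main(void) {
        double battery_mah = 220.0;   /* typical CR2032-class coin cell capacity (assumed) */
        double standby_ma  = 0.001;   /* ~1 microamp SRAM data-retention current (assumed) */

        double hours = battery_mah / standby_ma;
        printf("%.0f hours ~= %.1f years of retention\n", hours, hours / (24 * 365));
        return 0;
    }
    ```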

    Dynamic RAM is a stupid engineering workaround that happens to be really, really effective. Each bit is a capacitor. That’s all. It will slowly drain, which is why your laptop has to hibernate to disk instead of lasting forever like Pokemon Red. When a capacitor has charge, applying more power is met with resistance, which lets the sole input wire detect the state of that bit. And so long as you check every couple milliseconds, and refill capacitors that are partially charged… the state of memory is maintained. On very old machines this might have been done by the machine. IIRC, on SNES, there’s a detectable stall in the middle of each scanline, where some ASIC reads and then writes a portion of system memory. On modern devices that’s all handled inside the memory die itself. The stall is still there, but if it affects your program, you are doing something silly.

    The RAM in your machine has nigh-unlimited write cycles because it will naturally return to zero. It is impermanent on the scale of microseconds. By design, your data has no lasting impact. That is central to its mechanism.

    • partial_accumen@lemmy.world · ↑7 · 1 year ago

      You’re on it.

      With RAM, the data is stored as a voltage level continuously refreshed by the computer. When the power is removed, the refresh voltage disappears, and the data it represents disappears with it. This is volatile storage: infinite re-writes of the same bits, but data cannot persist without power always applied to it.

      With NVRAM, aka non-volatile RAM (which is what SSDs are built from), data is stored in a physical material. When data is written, the data, represented by voltage differences, is used to make a physical change to the material that makes up the SSD. This is also a MUCH slower process in NVRAM than updating data in real RAM. The benefit is that, because NVRAM stores data as a physical change, you can remove power and the data persists. When you power it back up, the data is read from the physical state of the material that makes up the NVRAM and represented again as voltage differences when passed back to the computer.

      The cost is that there are only so many times the material can be changed. It wears out and is eventually no longer changeable.

  • Semi-Hemi-Demigod@kbin.social · ↑3 · 1 year ago

    The reason SSDs have limited rewrites is that there’s a physical membrane that electrons tunnel through to store bits. This membrane breaks down over time as writes are made.

    For a more visual representation: imagine you have an upside-down bowl covered with plastic wrap, and you push some BBs up through the plastic wrap from underneath. At first the BBs will stay in the bowl pretty well, but the more holes you poke in the plastic wrap, the more likely they are to fall out. Eventually there are so many holes it can’t hold anything.

  • Manzas@lemmy.world · ↑2 · 1 year ago

    RAM gets knocked out to sleep. The SSD remembers because it went to sleep by itself.

    • eth0slash0@lemmy.world · ↑2 ↓1 · 1 year ago

      I like this one, but as a slight change:

      RAM holds the apples you gave it until it is knocked out, drops them all, and has no idea what an apple is when it wakes up.

      Flash holds the apples you gave it, sets them down in the exact order you gave them to it, sleeps, and picks them back up again when it wakes up, ready to hand them back.

    • sexy_peach@feddit.de · ↑1 ↓1 · 1 year ago

    RAM needs a constant current to keep its data; if there is no power, it will forget very fast. An SSD won’t forget anything if power fails.

    To save re-writes you would have to have data that only resides in the RAM part of the SSD, which, again, is volatile. SSDs basically do have this: a bit of more durable flash memory that acts as a write-back cache.