• moody@lemmings.world · 3 months ago

    My OS takes up about 25 GB. I have individual games that take up more than 100 GB. That kind of OS/storage split is necessary nowadays.

      • howrar@lemmy.ca · 3 months ago

        I recall installers always asking you where you want to install things. Sometimes, that’s hidden behind “custom install” or something like that. Is that not the case anymore?

        • TJDetweiler@lemmy.ca · 3 months ago

          It’ll generally default to the C: drive on Windows. Most of the time, you’d click “Browse” and select another drive.

      • ByteOnBikes@slrpnk.net · 3 months ago

        Steam lets you install on any drive. You can set it as a default.

        My D drive is for games, and my E drive is for spillover games.

  • ftbd@feddit.de · 3 months ago

    I found that moderate compression (like zstd-2 or -3) not only increases your effective storage capacity, but also increases R/W speeds for HDDs.
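
    The speedup makes sense because at those levels the CPU can compress faster than a spinning disk can write, so fewer bytes actually hit the platters. Below is a rough userspace sketch of that trade-off using the third-party zstandard Python package; filesystems like btrfs do this transparently (e.g. with the compress=zstd:3 mount option), and the paths and sample data here are made up.

    ```python
    # Rough sketch (not a real filesystem benchmark): compare raw vs. zstd-3
    # write volume using the third-party `zstandard` package (pip install zstandard).
    import os
    import time
    import zstandard as zstd

    def write_raw(path, chunks):
        t0 = time.perf_counter()
        with open(path, "wb") as f:
            for chunk in chunks:
                f.write(chunk)
        return time.perf_counter() - t0, os.path.getsize(path)

    def write_zstd(path, chunks, level=3):
        # Moderate levels (2-3) compress fast enough that the CPU stays ahead
        # of a spinning disk, so far fewer bytes are actually written.
        cctx = zstd.ZstdCompressor(level=level)
        t0 = time.perf_counter()
        with open(path, "wb") as f, cctx.stream_writer(f) as writer:
            for chunk in chunks:
                writer.write(chunk)
        return time.perf_counter() - t0, os.path.getsize(path)

    if __name__ == "__main__":
        # Repetitive, log-like data compresses well; already-compressed data
        # (video, archives) would see little benefit.
        data = [b"fairly compressible log-like line of text\n" * 4096] * 256
        raw_t, raw_sz = write_raw("/tmp/raw.bin", data)
        zst_t, zst_sz = write_zstd("/tmp/compressed.zst", data)
        print(f"raw : {raw_sz / 1e6:7.1f} MB written in {raw_t:.2f}s")
        print(f"zstd: {zst_sz / 1e6:7.1f} MB written in {zst_t:.2f}s "
              f"(ratio {raw_sz / zst_sz:.0f}x)")
    ```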

  • MystikIncarnate@lemmy.ca · 3 months ago

    My usual go-to drive layout, when it’s impractical to put everything on a single drive, is a fast but small OS drive with core applications; if it’s large enough, I also put user data on it. Then I add drives for anything/everything else that’s size-intensive. For games, I’ll get a lower-quality SSD that’s larger than my OS drive, like grabbing a 3-4 TiB SATA SSD for games alongside a 500 GiB NVMe OS drive for programs and user data.
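
    As a side note, a quick way to sanity-check a split like this is Python’s standard-library shutil.disk_usage, which reports capacity and free space per mount point; the mount points in the sketch below are just placeholders.

    ```python
    # Report total and free space per mount point (placeholder paths).
    import shutil

    mounts = {
        "OS / apps (NVMe)": "/",
        "games (SATA SSD)": "/mnt/games",
        "user data": "/home",
    }

    for label, path in mounts.items():
        try:
            usage = shutil.disk_usage(path)
        except FileNotFoundError:
            continue  # that mount point doesn't exist on this machine
        print(f"{label:18} {usage.total / 2**30:8.1f} GiB total, "
              f"{usage.free / 2**30:8.1f} GiB free")
    ```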

    If money is tight, then using your fastest storage for the OS and an HDD for everything else is a decent option…

    For a while there I was running a 240 GiB OS drive and relocated all my user data and games to a 1 TiB HDD. The system ran fine like that, with few exceptions.

    One big issue was that major Windows updates basically failed every time; it would seem that having your user account/profile anywhere other than C:\ is problematic for that kind of thing. It’s odd, but ultimately not that big of a deal. Regular security updates and whatnot worked without any issues.

    • szczuroarturo@programming.dev · 3 months ago

      Don’t skimp on the SSD for games. A large SATA drive is fine; an HDD for games is not. Get something worse anywhere else instead. Loading times are gonna be the death of you.

  • 30p87@feddit.de · 3 months ago

    256 GB root NVMe, 1 TB games HDD, 3× 256 GB SSDs in RAID 0 for local backups, 256 GB HDD for data, 256 GB SSD for VM images.

      • 30p87@feddit.de · 3 months ago

        Because that’s what RAID 0 is for: basically adding the drives’ storage space together, with faster reads and writes. The local backups are basically just there so I have earlier versions of (system) files, taken incrementally every hour, for reference or restoring. In case something goes wrong with the main root NVMe and a backup SSD at the same time (e.g. a trojan wiping everything), I still have exactly the same backups on my “workstation” (a beefier server), also on a RAID 0, of three 1 TB HDDs. And in case the house burns down or something, there are still daily full backups on Google Cloud and Hetzner.
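
        For anyone unfamiliar with striping, here’s a toy sketch of the RAID 0 idea (conceptual only, not a real md/mdadm setup): chunks go round-robin across the member drives, so the capacities add up and sequential I/O is spread across all of them, but losing any single member loses the whole array.

        ```python
        # Toy illustration of RAID 0 striping; real arrays do this at the block layer.
        CHUNK = 64 * 1024  # 64 KiB stripe unit (illustrative)

        def stripe(data: bytes, n_disks: int, chunk: int = CHUNK) -> list[list[bytes]]:
            # Write chunk 0 to disk 0, chunk 1 to disk 1, ... round-robin.
            disks: list[list[bytes]] = [[] for _ in range(n_disks)]
            for i in range(0, len(data), chunk):
                disks[(i // chunk) % n_disks].append(data[i:i + chunk])
            return disks

        def reassemble(disks: list[list[bytes]]) -> bytes:
            # Read the chunks back in the same round-robin order.
            out, i = [], 0
            while True:
                disk, slot = i % len(disks), i // len(disks)
                if slot >= len(disks[disk]):
                    break
                out.append(disks[disk][slot])
                i += 1
            return b"".join(out)

        if __name__ == "__main__":
            payload = bytes(range(256)) * 2048      # ~512 KiB of sample data
            members = stripe(payload, n_disks=3)    # e.g. 3x 256 GB SSDs
            assert reassemble(members) == payload   # all members present: fine
            members[1] = []                         # one "drive" dies...
            assert reassemble(members) != payload   # ...and the array is unreadable
        ```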

        • Jeroen@lemmings.world · 3 months ago

          Well, it’s for faster speeds, so I don’t get why you would put a backup on more fragile but faster storage. You described in another comment that you have many other backups, which is awesome, so good on you for taking care of everything. But yeah, using the opposite of what would be better for backups seems a bit counterintuitive to me. And presuming that the more secure option doesn’t matter because you have many other backups anyway is also slightly weird, since why bother in the first place then?

          I don’t mean any hate; you’re doing way better than me. Can I ask how fast the RAID 0 gets, how fast it would be on individual drives, and how much data you have to back up daily?

          Much respect for your setup, you’ve taken redundancy seriously and I doubt you’ll ever lose anything.

          • 30p87@feddit.de · 3 months ago

            The local backups are done hourly and incrementally. They hold 2+ weeks of backups, which means I can roll back versions of packages easily, as the normal package cache is cleaned regularly. They also prevent me from losing individual files accidentally through weird behaviour from apps, or from me.

            The backups to my workstation are also done hourly, shifted by 15 minutes for every device, and also incremental. They protect against the device itself breaking, ransomware, or some rogue program rm -rf’ing /, which would affect the local backups too (as they’re mounted in /backups), but those are mainly for providing a file history, as I said.

            As most drives are slower than the 1 Gbps Ethernet anyway, the local backups are just more convenient to access and use than the ones on my workstation, but otherwise exactly the same.

            The .tar.xz’d backups are the actual backups, considering they are not easily accessible, need to be unpacked, and are stored externally.
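
            For illustration, one common way to get hourly, hardlink-based snapshots like this is rsync with --link-dest; the following is a generic sketch (paths, excludes and retention are assumptions, and it’s not necessarily the tool used here). Unchanged files become hardlinks into the previous snapshot, so each hourly run only costs the space of what actually changed.

            ```python
            # Generic hourly-snapshot sketch built on rsync --link-dest.
            import shutil
            import subprocess
            from datetime import datetime
            from pathlib import Path

            SOURCE = Path("/")           # what to back up (assumption)
            DEST = Path("/backups")      # where the snapshots live
            KEEP = 14 * 24               # retain roughly 2 weeks of hourly snapshots

            def hourly_snapshot() -> None:
                snapshots = sorted(p for p in DEST.iterdir() if p.is_dir())
                target = DEST / datetime.now().strftime("%Y-%m-%d_%H%M")
                cmd = ["rsync", "-a", "--delete",
                       "--exclude=/backups", "--exclude=/proc", "--exclude=/sys",
                       "--exclude=/dev", "--exclude=/run", "--exclude=/tmp"]
                if snapshots:
                    # Hardlink anything unchanged against the newest existing snapshot.
                    cmd.append(f"--link-dest={snapshots[-1]}")
                cmd += [f"{SOURCE}/", f"{target}/"]
                subprocess.run(cmd, check=True)
                # Prune anything beyond the retention window.
                for old in snapshots[:-KEEP]:
                    shutil.rmtree(old)

            if __name__ == "__main__":
                hourly_snapshot()   # e.g. triggered hourly by cron or a systemd timer
            ```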

            I didn’t measure the speeds of a normal SSD vs the RAID, but it feels faster. Not a valid argument, of course. Either way, I want to use it as RAID 0/unraided for more storage space, so I can have 2 weeks of backups instead of 5 days (considering it always keeps space for 2 backups, I would have under 200 GB of space instead of 700+).

            The latest hourly backup is 1.3 GB in size, but if an application with a single, big DB is in use, that can quickly shoot up to dozens of GB, which is relatively big for a home server hosting primarily my own stuff plus a few things for my father. Synapse’s DB alone is 20 GB, for example. On an uneventful day, that adds up to about 31 GB. With several updates done, which means dozens of new packages in the cache, that could grow to 70+ GB.
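
            As a back-of-the-envelope check on those numbers (the figures come from above; the arithmetic is only an estimate):

            ```python
            # Rough retention estimate: 1.3 GB/hour on a quiet day, ~200 GB usable on
            # a single SSD vs. 700+ GB on the 3-drive RAID 0 (figures from above).
            HOURLY_GB = 1.3
            QUIET_DAY_GB = HOURLY_GB * 24   # ~31 GB per uneventful day
            BUSY_DAY_GB = 70.0              # update-heavy day

            def days_of_retention(usable_gb: float, per_day_gb: float) -> float:
                return usable_gb / per_day_gb

            for label, usable_gb in [("single SSD (~200 GB)", 200),
                                     ("RAID 0 (~700 GB)", 700)]:
                quiet = days_of_retention(usable_gb, QUIET_DAY_GB)
                busy = days_of_retention(usable_gb, BUSY_DAY_GB)
                print(f"{label}: ~{quiet:.0f} quiet days, ~{busy:.0f} busy days")
            # Roughly 6 vs. 22 quiet days, which lines up with "5 days vs. 2 weeks".
            ```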