ebay is very international, and is also by far the greatest site for second-hand stuff in most European countries. I normally buy my used drives there.
Safety Engineer, Dad, Husband, Pilot, Musician. Not necessarily in that order.
mixing drive models is certainly not going to do any harm
It may, performance-wise, but usually not enough to matter for a small self-hosting server.
Sure, SCSI disks will show their defect lists (“primary defects”, as delivered by the factory, and grown defects, accumulated during use), and they all have a couple hundred primary defects. But I don’t see why that would affect the reported geometry, given that it is fictional, anyway. And all disks have enough spare tracks to accommodate the defects, and offer the specified full number of total sectors, even for long lists of grown defects. Incidentally, all the 4TB disks are still “perfect” in that they have no grown defects.
And yes, ever since LBA, nobody has used sectors and cylinders for anything.
I’m not touching that post again. But a small rant about typesetting in lemmy: it seems there is no way whatsoever to put angle brackets in a “code” section. In an overzealous attempt to prevent HTML injection, everything in angle brackets is simply removed when posting (although it remains there in preview). In normal text, you can use “&lt;”, but not inside “code” segments, where it will be retained verbatim.
If you’re as paranoid as me about data integrity, SAS drives on a host adapter card in “Initiator Target” (IT) mode with write-cache on the disks disabled is the safest. It will degrade performance when writing many small files concurrently, but not as badly as with SATA drives (that’s for spinning disks, of course, not SSD). With a good error-correcting redundant system such as ZFS you can probably get away with enabled write cache in most cases. Until you can’t.
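If it helps, the write-cache bit can be inspected and changed with sdparm (a sketch only; the device name is an example, and the commands need root and a real SAS disk):

```shell
# Query the write-cache enable (WCE) bit of the caching mode page
sdparm --get=WCE /dev/da0

# Clear it, and save the setting so it survives power cycles
sdparm --clear=WCE --save /dev/da0
```
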
RAID is generally a good thing but don’t get complacent, follow the 3-2-1 method
To expand on that: Redundant drive setup and backups serve completely different purposes. The only overlap is in case of a single disk failure, where RAID (or similar) may save the data.
Redundancy is all about reducing downtime in case of single hardware failures. Backups not only protect you from data loss in case of multiple simultaneous failures, but also from accidental deletion. Failures that require restoration of data almost always involve downtime. In short: You always need backups (unless it’s strictly a local cache, and easily recreatable), but if you want high availability, redundancy may help.
3-2-1-rule for backups, in case you’re unfamiliar: 3 copies of important data, on 2 different media, with 1 off-site.
If you want a proper server, it seems that Asrock Rack is the only manufacturer of AM4-socket-based server mainboards. Unlike desktop/gamer boards, these are designed for parallel airflow, typically from front to back in a 19" rack. These also come with IPMI remote maintenance, so can be operated headless in a remote location.
I have considered one of these for a while, such as the X570D4U, which also supports up to 128 GB of ECC RAM. Depending on what you want, this may be overkill, though.
(This was my favourite, because it has two M.2 slots, but there are others with only a single slot, since you said you only need one.)
Unlike gamer or other boards, these have no fancy black vanity covers and often won’t allow overclocking, but are typically very well designed and rock solid for unattended 24/7 operation.
Thanks.
Thanks. Right now I’m away from the machine so can’t look, but I’ll keep an eye open for a T420 mainboard and a second CPU, then. It’ll still be a decent machine, I think, with two E5-2470 V2. DDR3 ECC-RAM is also dirt cheap these days.
Clearly you neither read my post nor looked at what the air baffle in the T320 actually looks like. So what’s your point?
It’s much more than a fan shroud. It’s a baffle specifically designed to guide cooling air over the CPU heatsinks and the RAM modules. This kind of airflow design is very common in servers. I wouldn’t trust the machine without one, especially since the CPU heatsinks have no dedicated fans, but rely on the aerodynamic function of the baffle.
And yes, I know they are very similar, in fact I am quite (but not absolutely) certain that they are identical except for the actual second CPU socket. It’s almost as if you didn’t read my post. Even the soldering points for the second CPU socket are there in the single-CPU T320. They certainly won’t have different PSU connectors. They even share part numbers for the case.
I’d have to check the baffle shape again. But thanks for the insight.
I don’t think there’s anything intrinsically wrong, but as far as I can see you are using only a single disk for the zfs pool, which will give you integrity checks (knowing when something is corrupted), but no way to fix it.
Since this is, by today’s standards, a tiny disk at 100G, I assume this is just a test setup? I’m not sure zfs is particularly well suited for use inside virtual machines; I think it is better to have the host handle the physical data integrity, either by keeping the disk image on a zfs filesystem or by giving the VM a zfs volume (block device) directly.
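As a sketch of that second option (pool and dataset names are made up; needs root and an existing pool):

```shell
# Create a sparse 100 GiB zfs volume (zvol) on the host
zfs create -s -V 100G tank/vms/guest0

# The zvol appears as a block device (on Linux: /dev/zvol/tank/vms/guest0)
# and can be handed to the VM as its virtual disk, so the host's zfs
# handles checksumming and redundancy underneath the guest filesystem.
```
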
Same here. It just says “nginx has been successfully installed” or something like that. It serves the appropriate directories or redirects to the respective virtual machines for other (sub) domains.
What are the advantages of raid10 over zfs raidz2? It requires more raw disk space per unit of usable space as soon as you have more than 4 disks, it doesn’t have zfs’s automatic checksum-based error correction, and it is generally less resilient against multiple disk failures. In the worst case, two lost disks can mean the loss of the whole array, whereas raidz2 can tolerate the loss of any 2 disks. Plus, with raid you still need an additional volume manager and filesystem.
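As a quick sanity check of the space argument (a sketch; the disk count and size are arbitrary examples):

```python
def usable_tb(n_disks: int, size_tb: int) -> tuple[int, int]:
    """Rough usable capacity: raid10 spends half the disks on mirrors,
    raidz2 spends two disks' worth of space on parity."""
    raid10 = (n_disks // 2) * size_tb
    raidz2 = (n_disks - 2) * size_tb
    return raid10, raidz2

for n in (4, 6, 8):
    r10, rz2 = usable_tb(n, 4)
    print(f"{n} x 4TB disks: raid10 {r10} TB usable, raidz2 {rz2} TB usable")
```

At 4 disks the two come out even; from 6 disks on, raidz2 yields more usable space while still surviving any two failures.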
ZFS raidz1 or raidz2 on NetBSD for mass storage on rotating disks, journaled FFS on RAID1 on SSD for system disks, as NetBSD cannot really boot from zfs (yet).
ZFS because it has superior safeguards against corruption, and flexible partitioning; FFS because it is what works.
Yes. I use a G7 N36L as an offsite-backup server in my second apartment. Works great with NetBSD and zfs, using rsnapshot to make remote backups every night.
Since it is only active for an hour and a half each night, it is the only one of my servers whose disks I put into powersave mode the rest of the time. Computing performance is so low that I don’t even run a folding@home client; it usually cannot finish any work package before the deadline.
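For reference, a minimal rsnapshot setup along those lines might look like this (a sketch; paths and host names are made up, and note that rsnapshot.conf requires tabs, not spaces, between fields):

```shell
# /etc/rsnapshot.conf (fragment)
snapshot_root	/tank/backups/
retain	daily	7
retain	weekly	4
backup	root@homeserver:/etc/	homeserver/
backup	root@homeserver:/home/	homeserver/
```

Invoked nightly from cron, e.g. `30 2 * * * /usr/pkg/bin/rsnapshot daily`.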
Several do, including afraid.org, which I use. Other similar services were recently discussed here in this thread.
Ah, I see. I suspected it might be something like that. I’ve never tried that.
To add, unlike “traditional” RAID, ZFS is also a volume manager and can have an arbitrary number of dynamic “partitions” sharing the same storage pool (literally called a “pool” in zfs). It also uses checksumming to determine if data has been corrupted. On redundant setups it will then quietly repair the corrupted parts with the redundant information while reading.
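A minimal sketch of that (pool, dataset, and device names are examples; needs root):

```shell
# One pool with double parity across six disks
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Any number of filesystems ("datasets") dynamically share the pool's space
zfs create tank/media
zfs create tank/backups

# Walk all data, verify checksums, and repair from redundancy where possible
zpool scrub tank
zpool status tank   # reports (and counts) any checksum errors found
```
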