• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: July 5th, 2023

  • An older friend of mine told me years back about an incident that happened on a university VAX running Unix. In those days everyone was using VT100 terminals, and the disk drives weren’t all that quick. He was working at his own terminal when, without warning, he got this error when trying to run a common command (e.g. ls):

    $ ls -l
    sh: ls: command not found
    

    So he went on over to the system admin’s office, where he found the sysadmin and his assistant staring at their terminal in frozen horror. Their screen had something like:

    # rm -rf / tmp/*.log
    ^C^C^C^C^C^C^C^C^C^C
    # ls -l
    sh: ls: command not found
    # stat /bin/ls
    sh: stat: command not found
    

    A few seconds after hitting return, when the rm command didn’t finish immediately, the sysadmin realised there was an errant space and madly hammered Ctrl-C to try to stop it. It turned out that the disk was slow enough that not everything was lost, and by careful use of the commands that hadn’t been deleted, they managed to copy the executables off another server without having to reinstall the OS.
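    For anyone who hasn’t been bitten by this: the whole disaster comes down to how the shell splits arguments. A rough sketch (the paths are hypothetical; modern GNU rm also refuses to operate on / unless you pass --no-preserve-root, a guard the old Unix rm in this story didn’t have):

    # What was intended: recursively delete old log files under /tmp
    rm -rf /tmp/*.log

    # What was typed: the stray space makes "/" and "tmp/*.log" two separate
    # arguments, so rm starts recursively deleting the root filesystem too
    rm -rf / tmp/*.log

    # One defensive habit on GNU coreutils: -I prompts once before any
    # recursive delete
    rm -rI /tmp/*.log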






  • I think 30 fps (25 fps in PAL-land) became the standard because televisions were 30 fps (NTSC) or 25 fps (PAL) due to interlacing. While the screen redraw on an NTSC television happens 60 times per second, it’s done as two fields, so you only get 30 actual frames per second. This was done so you could have a decent resolution (525 lines for NTSC or 625 lines for PAL) while keeping the TV signal within reasonable RF bandwidth limits: a single frame is sent as two fields, with half of the picture in each field on alternate scanlines.

    So there’s a lot of industry inertia to deal with, and 30 fps (or 25 fps where PAL was formerly the standard) ends up being the standard. For video it’s good enough (although 60/50 fps is still better; until fairly recently that would entail too much bandwidth, so sticking with the old NTSC or PAL frame rates made sense).

    But for computers, no one really used interlaced displays because they’re awful for the kind of things computers usually show: the flicker with a static image in an interlaced screen mode is terrible. There were some interlaced modes, but nearly everyone tried to avoid them; the resolution increase wasn’t worth the god-awful flicker. So you always had 60 Hz progressive scan on the old computer CRTs (or, in the case of the original VGA, IIRC it was 70 Hz). To avoid tearing, any animated content on a PC would use the vsync to stay synchronized with the CRT, which is easiest to do at the exact frequency of the CRT and provided very smooth animation, especially in fast-moving scenes. Even the old 8-bit systems would run at 60 (NTSC) or 50 (PAL) fps, although 1980s 8-bit systems were generally not doing full-screen animation; usually only parts of the screen were animated.

    So a game should always be able to hit at least 60 frames per second. If the computer or GPU is not powerful enough and the frame rate falls below 60 fps, the game can no longer use the vsync to stay synchronized with the monitor’s refresh, and you get judder and tearing.

    Virtual reality often demands more (I think the original Oculus Rift required 90 fps) and has various tricks to ensure the video is always generated at 90 fps: if the game can’t keep up, frames get interpolated (see “asynchronous space warp”). But if you’re using VR and can’t hit the native frame rate, having to rely on asynchronous space warp is generally awful; it inevitably distorts some of the graphics and adds some pretty ugly artifacts.
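    (Incidentally, if you want to see what refresh rate your display is actually running at under X11, xrandr marks the active mode with an asterisk; this is just a quick sanity check and the output will obviously differ per setup:)

    # List the modes for each output; the active mode's refresh rate is starred
    xrandr | grep '\*'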




  • Debian (a very conservative distro) switched to Wayland by default in Debian 10, if I’m not mistaken (we’re now on 12).

    I didn’t notice the change until I tried to run a niche program that really needs X11. Unless you’re doing that kind of thing, you can probably just use Wayland. At least in Debian it’s really easy to switch between Wayland and X11 by selecting the session type when you log in.
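    (If you’re not sure which one you’re currently running, a quick way to check from a terminal; XDG_SESSION_TYPE is set for the graphical session and normally reports “wayland” or “x11”:)

    # Prints "wayland" or "x11" for the current graphical session
    echo "$XDG_SESSION_TYPE"

    # Or ask logind directly (if XDG_SESSION_ID is set in your environment)
    loginctl show-session "$XDG_SESSION_ID" -p Type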




  • Twice, and they were completely different experiences.

    First was gas at the dentist’s for taking 3 teeth out, as my mouth was overcrowded. I was kind of asleep, I could hear people’s voices in a really trippy flanged way, and I could vaguely feel some tugging at my jaw (but no pain). The gas tasted awful.

    The second was for an operation at hospital after an accident (requiring 6.5 hours of microsurgery). It was like jumping forwards 7 hours in time: literally counting the seconds after the anaesthetic went in at night, then immediately waking up in broad daylight. It is completely unlike deep sleep (where you’re still aware that time has passed).


  • But it does help give an idea of who’s making the most reliable drives (both SSD and hard disk). No, it isn’t a guarantee, but it’s still useful information, especially when it’s not just a friend-of-a-friend anecdote but data gathered over tens of thousands of drives.


  • The biggest factor in making good, automatic backups for my home server wasn’t speed (it’s an older machine with a SAS array of spinning discs) but the availability of affordable cloud-based backup storage (I use Backblaze and sync my files to a storage bucket once a day; see the sketch below). Then it becomes automatic, no one has to remember to do it, and it’s offsite.

    Even when external USB discs got cheap you had to remember to do it regularly and many people would forget.
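    For anyone wanting to set up something similar, here’s roughly the shape of it. The bucket name and paths are made up, and rclone is just one common way of syncing to a Backblaze B2 bucket; use whatever tool you prefer:

    # One-off: configure a Backblaze B2 remote interactively (here called "b2remote")
    rclone config

    # Then a daily sync, e.g. a line in /etc/cron.d/backup:
    # m h dom mon dow  user  command
    0 3 * * *  root  rclone sync /srv/data b2remote:my-backup-bucket/server --log-file=/var/log/b2-sync.log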


  • Hard drives are not that unreliable, well, so long as you pick the right model.

    Backblaze’s statistics are here: https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2023/ - they run tens of thousands of inexpensive drives for their cloud backup service. Some HDDs are much better than others.

    That document also links to their SSD statistics (they don’t have that many SSDs yet, so the stats aren’t as good). While SSDs tend to have lower failure rates, there are some models of SSD with higher failure rates than HDDs. For example, one Seagate SSD they use has an AFR (annualised failure rate) of just under 2%, but one Toshiba HDD they use has an AFR of only 0.31%. (Another thing to be aware of is that Backblaze’s drives are all in air-conditioned data centres, not in the random temperature/humidity spreads of a PC in someone’s home.)

    If you look at the stats as a whole, SSDs generally have about half the failure rate of HDDs, but it varies a lot by make and model. So be careful which you pick, and take backups :-) For my money, all my PCs (desktop and laptops) are pure SSD setups. My server still uses spinning disks, mainly because it’s older server-class hardware with a SAS array.
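    (For reference, the AFR figures in those reports are annualised from drive-days: roughly failures divided by drive-days/365, times 100. A quick back-of-the-envelope with made-up numbers:)

    # Made-up example: 20 failures over 1,000,000 drive-days is about 0.73% AFR
    awk 'BEGIN { failures = 20; drive_days = 1000000; printf "AFR: %.2f%%\n", failures / (drive_days / 365) * 100 }'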





  • Sometimes the issue with WiFi chipsets is not the distro but the manufacturer. Debian, for instance, now includes non-free firmware on its installation ISO image, but some manufacturers (e.g. Broadcom) do not allow redistribution of their firmware, so Debian can’t legally include it. And unfortunately the manufacturers don’t make it easy to “just download the firmware” so you can put it on a USB stick where the installer can see it. (Literally the only issue with putting Debian on my old 2013 MacBook Pro was the Broadcom firmware - but fortunately, having a Debian desktop, I could install the firmware downloader there to get the two files the installer needed.)

    This is not a fault of the Linux distro, but a fault of the hardware manufacturer. Unfortunately, like the smell of piss in a subway, we all have to deal with Broadcom.
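    (For what it’s worth, the workaround isn’t too bad once you know it: the Debian installer will scan removable media for missing firmware, either in the top level or in a firmware/ directory. Something along these lines, with the device name and source path being examples and the actual file names depending on what the installer asks for:)

    # On another machine: copy the firmware files the installer complains about
    # onto a USB stick; the installer scans removable media for a firmware/ directory
    mount /dev/sdb1 /mnt
    mkdir -p /mnt/firmware
    cp ~/Downloads/broadcom-firmware/* /mnt/firmware/
    umount /mnt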