• 0 Posts
  • 35 Comments
Joined 1 year ago
Cake day: July 18th, 2023




  • It really depends on the individual case. There are many CS professions where the title “engineer” or “scientist” is entirely accurate. I believe that is a minority, of course, and it further depends on how broad your definition of “CS people” is. There are specialties within the incredibly broad field of computer science that require education in classical engineering, as well as specialties that focus on research and experimentation using the scientific method.


  • While I don’t particularly agree with the sentiment, those in the field of Computer Science could be argued to be “scientists”, though often not in the classical sense. As a Computer Science major myself, I would never consider myself a “scientist” in the classical definition of the term. Those involved in actual research, yes, though that does not describe me despite the title of my Bachelor’s. I would consider those involved in the theoretical side of Computer Science to be more akin to mathematicians, as most of the theory is based on mathematical proofs and models (take, for instance, the field of formal models of computation, which defines how computers operate and how efficient specific algorithms are in that context). Though I could understand the argument that those involved heavily in the theoretical side of Computer Science may be considered scientists, given their similarity to theoretical physicists. In that sense, there is also active experimentation to test hypotheses about algorithmic runtime. It’s a fascinating niche of Computer Science that I studied briefly in university, but likely will not be pursuing in the future.

    Generally those involved with active development of commercial software don’t fit into that category, though. It’s very much a question of semantics.


  • Oh, that should be no worry. You can always do a clean install of one distro over another. Just make sure during setup that, when you select the data partitions on your other drives, you don’t recreate those partitions (as that would delete your data). You’ll also have to deal with differences in config files in your home directory, since there is variance between Nobara and Bazzite. You can just grab the ISO and install normally, deleting the Nobara partitions.
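
    If you want to be extra careful, you can note down which partitions hold your data before booting the installer so you recognize them during the partitioning step. A quick sketch from your current install (the device names here are placeholders; yours will differ):

    # List every drive and partition with its filesystem, label, and UUID
    lsblk -f
    # blkid shows the same identifiers; jot down the UUID/label of your data partitions
    sudo blkid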



  • Hey, I wrote a script for you since this was a really simple operation. I have 2 versions depending on what you want them to do.

    I recommend that you make a test folder with a bunch of test directories and subdirectories to make sure this works as expected on your machine.
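
    For example, a throwaway sandbox like this (the folder names are arbitrary) lets you try either script safely before pointing it at real data:

    # Create a disposable test tree with underscore-prefixed folders
    mkdir -p ~/rename-test/_one/__nested/___deep
    mkdir -p ~/rename-test/__two ~/rename-test/___three
    cd ~/rename-test
    # ...run the script from here, then inspect the result:
    find . -type d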

    The first will only rename folders with a depth of 1 (meaning it won’t rename subdirectories). Use this if you want to control which specific directories the script runs on.

    Non-subdirectory version

    #!/bin/bash
    
    # Rename underscore-prefixed folders in the current directory only (depth 1):
    # ___name -> aname, __name -> bname, _name -> cname
    find . -maxdepth 1 -type d -name "_*" | while IFS= read -r FOLDER; do
        newfolder="$(echo "${FOLDER}" | sed -e 's/^\.\/___/a/' | sed -e 's/^\.\/__/b/' | sed -e 's/^\.\/_/c/')"
        mv "${FOLDER}" "${newfolder}"
    done
    

    The second renames all folders including subdirectories (it goes 1 layer deeper at a time). So if you want to just run this from your home directory (or wherever the drive you want to run it on is mounted), you can run it once and be done with it. It only goes 100 folders deep, but you can modify that by changing the {2..100} to another range, like {2..500} for 500 folders deep. Running more layers deep increases runtime, so I assumed you wouldn’t have more than 100 layers of folders, but if you do you can adjust it.

    Subdirectory version

    #!/bin/bash
    
    # First pass: rename underscore-prefixed folders at depth 1
    # ___name -> aname, __name -> bname, _name -> cname
    find . -maxdepth 1 -type d -name "_*" | while IFS= read -r FOLDER; do
        newfolder="$(echo "${FOLDER}" | sed -e 's/^\.\/___/a/' | sed -e 's/^\.\/__/b/' | sed -e 's/^\.\/_/c/')"
        mv "${FOLDER}" "${newfolder}"
    done
    
    # Then work down one layer at a time, up to 100 layers deep
    for i in {2..100}; do
        find . -mindepth $i -maxdepth $i -type d -name "_*" | while IFS= read -r FOLDER; do
            newfolder="$(echo "${FOLDER}" | sed -e 's/\/___/\/a/' | sed -e 's/\/__/\/b/' | sed -e 's/\/_/\/c/')"
            mv "${FOLDER}" "${newfolder}"
        done
    done
    

    I assume that you have at most 3 underscores preceding a folder name. If that is not the case, you can modify the script as follows.

    If you have more, copy one | sed -e 's/.../' part (up to the next | symbol) in each find section (there is only 1 find section in the non-subdirectory version and 2 find sections in the subdirectory version) and paste it into the chain. The order matters: the pattern with the most underscores has to come first, otherwise a shorter pattern would match before the longer one. If you are using the subdirectory version, make sure you copy the corresponding version of the sed command in each section, because they differ (the first one contains a “^\.” that the second one doesn’t)! On your new pasted copy, add an underscore to the part of the pattern that has underscores. Then, for each of the other sed parts, change the replacement letter so the letters stay in order.

    Here is an example with 4 max underscores on the subdirectory script:

    #!/bin/bash
    
    # Same as above, but handling up to 4 leading underscores (____ -> a, ___ -> b, __ -> c, _ -> d)
    find . -maxdepth 1 -type d -name "_*" | while IFS= read -r FOLDER; do
        newfolder="$(echo "${FOLDER}" | sed -e 's/^\.\/____/a/' | sed -e 's/^\.\/___/b/' | sed -e 's/^\.\/__/c/' | sed -e 's/^\.\/_/d/')"
        mv "${FOLDER}" "${newfolder}"
    done
    
    for i in {2..100}; do
        find . -mindepth $i -maxdepth $i -type d -name "_*" | while IFS= read -r FOLDER; do
            newfolder="$(echo "${FOLDER}" | sed -e 's/\/____/\/a/' | sed -e 's/\/___/\/b/' | sed -e 's/\/__/\/c/' | sed -e 's/\/_/\/d/')"
            mv "${FOLDER}" "${newfolder}"
        done
    done
    

    If you have fewer than 3 max underscores, you just delete the relevant sed parts and update the letters.

    You can also let me know how you want it modified and I can do it for you if you’d like.

    Using the subdirectory version

    If you want to use the one that works on subdirectories, create a text file renamesubdirectories.sh in the folder you want it to start from, and paste the subdirectory script into that file with whatever text editor you prefer. You can then modify the script if necessary.

    I’m going to try to give GUI instructions, but I haven’t used Nemo in a long time, so I’ve also provided terminal instructions in case those don’t work.

    Nemo

    Navigate to the folder you want to start from in Nemo. Copy or move the renamesubdirectories.sh file into this folder (or create the file here if you haven’t already, paste in the subdirectory script, and modify it if necessary). Right click the file and open its properties/permissions (maybe “details”? I can’t remember exactly what the option is called). Find the setting to adjust the file’s permissions and allow it to be executed as a program/mark it executable, whatever adding the executable permission is called in Nemo. Close that window, then right click the file and run it. After a few seconds, refresh (F5 usually). You should now be done and can delete the file.

    Terminal

    Navigate to the folder you want to start from, and right click > Open in terminal (I believe Nemo has that option, but it’s been a while; let me know if not, and I can explain how to navigate there from the terminal). Now make the file executable with chmod +x renamesubdirectories.sh. Run it with ./renamesubdirectories.sh. Once the next prompt line prints (with your username, hostname, and directory), the command is done and you can exit the terminal and delete the file.

    Using the non-subdirectory version

    This will require you to either move the script every time you want to run it, or install it locally and use it from the terminal (which is easier). I will only explain the terminal version here, as moving the script every time you want to use it is very tedious.

    Again, create the file using the text editor of your choice, this time named renameunderscores.sh and containing the non-subdirectory script, modified as necessary. Then create a folder called bin in your home folder (on most distros this gets added to your PATH automatically, though you may need to log out and back in once) and copy or move the renameunderscores.sh file into that folder. Then (in the bin folder) right click and open in terminal in Nemo, or just open a terminal from your applications and navigate to the folder with cd ~/bin. Now make the file executable with chmod +x renameunderscores.sh.

    You should now be able to navigate to any folder you want, then right click open in terminal, and run the command renameunderscores.sh. Once you are finished, you can delete the bin folder.
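
    Putting those terminal steps together, the sequence looks roughly like this (assuming the script is currently in your Downloads folder; adjust the paths to wherever you saved it and wherever you want to run it):

    mkdir -p ~/bin
    cp ~/Downloads/renameunderscores.sh ~/bin/
    chmod +x ~/bin/renameunderscores.sh
    # You may need to log out and back in once for ~/bin to be picked up in your PATH
    cd /path/to/some/folder
    renameunderscores.sh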


  • This is not very common knowledge, but it is no longer recommended to press S or U before B for SysRq. The official documentation of sysrq has stopped recommending this practice, as it may be harmful to modern filesystems. Writing to a storage device while the kernel is in a bad state has the potential to cause corruption, and modern journaling filesystems like EXT4 and BTRFS are designed to survive crashes like this with minimal (or no) corruption. Instead, you’ll likely want to use Alt+SysRq+REIB (and make sure you are waiting multiple seconds between each keypress, as they do not complete instantly!).

    You may instead try to kill the most memory-intensive non-vital process with Alt+SysRq+RF, which may stop the crash from happening at all (this works especially well for memory leaks). SysRq+F invokes the OOM (out of memory) killer, which kills the most memory-intensive non-vital process without causing a kernel panic.

    If you need to restart, the ideal situation is to enter a TTY and cleanly reboot, in which case you can do Alt+SysRq+R to grab control from the display manager, then Ctrl+Alt+F3 or Ctrl+Alt+F4 (I believe most distros run the first login session on the TTY accessible from Ctrl+Alt+F2) to switch to another TTY. You can then log in and do sudo systemctl reboot if your computer is still responsive. You may need to kill some processes before your system becomes responsive enough to log in on a TTY, which is where Alt+SysRq+F is useful, but in extreme situations it may require Alt+SysRq+EIB.

    So a basic order of steps to try may look like:

    1. Try Alt+SysRq+RF and wait a few seconds to see if your system starts responding. If not, you can try it another time or two, or move on to step 2.
    2. See if you can switch to a TTY with Ctrl+Alt+F3. If so, try to log in and run sudo systemctl reboot. Otherwise, move on to step 3.
    3. If you are in a TTY, switch back to the main login with Ctrl+Alt+F2. Then do Alt+SysRq+REIB.

    In the spirit of other users giving mnemonic devices, you could remember REIB with Reboot Even If Broken, or the oom killer RF with Resolve Freeze (someone else can probably think of something better for RF; I’m not great at making mnemonic devices).

    TL;DR: There are SysRq combinations that are less prone to damage/corruption than Alt+SysRq+REISUB, so use the steps above, or (if you don’t want to troubleshoot first) just drop the S and U and use Alt+SysRq+REIB for a lower chance of filesystem corruption from a kernel in a bad state. You can often recover the system without having to hard reset (Alt+SysRq+B). And ALWAYS wait between SysRq keys, as they do not finish instantly.
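
    One caveat: many distros restrict which SysRq functions are allowed by default, so it’s worth checking before you actually need them. A rough sketch (the sysctl.d file name is arbitrary):

    # 1 means all SysRq functions are allowed; other values are a bitmask of allowed functions
    cat /proc/sys/kernel/sysrq
    # Allow everything persistently, and apply it immediately for the current boot
    echo "kernel.sysrq = 1" | sudo tee /etc/sysctl.d/90-sysrq.conf
    sudo sysctl kernel.sysrq=1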


  • But browsers should be installed as an RPM, because Flatpak uses the same seccomp filter for all apps. That isn’t even really secure, but it prevents browsers from spawning user namespace sandboxes, which means they have very little process isolation.

    User namespaces are not the only method of sandboxing in Linux. I use Mullvad browser, which is a fork of Firefox maintained in tandem with the Tor browser (without Tor integration), so I’ll mainly discuss Firefox. Here are some relevant comments on Firefox’s internal sandbox in flatpaks:

    Firefox’s internal sandbox is designed to function properly without user namespaces or chroot

    Firefox uses nested seccomp filters to achieve process isolation

    The TL;DR is that Firefox uses seccomp-bpf on each process (with per-process nested seccomp filters) to intercept all syscalls for sandboxing, which does not require the use of user namespaces. User namespaces are used where possible, simply to add an additional layer of padding as a method of defense in depth. Since the syscalls are already intercepted and handled with seccomp-bpf, it could easily be argued that this is redundant and unnecessary given the way the Firefox sandbox works, based on the comments of the Firefox developer I linked to.

    Chromium-based browsers had very bad issues with sandboxing, as they assumed that user namespaces would always be available (which breaks on any distro with them disabled in the kernel, as was the case with Debian and Arch just a few years ago, or any install that uses the linux-hardened kernel), and Chromium does not use seccomp-bpf for its process isolation the way Firefox does (or at least it didn’t when the bugzilla I linked to was made). I believe those issues have since been fixed, however, and Chromium-based browsers (at least the ones that implement the patch or something similar) should also have proper process isolation in flatpaks now. I don’t follow that very closely since I don’t use Chromium-based browsers, though. For reference, here’s the flatpak Chromium patch that uses flatpak-spawn to fix process isolation in Chromium-based browsers; it was mentioned in one of the Firefox bugzilla pages I linked to earlier. Since it isn’t an upstream fix, I wouldn’t trust that all Chromium-based browsers use it, but that’s an issue to bring up with Google (assuming it hasn’t been fixed upstream in the past couple of years). Firefox specifically designed its sandbox to work in these situations where Chromium may fail.

    Mullvad Browser isn’t available as an RPM (or even DEB), and while they have a tar.xz download that I imagine just installs the browser in the folder it’s extracted to (not source tarball; it’s all pre-compiled), I have no idea if that receives automatic updates, and I’ve never used a Linux app packaged like that, so I choose to use the flatpak instead.
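
    If you’re curious whether unprivileged user namespaces are even available on your host (the thing the quoted comment is worried about), you can check a couple of sysctls; note the second one is a Debian/Ubuntu-specific toggle and won’t exist on most other distros. You can also dump what the flatpak sandbox itself grants a browser (org.mozilla.firefox is just an example app ID; substitute your browser’s):

    # Greater than 0 generally means user namespaces can be created
    sysctl user.max_user_namespaces
    # Debian/Ubuntu-only knob; absent elsewhere
    sysctl kernel.unprivileged_userns_clone 2>/dev/null
    # Inspect the permissions the flatpak sandbox gives the app
    flatpak info --show-permissions org.mozilla.firefox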


  • SteamOS currently runs 6.1, which is an LTS kernel; it just isn’t the latest LTS kernel (that’s 6.6, released at the end of 2023). Valve also makes modifications to the kernel they use in SteamOS, so they have their own versions custom built for Steam Decks. I should revise my previous statement slightly: Debian Bookworm is on 6.1 as well, but SteamOS 3.6 (in beta) uses 6.5 (which is non-LTS). Debian skips every other LTS kernel because its releases come every 2 years, but SteamOS (eventually) upgrades to each LTS kernel, or sometimes a non-LTS one in between? They did the same thing with 5.13 a couple of years ago (5.10 and 5.15 are LTS). I don’t really follow their releases since I don’t own a Steam Deck, so I don’t really know the rationale there. Funnily enough, looking through posts about it online, it seems that SteamOS is sometimes ahead of Debian on the minor kernel version and sometimes behind (when they’re on an LTS kernel). Currently, they are behind Debian on the minor release (6.1.52 vs 6.1.76). Very strange; no idea what’s going on there.

    But I specifically mean the packaging delays. There are sometimes sync issues with drivers, like this recent one with non-free stuff that is used alongside the normal stuff.

    Hm, interesting. I don’t recall experiencing anything like that personally since I hardly use anything from RPMFusion, but that does seem frustrating. Looks like it was fixed very quickly, at least.

    And with Cisco’s openh264 they can’t do anything; Cisco ships the packages, which is legally binding, and there are issues sometimes.

    Ah yeah, I’ve heard about that. I can’t remember the last time I installed Cisco’s openh264 though since I started using VLC, which can handle video and audio formats without installing extra codecs. I think MPV can do the same? I’m not sure what comes with my browser, but it is packaged as a flatpak and seems to run media just fine. Maybe there is some other use for openh264 that I’m not aware of that just doesn’t come up in my normal use, but I don’t think I’ve installed any media codecs in Fedora for a couple years now. Granted, I don’t play videos often (but I do play MP4s when I do), and all my music is in FLAC format, so I’m probably an edge case. I also don’t game, but I remember seeing something recently in this sub where someone may have had codec issues while playing a game.

    But Fedora is doing a great job, and the fact that rpmfusion exists alone is pretty hilarious. These are obviously Fedora people maintaining the stuff in secret, in a country where patent laws are not enforced (but are also in place afaik).

    Well, Fedora is a community project, so it’s very difficult for anything individual maintainers do to come back on Fedora so long as the name isn’t put on it directly. If I were to speculate, most of the RPMFusion maintainers are Fedora community contributors (and I imagine they likely wouldn’t work at Red Hat, given Red Hat’s apprehension towards patent-encumbered material). I don’t think it’s really any different legally speaking from a Fedora contributor working on a personal project on the side. The fact that you can manually add the repo to Fedora doesn’t connect the two in a legally binding sense. So as long as it isn’t being funded by Fedora, and their branding is absent, then it shouldn’t really matter. I don’t know about the actual legal aspects of the packages they are distributing, or what country/countries RPMFusion repos are hosted in, but so long as nobody is profiting/losing substantial profit, it likely isn’t even worth pursuing any legal recourse to begin with.
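
    For reference, “manually adding the repo” is just a matter of installing the release packages that RPMFusion publishes, roughly like this (this is the pattern from their documentation; double-check it against rpmfusion.org before running):

    sudo dnf install \
      https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
      https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm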

    You are at the bleeding edge, but I often find bugs that are simply there and need to be fixed. Once KDE Plasma 6 is on some LTS release like CentOS Stream, I may think about switching.

    Yeah, that’s fair. There are definitely bugs that pop up every once in a while, but for the most part they’re minor (at least the ones I notice). This kernel bug is among the more major bugs I’ve seen with Fedora in the past few years, but I only know about it from this post; I haven’t experienced it myself. I imagine there have been similar things (or worse) that have gone over my head because I didn’t experience them myself. Perhaps my experience has also been more stable because I was using GNOME up until Fedora 40. I do find my experience with Fedora to be much more stable than Arch, but that is to be expected given their release models. I can only recall experiencing 1 or 2 bugs in the past year on Fedora, which is fewer than I experienced when I used Ubuntu many, many years ago, and the bugs were fixed much faster than they were on Ubuntu, where it would often take months for a patched version of a package to enter the Ubuntu repos. That’s all anecdotal, however.

    The reason I usually recommend Fedora to people (and uBlue images by extension) is that it sits on some middle ground between the rolling release bleeding edge distros like Arch, and the stable, LTS, frozen for 2 years distros like Debian. I have grievances with both of those models that are addressed with Fedora, and that’s what makes it a good distro for me. My experience with bugs hasn’t really been any more common than when I was using LTS distros, but that may be a fluke. I will likely be moving one of my servers to Debian in the future though, because it makes sense for its purpose. Different release models benefit different uses (and people), of course.


  • Yes, that may be the case, but that comes with its own downsides as well. The most recent version of SteamOS runs the 6.1.52 kernel from September (thus it should be unaffected by this bug, since it was introduced in 6.6.30). I don’t follow kernel changelogs very closely (so I don’t know all the new features and improvements that are being missed from new versions), but there are lots of optimizations and new features constantly being added to the kernel. Of course, the tradeoff is that you don’t get new bugs, but you also have to backport bug fixes or else you’ll have the bugs present in your current version for a very long time (often the kernel devs do this, but depending on what version a given distro uses, the distro maintainers may have to do it themselves). It’s not as big of a freeze as Debian based systems (EDIT: Some of the time; right now they are technically behind Debian on the kernel minor release, but in SteamOS 3.6 (which is in beta), they will be updating to 6.5), of course, but it’s a choice that has tradeoffs. Different people will subscribe to different opinions on kernel updates, given that no one way is clearly superior for user experience and features alike.

    As for proprietary packages that are held from Fedora for copyright issues (media codecs and Nvidia drivers, for instance), there are always uBlue images like Bazzite, Bluefin, and Aurora that fix that. One of the very few stipulations to the Red Hat sponsorship for Fedora is that they do everything possible to avoid legal trouble, hence why those packages aren’t included in the base repos or installed by default. It’s a small caveat that disappears once you install the correct packages.

    I think SteamOS is by far the most optimized OS for the Steam Deck, but I don’t think it’s very useful to use it on any other hardware (there are better options). Kernel updates will always be a point of conflict for at least some people regardless of what model you use, but I personally appreciate the quick turnaround for major kernel versions in Fedora. It’s actually improved my experience on my laptop significantly, as there have been recent changes that apply to my specific hardware (in some of the 6.6 releases, for instance). Of course, anyone can be free to prefer a slower rollout, and that is equally valid. The bug fixes for the issue OP is having should be backported to 6.8 anyway, so it shouldn’t necessitate waiting for 6.9 to hit Fedora in a few weeks.
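
    If you ever want to check whether a given fix has already reached the kernel you’re actually running, a quick way (on Fedora; package names differ elsewhere) is to look at the running version and skim the installed package’s changelog:

    # Currently running kernel
    uname -r
    # Recent changelog entries for the installed kernel package
    rpm -q --changelog kernel-core | head -n 40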


  • Fedora does test everything before they ship it. Each major kernel release can go through as much as a month of testing for stability and regressions. SteamOS is based on Arch, where the kernel isn’t regression-tested in the same way. Despite the testing, though, this is an incredibly obscure issue, and obviously the Fedora team can’t catch every kernel bug. It only happens on some hardware, and only in the event that the VRAM visible to the CPU fills up and less-used portions of the CPU-visible VRAM are moved to other parts of VRAM that only the GPU can see. This is why resizable BAR fixes the issue for many people: it makes all VRAM visible to the CPU, so that move never happens (the code that moves the VRAM data has an off-by-one error). The issue goes all the way back to 6.6.30, was only discovered 3 weeks ago, and took 2 weeks to root-cause and patch in the stable 6.9 kernel. It was only found because the 6.9 release candidates added checks for hardware capabilities, and the off-by-one error at the root of this issue tripped those checks. I’m not a kernel developer, so I don’t know all the details, but it is discussed in the issue I linked if you want more explanation.
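
    If you want to see how much of your VRAM is CPU-visible (i.e. whether resizable BAR is in effect), the amdgpu driver usually prints it at boot; something along these lines should surface it, though the exact wording varies by kernel version:

    # Look for a line like "Detected VRAM RAM=...M, BAR=...M" (a small BAR means not resizable)
    sudo dmesg | grep -iE "vram|bar=" | head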



  • Probably, but this is not the place to ask if you want answers. This forum is for Linux discussion, not Windows, and while I could get this set up in Linux for you, I wouldn’t even know where to start with Windows, as I haven’t used it in a decade. You’ll see a lot of the same with experienced Linux users here. Most of us will not be able to help you. I recommend you ask in a Windows forum instead, as you’ll have a much greater chance of finding someone knowledgeable to help. Maybe there’s a forum for Windows command line (or Powershell? I don’t know what they’re calling it these days).


  • Well, for your particular case, you’d make a script, and a user service to run that script when you log in. Once the service starts, it will keep itself alive.

    Here’s the script:

    bluetooth-reconnect.sh

    #!/bin/bash
    
    # Watch the session bus for GNOME screensaver signals; "boolean false" means
    # the screen just unlocked (i.e. you woke the machine), so reconnect the device.
    dbus-monitor --session "type='signal',interface='org.gnome.ScreenSaver'" |
      while read -r x; do
        if echo "$x" | grep -q "boolean false"; then
          bluetoothctl connect A1:11:22:3A:CD:F1
        fi
      done
    

    You’d place this script somewhere that has system execution privilege (if your distro uses SELinux). I will use the directory /usr/scripts/ for example purposes (note that you will have to create this folder). Make sure to mark it executable with chmod +x /usr/scripts/bluetooth-reconnect.sh

    You’d then write a user service to start the script when you log in, just really barebones and simple:

    bluetooth-reconnect.service

    [Unit]
    Description=Reconnect Bluetooth after waking from sleep
    
    [Service]
    Type=simple
    ExecStart=/usr/scripts/bluetooth-reconnect.sh
    
    [Install]
    WantedBy=default.target
    

    Since the script listens on your session bus, it should run as a user service rather than a system one. Move the service into ~/.config/systemd/user/ (filepath should be ~/.config/systemd/user/bluetooth-reconnect.service), then enable and start it:

    systemctl --user enable --now bluetooth-reconnect.service

    And you should be good to go. At least assuming your distro doesn’t have some specific quirk, which I wouldn’t be able to help you with unless I knew what distro you run. Granted, this is my adaptation of what I saw in the linked forum plus my own experience with services; I haven’t actually tested it. But even if it has an issue, this will get you 90% of the way there, and there’s a good chance it just works if the forum answers work for your distro.
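
    A couple of quick checks after enabling it, in case something doesn’t behave (these assume the user-service setup above):

    # Is the service running, and did the script start cleanly?
    systemctl --user status bluetooth-reconnect.service
    # Follow its output while you lock/unlock to see the dbus-monitor lines and any bluetoothctl errors
    journalctl --user -u bluetooth-reconnect.service -f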



  • This should work, though I believe I used a different method when I first installed it a few years ago. I had to copy the firmware to the boot partition, but I believe that should be done for you so long as you use the imager that is linked in the Fedora article. If you’re on Windows, you won’t have to use the command line as the Fedora article suggests. After downloading the image, so long as you have 7zip, you can just right click and extract. Then just follow the Raspberry Pi instructions they link for using the Raspberry Pi imager. Fedora 40 comes out on Tuesday, and I highly recommend you use Fedora Server, because Raspberry Pi devices are not well suited to running desktop environments. They are meant mostly to run headless (no graphical interface, only a terminal). If you just want to try out Fedora 40 for the desktop experience, I recommend using a VM, because the Raspberry Pi will not give you a good desktop experience.
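
    For what it’s worth, if you end up writing the card from an existing Linux machine instead, the Fedora ARM docs use arm-image-installer (if I remember right, it also takes care of the firmware/boot partition bits). Roughly like this; the image name, target, and device are placeholders, so check yours carefully, since whatever you pass to --media gets overwritten:

    sudo dnf install arm-image-installer
    sudo arm-image-installer \
      --image=Fedora-Server-40-1.x.aarch64.raw.xz \
      --target=rpi4 \
      --media=/dev/sdX \
      --resizefs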


  • You can install to a USB-mounted SSD like a normal system by using a liveUSB and installing to the SSD; that’s how you get a portable install with persistence. I actually run my Raspberry Pi home server this way; it runs entirely off a USB SSD with Fedora Server. It works just as well as an internal install, minus the bandwidth lost from it being in a USB enclosure. I’m not sure about getting Tails on the same drive; I’ve never tried to put an ISO on a drive with an OS installed. Tails is meant to be run from a USB drive, so it doesn’t have an installer. ISO images have their own boot partitions, so while I’m sure you could decompress it, move files manually, and patch up the holes created by removing files from the ISO, that would be very complicated and technical, and have a high probability of failure. I believe you can run Tails in a VM, though the liveUSB is definitely the preferred method. Everything in the VM will connect through Tor just as it would if you used the liveUSB.


  • If secure boot is the only issue with installing from a VM, then there’s a good chance you could get it to work. The problem is that this is a very untested method, and there’s a very reasonable probability that there could be more issues than just that. The only problem I personally know about is UEFI secure boot mode, but I only have a small amount of experience with VMs, so you’d be doing that at your own risk.

    You could use a burnable DVD (not a CD; it’s too small) if you have that option. Also, you only need a 2GB flash drive, just in case you were excluding something because you thought it was too small. Buying a small-capacity flash drive is pretty cheap nowadays; you don’t need anything fancy. I’m seeing 128GB flash drives for $13 and 64GB for $8 on Amazon. You can probably get one for cheaper at a grocery store or electronics store. They’re really handy to have lying around. The liveUSB is definitely the easiest method, and it’s pretty much the only tested method, so if it’s an option for you, that’s what I’d recommend.
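
    For reference, once you have a drive, writing the installer image to it from any Linux machine is a one-liner (GUI tools like Fedora Media Writer, Rufus on Windows, or balenaEtcher do the same job). The ISO name and device are placeholders, and be very sure of the device name, since it gets completely overwritten:

    sudo dd if=distro-installer.iso of=/dev/sdX bs=4M status=progress conv=fsync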