DefederateLemmyMl

  • Gen𝕏
  • Engineer ⚙
  • Techie 💻
  • Linux user 🐧
  • Ukraine supporter 🇺🇦
  • Pro science 💉
  • Dutch speaker
  • 1 Post
  • 111 Comments
Joined 11 months ago
Cake day: August 8th, 2023

  • We are talking about addresses, not counters. An inherently hierarchical one at that. If you don’t use the bits you are actually wasting them.

    Bullshit.

    I have a 64-bit computer; it can address up to 18.4 exabytes, but it only has 32GB of RAM, so I will never use the vast majority of that address space. Am I “wasting” it?

    All the 128 bits are used in IPv6. ;)

    Yes, they are all “used”, but you don’t need them. We are not using 2^128 IP addresses in the world. In your own terminology: you are using 4 registers for a 2-register problem. That is much more wasteful in terms of hardware than using 40 bits to represent an IP address and wasting 24 bits.
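
    The numbers bear this out. A quick back-of-the-envelope check in Python (just verifying the arithmetic above):

    ```python
    # a 64-bit machine can address 2^64 bytes
    print(2**64 / 10**18)      # ~18.4 exabytes of addressable space
    print(32 * 2**30 / 2**64)  # fraction of that a 32GB machine uses: ~1.9e-9
    print(2**40)               # a 40-bit space already holds ~1.1 trillion addresses
    ```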


  • you are wasting 24 bits of a 64-bit register

    You’re not “wasting” them if you just don’t need the extra bits. Are you wasting a 32-bit integer if your program only ever counts up to 1000000?

    Even so, when you do start to need them, you can gradually make the other bits available in the form of more octets. Like you could just define it as a.b.c.d.e = 0.a.b.c.d.e = 0.0.a.b.c.d.e = 0.0.0.a.b.c.d.e
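
    As a toy illustration of that scheme (purely hypothetical, this is not any existing standard), parsing would just left-pad missing octets with zeroes:

    ```python
    def normalize(addr: str, width: int = 5) -> str:
        """Left-pad a dotted address with zero octets up to `width` octets,
        so that "1.2.3.4" and "0.1.2.3.4" denote the same address."""
        octets = addr.split(".")
        return ".".join(["0"] * (width - len(octets)) + octets)

    assert normalize("1.2.3.4") == "0.1.2.3.4"    # old 4-octet address, meaning unchanged
    assert normalize("9.1.2.3.4") == "9.1.2.3.4"  # new 5-octet address
    ```

    Existing 32-bit-era addresses would keep their meaning in the 0.x.x.x.x range while the extra octets open up the new space.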

    Recall that IPv6 came out just a year before the Nintendo 64

    If you’re worried about wasting registers, it makes even less sense to switch from a 32-bit address space to a 128-bit one in one go.

    Anyway, your explanation is a perfect example of the “second system effect” at work. You get all caught up in the mistakes of the first system, in this case the lack of addressing bits, and then you go all out to correct those mistakes in your second system, giving it all the bits humanity could ever need before the heat death of the universe, while ignoring the real-world implications of your choices. And then you are surprised that nobody wants to use your 128-bit abomination.


  • Hmm, I can’t say that I’ve ever noticed this. I have a 3950x 16-core CPU and I often do video re-encoding with ffmpeg on all cores, and occasionally compile software on all cores too. I don’t notice it in the GUI’s responsiveness at all.

    Are you absolutely sure it’s not I/O related? A compile usually does a lot of random I/O as well. What kind of drive are you running this on? Is it the same drive your home directory is on?

    Way back when I still had a much weaker 4-core CPU I had issues with window and mouse lagging when running certain heavy jobs as well, and it turned out that using ionice helped me a lot more than using nice.
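
    If you want to try the same, something along these lines (assuming an ffmpeg encode as the heavy job) runs it with the lowest CPU priority and the idle I/O class:

    ```
    # nice -n 19: lowest CPU priority; ionice -c 3: idle I/O scheduling class
    nice -n 19 ionice -c 3 ffmpeg -i input.mkv -c:v libx265 output.mkv
    ```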

    I also remember that fairly recently there was a KDE Plasma stutter bug caused by it constantly reading from ~/.cache. Brodie Robertson talked about it: https://www.youtube.com/watch?v=sCoioLCT5_o



  • It’s when you have to set static routes and such.

    For example, I have a couple of locations tied together with a WireGuard site-to-site VPN, each with several subnets. I had to write wg config files and set static routes with hardcoded subnets and IP addresses. Writing the wg config files and getting them working was already a bit daunting with IPv4, because I was also wrapping my head around WireGuard concepts at the same time. It would have been so much worse to debug with IPv6’s unreadable subnet addresses.
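
    To give an idea, the config on one end looks roughly like this (a sketch with made-up keys, hostnames and subnets, not my actual setup):

    ```
    # /etc/wireguard/wg0.conf on site A
    [Interface]
    Address = 10.100.0.1/24        # site A's tunnel address
    PrivateKey = <site A private key>
    ListenPort = 51820

    [Peer]
    # site B
    PublicKey = <site B public key>
    Endpoint = siteb.example.org:51820
    AllowedIPs = 10.100.0.2/32, 192.168.20.0/24  # site B's tunnel IP and LAN subnet
    ```

    Every hardcoded subnet in there has a mirror image on the other side, plus static routes on the routers in between, and you get to cross-check all of them when something doesn’t ping.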

    Network ACLs and firewall rules are another place where you have to work with raw IPv6 addresses. For example: say you have a Samba share or proxy server that should only be accessible from one specific subnet. You have to use raw IPv6 addresses for that; you can’t solve it with DNS names.
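
    For instance, restricting a Samba share to one subnet looks something like this (a sketch using the IPv6 documentation prefix; substitute your real prefix):

    ```
    # allow SMB (tcp/445) only from the trusted subnet, drop it from everywhere else
    ip6tables -A INPUT -p tcp --dport 445 -s 2001:db8:1:100::/64 -j ACCEPT
    ip6tables -A INPUT -p tcp --dport 445 -j DROP
    ```

    Good luck spotting a typo in that prefix at a glance.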

    Anyway my point is: the idea that you can simply avoid IPv6’s complexity by using DNS names is just wrong.





    I ran it perfectly on a 33MHz 486 with 4MB RAM for a long time. Even Doom II with some of its heavier maps ran fine.

    “Perfectly” would mean it ran at 35fps, the maximum framerate DOS Doom is capped at. In the standard Doom benchmark, a 486DX/33 gets about half that: 18fps average in demo3 of the shareware version, with the window size reduced one step. Demo3 runs on E1M7, which isn’t the heaviest map, so heavier maps would bog the DX/33 down even more.

    I’m sure you found that acceptable at the time, and that you look back on it through slightly rose-tinted glasses of nostalgia, but a DX2/66, and preferably even better, definitely gave you a much better experience, which was my point.


  • If anyone can enlighten me, This is pretty much why you can find DooM on almost any platform BECAUSE of its Linux code port roots?

    I mean, yeah. Doom was extremely popular and had a huge cultural impact in the 90s. It was also the first game of that magnitude to have its source freely released. So naturally people tried to port it to everything, and “but can it run Doom?” became a meme in its own right.

    It also helps that the system requirements are very modest by today’s standards.


  • It ran like absolute ass on 386 hardware though, and it required at least 4MB of RAM which was also not so common for 386 computers. Source: I had a 386 at the time, couldn’t play Doom until I got a Pentium a few years later.

    Even on lower-clocked 486 hardware it wasn’t that great. IIRC, it needed about a 486 DX2/66 to really start to shine.


  • How the fuck am I supposed to know that Network Manager won’t support DNS over TLS

    Read the documentation? Use Google?

    The very first hit when you google “dns over tls tumbleweed” provides the answer: https://dev.to/archerallstars/using-dns-over-tls-on-opensuse-linux-in-4-easy-steps-enable-cloud-firewall-for-free-today-2job

    A more generic query “dns over tls linux” gives this, which works just the same: https://medium.com/@jawadalkassim/enable-dns-over-tls-in-linux-using-systemd-b03e44448c1c

    Both Google searches return several more hits that basically say the same thing.

    Even the NetworkManager reference manual refers you to systemd-resolved as the solution: https://www.networkmanager.dev/docs/api/latest/settings-connection.html

    Key Name: dns-over-tls
    Value Type: int32
    Description: Whether DNSOverTls (dns-over-tls) is enabled for the connection. DNSOverTls is a technology which uses TLS to encrypt DNS traffic. The permitted values are: “yes” (2) use DNSOverTls and disable fallback, “opportunistic” (1) use DNSOverTls but allow fallback to unencrypted resolution, “no” (0) never use DNSOverTls. If unspecified, the default depends on the plugin used. systemd-resolved uses the global setting. This feature requires a plugin which supports DNSOverTls; otherwise the setting has no effect. One such plugin is dns-systemd-resolved.
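
    In practice, what those guides describe boils down to a couple of lines in /etc/systemd/resolved.conf (illustrative upstream resolver; assumes systemd-resolved is your resolver):

    ```
    # /etc/systemd/resolved.conf
    [Resolve]
    DNS=9.9.9.9#dns.quad9.net   # the #hostname part is used for TLS certificate validation
    DNSOverTLS=yes              # or "opportunistic" to allow fallback to plain DNS
    ```

    followed by a systemctl restart systemd-resolved.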

    I don’t use NetworkManager, I’ve never even used Tumbleweed, and I found the answer in all of ten minutes. Of course that doesn’t help if you’re so clueless that you didn’t even know you were using DNS-over-TLS, or that DoT is a fairly recent development that differs significantly from regular DNS and requires a DNS resolver that supports it.

    when every other operating system does?

    Like Windows 10? (Hint: it doesn’t)

    You use Arch. Mr skillful

    Who cares what I use. When I’m messing with something I don’t understand, I at least read the documentation first instead of complaining on the internet and calling the whole community toxic and, I quote, “Butthurt Linux gobblers” when you get the slightest bit of pushback.







  • To get basic GPU passthrough working, I mainly followed the Arch Linux guide: https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF

    Be warned though that this is just the start of the journey. There are all kinds of issues to deal with and decisions to make if you want to use it for gaming in practice, and those require lots of googling, piecing together bits of information from all over the place, and trial and error. From memory, these are the things I had to deal with:

    • How to handle storage? Just a qcow2 file or pass through a partition or drive?
    • How to handle mouse and keyboard input? Emulated, or through a passed-through USB port? Both have their pros and cons.
    • Audio is a pain in the ass… emulated audio is either crackly or laggy. There is a way to pass it through to PipeWire via a Unix socket, but it’s convoluted to set up. Or perhaps you can pass an entire audio device through to your guest?
    • Bluetooth audio for my wireless headset was an even bigger issue, because audio did not get routed correctly to the headset if I just connected it to the host. In the end, I got a separate Bluetooth dongle for my VM and passed it through.
    • How do you handle the display between guest and host? Two separate monitors? A monitor with dual inputs that you toggle between? Or something like Looking Glass, which sounds appealing but again introduces issues, like VRR not working properly, and your GPU probably needing a dummy “dongle” to work without an actual monitor connected.
    • Then there’s the CPU and how to divide its cores between guest and host: for best performance, the guest’s cores need to be reserved and pinned with the CPU topology in mind. For example, I have a 5900X and reserved the 6 cores of one CCX for my VM, leaving the other 6 for my host (see the sketch below this list).
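
    For that last point, the libvirt side of the pinning looks roughly like this (a sketch for my 5900X layout; core numbering differs per system, check lscpu -e):

    ```
    <!-- libvirt domain XML: pin 6 guest vCPUs to cores 6-11 (one CCX), host keeps 0-5 -->
    <vcpu placement='static'>6</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='6'/>
      <vcpupin vcpu='1' cpuset='7'/>
      <vcpupin vcpu='2' cpuset='8'/>
      <vcpupin vcpu='3' cpuset='9'/>
      <vcpupin vcpu='4' cpuset='10'/>
      <vcpupin vcpu='5' cpuset='11'/>
    </cputune>
    ```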

    For more information, there’s the /r/VFIO subreddit. Yeah I know, f*** reddit, but it has a lot of useful information. The Looking Glass site has some FAQs too, even on things not directly related to Looking Glass itself. There is some VFIO discussion on the Level1Techs forums as well, but they’re not as active.

    Anyway, if all this sounds like a cool project to spend a few weeks on, I heartily recommend you try it. I sure enjoyed setting it all up and getting it working, but I spent way more time configuring and troubleshooting than actually gaming with that setup, and in the end I decided that just gaming on Proton, and occasionally dual-booting for problematic games, is a much more practical solution.



    That’s not GPU passthrough. That just enables VirGL, which is a translation layer that passes some OpenGL calls through to the host’s Mesa installation. Its performance is rather poor though, it’s extremely limited, and it’s buggy too. You certainly can’t use it for cutting-edge gaming.

    GPU passthrough is when you pass through an entire GPU device as-is to the virtual machine. That is: if you have an Nvidia RTX 3060, the guest operating system will see an Nvidia RTX 3060 and it can use the native drivers for it. This gives you near-native performance for gaming.

    Now, I didn’t even know this was possible with VirtualBox (if so: cool!), but it’s certainly doable with KVM if you have the right motherboard and GPU combination. I have done it, but it is quite the hassle indeed, though that isn’t really KVM’s fault.
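
    At its core it’s just handing the PCI device to the guest. With plain QEMU it looks roughly like this (a sketch; assumes IOMMU is enabled and the GPU at 01:00.0 is already bound to the vfio-pci driver, and omits the disk and the rest of the machine config):

    ```
    # pass the GPU and its HDMI audio function through to the guest
    qemu-system-x86_64 -enable-kvm -machine q35 -cpu host -m 8G \
        -device vfio-pci,host=0000:01:00.0 \
        -device vfio-pci,host=0000:01:00.1
    ```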