That’s mostly down to Teams though (being the bloated web app that it is), and not the underlying operating system.
I guess you could sell a literal copy, yeah. But ironically, the lack of DRM binding that copy to a user’s account makes “proof of original ownership” harder, if that’s what you want.
That’s not how it works with digital goods, but that’s a limitation of digital goods really.
When talking about the kernel, Windows actually skipped a few major version numbers iirc. Windows 8 was Windows (NT) 6.2, and Windows 10 jumped from there straight to, well, 10.
Why when a simple alias will do?
They don’t even know what they want to do themselves.
If you’re talking about Steam, while it provides its own DRM system, games can be published on there without any DRM whatsoever, so you can do whatever you want with the downloaded files and then play the game without Steam.
I also experienced fewer “hiccups” since switching to Linux with KDE, but I’d like to know on what combination of hardware and Windows you experienced anywhere close to an average 1s response time to “any input”.
I expected something more shocking when I read “working with Russia”.
Kagi uses multiple search backends, and of course it needs to forward search terms to these backends. These backends probably can’t trace the searches back to the individual Kagi user though, but Yandex could still analyze search trends for example.
What’s worse is that - unless they use Yandex’s API for free - customers indirectly (and likely unknowingly) support a Russian company with their paid Kagi subscription.
Kagi should at the very least release a statement about this claim.
I’m no expert here, but I’m pretty sure branch prediction logic is not part of the instruction set, so I don’t see how RISC alone would “fix” these types of issues.
I think you have to go back 20-30 years to get CPUs without branch prediction logic. And VSCodium is quite the resource hog (as is the modern web), so good luck with that.
Not really, just some wording…?
There was a vulnerability in Project64 that let a malicious ROM escape the emulator. So while unlikely, it’s certainly possible.
It’s kind of in the word distribution, no? Distros package and … distribute software.
Larger distros usually do quite a bit of kernel work as well, and they often include bugfixes or other changes in their kernel that aren’t in mainline or stable. Enterprise-grade distributions often backport hardware support from newer kernels into their older kernels. But even distros with close-to-latest kernels like Tumbleweed or Fedora do this to a certain extent. This isn’t limited to the kernel and often extends to many other packages.
They also do a lot of (automated) testing, just look at openQA for example. That’s a big part of the reason why Tumbleweed (relatively) rarely breaks. If all they did was collect an up-to-date version of every package they want to ship, it’d probably be permanently broken.
Also, saying they “just” update the desktop environment doesn’t do it justice. DEs like KDE and GNOME are a lot more than just something that draws application windows on your screen. They come with userspace applications and frameworks. They introduce features like vastly improved HDR support (KDE 6.2, usually along with updates to Wayland etc.).
Some of the rolling (Tumbleweed) or more regular (Fedora) releases also push for more technical changes. Fedora dropped X11 by default on their KDE spin with v40, and will likely drop X11 with their default GNOME distro as well, now that GNOME no longer requires X11 components even when running on Wayland. Tumbleweed is actively pushing for great systemd-boot support, and while it’s still experimental it’s already in a decent state (not ready for prime time yet though).
Then, distros also integrate packages to work together. A good example of this is the built-in, enabled-by-default snapshot system of Tumbleweed (you might’ve figured out that I’m a Tumbleweed user by now): it uses snapper to create btrfs snapshots on every zypper (package manager) system update, and not only can you roll back a running system, you can boot older snapshots directly from the grub2 or systemd-boot bootloader. You can replicate this on pretty much any distro (btrfs support is in the kernel, snapper is made by an openSUSE member but available for other distros etc.), but it’s all integrated and ready to go out of the box. You don’t have to configure your package manager to automatically create snapshots with snapper, the btrfs subvolume layout is already set up for you in a way that makes sense, you don’t have to think about how you want to add these snapshots to your bootloader, etc.
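To illustrate what that integration buys you in day-to-day use, here’s a rough sketch. The `snapper` subcommands are real, but treat this as illustrative rather than an exact recipe; the guard is just so it degrades gracefully on systems without snapper:

```shell
# Illustrative: the snapper workflow Tumbleweed wires up out of the box.
# On other distros you'd have to set this integration up yourself.
if command -v snapper >/dev/null 2>&1; then
  sudo snapper list        # snapshots created automatically around zypper updates
  sudo snapper rollback    # point the system back at an older snapshot
else
  echo "snapper not installed on this system"
fi
```

On Tumbleweed the snapshot creation itself is automatic; you only reach for `rollback` (or the bootloader’s snapshot entries) when an update goes wrong.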
So distros or their authors do a lot and their releases can be exciting in a way, but maybe not all of that excitement is directly user-facing.
Exactly, and I’d rather devs focus their time on making sure their Windows version works well via Proton than using that same time for a half-assed native Linux version.
What’s “Google hardware”? Likely just NVIDIA hardware running in Google’s cloud?
I’m coping for RDNA4.
In my experience, even when a game has a native Linux version, the Windows version run via Proton can often be the better choice.
In Tabletop Simulator, I wasn’t able to join my friends’ multiplayer sessions with the native Linux version. No problem with the Windows version via Proton.
The Linux version of Human Fall Flat is outdated and not feature-complete.
There are better examples though. Valheim runs fantastically, aside from a bug where it picks the first instead of the default audio device for sound output on startup. It even supports mods, and r2modman supports Linux as well.
Didn’t have any problems with Spiritfarer either.
Pricing seems to be a lot cheaper than “the big three” (Amazon, Google, Microsoft), but similar to competitors like Backblaze or Wasabi.
Most of their other services are super attractive in terms of price (and also quality, in my experience); this seems more like a “hey, we have S3 too” offering.
Then don’t buy a new iPhone/MacBook/iPad every year.
I personally prefer to buy a device that’s as up-to-date as possible at the time of purchase. I wouldn’t want to buy an expensive device that’s already 2.5 years old, only for it to be replaced by its successor half a year down the line.
Crazy how quickly NVIDIA went up. I wonder if they’ll crash down just as fast should the AI hype either die off or shift to other manufacturers (Intel, AMD etc.) or in-house solutions (e.g. Apple Intelligence).
I think I have a simple function in my `.zshrc` file that updates flatpaks and runs `dnf` or `zypper` depending on what the system uses. This file is synced between machines as part of my dotfiles sync, so I don’t have to install anything separate. The interface of most package managers is stable, so I didn’t have to touch the function. This way I don’t have to deal with a package that’s on a different version in different software repositories (depending on distribution) or manually install and update it.
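A minimal sketch of what such a function could look like. The name `up` and the exact flags are my guesses here, not necessarily the original setup:

```shell
# Hypothetical update function for a .zshrc: flatpaks first,
# then whichever system package manager this machine actually has.
up() {
  command -v flatpak >/dev/null 2>&1 && flatpak update -y
  if command -v zypper >/dev/null 2>&1; then
    sudo zypper dup            # Tumbleweed-style distribution upgrade
  elif command -v dnf >/dev/null 2>&1; then
    sudo dnf upgrade --refresh # Fedora-style upgrade with fresh metadata
  fi
}
```

Because it only probes for the package manager at run time, the same dotfile works unchanged on both Fedora and openSUSE machines.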
But that’s just me, I tend to keep it as simple as possible for maximum portability. I also avoid having too many abstraction layers.