
  • No, just because it is reproducible doesn’t mean you are able to (re)produce something that works. With something like Fedora Silverblue you know that this specific composition of packages and their versions has been tested, and that all the other users run this exact composition as well.

    When you roll your own composition, installing whatever stuff you like, you may be the one finding out that there’s some conflict between package A version u.v.w and package B version x.y.z.


  • I encourage you to go to town with whatever crazy setup you come up with.

    I just want to note that the reboot-to-update mechanism also has its positive sides, as ancient as it may seem (and no, this is not Windows-level backwardness, which requires so many reboots yet fails to reap the benefits). Namely, you get atomic updates, hence the name “Fedora Atomic” for example. That means you have no transient periods where your OS is running in an inconsistent state. When you update a traditional distro, the new files/libraries/binaries/kernel modules no longer match what is in RAM, including the currently running kernel. That leads to stuff like the NVIDIA driver / CUDA not working until reboot, running applications failing to load a library they now need, etc. The vast majority of the time this is no huge problem, but in theory the only way of maintaining a system that never runs in a basically undefined state is atomic updates (a minimal sketch of the idea follows below).
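
    The core trick can be illustrated with a tiny sketch: OSTree-style systems assemble the complete new system tree on disk first and then flip a single pointer to it, so there is never a half-applied state. The paths and function below are hypothetical, not the actual rpm-ostree implementation:

    ```python
    import os

    def switch_deployment(current_link: str, new_tree: str) -> None:
        """Atomically repoint the booted tree at a fully assembled new tree.

        rename(2) is atomic on POSIX, so any reader sees either the old
        tree or the new one, never a mix. The running system keeps using
        the old tree until the next boot.
        """
        tmp = current_link + ".tmp"
        if os.path.lexists(tmp):
            os.remove(tmp)                # clear a stale staging link
        os.symlink(new_tree, tmp)         # stage the new pointer
        os.replace(tmp, current_link)     # atomic swap, no in-between state

    # hypothetical paths: the new deployment was built and verified beforehand
    switch_deployment("/sysroot/current", "/sysroot/deploy/fedora-41.2")
    ```

    Real implementations add bootloader entries and rollback on top, but the atomic pointer swap is why there is no inconsistent in-between state.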




  • Research what happened to Upstart, Mir or Unity. It won’t take long until Snap becomes one of them. Somebody at Canonical seems to desperately obsess over having something unique, either as a way to justify Canonical’s existence or even in the hope of making the next big thing. Over all these years they never learned that whatever they do exclusively will always fall short of the joint efforts in the Linux world, because they lack the technical advances, the ability/will to push it for a prolonged time and/or the non-proprietariness. So instead of collaborating like every serious Linux vendor, they’re polluting their distro with half-assed, ever-changing and unwanted experiments. They’re even hijacking apt commands to push their stupid Snap stuff against the user’s intent. With the shenanigans they’re pulling, Ubuntu cannot be relied on, and with that they’re sabotaging their own success and driving away the commercial customers that generate revenue.




  • As far as I understand, in this case opaque binary test data was gradually added to the repository. Also, the built binaries did not correspond 1:1 to the code in the repo due to some build-chain reasons. Stuff like this makes it difficult to spot deliberately placed bugs or backdoors.

    I think some measures could be:

    • establish reproducible builds in CI/CD pipelines
    • ban opaque data from the repository. I read some people justifying this test data being opaque, but that is nonsense. There’s no reason why you couldn’t compress+decompress a lengthy Creative Commons text, or for binary data encrypt that text with a public password, or use a sequence from a pseudo-random number generator with a known seed, or a past compiled binary of this very software, or … or … or … (a minimal sketch follows after this list)
    • establish technologies that make it hard to place integer overflows or deliberately overrun array ends. That would make it a lot harder to plant a misbehaviour in the code without it being obvious enough for others to notice. Rust, linters, Valgrind etc. would be useful for that.
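
    To illustrate the second point, here is a minimal sketch of fully transparent “binary” test data, assuming a seeded PRNG and a single documented byte flip for a corrupt-stream test (all names and sizes are made up):

    ```python
    import random
    import zlib

    def test_blob(seed: int, size: int) -> bytes:
        """Reproducible pseudo-random bytes: anyone can regenerate and
        diff them, so no opaque blob has to live in the repository."""
        return random.Random(seed).randbytes(size)

    # valid compressed input for the happy-path test
    good = zlib.compress(test_blob(seed=42, size=1 << 16))

    # corrupt-stream test case: flip one documented byte
    bad = bytearray(good)
    bad[10] ^= 0xFF
    ```

    Anything a reviewer can regenerate from a few lines like these is auditable; a checked-in blob is not.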

    So I think from a technical perspective there are ways to at least give attackers a hard time when trying to place covert backdoors. The larger problem is likely who does the work, because scalability is just such a hard problem with open source. Ultimately I think we need to come together globally and carry this work on many shoulders. For example, the “Prossimo” project by the Internet Security Research Group (the organisation behind Let’s Encrypt) is working on bringing memory safety to critical projects: https://www.memorysafety.org/ I also sincerely hope the German Sovereign Tech Fund ( https://www.sovereigntechfund.de/ ) takes this incident as a new angle on the outstanding work they’re doing. And ultimately, we need many more such organisations and initiatives, from both private companies and the public sector, to jointly protect the technology that runs our societies.




  • I think one puzzle piece of improvement is Flatpak:

    • It has a verification system, so users can see which apps are packaged by their own developers. For those apps, this entirely eliminates the need to trust a separate maintainer.
    • It targets almost all Linux distributions with a single package. This cuts down the packaging effort for covering the majority of the Linux landscape so much that the number of package maintainers who need to be trusted collapses, in the ideal case to just the developers themselves, as in the first bullet point.
    • It makes use of sandboxing, so a malicious app (in theory) only has access to the things the user gave it permission for.

    In reality there’s obviously a plethora of problems:

    • verified apps are the minority
    • some people don’t like the additional storage needed for runtimes (although the more Flatpaks you use, the more runtimes are shared and the smaller the overall impact gets)
    • a lot of apps do not yet use all the portals, and require the classic full access to the system to work properly (in some cases the user can still remove some permissions if they don’t need certain features of the application; see the sketch below). This is just a question of ongoing development work, and hopefully we reach a point in the near future where a Flatpak app without tied-down permissions raises eyebrows.
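
    As a sketch of tightening permissions by hand today: Flatpak’s real `flatpak override` command can retract access per app. The app ID below is hypothetical, and it is wrapped in Python purely for illustration:

    ```python
    import subprocess

    APP_ID = "org.example.Editor"  # hypothetical app ID

    # Drop blanket home-directory access; file access then has to go
    # through the file-chooser portal instead.
    subprocess.run(
        ["flatpak", "override", "--user", "--nofilesystem=home", APP_ID],
        check=True,
    )

    # If the app has no business talking to the network, cut that off too.
    subprocess.run(
        ["flatpak", "override", "--user", "--unshare=network", APP_ID],
        check=True,
    )
    ```

    Tools like Flatseal offer the same knobs graphically; retracting permissions only avoids breaking the app for features that already go through portals.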


  • Because the seemingly great choice of web browsers in reality boils down to a risky monoculture of Chromium (/its web engine Blink). The only real alternative is Firefox/Gecko. Risky, because the main driver behind Chrome/Chromium (Google) is not acting on behalf of the public interest in a free, open and privacy-preserving internet. Instead they’re working on a privacy-exploiting one that gets locked down using DRM technologies. Google being the vendor of major parts of the internet as well as the browser used to access it makes this a lethal combination. Firefox will definitely exist for as long as Google exists, because it’s their tool to defy claims of a monopoly, but they will do everything to keep it the small and mostly irrelevant “competitor” it currently is. Therefore, stand against Google’s evil play and help Mozilla gain some actual independence and leverage for keeping the internet free (as in freedom), open and privacy-preserving.






  • For me it would be openness, and through that privacy. The dream device would be some mobile convertible with the repairability of a Framework, completely free and open source in both hardware and software: powered by RISC-V, with some future open GPU, and every controller on it (storage/keyboard/touchpad/touchscreen/battery/network/wifi/etc.) being RISC-V and running open firmware as well. Such that for every byte being processed in this device you could pin down the piece of circuit and the line of code that makes it so. In terms of Linux, some future version of GNOME on an immutable distro with Flatpaks that have very tied-down permissions would be a nice future to me.

    And I think overall many aspects of this are moving in that direction. The biggest roadblock is probably a truly open GPU, and after that highly integrated controllers, like those for storage.