

  • The core is immutable, but it comes with flatpak, which writes to a writable location, so you can install and update applications independently of OS updates without having them wiped after an upgrade. You can also install tools like distrobox to get container environments that you can install into and change as much as you like (see the sketch below).
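    For example, the day-to-day flow (a minimal sketch shelling out to the CLI tools; the app ID and container image are just placeholder examples):

```python
import subprocess

# Apps come from flatpak, which installs to a writable location,
# so they survive image-based OS updates. The app ID is a placeholder.
subprocess.run(["flatpak", "install", "-y", "flathub", "org.mozilla.firefox"], check=True)

# distrobox layers a fully mutable container on top of the immutable core.
subprocess.run(["distrobox", "create", "--name", "dev", "--image", "archlinux:latest"], check=True)
subprocess.run(["distrobox", "enter", "dev"], check=True)
```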


  • It seems to be geared toward people who want to constantly maintain their system

    That is where your assumptions are wrong. It is for people who know how their system works and want control over their setup. But after the initial setup, maintenance is no worse than on any other distro - simpler, even, in the longer term. Just update your packages, and very occasionally update a config somewhere manually or run an extra command beforehand (I honestly cannot remember the last time I even needed to do that much…). Far easier than needing to reinstall or fix a whole bunch of broken things after the major system upgrade that happens every few years on other distros.

    People who like to tinker and break their system can do that on any distro. That does not mean it is high maintenance - quite the opposite in fact, as Arch is generally easier to fix when you do break something (which is why it attracts people who like to tinker). But leave it alone and it won't just randomly break every week like so many people seem to think it does.


  • You don't even need a separate partition - just delete the non-home directories and reinstall. pacstrap might even do that for you 🤔 it has been a while since I last needed to reinstall. And most of the time you don't even need a full reinstall: most things on Arch are trivial to fix from a live CD by partially following the install process - in the worst cases it usually amounts to getting a chroot and reinstalling select packages/configs, as in the sketch below.
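    Roughly, the recovery flow from the live ISO looks like this (a hedged sketch; the partition and package names are placeholders for whatever is broken on your system):

```python
import subprocess

# Mount the broken install's root partition (device name is a placeholder).
subprocess.run(["mount", "/dev/sda2", "/mnt"], check=True)

# arch-chroot (from arch-install-scripts) drops you into the installed system,
# where you can reinstall whatever broke - kernel and bootloader as examples.
subprocess.run(["arch-chroot", "/mnt", "pacman", "-S", "--noconfirm", "linux", "grub"], check=True)
```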


  • Unfortunately, I’ve never been able to really daily-drive Linux (and this Arch experiment is no exception). Don’t get me wrong: I love Linux and the idea of having an independent, open-source, and infinitely customizable OS. But unfortunately I professionally rely on some apps that have no viable alternatives on Linux (PowerPoint, Photoshop, Illustrator, Proton Drive).

    There are viable alternatives on Linux, as you mentioned. But none are going to be drop-in replacements for those tools. There are a lot of graphics design tools out there now that are just as powerful as Photoshop for what most people need. But the big issue is that they differ in just enough ways that switching can be a challenge once you are used to the way Photoshop and the other Windows-only tools work. This is just something you are going to have to get over if you want to try Linux longer term.

    But it can be far too much to switch everything at once, on top of a completely new OS. So don't. Instead, start using these alternatives on your Windows install now. Try out different ones (there are a lot, both open and closed source) and give each a decent attempt. Start with smaller side projects so you don't interrupt your main workflows, and slowly, over time, learn and get used to the different ways these other tools work. If you make that effort while on Windows, then the next time you try Linux they won't seem as bad. But if you stick with Windows-only software on Windows, you are going to hit the same issue every time you try to switch.


  • “We had relied and started to rely too much this year on self-checkout in our stores,” Vasos told investors. “We should be using self-checkout as a secondary checkout vehicle, not a primary.”

    That is the key point here. Use them to replace the express lanes, but don't replace all checkout points with them.

    they actually increase labor costs thanks to employees who get taken away from their other duties to help customers deal with the confusing and error prone kiosks

    Now that is bullshit… how can it cost more to have someone spend part of their time helping customers when they have a problem than to have an extra person helping them full time during checkout?

    Still, 60% of consumers said they prefer self-checkout as of 2021, presumably because they’ve never seen Terminator (wake up sheeple).

    WTH… I really don’t understand why this person hates them so much. Seems to have some hidden agenda but I cannot for the life of me tell what it is.




  • Theoretically you could build a male to male contraption from multiple adapters and a cable.

    You already can, as female-to-female USB A couplers exist, letting you plug any two existing USB A to mini cables together to get a male to male device - and there is nothing unsafe about that. So this is not a very good reason to disallow USB C to mini adapters.

    Also you could be providing too much current to a device, however this is specific to the combination of adapter, cable and power supply you use.

    Current is pulled by the device - you cannot supply too much current. A device draws only as much current as it needs, up to what the supply can provide (rough numbers in the sketch at the end of this comment). The only way a device would take more than that is if it is badly designed or faulty - but that is a problem with the device; if the power supply can deliver the power, there are no issues on that side.

    Also, USB C connectors can and do by default operate with USB 2 power - supplying 5V and limiting the current to the USB 2 standard, just like any existing charger with a USB A or mini connector on it. Thus any USB 2 device will only have access to the power allowed by that spec. A handshake over the newer USB protocols is required to unlock the higher voltage/current that some USB C chargers can supply.

    There is nothing unsafe about any of this, barring faulty devices - but if we worried about faulty devices then we would not allow any electronic devices to exist, as any of them could be faulty. USB C to USB mini does not dramatically increase the risk of fire or exploding devices over any device using USB mini or USB C alone.

    The real reason is likely that there is just not much of a market for them, so they are harder to find - but they do exist.
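    To put rough numbers on the "current is pulled" point (a toy sketch; 5V/500mA is the USB 2.0 default, and the device draw figures are made up):

```python
USB2_VBUS_VOLTS = 5.0  # VBUS voltage with no negotiation
USB2_MAX_AMPS = 0.5    # USB 2.0 current limit without a higher-power handshake

def delivered_watts(device_draw_amps: float, supply_limit_amps: float) -> float:
    """The device pulls what it needs, capped by what the supply can give."""
    return USB2_VBUS_VOLTS * min(device_draw_amps, supply_limit_amps)

# A mini-USB device wanting 0.3 A gets the same 1.5 W whether the charger
# behind the adapter can source 0.5 A or 3 A - the extra capability sits unused.
print(delivered_watts(0.3, USB2_MAX_AMPS))  # 1.5
print(delivered_watts(0.3, 3.0))            # 1.5
```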




  • I think this is true of the original definition of the word. But decades of each side calling out the other side's propaganda in a harsh and negative light has left the word with a negative connotation. That results in each side avoiding the word for its own messaging while using it for the opponent's messaging, which further reinforces the negative perception - and after decades of this, a lot of people are left thinking it only ever applies to negative or deceptive messaging. I think this was more impactful in places like the US, where a lot of political figures used the word negatively - such as during the Red Scare, attacking communist ideas by branding them communist propaganda and similar messaging.

    This is shown by various comments in this post assuming it only applies to negative or deceptive messaging. So I would argue the meaning of the word has changed or is still changing - as words naturally do over time through how people use them. That goes some way to answering the OP's question: some places used the word more negatively, giving the people who live there a more negative view of it, while others have not, so people there have a more neutral take on it.




  • Sockets are just streams of bytes - no defined structure to them at all. Dbus is about defining a common interface that everything can speak. That means when writing a program you don't need to learn how every program you want to talk to communicates over its own socket - you can just use a dbus library and query what is available on the system.

    At least that is the idea - IMO its implementation leaves a lot to be desired, but a central event bus is a good idea. It just needs to be easy to integrate with, which I think is where dbus falls down.

    A great example is music player software. If every music player created its own socket with its own API to do basically the same operations, then anything that wants to just play/pause some music would need to understand the differences between all the various music applications. Instead, with a central event bus, each music app integrates with the bus, and each application that wants to talk to a music app only needs to talk to the bus, not understand every single music app out there.
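    This is essentially what the MPRIS dbus interface already does for media players. A minimal sketch using the dbus-python library (the spotify bus name is just an example - any MPRIS-capable player registers the same interface):

```python
import dbus

# Connect to the user's session bus, where desktop apps expose their interfaces.
bus = dbus.SessionBus()

# Every MPRIS player registers as org.mpris.MediaPlayer2.<name> with the same
# object path and interface, so the calling code is identical for all of them.
player = bus.get_object("org.mpris.MediaPlayer2.spotify", "/org/mpris/MediaPlayer2")
iface = dbus.Interface(player, dbus_interface="org.mpris.MediaPlayer2.Player")

iface.PlayPause()  # same call no matter which music app is running
```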



  • (for example) a 250GB drive that does not use the full address space available

    Current drives do not have different sized addressable spaces, and a 256GiB drive does not use the full address space available. If it did, then that would be the maximum size a drive could be. Yet we have 20TB+ drives, and even those are nowhere near the address size limit of storage media (see the numbers at the end of this comment).

    then I suspect the average drive would have just a bit more usable space available by default.

    The platter size might differ to get the same density, and the costs would also likely be different - likely resulting in a similar cost per GB, which is the number that generally matters more.

    My comment re wear-levelling was more to suggest that I didn’t think the unused address space (in my example of 250GB vs 256GiB) could be excused by saying it was taken up by spare sectors.

    There is a lot of unused address space - there is no need to come up with an excuse for it. It does not matter what size the drive is; they all use the same number of bits to address the data.

    Address space is basically free, so not using all of it does not matter. Putting in extra storage to fill that space does cost, however. So there is no real relation between the address space, the storage inside a drive, and the space accessible to the end user - and it makes no difference what units you use to market the drives.

    Instead, the marketing has been incredibly consistent, way back to the early days. Physical storage has essentially always been labeled in SI units. There really is no marketing conspiracy here - that is just the way it has always been done. And why was it picked that way to begin with? Well, that was back in the day when binary units were not as common, and physical storage never really fit the doubling pattern the way components like RAM did. You see all sorts of odd sizes in early storage media, so I guess SI units did not feel out of place.
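    To put numbers on the address space point (a quick sketch assuming LBA48, the 48-bit block addressing scheme SATA drives use, with 512-byte logical blocks):

```python
# Every drive uses the same 48-bit addresses regardless of its capacity.
max_blocks = 2 ** 48
block_size = 512  # bytes per logical block

max_addressable = max_blocks * block_size
print(max_addressable / 1000**5)  # ~144.1 PB addressable in total

# Even a 20 TB drive occupies a tiny fraction of that address space:
drive_bytes = 20 * 1000**4
print(drive_bytes / max_addressable)  # ~0.00014
```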


  • Huh? How does the way a drive's size is measured affect the address space used at all? Drives are broken up into blocks, and each block is addressable. That holds whether you measure capacity in GB or GiB, and does not change the address or block size. Hell, you can have a block size in binary units and the overall capacity in SI units and it does not matter - that is how it is typically done, with typical block sizes being 512 bytes or 4096 bytes (4KiB).

    Or have anything to do with wear leveling at all? If you buy a 250GB SSD then you will be able to write 250GB to it - it will have some hidden capacity for wear-leveling, but that could be 10GB, 20GB, 50GB or any number they want. No relation to unit conversions at all (quick numbers below).
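    Quick numbers showing the unit is irrelevant to addressing (a sketch; the 512-byte block size is just the typical value):

```python
marketed_bytes = 250 * 1000**3  # "250GB" on the box, in SI units
block_size = 512                # typical logical block size

# The number of addressable blocks depends only on the byte count:
print(marketed_bytes // block_size)  # 488281250 blocks

# Expressing the same capacity in binary units changes nothing physical:
print(marketed_bytes / 1024**3)  # ~232.8 GiB - same drive, same blocks

# Any wear-leveling spare area is hidden capacity on top of this, sized
# however the manufacturer likes, unrelated to the GB/GiB distinction.
```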