The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

  • 2 Posts
  • 85 Comments
Joined 6 months ago
Cake day: January 12th, 2024


  • So Mint can perform the same role as a tablet

    Yeah, you could argue that Mint allows that laptop to perform the same role as a tablet; it’s used at most for simple image editing, web browsing, and listening to music over the SMB network (streamed from my computer, because hers has practically no storage).

    Without a Linux distro, the other options would be to “perform” as electronic junk or as a virus breeding ground.

    I keep seeing these posts and comments, trying to convince people This Is The Year of The Linux Desktop.

    Drop the strawman. That is neither what the author of the article said, nor what I did.

    The rest of your comment boils down to you noisily beating that strawman to death, and can be safely disregarded as such.


  • To reinforce the author’s views, with my own experience:

    I’ve been using Linux for, like, 20 years? Back then I dual-booted it with XP, and my first two distros (Mandriva and Kurumin) have since been discontinued. I remember LILO.

    So I’m probably a programmer, right? …nope, my degrees are in Linguistics and Chemistry. And Linux didn’t make me into a programmer either; the most I can do is pull together a ten-line bash script with some web searching.

    So this “Linux is for programmers” myth didn’t even apply to the 00s, let alone now.

    You need a minimum of 8GB of RAM and a fairly recent CPU to do any kind of professional work at a non-jittery pace [in Windows]. This means that if you want to have a secondary PC or laptop, you’ll need to pay a premium for that too.

    Relevant detail: Microsoft’s obsession with generative models, plus its eagerness to shove its wares down your throat, will likely make this worse. (You don’t use Copilot? Or Recall? Who cares? They’ll be installed by default, running in the background~)

    Linux, on the other hand, can easily boot up on a 10-year-old laptop with just 2GB of RAM, and work fine. This makes it the perfect OS for my secondary devices that I can carry places without worrying about accidental damage.

    My mum is using a fossil like this. It has 4GB or so; it’s a bit slow, but it works with an up-to-date Mint, even if it wouldn’t with Windows 10.

    Sure, you can delay an update [in Windows], but it’s just for five weeks.

    I gave the link a check… what a pain. For reference, in Linux Mint (MATE edition) it’s a single setting: you click a button and that’s it. It’s probably the same deal in other desktop environments.


  • Yeah, it’s actually good. People use it even for trivial stuff nowadays; and you don’t need a Pix key to send money, only to receive it. (And as long as your bank allows you to check your account through an actual computer, you don’t need a cell phone either.)

    Perhaps the only flaw is shared with the Asian QR codes: scams are a bit of a problem. For example, you could tell someone that the transaction will be for one value, then generate a code demanding a bigger one. But I feel like that’s less an issue with the system and more with the customer, given that the system shows you who you’re sending money to, and how much, before confirmation.

    I’m not informed on Tikkie and Klarna, besides one being Dutch and the other Swedish. How do they work?


  • Brazil ended up with a third system: Pix. It boils down to the following:

    • The money receiver sends the payer either a “key” or a QR code.
    • The payer opens their bank’s app and uses it to either paste the key or scan the QR code.
    • The payer sets the amount, if the code is not dynamic (more on that below).
    • The payer confirms the transaction, and an electronic receipt is issued.

    The “key” in question can be your cell phone number, your natural/legal person registry number (CPF/CNPJ), your e-mail, or even a random number. You can have up to five of them.

    Regarding dynamic codes: it’s also possible to generate a key or QR code that applies to a single transaction, with the value to be paid already included.

    Frankly the system surprised me. It’s actually good and practical; and that’s coming from someone who’s highly suspicious of anything coming from the federal government, and who hates cell phones. [insert old man screaming at clouds meme]


  • Do you mind if I address this comment alongside your other reply? Both are directly connected.

    I was about to disagree, but that’s actually really interesting. Could you expand on that?

    If you want to lie without getting caught, your public submission should have neither the hallucinations nor the stylistic issues associated with “made by AI”. To do so, you need to consistently review the output of the generator (LLM, diffusion model, etc.) and manually fix it.

    In other words, to lie without getting caught you’re getting rid of what makes the output problematic in the first place. The problem was never people using AI to do the “heavy lifting” and increase their productivity by 50%; it was people increasing their output by 900%, submitting ten really shitty pics or paragraphs that look a lot like someone else’s instead of one decent and original piece. Those are the ones who’d get caught, because they’re doing what you called “dumb” (and I agree): not proof-reading their output.

    Regarding code, from your other comment: note that some Linux and *BSD projects have banned AI submissions, like Gentoo and NetBSD. I believe it’s the same deal as with news or art.



  • No, that’s a coincidence. I wanted something that starts loud, is easy to recognise, that I don’t mind hearing, and that my neighbours don’t listen to. Wicked World it is.

    Here’s the code by the way, with the echo translated:

    lvxInternetCheck () {
    	# Keep looping while all 5 pings to 8.8.8.8 are lost (i.e. no internet);
    	# re-check every 5 minutes.
    	while [[ $(ping -c 5 8.8.8.8 | grep -o "100% packet loss") == "100% packet loss" ]]; do
    		echo "No internet at $(date +%R)."
    		sleep 300
    	done
    	# Connection is back: log the time and sound the alarm.
    	echo "Internet came back at $(date +%R)."
    	cvlc /[redacted]/08\ -\ Wicked\ World.mp3
    }
    

    It’s dirty but it works. (My functions start with “lvx” to avoid the tiny chance that they might clash with system functions.)


  • A script full of functions for tasks that I perform often, like:

    • Probe every 5 min for an internet connection; play Black Sabbath when there is one. (My internet goes down often.)
    • Create individual tarballs/zips/rars for each subdir (sketched below).
    • Extract all tarballs/zips/rars from a dir. (It detects the format on its own.)
    • Extract all files of a DwarFS file into a dir.
    • Re-encode all vids from a dir.
    • Delete all thumbnail pictures from my user.
    • Find and remove all desktop.ini and thumbs.db files in a dir, recursively (also sketched below).

    My .bashrc then sources that script, so to use those functions I simply open a terminal. And if I ever need to delete my .bashrc and recreate it from scratch, they’re safely stored in my scripts directory.
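
    To give an idea of their shape, here are simplified sketches of two of those functions, plus the sourcing line (the function names, default arguments, and path here are illustrative, not my exact code):

    # Create one tarball per subdirectory of the given dir (defaults to the current dir).
    lvxTarSubdirs () {
    	local d
    	for d in "${1:-.}"/*/; do
    		tar -czf "${d%/}.tar.gz" "$d"
    	done
    }

    # Recursively remove Windows leftovers (desktop.ini, thumbs.db) from a dir.
    lvxPurgeWinJunk () {
    	find "${1:-.}" -type f \( -iname "desktop.ini" -o -iname "thumbs.db" \) -print -delete
    }

    # And the single line in ~/.bashrc that makes them available in every terminal:
    source ~/scripts/lvxFunctions.sh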


  • Think of the available e-books as a common pool, from the point of view of the people buying them: that pool is in perfect condition if all the books there are DRM-free, or ruined if they’re all infested with DRM.

    When someone buys a book with DRM, they’re degrading that pool, as they’re telling sellers “we buy books with DRM just fine”. And yet people keep doing it, because:

    • They had an easier time finding the copy with DRM than a DRM-free one.
    • The copy with DRM might be cheaper.
    • The copy with DRM is bought through services they’re already used to, and registering for another service is a bother.
    • If the copy with DRM stops working, that might be fine, if the buyer only needed the book in the short term.
    • Sharing is not a concern if the person isn’t willing to share in the first place.
    • They might not even know what the deal is, so they don’t perceive the malus of DRM-infested books.

    So in a lot of situations buyers beeline towards the copy with DRM, as it’s individually more convenient, even if it ruins the pool for everyone in the process. That’s why I said that it’s a tragedy of the commons.

    As you correctly highlighted, that model relies on the idea that the buyer is selfish; as in, they won’t care about the overall impact of their actions on others, only about themselves. That is a simplification and needs to be taken with a grain of salt; note, however, that people are more prone to act selfishly if being selfless takes too much effort out of them. And the businesses selling you DRM-infested copies know it - that’s why they enclose you: leaving that enclosure to support DRM-free publishers takes effort.

    I guess in the end we are talking about the same

    I also think so. I’m mostly trying to dig further into the subject.

    So the problem is not really consumer choice, but rather that DRM is allowed in its current form. But I admit that this is a different discussion

    Even being a different discussion, I think that one leads to another.

    Legislating against DRM might be an option, but that’s easier said than done - governments are especially unruly, and they’d rather support corporations than populations.

    Another option, as weird as it might sound, might be to promote that “if buying is not owning, pirating is not stealing” discourse. It tips the scale from the business’ PoV: if people would rather pirate than buy books with DRM, the business might as well offer them DRM-free to increase sales.


  • Does this mean that I need to wait until September to reply? /jk

    I believe that the problem with the neolibs in this case is not the descriptive model (tragedy of the commons) that they’re using to predict a potential issue; it’s the “magical” solution that they prescribe for it, which “happens” to align with their economic ideology while avoiding addressing that:

    • in plenty of cases privatisation worsens the erosion of the common resource, due to the introduction of competition;
    • the model applies especially well to businesses, which behave more like the mythical “rational agent” than individuals do;
    • what you need to solve the issue is simply “agreement”. Going from “agreement” to “privatise it!!!1one” is an insane leap of logic on their part.

    And while all models break if you look too hard at them, I don’t think that this one does here - it explains well why individuals keep buying DRM-stained e-books, even if this ultimately hurts them as a collective, by reducing the availability of DRM-free books.

    (And it isn’t like you can privatise it, as the neolibs would eagerly propose; it is a private market already.)

    I’m reading the book that you recommended (thanks for the rec, by the way!). At a quick glance, it seems to propose self-organisation as a way to solve issues concerning common pool resources; that might work in plenty of cases, but certainly not here, as there’s no way to self-organise the people who buy e-books.

    And frankly, I don’t know a solution either. Perhaps piracy might play an important and positive role? It increases the desirability of DRM-free books (you can’t share the DRM-stained ones), and puts a check on the amount of obnoxiousness and rug-pulling that corporations can subject you to.


  • This is going to be interesting. I’m already thinking about how it would impact my gameplay.

    The main concern for me is the sci packs spoiling. Ideally they should be consumed in situ, so I’d consider moving the research to Gleba and shipping the other sci packs to it. This way, if something does spoil, at least the spoilage is near where I can use it. Probably easier said than done - odds are that other planets have “perks” that would make centralising science there more convenient.

    You’ll also probably want to speed up production as much as possible, since the products inherit spoilage from the ingredients. Direct insertion, speed modules, and upgrading machines ASAP will be essential there - you want to minimise the time between the fruit being harvested and it becoming something that doesn’t spoil (like plastic or science).

    Fruits outputting pulp and seeds also hints at an oil-like problem, where you need to get rid of byproducts that you might not be using: use only the seeds and you’re left with the pulp; use only the pulp and you’re left with the seeds. The FFF hints that you can burn stuff, but that feels wasteful.


  • I also apologise for the tone. That was a knee-jerk reaction on my part; my bad.

    (In my own defence, I’ve been discussing this topic with tech bros, and they rather consistently invert the burden of proof - often enough to evoke Brandolini’s Law. You probably know which “types” I’m talking about.)

    On-topic: given that “smart” is still an internal attribute of the black box, perhaps we could better gauge whether those models are likely to become an existential threat by 1) what they output now, 2) what they might output in the future, and 3) what we [people] might do with it.

    It’s also easier to work with your example productively this way. Here’s a counterpoint, from four pics generated with a prompt asking for an eight-legged dragon:


    The prompt asks for eight legs, and only one of the four pics output them correctly; two ignored the count, and one shows ten legs. That’s 25% accuracy.

    I believe that the key difference between “your” unicorn and “my” eight-legged dragon is in the training data. Unicorns are fictitious but common in popular culture, so there are lots of unicorn pictures to feed the model with; eight-legged dragons, however, are something that I made up, so there’s no direct reference, even if you could logically combine other references (e.g. a spider + a dragon).

    So their output is strongly limited by the training data, and it doesn’t seem to follow any strong logic. What they might output in the future depends on what we feed them; their potential for decision-making is rather weak, as they wouldn’t be able to deal with unpredictable situations - and thus so is their ability to go rogue.

    [Note: I repeated the test with a horse instead of a dragon, within the same chat. The output was slightly less bad, confirming my hypothesis - pics of eight-legged horses do exist, thanks to Sleipnir.]

    Neural nets

    Neural networks are a different can of worms for me, as I think that they’ll outlive LLMs by a huge margin, even if current LLMs use them. However, how they’ll be used is likely to be considerably different.

    For example, current state-of-the-art LLMs are coded with some “semantic” supplementation near the embedding, added almost like an afterthought. However, semantics should play a central role in the design of the transformer - because what matters is not the word itself, but what it conveys.

    That would be considerably closer to a general intelligence than modern LLMs are - because you’d effectively be demoting language processing to input/output, which might as well be swapped for something else, like pictures. In this situation I believe that the output would be far more accurate, and it could theoretically handle novel situations better. Then we could have some concerns about AI being an existential threat - because people would use this AI for decision-making, and it might output decisions that go terribly right, as in that “paperclip factory” thought experiment.

    The fact that we don’t see developments in this direction yet shows, for me, that it’s easier said than done, and we’re really far from that.


  • Chinese room, called it. Just with a dog instead.

    The Chinese room experiment is about the internal process - whether it thinks or not, whether it simulates or knows - with a machine that passes the Turing test. My example clearly does not bother with any of that; what matters here is the ability to perform the goal task.

    As such, no, my example is not the Chinese room. I’m highlighting something else: that the dog will keep making spurious associations, which will affect the outcome. Is this clear now?

    Why this matters: on the topic of existential threat, it’s pretty much irrelevant whether the AI in question “thinks” or not. What matters is its usage in situations where it would “decide” something.

    I have this debate so often, I’m going to try something a bit different. Why don’t we start by laying down how LLMs do work. If you had to explain as full as you could the algorithm we’re talking about, how would you do it?

    Why don’t we do the following instead: I’ll play along with your inversion of the burden of proof once you show how it would be relevant to your implicit claim that AI [will|might] become an existential threat (from “[AI is] Not yet [an existential threat], anyway”)?


    Also worth noting that you outright ignored the main claim outside the spoilers tag.


  • I don’t think that a different training scheme or integrating it with already existing algos would be enough. You’d need a structural change.

    I’ll use a silly illustration for that; it’s somewhat long so I’ll put it inside spoilers. (Feel free to ignore it though - it’s just an illustration, the main claim is outside the spoilers tag.)

    The Mad Librarian and the Good Boi

    Let’s say that you’re a librarian, and you have lots of books to sort out. So you want to teach a dog to sort books for you, starting with sci-fi and geography books.

    So you set up the training environment: a table with a sci-fi book and a geography book. And you give your dog a treat every time he puts the ball over the sci-fi book.

    At the start, the dog doesn’t do it. But then, as you train him, he’s able to do it perfectly. Great! Does the dog now recognise sci-fi and geography books? You test this out by switching the placement of the books and asking the dog to perform the same task; now he’s putting the ball over the geography book. Nope - he doesn’t know how to tell sci-fi and geography books apart; you were “leaking” the answer through the placement of the books.

    Now you repeat the training with random positions for the books. Eventually, after a lot of training, the dog is able to put the ball over the sci-fi book regardless of position. Now the dog recognises sci-fi books, right? Nope - he’s identifying the books by smell.

    To fix that you try again, with new copies of the books. Now he’s identifying them by colour: the geography book has the same grey/purple hue as grass (from a dog’s PoV), while the sci-fi book is black like the neighbour’s cat. The dog would happily put the ball over the neighbour’s cat and ask “where’s my treat, human???”, if the cat allowed it.

    Needs more books. You assemble a plethora of geo and sci-fi books. Since sci-fi covers typically tend to be dark, and the geo books tend to have nature on their covers, the dog is able to place the ball over the sci-fi books 70% of the time. Eventually you give up and say that the 30% error is the dog “hallucinating”.

    We might argue that, by now, the dog should be “just a step away” from recognising books by topic. But we’re just fooling ourselves; the dog is finding a bunch of orthogonal (like the smell) and diagonal (like the colour) patterns. What the dog is doing is still somewhat useful, but it won’t go much past that.

    And even if you and the dog lived forever (denying St. Peter the chance to tell him “you weren’t a good boy. You were the best boy.”) and spent most of your time on that training routine, his little brain wouldn’t be able to create the associations necessary to actually identify a book by its topic - that is, by its content.

    I think that what happens with LLMs is a lot like that. With a key difference - dogs are considerably smarter than even state-of-the-art LLMs, even if they’re unable to speak.

    At the end of the day, LLMs are complex algorithms associating pieces of words based on statistical inference. This is useful, and you might even see some emergent behaviour - but they don’t “know” stuff, and this is trivial to show, as they fail to perform simple logic even with pieces of info that they’re able to reliably output. Different training and/or algorithms might change the info they output, but they won’t “magically” go past that.


  • I’m reading your comment as “[AI is] Not yet [an existential threat], anyway”. If that’s inaccurate, please clarify, OK?

    With that reading in mind: I don’t think that the current developments in machine “learning” lead towards some hypothetical system that would be an existential threat. The closest to that would be the subset of generative models, which looks like a tech dead end - sure, it might see some applications, but I don’t think that it’ll progress much past the current state.

    In other words I believe that the AI that would be an existential threat would be nothing like what’s being created and overhyped now.