• 3 Posts
  • 118 Comments
Joined 1 year ago
Cake day: June 1st, 2023

  • Whataboutism? Really? That’s the game we’re playing?

    Sure, okay, I’ll bite.

    Edward Snowden: He’s a hero, no doubt in my mind. But to the point here: no one has gone after him since he left the US. Formal extradition requests have been made and turned down, and once he was on foreign soil the US respected Russian sovereignty.

    Julian Assange: Okay, personally I find Assange to be a piece of shit, but that aside, the extradition process has been followed legally.

    Chelsea Manning: Broke the law. And while her initial imprisonment situation was absolutely concerning, it was legal. The legal process was followed, and the sentence given was far short of the maximum. Her sentence was commuted by a sitting president. No foreign governments were involved, so no sovereignty was violated.

    Drake and Binney: Always were on US soil. No foreign involvement whatsoever. They were raided, and Drake was charged with crimes. He received probation and community service. Once again, the legal process was followed and no foreign sovereignty was violated.

    Boeing Whistleblowers: What the fuck is this argument? You think the US is happy that one of its biggest military manufacturers and transportation providers has serious quality issues? You think the US is taking action against the whistleblowers? Be serious.

    Basically: you’re saying the US charges people who violate the laws around information handling as criminals. Yes, that’s true. Now, I personally am sympathetic to most of these cases. I assume you are too. Whistleblowers should be better protected, but at the same time some information, like the names and personal information of government assets abroad, reasonably should be protected. It’s a delicate balance, and one I think the US could greatly improve.

    However, these are not similar to the cases in question. The cases in question are actions by governments on foreign soil or against US citizens. This is an enormous violation of sovereignty, legality, and due process. That’s the issue at hand.

  • Yeah I’m with you on this. Even from a pure science fiction perspective there’s just no way the experience of consciousness “transfers” by any currently understood science.

    Just like moving a computer file across the Internet, the result would be a copy. That wouldn’t really be noticeable or impactful to the copy, or to the people who know you and would interact with the copy, but it would make a hell of a lot of difference for the person going in. Great if you’re dying and want to do what you can (The Culture book series covers this possibility quite well), but otherwise small comfort.

    Best case scenario is “The Prestige”, but with a much quicker and cleaner death.

    And if someone slaps “quantum entanglement” on the table like that is a real answer for anything, imma not even bother.

  • Hard disagree on them being the same thing. LLMs are an entirely different beast from traditional machine learning models. The architecture and logic are worlds apart.

    Machine Learning models are “just” statistics. Powerful, yes, and with tons of useful applications, but really just statistics, generally using 1 to 10 input variables in useful models to predict a handful of other variables.
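    To make that “just statistics” point concrete, here’s a minimal sketch of a traditional ML model: an ordinary least-squares fit with three input variables. All the numbers are made up for illustration; nothing here comes from a real dataset.

```python
import numpy as np

# Toy dataset: 20 samples, 3 input variables, 1 target.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
true_w = np.array([2.0, -1.0, 0.5])   # the "real" relationship
y = X @ true_w                        # noiseless targets

# "Training" is a closed-form statistical fit -- no neural nets needed.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
```

    With clean data, the fitted weights recover the underlying relationship exactly, which is the whole appeal: small, interpretable, and cheap to run.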

    LLMs are an entirely different thing, built using word vector matrices with hundreds or even thousands of variables, which are then fed into dozens or hundreds of layers of algorithms that each modify the matrix slightly, adding context and nudging the word vectors towards new outcomes.
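    A heavily simplified sketch of that “each layer nudges the vector” idea, using random matrices rather than anything trained (the dimensions and layer count are made up; real transformer blocks are far more elaborate):

```python
import numpy as np

# A toy "word vector" passing through a stack of layers.
rng = np.random.default_rng(1)
dim = 8
vec = rng.normal(size=dim)

# Each "layer" applies a small residual update to the vector,
# loosely in the spirit of transformer blocks: the vector keeps
# its identity but gets nudged by context at every step.
layers = [rng.normal(scale=0.1, size=(dim, dim)) for _ in range(12)]
for W in layers:
    vec = vec + np.tanh(W @ vec)
```

    The point is the shape of the computation, not the numbers: the same vector flows through many layers, each one shifting it slightly toward a new meaning.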

    Think of it like this: a word is given a massive chain of numbers that represents both the word and the “thoughts” associated with it, like the subject, tense, location, etc. This lets the model do math like: king - man + woman ≈ queen.
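    That vector arithmetic can be demonstrated with hand-made toy vectors (real models use hundreds or thousands of dimensions learned from data; these four-dimensional vectors are invented purely for illustration):

```python
import numpy as np

# Made-up 4-d "word vectors": dimensions loosely encode
# royalty, person-ness, femininity, food-ness.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "man":   np.array([0.1, 0.8, 0.1, 0.0]),
    "woman": np.array([0.1, 0.8, 0.9, 0.0]),
    "queen": np.array([0.9, 0.8, 0.9, 0.0]),
    "pizza": np.array([0.0, 0.1, 0.2, 0.9]),
}

def nearest(v):
    # Return the known word whose vector is most similar (cosine).
    def sim(word):
        u = vecs[word]
        return float(v @ u) / (np.linalg.norm(v) * np.linalg.norm(u) + 1e-9)
    return max(vecs, key=sim)

# king - man + woman lands closest to queen.
result = nearest(vecs["king"] - vecs["man"] + vecs["woman"])
```

    Subtracting “man” strips the masculine component, adding “woman” puts the feminine one in, and the royalty component carries through unchanged.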

    The only thing the two have in common is that the computer gives you new insights.


  • You’re talking about two very different technologies, though both are confusingly called “AI” by overzealous marketing departments. The basic language-recognition and regression-model algorithms they ship today are “Machine Learning”, and fairly simple machine learning at that. This is generally the kind of thing we can run on simple CPUs in realtime, so long as the model is optimized and pre-trained. What we’re talking about here is a Large Language Model, a form of neural network, the kind of thing that generally brings datacenter GPUs to their knees: billions of parameters, with tens of thousands of neurons per layer across dozens or hundreds of sequential layers.

    It sounds like they’ve managed to simplify the network’s complexity and have done some tricks with caching while still keeping fair performance and accuracy. Not earth shaking, but a good trick.