Man… Anybody remember “Back Orifice”? The late nineties were weird.
If not vanilla Ubuntu, I’d still suggest trying an Ubuntu derivative like Linux Mint or Pop!_OS. Ubuntu has a huge community, so in the event you run into issues it’ll be easier to find fixes for them.
What you’ll find is that Linux distros are roughly grouped by a “family” (my term for it anyway). Anyone can (theoretically, anyway) start from a given kernel and roll their own distro, but most distros are modified versions of a handful of base distros.
The major families at the moment are:
Debian: A classic all-rounder that prioritizes stability over all else. Ubuntu is descended from Debian.
Fedora: Another classic all-rounder. I haven’t used it in a decade, so I won’t say much about it here.
Arch: If Linux nerds were car people, Arch is for the hot rodders. You can tune and control pretty much any aspect of your system. … Not a good 1st distro if you want to just get something going.
There are many others, but these are the major desktop-PC distro families at the moment.
The importance of these families is that techniques that work in one (say) Debian-based distro will tend to work in other Debian-based distros… But not necessarily in distros from other families.
Oh, for sure. I focused on ML in college. My first job was actually coding self-driving vehicles for open-pit copper mining operations! (I taught gigantic earth tillers to execute 3-point turns.)
I’m not in that space anymore, but I do get how LLMs work. Philosophically, I’m inclined to believe that the statistical model encoded in an LLM does model a sort of intelligence. Certainly not consciousness - LLMs don’t have any mechanism I’d accept as agency or any sort of internal “mind” state. But I also think that the common description of “supercharged autocorrect” is overly reductive. It’s useful as a rhetorical counter to the hype cycle, but just as misleading in its own way.
I’ve been playing with chatbots of varying complexity since the 1990s. LLMs are frankly a quantum leap forward. Even GPT-2 was pretty much useless compared to modern models.
All that said… All these models are trained on the best - but mostly worst - data the world has to offer… And if you average a handful of textbooks with an internet-full of self-confident blowhards (like me) - it’s not too surprising that today’s LLMs are all… kinda mid compared to an actual human.
But if you compare the performance of an LLM to the state of the art in natural language comprehension and response… It’s not even close. Going from a suite of single-focus programs, each using keyword recognition and word stem-based parsing to guess what the user wants (Try asking Alexa to “Play ‘Records’ by Weezer” sometime - it can’t because of the keyword collision), to a single program that can respond intelligibly to pretty much any statement, with a limited - but nonzero - chance of getting things right…
This tech is raw and not really production ready, but I’m using a few LLMs in different contexts as assistants… And they work great.
Even though LLMs are not a good replacement for actual human skill - they’re fucking awesome. 😅
What I think is amazing about LLMs is that they are smart enough to be tricked. You can’t talk your way around a password prompt. You either know the password or you don’t.
But LLMs have enough of something intelligence-like that a moderately clever human can talk them into doing pretty much anything.
That’s a wild advancement in artificial intelligence. Something that a human can trick, with nothing more than natural language!
Now… Whether you ought to hand control of your platform over to a mathematical average of internet dialog… That’s another question.
Lots of little quality-of-life things. For instance, in Kotlin types can be marked nullable or not. If you pass a potentially-null value into a non-nullable parameter, the compiler raises an error.
But if you’ve already checked earlier in the scope whether the value is null, the compiler remembers that it’s guaranteed not to be null and won’t blow up.
Same for other type checks. Once you’ve established that a value is a given type, you don’t need to cast it everywhere else. The compiler will remember.
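Here’s a quick sketch of what I mean (the names are made up, but the null checks and smart casts are standard Kotlin behavior):

```kotlin
// Hypothetical example - the names are invented for illustration.
data class Order(val id: Int, val shippingAddress: String?)

// Non-nullable parameter: callers have to prove the address isn't null.
fun shipTo(address: String) {
    println("Shipping to $address")
}

fun process(order: Order) {
    // shipTo(order.shippingAddress)  // compile error: String? is not String

    val address = order.shippingAddress
    if (address != null) {
        // Smart cast: in this branch the compiler treats `address` as String,
        // so no explicit cast, no !!, no NullPointerException.
        shipTo(address)
    }
}

fun describe(value: Any) {
    if (value is String) {
        // Same deal for type checks: after the `is` check, `value` is a String here.
        println("A string of length ${value.length}")
    }
}
```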
Kotlin is Java with all the suck taken out.
It’s a modern, ergonomic language that runs on the JVM and removes as much GD boilerplate as it can.
It’s fantastic.
That one dude still using Delphi is getting screwed.
Also, these salary numbers seem… real low. I get that it’s the median, so maybe a huge number of overseas engineers are pulling the results down, but in my neck of the woods 105K is less than what we pay juniors.
Argh. I hate that argument.
Yes - “Rewriting history” is a Bad Thing - but I’d argue that’s only on ‘main’ (or other shared branches). You should (IMHO) absolutely rewrite your local history pre-push, for exactly the reasons you state.
If you rewrite main’s history and force-push your changes, everybody else is gonna have conflicts. Also - history is important for certain debugging and investigation. Don’t be that guy.
Before you push though… rebasing your work to be easily digestible and have a single(ish) focus per commit is so helpful.
I use a stacked commit tool to help automate rebasing on upstream commits, but you can do it all with git pretty easily.
Anyway. Good on you; Keep the faith; etc etc. :)
For anybody else curious, he’s using KolibriOS.
It’s an open-source, ultra-lightweight OS that is not a fork of Linux… I think.
Neat project though!
less effort than properly setting up sendmail
My brother in *nix, while I agree with your conclusion, that bar is so low you can’t use it for limbo.
Copyright laws are by country. Some countries have treaties to respect each other’s copyrights and some don’t.
So, it is entirely possible to have something considered “public domain” in the USA still be protected in the UK.
…But given the relative economic weight of the two countries, simply banning the export of your locally-infringing Peter Pan/Steamboat Willie slash fic novel would be pretty easy.
I know a guy who grew up occasionally homeless. He has ended up as a well paid tech manager and his approach is that his family can usually just afford the things they want, so instead of buying stress gifts the last month of each year, his family picks a charitable cause to donate time and money to instead.
They’ve bought goats for third-world families; paid for education, transportation and home construction; fed the hungry and clothed the naked.
He’s a cool guy.
Sorry, I missed one more critical detail there… This game was in space! Played on a 2D, wraparound surface, with a top-down perspective, but it was definitely in space.
The fighters were fast and cheap but weak and could only shoot lasers.
The bombers were slower but tougher and could fire missiles. (Missiles could also be scripted, come to think of it. And if you made them stop, they turned into mines)
The fleet ships could manufacture other ships. You only had a single fleet ship at the start, but as time went on, you could build more. …if you hadn’t spent all your resources on building fighters and bombers.
Most obscure videogames I ever played:
A 3D, first-person Pac-Man clone that I played on a 286 MS-DOS laptop in the nineties. I don’t remember its name and I’ve never seen it since.
A programming game from the early 2000s called something like Fleet Commander. (But none of the many games named something like Fleet Commander that I can currently find online are it.) This game had a VB-inspired, event driven programming language. You used it to command fighters, bombers and fleet command ships. Each ship had its own AI script it would execute.
Hephaestus or Athena by career, but my heart belongs to Artemis.
If you’re suffering from depression, look into Transcranial Magnetic Stimulation (TMS). After over a decade on SSRIs and other meds had failed, it turned my life around in six months. Literally life saving.
The effectiveness is proven (at much better rates than SSRIs), but the exact mechanism is under study.
But… There was a recent study that suggested that many cases of depression are caused by misordered neuron firing, where the emotional center of the brain fires before the “imagine the future” bit finishes firing. Normally, when a healthy brain imagines a future state, the emotional center fires in response to our anticipated feeling. (Imagination: We’re going to the movies. Emotions: FUN) But in a depressed brain, the emotional core fires immediately, resulting in the current, crappy mood being applied to every imagined future. (Emotions: Everything is shit. Imagination: We’re going to the movies?)
TMS may work as well as it does because one of the things it does is increase neuroplasticity, allowing the brain to correctly order the firing of our emotional response to imagined futures.
Anyway - TMS is right at the edges of our understanding of treating depression, but it really does work for a supermajority of patients.
For me, I went from having literally lost all emotions and being essentially dead (and being willing to die), to feeling… normal. I haven’t had a major depressive episode in the two years since. It’s been liberating.
I don’t trust anyone. I have a total of two friends.
Elysium.
Ok, so the resource allocation of the Elysium/Earth society is completely broken and the station-dwelling oligarchs sucked. Agreed.
But the end of the movie makes the computer system unable to differentiate between the handful of people up on the station and the unwashed masses on the Earth’s surface. There are not enough resources to go around in Elysium. All that medicine and food from the orbital bastards is gonna run out in about ten minutes, and then the last bits of society will finish collapsing. Any hope of ever rebuilding a functioning society ends about a week after the end of that movie.
Not GP, but I’ve always called this Toad in the hole. Western USA.
Yup. Zorin’s another great Debian-based distro. I’ve been running it on my laptop for awhile now and I’m a fan.