• 0 Posts
  • 191 Comments
Joined 1 year ago
Cake day: July 7th, 2023

  • Absolutely. A lot of the time the biggest difficulty with researching something is not even knowing the right terms to search for. Asking a few questions can give you a starting point to know where and how to look.

    And the thing is, I personally hate asking questions on forums and the like. I can probably count on one hand the number of times I’ve done it. I’m very good at digging up answers by myself, and I generally do work better with essays than I do with conversations. But my experience should not be seen as the default, and people shouldn’t be shit on for trying to learn through community rather than through textbooks.


  • So, when you create a virtual machine in KVM, you have the ability to attach a Spice or VNC display to the VM.

    Unlike running a VNC server inside the virtual machine, this runs VNC on the host, on a port that you designate (or a randomly assigned one if you don’t), and you can then view the VM by connecting to the host with a VNC client. Spice works exactly the same way, except you use something like the Remote Viewer application to connect to it.
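
    To make that concrete, here’s a rough sketch of the host-side commands involved (the VM name, ports, and ISO path are placeholders I made up, not anything specific to your setup):

    ```bash
    # Attach a host-side VNC display when creating the VM ("myvm" and port 5901 are examples):
    virt-install --name myvm --memory 2048 --vcpus 2 \
        --disk size=20 --cdrom /path/to/install.iso \
        --graphics vnc,listen=0.0.0.0,port=5901

    # Ask libvirt which display an existing VM was given:
    virsh vncdisplay myvm            # e.g. ":1", i.e. TCP port 5901 on the host

    # For a Spice display, connect with Remote Viewer instead:
    remote-viewer spice://myhost:5900
    ```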

    As others have mentioned, the easiest way of handling all of this is Virtual Machine Manager, which integrates its own Spice console and makes everything happen automagically. You can also install Cockpit with the cockpit-machines plugin on the host, which gives you a web interface for controlling virtual machines, much like VMware ESXi. The display handling in Cockpit is pretty rough at the moment, though.

    KVM is a very “build it yourself” virtualization solution. I use it extensively, and I love it, but you’ll need to be prepared for a lot of “oh, KVM doesn’t do that, that’s handled by this program/library/whatever”. It’s definitely not a user-friendly toolkit. If you’re looking for a Workstation Player alternative, you may be better off with something like VirtualBox (although do try Virtual Machine Manager first; it’s really slick, and for your use case it probably solves all the problems I’ve mentioned). If you’re looking for an ESXi alternative, maybe look into Proxmox.


  • I’ve been looking for documentation on this but Google search is now so bad that technical documents are completely hidden behind marketing blurbs or LLM generated rubbish.

    It’s honestly tragic that people feel the need to put these disclaimers. “Just google it” was always a shitty response to people asking legitimate questions (some people learn better from conversational interaction than from just reading an essay), but with the slow death of search engines we’re now experiencing, anyone who yells “just google it” needs to be ejected into the fucking sun.







  • This is really the crazy part; tech is basically operating as a stream of bubbles, endlessly collapsing one into the next in the search for infinite growth. It’s gotten to the point where the bubble collapse barely even seems to matter anymore unless you’re one of the suckers going down with it. The industry as a whole has basically embraced perpetual collapse as its fundamental structure.





  • Get to grips with Docker. OCI containers are the standard method of self-hosting basically everything now, so once you’re comfortable with Docker and compose files, literally anything you could want to host is available as a drop-in component for your system.

    An excellent way of playing around with Docker is to install Dockge. It’s a web UI with some really helpful features. First, it can convert docker run commands into compose files for you (once you start playing around with this, it’ll be clear why that matters), and second, it’s very good at pointing out where and how you’ve made errors in your compose files. But most importantly, unlike Portainer (the most popular Docker UI), it works with the Docker command line rather than trying to replace it. With Dockge you know exactly where all of your files are, and if any part of your setup breaks you can repair it very easily. It also doesn’t have Portainer’s problem of flashing error messages on the screen for 0.3 seconds and then whisking them away; it exposes the entire Docker terminal output, so your debugging process is much, much easier.
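
    As a concrete (made-up) example of the docker run / compose relationship, the compose file below describes the same container as the docker run command in its first comment; the image and port are arbitrary placeholders:

    ```yaml
    # Equivalent to: docker run -d --name whoami -p 8080:80 traefik/whoami
    # (start it from the file's directory with: docker compose up -d)
    services:
      whoami:
        image: traefik/whoami
        container_name: whoami
        ports:
          - "8080:80"
        restart: unless-stopped
    ```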

    You’ll also want to learn about reverse proxies (I recommend Caddy for its unbelievably simple config file; an entire site is three lines). These are really important for serving multiple different services from a single host.
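
    For a sense of scale, a complete Caddyfile for one proxied site really is about three lines (the domain and upstream port here are placeholders):

    ```
    myapp.example.com {
        reverse_proxy localhost:8080
    }
    ```

    Caddy will also obtain and renew the TLS certificate for that domain automatically, which is a big part of why the config can stay that small.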

    For anything that you can’t run in Docker, VMs are an acceptable solution, and LXC containers are a better solution, but one that requires a little more work to get to grips with (fun fact, LXC has its own web UI, which is fantastic, but almost nobody seems to even know it exists). Since you’re already familiar with Linux, you may want to ignore the suggestion to use Proxmox and just set up a server with your preferred flavour and go from there. All of this can be done with any modern Linux distro, so you might as well work in an environment you’re comfortable in.
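
    If you do go the LXC route, a minimal sketch with the classic lxc-* tools (the container name and image choice are arbitrary; this assumes plain LXC rather than LXD/Incus or Proxmox) looks something like:

    ```bash
    # Create a container from the public image server (name/dist/release are examples):
    lxc-create --name mycontainer --template download -- \
        --dist debian --release bookworm --arch amd64

    lxc-start --name mycontainer     # start it
    lxc-attach --name mycontainer    # drop into a shell inside the container
    ```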





  • You are correct, the left hand is a fork bomb. Specifically, it defines and then runs a function named “:”. The function calls itself, pipes the output into another call of itself, and sends the whole pipeline to the background, so every invocation spawns two more and the copies multiply instantly. Technically, I believe the “:” could be any valid name; the creator just picked a colon for aesthetics.
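
    For anyone who wants to see the moving parts without nuking their shell, here’s the same thing spelled out (left entirely as comments on purpose; do not actually run it):

    ```bash
    # The classic one-liner:
    #   :(){ :|:& };:
    #
    # Rewritten with a readable function name:
    #   bomb() {            # define a function called "bomb"
    #       bomb | bomb &   # call itself twice, piped together, in the background
    #   }
    #   bomb                # call it once to kick off the cascade
    ```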



  • We not only have to stop ignoring the problem, we need to be absolutely clear about what the problem is.

    LLMs don’t hallucinate wrong answers. They hallucinate all answers. Some of those answers will happen to be right.

    If this sounds like nitpicking or quibbling over verbiage, it’s not. This is really, really important to understand. LLMs exist within a hallucinatory false reality. They do not have any comprehension of the truth or untruth of what they are saying, and this means that when they say things that are true, they do not understand why those things are true.

    That is the part that’s crucial to understand. A really simple test of this problem is to ask ChatGPT to back up an answer with sources. It fundamentally cannot do it, because it has no ability to actually comprehend and correlate factual information in that way. This means, for example, that AI is incapable of assessing the potential veracity of the information it gives you. A human can say “that’s a little outside of my area of expertise,” but an LLM cannot. It can only be coded with hard blocks in response to certain keywords that cut it off from answering and insert a stock response.

    This distinction, that AI is always hallucinating, is important because of stuff like this:

    But notice how Reid said there was a balance? That’s because a lot of AI researchers don’t actually think hallucinations can be solved. A study out of the National University of Singapore suggested that hallucinations are an inevitable outcome of all large language models. **Just as no person is 100 percent right all the time, neither are these computers.**

    That is some fucking toxic shit right there. Treating the fallibility of LLMs as analogous to the fallibility of humans is a huge, huge false equivalence. Humans can be wrong, but we’re wrong in ways that allow us the capacity to grow and learn. Even when we are wrong about things, we can often learn from how we are wrong. There’s a structure to how humans learn and process information that allows us to interrogate our failures and adjust for them.

    When an LLM is wrong, we just have to force it to keep rolling the dice until it’s right. It cannot explain its reasoning. It cannot provide proof of work. I work in a field where I often have to direct the efforts of people who know more about specific subjects than I do, and part of how you do that is you get people to explain their reasoning, and you go back and forth testing propositions and arguments with them. You say “I want this, what are the specific challenges involved in doing it?” They tell you it’s really hard, you ask them why. They break things down for you, and together you find solutions. With an LLM, if you ask it why something works the way it does, it will commit to the bit and proceed to hallucinate false facts and false premises to support its false answer, because it’s not operating in the same reality you are, nor does it have any conception of reality in the first place.