To the best of my knowledge, this information only exists in the prompt. The raw LLM has no idea what it is and the APIs serve the raw LLM.
Quickly filtering out a subset of them to prioritize so that we get the most value possible out of the time that humans spend on it.
LLMs cannot:
LLMs can
Semantics aside, they’re very different skills that require different setups to accomplish. Just because counting is an easier task than analysing text for humans doesn’t mean it’s the same for an LLM. You can’t use that as evidence for its inability to do the “harder” tasks.
Sounds to me like a 50% improvement over zero human eyes.
It certainly would be. Thankfully, there are many more than zero human eyes involved in this.
Considering that it’s a language task, that LLMs exist, and the cost, it’s a reasonable assumption. It’d be pretty silly to analyse a bag of words when you have tools you can use with minimal work for much better results. Even sillier to spend over $200 on something that can be run on a decade-old machine in a few hours.
Yeah, this sounds like something I’d find at a diner.
Count yourself lucky. My front burner has become a secondary backburner and I’ve moved on to using a portable cooktop.
Same. I keep thinking back to my time TAing for an intro programming course and getting students who just add random braces until their code compiles. That’s me right now with Rust pointers.
Much faster to skim the contents of an article than a video.
Your comment is a great example of the kind of biases I’m telling everyone to avoid. You misunderstood my initial message, then decided to cling on to that interpretation despite clarifications.
In any case, if you have feedback (e.g. what made the comment unclear, or how you interpreted it), I’d appreciate hearing about it so I can improve my writing. I’m not always aware of the hidden meanings non-autistic people pull out of words that weren’t intended to have any.
https://lemmy.ca/post/28915538/11651615
I’ve rephrased this comment more explicitly and concretely here. Feel free to read through the rest of that thread. I’d rather not repeat myself unless you have something new to add.
Evidence which wasn’t available to the participants of the conversation at the time. With only what we see in the article, there’s no reason to believe that this post she made was racist.
Alright, I found a screenshot of it posted on twitter. Can confirm that this is racist.
It certainly doesn’t. But in the absence of evidence in either direction, I think it’s most reasonable to not assume the worst of people.
Have you seen what she actually wrote, and if yes, can you share? I think we should see the actual message before casting judgement.
Who’s to say this isn’t a real person? Generative AI has been known to sometimes spit out content that’s nearly identical to a piece of its training data.
A picture is worth a thousand words
Not just the mental strain of syntax, but also variable naming, which I find to be a much bigger hurdle. Copilot will nearly always give you a “good enough” name so you can just move on to solve the actual problem. You can always come back to rename it if necessary.
Yep. It’s part of their mating ritual. You can learn more about it at c/fuckcars.
Same. That’s when everyone else goes to sleep and actually leaves you time to focus on your work.