What’s the experience so far?
I see it more as a limitation: you don’t want your laptop to warm up (and it shouldn’t under light use), but you want to be able to cool it for the few times it does.
I think they do help, but not nearly as dramatically as the companies earning money from them want us to think. It’s just a tool that helps, the same way a good IDE has helped in the past.
I mean, if LLMs really make software engineering easier, we should also expect Linux apps to improve dramatically. But I’m not betting on it.
Why wouldn’t companies have gotten their data long ago? The Internet Archive is nothing new.
Yeah, when you use Gemini it seems like sometimes it’ll just answer from its training and sometimes it’ll cite a source after a search, but you can’t control which. It’s not like Bing, which will always summarize and link to where it got the information.
I also think Gemini probably uses some sort of knowledge graph under the hood, because it sometimes has very up-to-date information.
I don’t even think it’s correct to say it’s “querying” anything, in the database sense. An LLM predicts the next token with no regard for truth (there’s no notion of factual truth during training to penalize it against, since that’s a very hard thing to measure).
Keep in mind that the same characteristic that lets it learn language also lets it sort of make up facts: it’s just a statistical distribution over the whole context, with a bit of randomness mixed in so it can be “creative.” So stating facts isn’t something LLMs were designed to do; it’s just something we noticed happens as they learn the language.
So it learned from a specific dataset, and whether it learns a given piece of information depends on how well represented that information is in the dataset. Information that appears repeatedly on the web is easy for it to answer, since it was reinforced during training. Information that rarely shows up just isn’t going to be learned consistently.[1]
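The “statistical distribution plus a bit of randomness” idea above is usually implemented as temperature sampling. Here’s a minimal sketch of that mechanism (an illustration of the general technique, not any particular model’s actual code; the toy logits are made up):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick a token index from raw model scores (logits).

    Higher temperature flattens the distribution (more "creative"),
    lower temperature sharpens it toward the single most likely token.
    """
    # Softmax with temperature: turn scores into probabilities.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index according to those probabilities.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy example: scores over a 4-token vocabulary.
toy_logits = [2.0, 1.0, 0.5, -1.0]
print(sample_next_token(toy_logits, temperature=0.8))
```

Note there is nothing in this loop that checks whether the chosen token produces a true statement; “facts” come out only insofar as the training data made them the statistically likely continuation.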
Yeah, there’s a reason this wasn’t done before generative AI. It couldn’t handle anything slightly more specific.
I don’t think it’s awkward, it’s kinda necessary.
Because the people answering questions there are doing it for the ideal of building a knowledge repository. No one is helping you because they think you and your specific problem are important enough to demand their time, especially with very tricky errors.
I’m not entirely sure because I’m not very knowledgeable about CPUs, but it seems this is largely a problem with ARM architectures and their lack of standardization, isn’t it?
I wonder what percentage of desktop users still use Ubuntu nowadays. Seems like there’s no way to get a clear picture, besides DistroWatch, which measures something more like “interest” than actual usage.
There’s also the issue of testing all the packages. They have to make sure all the versions frozen in the repository will work smoothly together.
Shareholders: we can have the same profit without a CEO!
This is only true if you ignore all the other variables, such as another company hiring writers and growing its market share at the expense of the shitty-AI-articles company.
Amazon has a lot of competition in Brazil and the more they make their service worse, the better for the competition. But so far Amazon only raised the bar (with fast deliveries), making all other companies improve their own services.
Not only did the AI predict elements of whale vocalizations already thought to be meaningful, such as clicks, but it also singled out acoustic properties.
This is an amazing use of machine learning models.
Whatever form of entertainment you want to see. TikTok’s algorithm quickly adjusts to show you whatever you like or don’t instantly skip, and it’s very good at it.
The problem is that it’s all superficial content that vanishes from your mind three seconds later, so two hours of scrolling TikTok or Reels feels like two blank hours taken out of your day. And since the algorithm decides what you’ll see, your brain shuts down much like it does when you’re vegetating in front of the TV, watching whatever crap they throw at you.
If more people joined Lemmy you’d see the amount of spam this place would get. Now it’s only a bunch of nerds who will quickly report any spammy activity. It’s a small “friendly” community for now.
Right? People simply expect someone else to pay the bills.
I think only we enthusiasts liked them; for the general public (say, a student) they were pretty bad, because being cheap meant crappy hardware, just like modern Chromebooks. In fact, I’ve been interested lately in a Chromebook that could run Android apps, but I quickly realized a good one costs as much as a good laptop in Brazil.
In case people didn’t know what company he was referring to. /s