Yes, because like I said, nothing is ever perfect. There can always be a billion little things affecting each and every detection.
A better statement would be “only one false detection out of 10 million”
Ok, some context here from someone who built and worked with this kind of tech for a while.
Twins are no issue. I’m not even joking, we tried for multiple months in a live test environment to get the system to trip over itself, but it just wouldn’t. Each twin was detected perfectly every time. In fact, I myself could only tell them apart by their clothes. They had very different styles.
The reality with this tech is that, just like everything else, it can’t be perfect (at least not yet). For all the false detections you hear about, there have been millions upon millions of correct ones.
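To put that ratio in perspective, here's a minimal back-of-the-envelope sketch. The rate comes from the "1 in 10 million" figure quoted above; the daily detection volume is a made-up number purely for illustration:

```python
# Base-rate illustration with an assumed deployment volume.
false_rate = 1 / 10_000_000        # "one false detection out of 10 million" (from the comment above)
detections_per_day = 50_000_000    # hypothetical volume, NOT a real figure

expected_false_per_day = false_rate * detections_per_day
print(expected_false_per_day)  # 5.0 — a handful of false hits against ~50M correct ones
```

The point is that even a tiny error rate produces a steady trickle of false detections at scale, which is exactly why the absolute "never wrong" framing misleads.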
That just sounds like the Internet in a nutshell for various topics.
It is very good for boilerplate code
Personally I find all LLMs in general not that great at writing larger blocks of code. It’s fine for smaller stuff, but the more you expect out of it the more it’ll get wrong.
I find they work best with existing stuff that you provide. Like “make this block of code more efficient” or “rewrite this function to do X”.
Bad take. Is the first version of your code the one that you deliver or push upstream?
LLMs can give great starting points, I use multiple LLMs each for various reasons. Usually to clean up something I wrote (too lazy or too busy/stressed to do manually), find a problem with the logic, or maybe even brainstorm ideas.
I rarely ever use it to generate blocks of code like asking it to generate “a method that takes X inputs and does Y operations, and returns Z value”. I find that those kinds of results are often vastly wrong or just done in a way that doesn’t fit with other things I’m doing.
It’s incredibly easy, and in fact desirable, to not hear anything in that voice.
Go vegan
I swear vegans are eventually going to outclass religious people for pushing their own beliefs.
Sony is just as bad in their own ways.
I’ve tried a lot of scenarios and languages with various LLMs. The biggest takeaway I have is that AI can get you started on something or help you solve some issues. I’ve generally found that anything beyond a block or two of code becomes useless. The more it generates the more weirdness starts popping up, or it outright hallucinates.
For example, today I used an LLM to help me tighten up an incredibly verbose bit of code. Today was just not my day and I knew there was a cleaner way of doing it, but it just wasn’t coming to me. A quick “make this cleaner: <code>” and I was back to the rest of the code.
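To illustrate the kind of cleanup meant here (this is a hypothetical before/after, not the actual code from that day), the verbose version does the job with a manual loop and temporaries, and the "cleaner" version an LLM might hand back collapses it into one comprehension:

```python
# Verbose version: manual loop, flag-style checks, temporaries.
def active_names_verbose(users):
    names = []
    for user in users:
        if user.get("active"):
            name = user.get("name")
            if name is not None:
                names.append(name)
    return names

# Cleaner rewrite of the same logic as a single comprehension.
def active_names(users):
    return [u["name"] for u in users if u.get("active") and u.get("name") is not None]

users = [
    {"name": "ada", "active": True},
    {"name": "bob", "active": False},
    {"active": True},  # no name key
]
print(active_names(users))  # ['ada']
```

Both behave identically; the win is purely readability, which is exactly the kind of small, verifiable transformation LLMs tend to get right.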
This is what LLMs are currently good for. They are just another tool like tab completion or code linting
Linux is sadly very messy for a sysadmin.
wut?
CUDA and AI stuff is very much Linux focused. They run better and faster on Linux, and the industry puts their efforts into Linux. CNC and 3D printing software is mostly equal between Linux and Windows.
The one thing Linux lacks in this area is CAD support from the big players. FreeCAD and OpenSCAD exist, and they work very well, but they do miss a lot of the polish the proprietary software has. There are proprietary CAD solutions for Linux, but they’re more industry specific and not general purpose like AutoCAD.
Your username, for one.
I would assume the military
Edit: and yes I realize that would be very close to civil war territory, and yes I would assume the Texas State people involved would probably be hanged
Can someone explain how a US state can legally/physically block access to anything from a US federal law enforcement agency?
I genuinely don’t understand what’s going on here.
The LG washer app asked for literally every possible permission. If it could ask for my DNA, it would have.
Yup. And at this point if you want to buy a regular TV, they’re harder to find and often cost more now.
Oh, don’t get me wrong. I had an LG washer and dryer with those “smart” features. Out of curiosity I tried it once. The app wanted every permission short of asking for my DNA and to be my power of attorney. And then once set up it just… barely worked. It was buggier than an ActiveX plugin running on IE5. I nuked the app off my phone, booted the LGs off the network, and didn’t touch the smart crap for the rest of the 5 years I had them.
Here’s a Podcasting Index link to the episode which supports the podcasting 2.0 standard and is open source.
So that they can call it “smart” and charge more for it.
Interesting. Can you elaborate on this?
Edit: downvotes for asking an honest question. People are dumb