It isn’t hedging on anything. It’s already here, it already works. I run an LLM on my home computer, using open-source code and commodity hardware. I use it for actual real-world problems and it helps me solve them.
At this point the ones who are calling it a “fantasy” are the delusional ones.
By “it’s already here, and it already works,” you mean guessing the next token? That’s not really intelligence in any sense, let alone the classical sense. And any allegedly real-world problem you’re solving with it isn’t a real-world problem; it’s likely a problem you could solve with a text template.
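As a toy illustration of the “guessing the next token” framing: a minimal sketch of a next-token predictor, using a hypothetical bigram counter. This is nothing like a real LLM internally (no neural network, no context beyond one token); it only shows what “predict the most likely next token” means at the simplest possible level.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each token follows each other token."""
    tokens = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def next_token(counts, prev):
    """Greedily 'guess the next token': pick the most frequent follower."""
    if prev not in counts:
        return None
    return counts[prev].most_common(1)[0][0]

# Hypothetical tiny corpus for demonstration only.
corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(next_token(model, "the"))  # "cat" follows "the" twice, "mat" once
```

A real LLM replaces the frequency table with a learned distribution over a huge vocabulary, conditioned on the whole preceding context, which is precisely what makes it more than a text template.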
It works for what I need it to do. I don’t really care what word you use to label what it’s doing; the fact is that it’s doing it.
If you think LLMs could be replaced with a “text template” you are completely clueless.
I’m not sure you understand what the LLM is doing, how support responses have been optimized over the decades, or even how “AI” responses have worked for the past couple of decades. But I’m glad you’ve got an auto-responder that works for you.