Independent thinker valuing discussions grounded in reason, not emotions.
Open to reconsidering my views in light of good-faith counter-arguments, but also willing to defend what’s right, even when it’s unpopular. My goal is to engage in dialogue that seeks truth rather than scoring points.
I don’t see what this has to do with what I’ve said. I haven’t claimed otherwise. I don’t even like the guy and I wouldn’t have voted for him if I lived in the U.S.
I’ve been accused of being both a Trump and a Kamala supporter today, despite not even living in the U.S.
Nope, I’m just a dispassionate observer who doesn’t think in binary. Trump is not all bad, any more than Kamala is all good. Things are nuanced and complicated.
Also no, I don’t think people either lie 100% of the time or not at all, but I also don’t think you get to arbitrarily choose what is a lie and what is not based on which better fits your agenda.
What’s beneficial to himself and the U.S. seems like the only thing he cares about.
“Make America Great Again” is his motto, and the actions he took during his first term mostly aligned with it, even if the outcomes didn’t always turn out as intended. If you don’t believe he means what he says, then I don’t think he should be criticized for the rest of what he says either, since he wouldn’t mean that either.
He says he wants to end the war. Ending support doesn’t do that, and I don’t see how Russia winning would be beneficial to the U.S.
I don’t have TikTok myself, but having seen my friends show videos on their screens, I think it’s absolutely hideous how half of the video is covered with comments and buttons and shit. That would be a total dealbreaker for me.
If it’s a modern, quality bike from a reputable brand, virtually all the parts on it are standardized and used on other bikes as well. A bike brand, such as ‘Specialized,’ mainly refers to the frame. All the other components are the same ones used by other similar brands.
There’s not a single mention of LLMs in my post. Not one.
There’s not a single mention of LLMs in my entire post. The argument I’m making there isn’t even mine. I heard it from Sam Harris way before LLMs were even a thing.
But I wasn’t talking about LLMs.
I didn’t say we need to improve on what we have. We just need to keep making better technology, which we will keep doing unless we destroy ourselves first.
I get what you’re saying, but to me that still just sounds like a timescale issue. I can’t think of a scenario where we’ve improved something so much that there’s absolutely nothing left to improve. With AI, we only need to reach the point where it has human-level cognitive capabilities; from there on, it can improve itself.
You seem to be talking about LLMs now, and I’m not. LLMs being a dead end is perfectly compatible with what I just said: we’d just try a different approach next. Even realising they’re a dead end is yet another step towards AGI.
Then you need to give me an explanation for why it’s a dead end.
A chess engine is intelligent in one thing: playing chess. That narrow intelligence doesn’t translate to any other skill, even if it’s sometimes superhuman at that one task, like a calculator.
Humans, on the other hand, are generally intelligent. We can perform a variety of cognitive tasks that are unrelated to each other, with our only limitations being the physical ones of our “meat computer.”
Artificial General Intelligence (AGI) is the artificial version of human cognitive capabilities, but without the brain’s limitations. It should be noted that AGI is not synonymous with AI. AGI is a type of AI, but not all AI is generally intelligent. The next step from AGI would be Artificial Super Intelligence (ASI), which would not only be generally intelligent but also superhumanly so. This is what the “AI doomers” are concerned about.
Interesting to contrast the votes on this with the other thread about conservatives.
No problem with extremists as long as they’re our extremists.
The fact that the human brain is capable of general intelligence tells us everything we need to know about the processing power needed to run one.
If there were a giant asteroid hurtling toward Earth, set to impact sometime in the next 20 to 200 years, I’d say there’s definitely a need for urgency. A true AGI is somewhat of an asteroid impact in itself.
AGI is inevitable unless:
- General intelligence is substrate-dependent, and what the brain does cannot be replicated in silicon. However, since both the brain and computers are made of matter, and matter obeys the laws of physics, I see no reason to assume this.
- We destroy ourselves before we reach AGI.
Other than that, we will keep incrementally improving our technology, and it’s only a matter of time until we get there. It may take 5 years, 50, or 500, but it seems pretty inevitable to me.
Size is relative. Here’s my “huge” truck parked next to an American one.
This implies that the U.S. stopping weapons deliveries would leave them without weapons and ammunition, which is not the case. It would make things a lot harder for Ukraine and cost them more soldiers and land, but it wouldn’t stop the war. They’d rather die than submit to Russia. Also, the U.S. is not their only weapons supplier.