So he’s finally admitted it. I’m guessing there will be silence from the people that said Israel was definitely going to give Gaza back to Palestine after Hamas was gone.
Yemen was warned not to block the ports of a genocidal neighbour. But they just wouldn’t stop. And now the US has to bomb them. So sad.
Making the seas safe for trade = shutting down trade sanctions on a genocide
The Houthis are blocking Israeli ports in response to genocide.
Why were US ships going to the ports of a country actively committing genocide?
The US is now bombing a sovereign nation to stop trade sanctions on a genocide.
So not a US oil tanker then?
This post is a disaster.
It says “An experiment in 2020s incentivizing the workplace as a dot-com-era adult playground where work also” three times.
I get that it’s embedded or whatever, but reading this post was like having a stroke.
Comments are full of AI experts with wild theories about how ChatGPT works, lmao
If only we had listened to you
No doubt a fascist did this.
Under capitalism your choice is to sell yourself or become destitute. That’s not really a choice, it’s just indirect coercion.
Lmao. I love when Americans just make shit up about Europeans
Not an ELI5, sorry. I’m an AI PhD, and I want to push back against the premises a lil bit.
Why do you assume they don’t know? Like, what do you mean by “know”? Are you talking about conscious subjective experience? Or consistency of output? Or an internal world model?
There’s lots of evidence to indicate they are not conscious, although they can exhibit theory of mind. Eg: https://arxiv.org/pdf/2308.08708.pdf
For consistency of output and internal world models, however, there is mounting evidence to suggest convergence on a shared representation of reality. Eg this paper published 2 days ago: https://arxiv.org/abs/2405.07987
The idea that these models are just stochastic parrots that only probabilistically repeat their training data isn’t correct, although it is often repeated online for some reason.
A bit of evidence that comes to mind is this paper showing models can understand rare English grammatical structures even when those structures are deliberately withheld during training: https://arxiv.org/abs/2403.19827