• 0 Posts
  • 57 Comments
Joined 11 months ago
Cake day: August 21st, 2023




  • Degenerate, 88, 14, the Roman salute, multiple names, the fasces, shaven heads, lightning motifs, runic symbols.

    That’s just what I came up with off the top of my head. The other person is right, and I say we should reclaim every symbol, because those fuckers shouldn’t be allowed to call anything their own, or have anything to rally around or identify each other with. The only symbol I’m aware of that they made is the black sun, which is itself simply the SS symbol repeated around a circle — and the SS symbol is itself an appropriated rune.

    Reclaim every symbol.













  • It suggests to me that AI

    This is a fallacy. Specifically, I think you’re committing the informal fallacy of confusing necessary and sufficient conditions. That is to say, we know that if we can reliably simulate a human brain, then we can make an artificial sophont (this is true by mere definition). However, we have no idea what the minimum hardware requirements are for a sufficiently optimized program that runs a sapient mind. Note: I am setting aside what the definition of sapience is, because if you ask 2 different people you’ll get 20 different answers.

    We shouldn’t take for granted it’s possible.

    I’m pulling from a couple decades of philosophy, conservative estimates of the upper limits of what’s possible, and some decently founded plans for how it’s achievable. Suffice it to say, after immersing myself in these discussions for as long as I have, I’m pretty thoroughly convinced that AI is not only possible but likely.

    The canonical argument goes something like this: if brains are magic, we cannot say if humanlike AI is possible. If brains are not magic, then we know that natural processes can create sapience. Since natural processes can create sapience, it is extraordinarily unlikely that it will prove impossible to create it artificially.

    So with our main premise (AI is possible) cogently established, we need to ask: “since it’s possible, will it be done, and if not, why?” There are a great many advantages to AI, and while there are many risks, the barrier to entry for making progress is shockingly low. We are talking about the potential to create an artificial god, with all the wonders and dangers that implies. It’s like a nuclear weapon if you didn’t need to source the uranium: everyone wants to have one, and no one wants their enemy to decide what it gets used for. So everyone has an incentive to build it (it’s really useful), and everyone has a very powerful disincentive against forbidding the research (there’s no way to stop everyone who wants to build one, so the only people who’d heed a ban are exactly the people most likely to make a friendly AI). So what possible scenario would mean strong general AI (let alone the simpler things that’d replace everyone’s jobs) never gets developed? The answers range from total societal collapse to extinction, all of which are worse than a bad transition to full automation.

    So either AI steals everyone’s job or something worse happens.




  • This is good, and I’m piggybacking so I can add on:

    Get a passport, make friends in another country.

    Vote to slow them down, yell at anyone who tries to say that it’s better to not vote or that Trump would be better for Palestinians.

    If you want to engage in electoral reform, you need to start years in advance.

    If you live in Texas: Vote for Biden and encourage GOP voters to not vote for Trump. If you’re in a place with lots of GOP weirdos, try publicly and loudly watching his rallies at 1.5x speed; I’ve heard it breaks the spell because his cadence gets disrupted. If Texas goes blue, Biden wins, and all the DNC voters suddenly know they can win, which immediately makes future wins more likely.

    If all else fails, remember someone might [Comment Cannot Legally Be Finished], which would solve multiple problems.