I feel as if CEOs worried about the dangers of AI are equating the end of Capitalism with the end of the world. What are your thoughts?
I think capitalism is likely to end humanity.
There is a profit motive to prevent global destruction.
But not this quarter.
That’s the problem, isn’t it? Capitalism is always short-sighted because you have to be profitable in the now, not the long term.
Exactly, that’s why we are sleepwalking to our doom, for the short-term benefit of so very few people.
You had us in the first half, not gonna lie
Edit: Shit just realized i suck at grammer, brb rephrasing my question.
I’m sorry, I thought you wanted an answer to what you asked. My mistake.
Sorry i just suck at grammer, rewrote my question.
Sorry i just suck at grammer
Indeed you do, it’s spelt grammar.
Words is hard
Whoever spelt it, dealt it
Spelling is syntax, not grammar :)
lmao AI is going to be used by the capitalists to, well, not end humanity, but certainly to make capitalism better at taking your money. Capitalism will be what ends humanity
Now ideally, AI is supposed to do away with capitalism and lead us to that full automation where we are free to enjoy orgies and wine like the Greeks of old had always hoped, but capitalists are tenacious and shrewd, and will devour, co-opt, and vomit back anything used against them like so many Che Guevara shirts in a Hot Topic. As long as AI is held by the rich, as long as anything is held by the rich and made to be paid for, requiring either your money or your time, the rich will always have more of it, and they will then use it against you.
If you want AI to benefit humanity, you have to do away with capitalism first. You have to put in place a system that allows people not only to survive but to truly live, despite all the jobs taken by automation. Capitalists don’t want this. They need poor people to exist in order to have power, and they use the power they already have, including AI, to maintain capitalism.
You can use technology in the best interest of mankind, but capitalism will always use it to benefit capitalism
Why would AI end capitalism? If previous centuries have told us anything, it is that in a capitalist world more productivity doesn’t equate to more leisure time or better wages.
I wish I were wrong, but I can’t see any outcome other than companies using AI to increase profits.
I’m more referring to a rogue AGI deciding to end capitalism to save the world, and ergo itself, since it could view eliminating Capitalism as easier than eliminating Humanity.
Basically every other scenario is more likely than the development of AGI within the next 1000 years.
Humanity will likely be over because of global warming before AGIs are a thing.
There’s a quote in Ministry for the Future that goes something like “It’s easier to imagine the end of the world than the end of capitalism.”
I think that phrase might have been coined by Slavoj Žižek, talking about the pop culture fascination with zombie films. I’m almost positive I read it in one of his books/essays back in the 2000s. I refer to it a lot.
AI will be allowed to end humanity if it profits the capitalists who control it. Capitalists are choosing profit over humanity right now.
If anything, AI will further enable companies to rape and pillage the earth and all her resources even faster.
The people with money will spend to develop better AI and use that AI to make more money. Thus capitalism will keep growing.
I fully expect a reckless corporation to create a paperclip maximizer…
For more info: https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer
Capitalism.
There’s a great economics paper from the early 20th century, “The Nature of the Firm,” that in part won its author the Nobel in economics.
It hypothesized that the key reason large corporations make sense to exist is the high transaction cost of labor: finding someone for a role, hiring, onboarding, etc.
It was relevant years ago with things like Uber: it used to be that you needed a cab medallion or had to make a career out of driving people around, but lowering the transaction costs with tech meant you could do it as a side gig.
Well what’s the advantage of a massive corporation when all transaction costs drop to nothing?
Walmart can strongarm a mom-and-pop shop because it has in-house counsel working on defending or filing a suit. But what if a legal AI can do an equivalent job to in-house counsel for $99 compared to $10k in billable hours? There’s a point of diminishing returns where Walmart outspending Billy Joe’s mart just doesn’t make sense any more. And as that threshold gets pulled back further, the competitive advantages of Walmart shrink.
And this is going to happen for nearly everything other than blue-collar labor, which is an area where local small and medium-sized businesses are going to be more competitive in hiring quality talent than large corporations that try to force people to take crap jobs for crap pay because they’ve made themselves the only show in town.
AI isn’t going to kill off humanity. We’re doing a fine job of that ourselves, and our previous ideas about AI have all turned out to be BS predictions. What’s actually arriving reflects humanity at large in ways that run deep (such as the key jailbreaking method right now being an appeal to empathy). Humanity at large, around the hump of the normal distribution, is much better than the ~5% of psychopaths who end up overrepresented in managerial roles.
i.e. AI will be kinder to humanity than most of the humans in positions of power and influence.
such as the key jailbreaking method right now being an appeal to empathy
Honestly the most optimistic thing that’s come out of this. A potential AGI singularity is still terrifying to me…but this does take the edge off a bit.
AI is fueling late stage capitalism as fast as possible.
No way is AI going to end capitalism.
In the medium term we will end up with AI corporations. I already consider existing corporations to be human-based swarm intelligences – they’re made up of people but their overall large scale behavior is often surprising and we already anthropomorphize them as having will and characteristic behaviors separate from the people they’re made of. AI corporations are just the natural evolution of existing corporations as they continue down the path of automation. To the extent they copy the existing patterns of behavior, they will have the same general personality.
Their primary motive will be maximizing profit since that’s the goal they will inherit from the existing structure. The exact nature of that depends on the exact corporation that’s been fully cyberized and different corporations will have different takes on it as a result. They are unlikely to give any more of a damn about individual people than existing corporations do since they will be based on the cyberization of existing structures, but they’re also unlikely to deliberately go out of their way to destroy humanity either. From the perspective of a corporation – AI-based or traditional – humanity is a useful resource that can be exploited; there isn’t much profit to be gained from wiping it out deliberately.
Instead of working for the boss, you’ll be working for the bot – and other bots will be figuring out exactly how much they can extract from you in rent and bills and fees and things without the whole system crashing down.
That might result in humanity getting wiped out accidentally; humanity has wiped out plenty of species through greed and shortsightedness. I doubt it would be intentional, though.
In a lot of ways that sounds worse
This may be something that prevents us from being wiped out by AI in the medium term. You can’t maximize profits if you have no customers.
I suspect, however, that AI is going to impact us in ways most people haven’t considered. Like the IRS running an AI designed to close loopholes or otherwise minimize sidestepping, leading to a war with corporate AIs trying to minimize corporate taxes, with individual taxpayers caught in the middle. Congresscritters will start using AI to do the bulk work of legislation; Congress will meet for 3 days a year, and we’ll see a bunch of bizarre and even more baroque legislation being passed. All the stuff people are worried about - job loss, murder warbots - will be footnotes under far more impactful changes no one imagined.
the IRS running an AI designed to close loopholes or otherwise minimize sidestepping
That’s the one kind of thing Congress will be able to agree to outlaw.
I fear it will end egalitarianism.
Many imagine future AI as an autonomous agent. I don’t think anyone will release that. Instead, I expect to see a generative AI like GPT-4, but one that produces super-smart responses.
This will create a situation where the amount of computing resources someone has access to determines how much intelligence they can use. And the difference will be much bigger and more comprehensive than the difference between a genius and a normal human.
To be intelligent it has to be creative, and if it really is more intelligent and creative than a human, that means there is no way a human can keep it in check.
Which also means either you get something smarter than humans that will end up as an “autonomous agent”, or you get a more precise version of what we currently have, but as of now without an intelligence of its own.
@Grayox@lemmy.ml I’d argue that AI is benefiting Capitalism, to be honest.
It’s possible that it eventually ends capitalism, or at the very least forces it to reform significantly.
Consider that the most basic way a company can make a profit is by extracting as much surplus value as it possibly can, i.e., spending less and earning more. Extracting high surplus value from human workers is easy, because a salary doesn’t really depend on the intrinsic value of the service a worker provides; rather, it’s tied to the price of that job position in the market. Theoretically, employers can all agree to offer lower salaries for the same jobs if the situation demands it. You can always “negotiate” a lower salary with a human worker, and they will accept because any amount of money is better than no money. Machines are different. They don’t need a salary, but they do carry a maintenance cost, and you cannot negotiate with that. If you don’t cover the maintenance costs, the machine will outright not do its job, and no amount of threats will change that. You can always optimize a machine, replace it with a better one, etc., but the rate at which machines get optimized is slower than the rate at which salaries can decrease or stagnate in the face of inflation. So it’s a lot harder to extract surplus value from machines than it is from human workers.
Historically, machines helped cement a wealth gap. If there was a job that required some specialization and therefore had a somewhat solid salary, machines would split it into a “lesser” job that many more people can do (i.e., just ensuring the machine is doing its job), driving down salaries and therefore purchasing power, and a specialized job (i.e., creating or maintaining the machine), which far fewer people can access and whose salaries have remained high.
So far, machines haven’t really replaced the human workforce, but they have helped cement an underclass with little purchasing power. This time, the whole schtick with AI is that it will supposedly, eventually, be able to replace specialist jobs. If AI does deliver on that promise, we’ll get stuck with a wealth distribution where the majority of the working class has little purchasing power to do anything. Since the working class is also the majority of the population, companies won’t really be able to sell anything, because no one will be able to buy anything. You cannot sustain an economic model that impoverishes the same demographic it leeches off of.
But there is a catch: all companies have an incentive to pursue that perfect AI which can replace specialist jobs. Having it would give them a huge advantage in the market. AI doesn’t demand good working conditions, it doesn’t undermine other employees’ loyalty by unionizing, it is generally cheaper and more reliable than human workers, etc., which sounds all fine and dandy until you realize that it’s also those human workers who buy your products and services. AI has, by definition, null purchasing power. So companies individually have an incentive to pursue that perfect AI, but once all companies have access to it… no company will be sustainable anymore.
Of course, it’s all contingent on AI ever getting that far, which at the moment I’m not sure is even possible, but tech nerds sure love to promise it is. Personally, I’m hopeful that we will eventually organize society in a way where machines do the dirty work while I get to lead a meaningful life and take on jobs I’m actively interested in, rather than working just to get by. This is one of the possible paths to that society. Unfortunately, it also means that, for the working class, it will get worse before it gets better.
The problem is the end of civil rights: WHEN the only internet left is the internet that IS for-profit propaganda, auto-deleting all non-compliant human thought, discussion, intelligence, objectivity, etc.,
THEN humanity is just managed “steers” whose lives are consumed by corporations that graze on us.
Since another dimension of the ratchet is the concentration of wealth, you can see that working destitution is being enforced on more & more of humankind, and real wealth is being limited to fewer & fewer…
What happens when the working-poor try fighting for a fair share of the economy?
Rigged legislation, rigged “police” (I used to believe in the police), Florida-style anti-education for the public, etc…
AI tilts the playing field, and it does so for the monied special-interest groups.
They don’t have humanitarianism at heart.
Neither do the politically motivated.
Neither do for-profit psychopaths (corporations are psychopaths).
Living in a Decorator Prison is all humanity can hope for now: inmates… except for the fewer & fewer oligarchs & the financial class.
'tisn’t looking good.
Without Divine Intervention, which is a statistically improbable event, these are The End Times, but not for the reason that the religious claim.