
Yuval Noah Harari, a historian, philosopher, and lecturer at the Hebrew University of Jerusalem, has an interesting article on AI in The Economist (“Yuval Noah Harari Argues that AI Has Hacked the Operating System of Human Civilization,” April 28, 2023). He presents a more sophisticated argument on the danger of AI than the usual Luddite scare. A few excerpts:
Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults. …
While to the best of our knowledge all previous [QAnon] drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. …
It is utterly pointless for us to spend time trying to change the declared opinions of an AI bot, while the AI could hone its messages so precisely that it stands a good chance of influencing us.
Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. …
What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? …
If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there. …
Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new AI tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology.
The last bit is certainly not his most interesting point: it reads to me like the very AI-bot propaganda he fears. Such trust in the state reminds me of what New Dealer Rexford Guy Tugwell wrote in a 1932 American Economic Review article:
New industries will not just happen as the automobile industry did; they will have to be foreseen, to be argued for, to seem probably desirable features of the whole economy before they can be entered upon.
We don’t know how close AI will come to human intelligence. Friedrich Hayek, whom Harari may never have heard of, argued that “mind and culture developed concurrently and not successively” (from the epilogue of his Law, Legislation and Liberty; emphasis in the original). The process took a few hundred thousand years, and it is unlikely that artificial minds can advance “in Trump time,” as Peter Navarro would say. Enormous resources will be needed to improve AI as we know it. Training GPT-4 may have cost $100 million, consuming a great deal of computing power and electricity. And the cost increases proportionately faster than the intelligence. (See “Large, Creative AI Models Will Transform Lives and Labour Markets,” The Economist, April 22, 2023.) I think it is doubtful that an artificial mind will ever say like Descartes, “I think, therefore I am” (cogito, ergo sum), except by plagiarizing the French philosopher.
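To give a rough feel for that last claim, here is a minimal back-of-the-envelope sketch, under an assumption that is mine, not The Economist’s: that a model’s measured capability tracks its test loss, and that loss falls as a power law in training compute with a small exponent (a stylized value borrowed from the machine-learning scaling-law literature).

# Illustrative sketch only. Assumes loss ~ C**(-alpha) in training
# compute C, with alpha = 0.05 as a stylized scaling-law exponent.
ALPHA = 0.05

def compute_multiplier(loss_reduction, alpha=ALPHA):
    """Factor by which training compute (roughly, cost) must grow
    to cut loss by the given fraction, under loss ~ C**(-alpha)."""
    return (1.0 - loss_reduction) ** (-1.0 / alpha)

for cut in (0.05, 0.10, 0.20):
    print(f"cut loss by {cut:.0%}: ~{compute_multiplier(cut):.0f}x the compute")
# Prints roughly 3x, 8x, and 87x respectively.

Under that assumed exponent, cutting the loss by a fifth takes nearly ninety times the compute: the sense in which cost increases proportionately faster than the intelligence.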
Here is what I would retain of, or deduce from, Harari’s argument. One can view the intellectual history of mankind as a race to discover the secrets of the universe, including, recently, the attempt to create something similar to intelligence, concurrent with an education race so that the mass of individuals do not fall prey to snake-oil peddlers and tyrants. To the extent that AI does come close to human intelligence or discourse, the question is whether or not humans will by then be intellectually streetwise enough not to be swindled and dominated by robots or by the tyrants who would use them. If the first race is won before the second, the future of mankind would be bleak indeed.
Some 15% of American voters see “solid evidence” that the 2020 election was stolen, although that proportion seems to be decreasing. All over the developed world, even more believe in “social justice,” not to speak of the rest of the world, in the grip of more primitive tribalism. Harari’s idea that humans may fall for AI bots like gobblers fall for hen decoys is intriguing.
The slow but continuous dismissal of classical liberalism over the past century or so, the intellectual darkness that seems to be descending on the 21st century, and the rise of populist leaders, the kings of “democracy,” suggest that the race to create new gods has been gaining more momentum than the race to general education, knowledge, and wisdom. If that is true, a real problem is looming, as Harari fears. However, his apparent solution, to let the state (and its strongmen) control AI, is based on the tragic illusion that the state will protect people against the robots instead of unleashing the robots against disobedient individuals. The risk is certainly much lower if AI is left free and can be shared among individuals, corporations, (decentralized) governments, and other institutions.
READER COMMENTS
MarkW
May 6 2023 at 12:07pm
Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults.
There has never been a supply problem with the mass production of political content. In fact, millions of people log on every day and produce it for free (e.g., in blogs, in YouTube and TikTok videos, in comments sections everywhere, etc.). I see no reason at all to assume that AI-produced content will be more likely to persuade than human-produced content. So this anti-AI argument really doesn’t strike me as more sophisticated. It seems like the ‘Russian disinformation’ panic mixed with portions of Luddism, the ‘precautionary principle’, and a statist preference for government control of…well…just about everything, but especially anything new.
Pierre Lemieux
May 6 2023 at 12:16pm
Mark: You make good points, but there is a matter of numbers. Imagine if 100,000,000 AI bots start arguing online that QAnon or SJWs speak the truth. The availability bias will become much more potent.
MarkW
May 6 2023 at 1:41pm
Imagine if 100,000,000 AI bots start arguing online that QAnon or SJWs speak the truth.
Attention is scarce, not text. What makes you think the output of these 100,000,000 bots would find a host where it was actually read? There’s a stat along the lines of ‘the median number of references to an article in a refereed journal is 1.’ What would we expect the median number of readers to be for each bot item? Many (most?) online publications turned off their comments sections in part because they struggled with being overwhelmed by both trollish human commenters and pre-LLM bots.
The internet is already saturated with dubious factual claims that need to be verified. Many people don’t bother (especially if the claims support their world view), but that is not at all a new problem. I see no reason why people would be more gullible when it comes to bot-generated ‘facts.’ As for opinion, if people can craft more persuasive, more compelling political arguments using LLMs, so what? Though I really doubt LLMs will possess rhetorical superpowers that humans lack (what possible arguments for and against the issues of the day could bots devise that some humans have not already come up with?).
It’s really the same argument as against the ‘Russian disinformation’ panic: what reason did we have to think the Russian government (far from being known as the source of the smoothest, most clever rhetoric) had some kind of evil-genius ability to persuade Americans about elections? And of course the Russians didn’t, and it was BS being promoted for scurrilous domestic political purposes. We should be similarly suspicious of those trying to scare us about ‘disinformation’ when it comes to AI.
Dylan
May 7 2023 at 9:50am
What I think is both interesting and a little scary is the ability to potentially scale one-to-one communication and personalize it in a way that hasn’t been economical until now.
I recently got a “wrong number” text scam and have been playing along because I’ve been curious how it goes and figured that any time spent texting me is time that can’t be spent on another potential victim. At the same time, I’ve been reading up on how the scam works and the fact that many scammers will take weeks or months of chatting to build up rapport and trust before springing the scam. Right now that takes a lot of actual human labor, some of it performed by workers who are basically slaves, if this report from Vice is to be believed. But imagine how much better the scam could run if you had chatbots that not only replaced the human labor but were better than it: chatbots that can remember everything you’ve told them over months and tailor their pitch based on what they’ve learned.
Now, take that same model and apply it to politics and disinformation and the ability for a bot to convincingly pretend to be someone else. I think most of us can think of at least one person we know who seemed to have been taken in by some type of conspiracy theory over the last few years. Think how much more powerful that could be if arguments and “evidence” could be individually tailored to prey on our built-in biases. I’m reminded of how, when George W. talked about all the reasons to invade Iraq, I was strongly against it. But when I’d hear Tony Blair talk about the same thing, the arguments were framed differently (and he had that intelligent and charming British accent!) and all of a sudden I started thinking that there might be something to this whole invasion after all. I think we’re all vulnerable to the right kind of message, and I think we’ll find out soon enough if AI can deliver it.
MarkW
May 7 2023 at 12:54pm
Now, take that same model and apply it to politics and disinformation and the ability for a bot to convincingly pretend to be someone else.
How? Where? I mean, how would a bot even begin to engage in a conversation with me? Bots can’t hang around in public, and, like most people, I don’t answer calls from unknown numbers. If bots started calling in force, presumably absolutely everybody would stop answering.
I’m reminded of how, when George W. talked about all the reasons to invade Iraq, I was strongly against it. But when I’d hear Tony Blair talk about the same thing, the arguments were framed differently (and he had that intelligent and charming British accent!) and all of a sudden I started thinking that there might be something to this whole invasion after all.
But you weren’t influenced by some random dude with an English accent; you were influenced by Tony Blair, a famous, powerful person you knew by reputation. How would a random, English-accented, bot-created, non-existent ‘person’ even begin to influence you? I mean, how would it ever even get your ear?
Dylan
May 7 2023 at 2:35pm
That’s a good question, and the answer is: I don’t know. But I don’t think it is insurmountable. Apparently, a fair number of people not only answer wrong-number texts, they stay engaged long enough to build a relationship and eventually get swindled out of lots of money.
Earlier today I watched a video debate between Bill Gates and Socrates! And it sounded just like what I’d imagine each would say. It doesn’t really matter that I know they are fake; I’m still primed to believe them. At some point in the (probably near) future I won’t be able to tell which argument I heard from the real Bill Gates and which one from the fake. Now, imagine a bot putting a link to something like that on a site you trust, and following it up with some smart-sounding things you agree with, and then doing that for months, and then continuing the discussion in DMs. All of those things have been happening already for years. They’ve been done very poorly, but are somehow still at least marginally successful. AI might give the ability to do it better and at scale.
And, I think our ability to tell the difference is going to rapidly deteriorate. Not only because the models will get better, but also because “legitimate” sources will have no choice but to adopt AI into their processes as well, continuing to blur the lines.
MarkW
May 7 2023 at 5:06pm
It doesn’t really matter that I know they are fake; I’m still primed to believe them. At some point in the (probably near) future I won’t be able to tell which argument I heard from the real Bill Gates and which one from the fake.
I guess I don’t get why you would spend so much time watching a Gates-bot debate a Socrates-bot (or any artificial simulations of thinkers) that you’d then forget what came from the real personages and what was simulation (and then be unhappy you couldn’t remember which was which). I mean, there’s certainly a novelty factor at the moment, but apart from that, what’s the value proposition? And why are you so sure this will become such a common practice (it sure doesn’t sound like a common pastime for ordinary people to me) that it will constitute any kind of threat?
Dylan
May 8 2023 at 7:59am
I didn’t mean that I’d watch a bunch of these videos; I’m pretty sure that will be the only time I watch Bill Gates vs. Socrates. What I meant is, I heard Bill Gates’s voice and image making Bill Gates-like statements, or at least ones that didn’t sound totally wild. In a year or two, if someone comes up to me and says “remember that time Bill Gates said X?” and then quotes something from the debate, I’m just as likely to attribute it to the real Bill Gates as to the fake one.
I’m not talking about AI-generated debates specifically; what I’m saying is that AI-generated content of one type or another will be pervasive and so intertwined with regular content that it will be hard to make a clear distinction. For example, I used ChatGPT to help write a portion of this response. It’s been edited enough that none of the AI-generated sentences survive intact. And I think the basic argument is mine, but maybe not.
I’m not sure of anything, particularly when it comes to predicting the future. I can see multiple possible worlds where AI leads to the disintegration of human society. And, I’m currently working with a startup that could contribute to that. But hey, I’m a techno-optimist.
MarkW
May 8 2023 at 11:26am
what I’m saying is that AI-generated content of one type or another will be pervasive and so intertwined with regular content that it will be hard to make a clear distinction.
I guess I really don’t care, and I’m not sure why anybody would. The content is either persuasive or not, entertaining or not. I don’t see any reason to expect AI-generated content to be especially persuasive or entertaining. I don’t object that you used ChatGPT to produce a rough draft. The only possible danger you’ve identified is a very special case: when content comes in video/audio form and appears to come out of the mouth of a specific real person. Note that merely attributing a made-up quote to a real person is not new at all; people have been trying to add heft to memes by claiming they were said by Lincoln or Einstein or whoever for a long time (and you can find such claims debunked on sites like Snopes).
Craig
May 6 2023 at 12:42pm
“I think it is doubtful that an artificial mind will ever say like Descartes, ‘I think, therefore I am’ (cogito, ergo sum), except by plagiarizing the French philosopher.”
Perhaps, but I might respectfully suggest, Professor, that you might be exhibiting a pro-human bias, which is forgivable because ultimately I share it with you. When the chess engines were being designed, we still wanted the very best chess players to be human. When Ken Jennings went up against Watson, we wanted Ken Jennings to prevail. We want humanity to have some place, some pinnacle, that cannot be surpassed by machine.
I’d like to believe that to be the case, and I hope it is, but ultimately I have to admit the possibility that I might be wrong. Looking back, I see the abilities of computers increasing by leaps and bounds, whereas by comparison I remain in a relatively steady state.
“Training GPT-4 may have cost $100 million, consuming a great deal of computing power and electricity. And the cost increases proportionately faster than the intelligence.”
And it’s what? Six years old? When my children were six years old, their English still had grammatical mistakes and their math was nothing to write home about. ChattyG is fluent in very many languages. It passes bar exams, medical exams. You note that the cost is increasing proportionately faster than the intelligence, and actually I have no particular insight into that, but while it might be the case today, it might not be the case tomorrow. I could foresee using AI to make AI itself better, which could result in a rapid exponential bootstrapping of ability. Maybe.
“To the extent that AI does come close to human intelligence or discourse, the question is whether or not humans will by then be intellectually streetwise enough not to be swindled and dominated by robots or by the tyrants who would use them.”
AI is the wildcard, and I could foresee a range of possibilities: from the ‘Skynet’ scenario, to a scenario where AI so enhances productivity that the fiscal catastrophe the US is facing melts away, to outcomes somewhere in between.
Personally I see it as the ultimate disruption.
What happens when ChattyG and its progeny get married to Boston Dynamics robots and their progeny?
An emergent new order?
Pierre Lemieux
May 6 2023 at 1:16pm
Craig: All that you mention is possible, I agree. As you point out, we have seen “impossible” things become possible before. We must be open to surprises.
Perhaps turkey hunting is the main hobby in Heaven, or perhaps we’ll be the hunted, or perhaps curling will be the name of the game. Perhaps Adam and Eve were created by machines. God knows what quantum entanglement will have changed on earth in 100,000 years’ time. But I fear there is no other way of thinking about the future than to start with what we know, however frustratingly imperfect that is.
Pierre Lemieux
May 6 2023 at 1:38pm
Craig: Or perhaps the whole of Heaven (the hole of heaven?) is a non-smoking, gun-free zone?
steve
May 6 2023 at 1:34pm
“The CNN poll, conducted March 8-12 among 1,045 Republicans and Republican-leaning Independents, found 63% of respondents believe Biden did not legitimately win the 2020 election, while 37% believe he did.
“Of that 63%, only 52% say they think there’s ‘solid evidence’ the election was stolen, while 48% say they’re going based on ‘suspicion only.’”
If it were only 15% of both parties, that would be a major positive sign, but having a majority of one party in our two-party politics is the issue. Anyway, today people all too often develop their ideas and beliefs in line with their political or religious commitments, or whatever they want to believe. They just like the way the guy on YouTube looks while he says what they want to believe. AI just has the opportunity to be a force multiplier for those who want to perpetuate their versions of whatever they are selling.
I suspect you remember the Sowell quote.
“Why the transfer of decisions from those with personal experience and a stake in the outcome to those with neither can be expected to lead to better decisions is a question seldom asked, much less answered.”
The problem is that those with experience and a stake in issues will almost always be hampered by needing to provide nuance. They will occasionally be wrong and need to admit they were wrong. When you base your beliefs on your tribal affiliations, faith, etc., you don’t need nuance (which gets ridiculed) and you are never wrong; you just double down and are impervious to evidence.
Steve
Dylan
May 6 2023 at 2:18pm
Pierre, you’re going to need to keep up. That was the consensus on AI last week! This week everyone knows that open source is doing it both better and an order of magnitude cheaper than the big companies.
See this piece purportedly from inside Google.
Dylan
May 6 2023 at 2:48pm
My link was stripped from my previous comment. Trying again.
https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
Pierre Lemieux
May 7 2023 at 12:38am
Dylan: Skimming quickly, the piece looks interesting, although I would not claim to understand much of it. SemiAnalysis itself has doubts.
As I suggested in my post, there are good reasons to generally prefer decentralized, “open source” enterprises to government monopolies, just as there are good reasons to fear a government monopoly on guns. Which is not to say that private, decentralized, competitive big-company gatekeeping is bad. The NYT remains more trustworthy with information than QAnon. The problem raised by Harari and reinterpreted by your humble servant remains troubling.
Dylan
May 7 2023 at 9:19am
I also don’t understand all of it, but I think I get the main points, and this is also something I’ve heard over the last month or so from others in the field. The surprising insight for most, I think, is that the larger models are not only not better right now but in some cases appear worse (see Bing/Sydney).
Also, in case it wasn’t clear, this wasn’t meant as a criticism of your piece. I tend to agree with you both on the potential risks and that government regulation is likely to increase the risk, not solve anything.
Richard
May 7 2023 at 7:34pm
I’ve spent a lot of time plumbing the depths of both the outputs of our new crop of AI and the commentary resulting from its rollout.
I’m a biochemistry major with a minor in computer “science”, and my take on it is:
I dunno.
That’s not a good thing. I’ve been very worried about the impact of biotechnology, in particular advances in neuroscience, on our society. Now I see hockey-stick exponential advances in AI.
Societal change is always much slower than what has become a series of exponential advances in technology. Frankly, most societies are still trying to deal with the sociological changes created by the advent of birth control (1950s). No society on earth has really dealt with cell phones, the internet, CRISPR, etc.
We have sown the wind, and now we reap the whirlwind. I don’t believe any of the major players will hold back from all-out, balls-to-the-wall, full-tilt-boogie, damn-the-torpedoes, full-speed-ahead development “because it might not be safe!”
Regulation? Revolt? A Luddite repudiation? I think, myself, it’s far too late. I want my grandchildren to smell the same grass as I have, and be free to walk the same paths.
T Boyle
May 7 2023 at 7:54pm
It takes a real lack of imagination to not see what the largest, most heavily armed corporate monopolies in the world (i.e., governments) are going to do with AI.
mohsen
May 7 2023 at 9:55pm
It’s important to remember that AI can also be used for good purposes. For example, AI can help doctors diagnose diseases more accurately, support disaster-relief efforts, and even fight climate change by analyzing complex data.
AI is making a positive impact on our world. The key is to ensure that AI is developed and used responsibly, with clear guidelines and regulations to prevent harmful uses. By focusing on the potential benefits of AI and promoting ethical development, we can help create a future where AI is a valuable tool that improves our lives instead of causing harm.
Lawrence Jordan
May 8 2023 at 2:14am
There must be a means of control, but philosophically, how can this happen when the same technology is the security of what is to be managed? Basically, we as humans must look past our previous “thinking platforms” of dualism and see how we can move forward, integrating this technology without religious dualities as the paradigm of control, while allowing us as humans to continue with creativity in all sectors.
David Seltzer
May 8 2023 at 3:59pm
Pierre: “I think it is doubtful that an artificial mind will ever say like Descartes, ‘I think, therefore I am’ (cogito, ergo sum), except by plagiarizing the French philosopher.”
I mention this as the basis of wonderful humor. To wit: After a lovely meal in a fine Parisian bistro, the server approached Descartes and asked, “Will you have dessert?”
He replied, “I think not,” and disappeared. When we stop thinking, our liberties will disappear.
Jose Pablo
May 10 2023 at 9:22pm
The High Priests of the status quo! Always so worried about tomorrow and always with so very little skin in the game. Preaching from their useless altars, living in constant fear.
I bet they were scared to death when humans discovered fire (an “invention” that surely hacked the OS of human civilization). They almost died when humans invented the printing press, not to mention the Jacquard machine and the locomotive: human civilization would surely end with everyone unemployed and all the English cattle stock killed on the railways.
But even worse were Elvis Presley’s hips. Their movement scared to death, once more, these High Priests of the status quo … no doubt Elvis should be banned because he was singlehandedly hacking the OS of human civilization.
And the worst was still to come: the internet! Now that was it! Now the OS of human civilization was surely hacked forever.
I guess that when you don’t have the imagination required to envision the next great thing, envy and resentment push you to oppose these inventions.
Maybe what should scare them most is “regulation”: a mostly useless (when not counterproductive) activity that has hacked the OS of human civilization to the point that humans are now totally addicted to it. We can’t live a single day without asking for some more.
… and yet, this idea of an FDA kind of testing is not without merit. But why just for “new technology”? Why not for “new ideas” that could potentially hack the OS of human civilization? After all, some of these ideas have proven themselves extremely dangerous; more dangerous even than Elvis’s hips, and much more than AI.
What about an FDA type of prior testing for:
Religion: hundreds of millions of deaths caused, and counting.
Nationalism: this late-18th-century invention that has surely hacked the OS of human civilization and caused countless deaths (and counting).
Politicians: should have been banned after the results of the first Phase I testing ever carried out on them.
If we order the issues to be reviewed by this FDA kind of testing, aimed at preventing the “hacking of the OS of human civilization,” by their relative potential impact, the list will go on and on almost forever before reaching AI.
And still, the most relevant thing about this “anti-hacking FDA” would be the system of incentives of the oh-so-wise guys “protecting” us. Of course, Harari didn’t mention anything about incentives. Philosophers, you know, are so far above this kind of mundane thing!