Melanie Mitchell on Artificial Intelligence
Jan 6 2020

Computer Scientist and author Melanie Mitchell of Portland State University and the Santa Fe Institute talks about her book Artificial Intelligence with EconTalk host Russ Roberts. Mitchell explains where we are today in the world of artificial intelligence (AI) and where we might be going. Despite the hype and excitement surrounding AI, Mitchell argues that much of what is called "learning" and "intelligence" when done by machines is not analogous to human capabilities. The capabilities of machines are highly limited to explicit, narrow tasks with little transfer to similar but different challenges. Along the way, Mitchell explains some of the techniques used in AI and how progress has been made in many areas.

RELATED EPISODE
Rodney Brooks on Artificial Intelligence
Rodney Brooks, emeritus professor of robotics at MIT, talks with EconTalk host Russ Roberts about the future of robots and artificial intelligence. Brooks argues that we both under-appreciate and over-appreciate the impact of innovation. He applies this insight to the...
RELATED EPISODE
Amy Webb on Artificial Intelligence, Humanity, and the Big Nine
Futurist and author Amy Webb talks about her book, The Big Nine, with EconTalk host Russ Roberts. Webb observes that artificial intelligence is currently evolving in a handful of companies in the United States and China. She worries that innovation...
Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Adolph
Jan 6 2020 at 7:17am

Great interview. Russ asks about measuring Mechanical Turk quality: there are companies that handle that. This podcast interviews the CEO of one such company.

john Weeks
Jan 6 2020 at 2:53pm

Dr. Mitchell is not accurately representing Musk’s concern with machine learning or AI. Elon Musk is concerned that there is little to no regulation governing the use of machine learning algorithms in weapons of war. The danger that he is lobbying against isn’t some far-distant, self-aware artificial intelligence. He is warning that the current state of machine learning is capable of producing low-vulnerability, highly effective autonomous weapons and that we do not have oversight or regulations governing the manufacture of these weapons. The argument put forward here was a straw man, at least in this context.

Russ Roberts
Jan 6 2020 at 4:34pm

It seems his concerns are much broader. This is just from the first page of Google I looked at…

https://www.livescience.com/62239-elon-musk-immortal-artificial-intelligence-dictator.html

Dallas Weaver Ph.D.
Jan 6 2020 at 7:11pm

Her expressed concern about profit-driven institutions (“I don’t really trust these big companies whose, their motive is profit, to do the right thing”) implies that institutions driven by some set of beliefs other than profit would produce more trustworthy results.

 

Those institutions driven by nominal “good intentions” such as “from each according to their abilities to each according to their needs” or desires for “living space” or religious purity seem to have killed a hundred million people in the last century when they controlled the information flows to the people in their countries (for their own good, of course).   Observing history, an institution motivated by profit seems to be a lot less dangerous than one motivated by beliefs, especially for all people who don’t share that belief system.

 

I will take Google/Facebook any day over the Chinese government or Homeland Security having the same power over information.  Google will not take all my money (they want me to create more wealth so they can get more from me), but “true believers” will take my life.

 

 

Todd D Mora
Jan 9 2020 at 1:47pm

Dr. Weaver,

I agree with your statement and would add that profit-driven organizations operate on a voluntary-transaction basis, so they have little or no power to compel anyone to do anything. Conversely, many groups who have stated that their intent was to promote the “greater good” have ended up using force to coerce people to see the “benefits” of their enlightened position.

Chase
Jan 6 2020 at 10:11pm

This was an excellent interview; I found it very thought-provoking. The insight that thinking about AI is a useful way to think about our own intelligence really resonated with me.

The discussion of the hype and limitations of AI and the ability of AI to amplify bias was well covered. I felt that the singularity discussion was a bit dismissive of the concept on the basis of the timeline and one specific scenario (Kurzweil’s). The ability of a machine intelligence to surpass human-level capabilities on some timescale is evident when considering the plausibility of whole brain emulation, as was discussed with Robin Hanson. Contrasting the limited AIs of today with the concept of emulation provides an interesting perspective on our own experiences and what it means to be human. I would be very interested in an update from Robin and to hear your views.

Todd Kreider
Jan 7 2020 at 8:13am

A few points about Ray Kurzweil and the Singularity part in the beginning:

1) Mitchell said: “But, in this cascading effect where it gets smarter and smarter and smarter, and as Kurzweil predicts, he predicts that by 2045, we’ll have intelligences that are a billion times smarter than humans.”

Kurzweil has also predicted that human intelligence will be enhanced starting in the late 2020s through swarms of nanobots in the brain (although he has pushed that back to the 2030s), so that after 2045, the Singularity, humans will over time be “trillions and trillions” of times smarter than today, and there will be no distinction between augmented humans and A.I. and no distinction between reality and virtual reality.

2) Mitchell continued: “But for one thing, software does not show an exponential trend in any way that you want; and software is where AI is sort of at right now.”

Kurzweil somewhat disputed this in his response to Paul Allen’s 2011 essay, “The Singularity is Not Near”: “Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later—in 2003—this same model could be solved in roughly one minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.”
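As a quick check on those quoted factors (a back-of-the-envelope calculation, not from Kurzweil or Grötschel):

$$82~\text{years} \approx 82 \times 525{,}960 \approx 4.3\times 10^{7}~\text{minutes}, \qquad 10^{3} \times 4.3\times 10^{4} \approx 4.3\times 10^{7},$$

so the hardware factor (roughly 1,000) and the algorithmic factor (roughly 43,000) do multiply out to the overall 43-million-fold speedup.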

 

Todd Kreider
Jan 7 2020 at 8:14am

3) Mitchell added: “And also it’s not clear that we do have exponential trends anymore in these hardware areas.”

Kurzweil has expanded upon roboticist Hans Moravec’s 1998 measurements of computer power in constant dollars over the twentieth century to show that the Law of Acceleration has continued to the present and that techniques like stacking should keep this acceleration on pace in the 2020s.

Graph of “120 Years of Moore’s Law”
https://en.wikipedia.org/wiki/Accelerating_change#/media/File:Moore's_Law_over_120_Years.png

Kurzweil emphasizes that Moore’s Law is only one period of the exponential increase in computer power, not the full 120 years that the title suggests.

https://en.wikipedia.org/wiki/Accelerating_change

Mark Bahner
Jan 7 2020 at 5:49pm

An initial comment:  At about 4:30, there is talk about the conundrum of why people might be trying to develop general AI, but be worried about general AI at the same time. Russ Roberts talks about computers writing better music than Mozart and other things.

But the key, to me, is that computers/robots do not require any of the things that humans need to stay alive. So computers/robots would experience no danger to themselves from biological or chemical weapons. They could poison *global* air, water, and food supplies with impunity.

I really think the first Terminator movie captured it perfectly:

It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.

Think about the meaning of that, in an entity that does not require clean air, or water, or food…or even sleep. And in an entity that can produce a replica of itself, with all of its own capabilities, in days, hours, or even minutes. (Contrast that to humans needing 16-20+ years to produce replicas of themselves.) Now throw in the possibility that it’s 100+ times smarter than we are.

Mark Bahner
Jan 8 2020 at 11:35am

3) Mitchell added: “And also it’s not clear that we do have exponential trends anymore in these hardware areas.”

Kurzweil has expanded upon roboticist Hans Moravec’s 1998 measurements of computer power in constant dollars over the twentieth century to show that the Law of Acceleration has continued to the present and that techniques like stacking should keep this acceleration on pace in the 2020s.

I just posted on my blog a graph of data from the Wonderful Wikipedia article “FLOPS”.

The regression line on the graph indicates a decline in cost per gigaflop of 11 orders of magnitude in 42 years. If the human brain performs at approximately 10 petaflops, the cost of a computer that can perform the same number of calculations per second as the human brain will be about $10,000 by 2024 (down from approximately $100,000 in 2020).
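A minimal sketch of that extrapolation in code, assuming the figures quoted above (11 orders of magnitude over 42 years, roughly $100,000 for brain-scale compute in 2020); it is an illustration, not the blog’s actual regression:

```python
# Back-of-the-envelope extrapolation of cost per gigaflop, using the figures
# quoted above (illustrative only; not the blog's actual regression output).
orders_of_decline = 11                       # orders of magnitude of cost decline
years = 42                                   # observed over this many years
orders_per_year = orders_of_decline / years  # ~0.26 orders of magnitude per year

cost_2020 = 100_000   # assumed cost (USD) of ~10-petaflop "brain-scale" compute in 2020
target_cost = 10_000  # one order of magnitude cheaper
years_to_target = 1 / orders_per_year        # years needed to fall one order of magnitude

print(f"Cost falls by ~{10 ** orders_per_year:.2f}x per year")   # ~1.83x per year
print(f"${cost_2020:,} -> ${target_cost:,} in ~{years_to_target:.1f} years, i.e. around 2024")
```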

Progress on dollars per gigaflop of computer power

Antonio J Gonzalez
Jan 8 2020 at 4:29pm

99% Invisible’s recent podcast on AI and the Eliza Effect might be of interest. It looks at AI from a design perspective but the portion of the podcast addressing Joseph Weizenbaum’s personality was really interesting.

https://99percentinvisible.org/episode/the-eliza-effect/

Mark Bahner
Jan 8 2020 at 9:30pm

That podcast has an observation from Melanie Mitchell:

Using language brings together all of our knowledge about how the world works, including our understanding of other humans and our intuitive sense of fundamental concepts. As an example, Mitchell presents the statement “a steel ball fell on a glass table and it shattered.” Humans, she notes, immediately understand that “it” refers to the glass table. Machines, by contrast, may or may not have enough contextual knowledge programmed in about materials and physics to come to that same determination. For people, she explains, it’s commonsense.

As I see it, the situation is that computers don’t yet have “commonsense” because in general, they have no senses (most importantly vision, hearing, and touch). And why don’t they have senses? The answer is because it has not made any sense to give computers senses when they were dumber than bacteria.

Consider that in the year 2000, the cost per gigaflop of computer power was approximately $1000, per the graph on my blog I referenced above. Therefore, a computer capable of 10 quadrillion operations per second–about the same number of calculations per second as the human brain, give or take a couple orders of magnitude–would have cost $10 billion (with a “b”!) as recently as the year 2000.
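The arithmetic behind that figure, taking the quoted numbers at face value:

$$10~\text{petaflops} = 10^{7}~\text{gigaflops}, \qquad 10^{7} \times \$1{,}000/\text{gigaflop} = \$10^{10} = \$10~\text{billion}.$$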

I think the next 2 decades will see an astounding increase in the combination of computer intelligence with the senses of sight, hearing, and touch. By 2040, I expect most people in the U.S. will every day see scores of robots moving, seeing, hearing, and touching.

To give just one example, I predict that by 2040, more than 90 percent of lawns in the U.S. will be cut by autonomous lawnmowers, with people paying “by-the-cut” (i.e., for each week’s service) rather than people owning lawnmowers themselves. That’s because the autonomous lawnmowers will have excellent vision and hearing…as well as computer brains that can perform more calculations per second than a human brain.

Luke J
Jan 8 2020 at 10:49pm

A little shout-out to my Portland, OR. Two weeks, two Portland EconTalk guests representin’. We may be Sanders fans but at least we’re interesting Sanders fans.

Mitchell confirms my bias: AI isn’t worthy of fear. But humans using AI are capable of great harm. Thank God for AI to enrich the world. Dear God, keep my kids safe from some despot with an algorithm.

Daniel Barkalow
Jan 8 2020 at 11:08pm

I got interested in artificial intelligence by reading Douglas Hofstadter’s book in high school, only it wasn’t Gödel, Escher, Bach, it was Fluid Concepts and Creative Analogies. So this was an amazing interview for me to hear just because I first really got interested by reading about her research projects.

I like to say that there’s been practically no progress on creating intelligent computers since the early or mid 90s, but an enormous amount of progress at solving problems we thought required intelligence without needing it. And the tendency for people to still think the problems require intelligence and therefore that computers are getting smarter is what leads to lots of confusion. This goes from doing large calculations, through solving integrals, storing facts, searching for documents, playing chess, animation, translating text, giving directions, recognizing faces, recognizing questions, moving through the world, and so forth. And we haven’t come to understand anything about intelligence through any of it, because techniques that don’t involve thinking just turn out to be more effective at getting results than techniques that have a plausible relationship to intelligence are.

This even extends to Eliza: sure, people treat it as if it really understands, but people write in diaries as if the diary understands, too (“Dear Diary, today I…”, as if it were a letter to a person). It was acting like a talk therapist, and much of the point of talk therapy is to get the patient to work through their problems by providing a context for that and using active listening techniques to keep the patient doing it; on occasion, a therapist will offer some deep insight, but this isn’t necessary in order to be a useful experience for the patient, and a computer can do a great job of active listening without the distraction of understanding what the patient is saying.

Mark Bunch
Jan 9 2020 at 11:18am

Russ,

I was steered to your podcast by Jonah Goldberg, maybe a year+ ago, and have enjoyed/been provoked by every episode since.  This (the Melanie Mitchell episode) was among my very favorites, if not the absolute best so far. Who’d have thought an episode of a podcast named “EconTalk” would do a philosophical dive into AI, or refer to Douglas Hofstadter (my elder daughter, Georgia Eva, is in part named for his book), Marvin Minsky, Ray Kurzweil, John Searle, or Roger Schank (et al.). Your  wide-ranging interests are greatly appreciated on this front – your presentations add so much to my morning walks (and my daily life).  Thanks for all the gems!

-Mark Bunch

T N
Jan 10 2020 at 8:29pm

Mitchell states: “What else is there besides our neurons, our neurons firing, our memories?”

It is demonstrably false to reduce “intelligence”/consciousness to a mechanical/physical process: suppose a given person’s neurons fired in such a way that they thought 2+2=5. What law has been violated? Now, an objector may claim that such a person would not be “wrong” in any objective sense, but they would not be able to get along well in society. But that just takes the question up a level where the same problem reappears: so such a person wouldn’t get along well. So what? The thing that is supposedly being denied—namely abstraction/objectivity—is being assumed in the argument that purports to deny it. The theory that human thought is mechanical cannot rest on premises that assume non-mechanical means.

This is further illustrated in Mitchell’s subsequent comment “It’s that we are so much more complex and evolved, if you will”.  It is fallacious to resolve problems with a mechanical theory of human thought with appeals to metaphysical presumptions of “evolution”: to claim that there is a goal to “evolve” toward, is to beg the question that the mechanical theory is supposed to answer.  Furthermore, merely appealing to complexity does nothing to solve the problem.  The person who believes 2+2=5 is very complex.  So what?

Kent J. Lyon
Jan 11 2020 at 6:05pm

Listening to this podcast, I thought about Emily Dickinson’s poem:

The Brain–is wider than the Sky–
For–put them side by side–
The one the other will contain
With ease–and You–beside

The Brain is deeper than the sea–
For–hold them–Blue to Blue–
The one the other will absorb–
As Sponges–Buckets–do–

The Brain is just the weight of God–
For–Heft them–Pound for Pound–
And they will differ–if they do–
As Syllable from Sound–

That is, I would suggest, both far more pithy and far more substantial than this podcast.
Or than virtually all of the vast corpus of science and philosophy on the topic of Intelligence (and Consciousness).

I also thought of Charles Broad’s comment about Quantum Mechanics:

“A philosopher who regards ignorance of a scientific theory as insufficient reason for not writing about it cannot be accused of a complete lack of creativity.”

Combined with Richard Feynman’s comment on Quantum Mechanics: “…Nobody understands Quantum Mechanics”

Certainly no one knows what Consciousness, nor Intelligence, is. Yet far from engendering restraint, this state of affairs seems to engender the most remarkable dogma from our most intelligent thinkers.

Russ and Melanie seem well apprised of this state of affairs. At least they are not misled, nor deterred, by such as Daniel Dennett, who is convinced that Human cognition (and hence, Intelligence) is merely an illusion. For him, it’s Unknown Unknowns all the way down. David Hume’s revenge.

How can AI be developed when we have no idea what actual intelligence is? Or Consciousness? The whole effort seems a ridiculous conceit. Sure we can manipulate data and achieve remarkable things that, like every endeavor of humans, can lead to great benefit or to existential threats to the species.

Homo sapiens sapiens, the animal who knows that it knows, remains utterly confused by its own awareness.

Let me suggest a test for when Artificial Intelligence becomes equivalent to Human intelligence: When the Artificial Intelligence can collapse the Wave Equation and can both demonstrate that it has collapsed the wave equation and knows that it has collapsed the Wave Equation.

The great conundrum of Science, specifically of Physics, is the fact, as Schrödinger pointed out in his cat-in-the-box thought experiment, that Human Intelligence (Conscious observation–the euphemistically entitled “Measurement Problem”) can collapse the wave equation. Science refuses to address this conundrum. Only Eugene Wigner, of all modern physicists, has been honest about this. (Or perhaps Roger Penrose, the Pythagorean). This implies that human consciousness interacts directly with the physical world. Niels Bohr banned this conundrum from polite scientific conversation and we are stuck with his perspective. Russ Roberts suffers mightily from this mind set, trained as he is in a mechanical-causal Economics. He is struggling to escape that mind set, but seems unable to break free from it entirely.

If mind interacts directly with the material world, then it must also be a material entity. How could this be? Penrose and Hameroff have presented a model of the human brain and nervous system as a plasma system (noting that all neurons connect with all other neurons via gap junctions that in effect create a continuous cytoplasm among neurons) that functions as a high-temperature superconducting quantum computational system. Given the immense information processing power of quantum computation, such a view of the human nervous system would have to be valid to explain such things as the savant capacities of individuals like the one portrayed in Rain Man. Given the strange properties of quantum systems (non-locality, spooky action at a distance, and the fact that the collapse of the wave equation affects not only reality at the moment of collapse but also the prior course or evolution of the wave function, a sort of quantum teleologic effect), the implications of the Brain as a quantum system are utterly remarkable.

Penrose postulates that conscious observation collapses the wave equation through a modification of the gravitational field that destabilizes it. That, however, elides the problem. One cannot escape the fact that consciousness directly acts on the wave equation. This would have to be explained as consciousness acting through a consciousness field effect directly. That is, the human brain must have evolved in the setting of a ubiquitous Consciousness Field, much as the vertebrate eye evolved under the influence of an electromagnetic field. The Consciousness Field would be the constitutive component necessary to the emergent component of the evolution of the brain (Thomas Nagel points out the necessity of both Constitutive and Emergent aspects for the evolution of Consciousness in his book “Mind and Cosmos”).

These are ideas that are pretty much anathema to Modernity, and have implications that no one wants to consider. We exist in an age confined to what William Blake felicitously terms “…single vision and Newton’s sleep.”

Much more can be said, but I have said too much already.

T N
Jan 13 2020 at 4:17pm

While your position is one I’m inclined to agree with, it cannot be claimed that consciousness collapses the wave function; we don’t know.

On the other hand, when Daniel Dennett says he has no intelligence, I’ll take his word for it.

Kent Lyon
Jan 15 2020 at 12:28pm

What we do know is that quantum systems behave differently when they are under observation as opposed to when they are not under observation. The “measurement” problem. Something is going on. Both Schrödinger and Einstein objected to Quantum mechanics on the basis of this problem.

T N
Jan 15 2020 at 7:47pm

Yes.  We know that measurement collapses the wave function, but measurement includes more than mere consciousness.  We cannot claim that it is consciousness specifically that does that.

Stephan Ware
Jan 12 2020 at 5:12pm

Since so much of the discussion dwelt on the nature of human (“natural”?) intelligence as the measuring stick and precondition against which AI ends up being evaluated, and since time was spent discussing far-end “singularity”-type achievements, to the point of surmising about how much humans value human life, certain emotions, or certain works of art, and whether AI could ever do that: I would have been interested to know Dr. Mitchell’s engagement with, or simply awareness of, work done in cognitive psychology.

I’m not a psychologist, but to take an obvious example for this venue: Robert Wright (EconTalk guest, 10/2017) gives a very reasonable-sounding description of a deeply NON-deterministic version of human consciousness and self, in contrast with the logic on which computing systems **exclusively** depend: when performing tasks more complex than simple perception, people more often than not appear to be using most of their “intelligence” for EX-post explanations of decisions and actions rather than using it to build methodically assembled conclusions driving a final decision. Any number of examples from prior shows bear this out; Russ’s example of asking a higher-level intelligence (Siri) about the impact of the minimum wage on unemployment is a wonderful one: the answer you get to that question from any human probably tells you a lot more about the mostly irrational desires of that person at the time you ask it than about their level of “intelligence” on the topic. It’s just that more “intelligent” people can put a finer gloss on the same base desires, rather than more “intelligent” people converging on a singular “right” answer. If anything, more “intelligence” seems to create sharper divergence in conclusions rather than convergence.

Since simulating human intelligence appears to be something of a desired endpoint: I would have been interested to know her response to this model of “intelligence” as even a worthwhile goal. Beyond the amazingly complex (but simple to us) perceptions and generalizations humans can perform at around 18 or 24 months, if (to take Robert Wright’s implications fully) there is no real (“intelligent”) “self” at all, only a series of instincts and habits determining how we can respond in possibly wildly divergent ways to different situations: what would be the point of even attempting to copy or mimic that? In this light, comparing AI with “human-level” intelligence reminds me of the quote: “writing about music is like dancing about architecture”. What is the point?

[link added–Econlib Ed.]

T N
Jan 13 2020 at 4:23pm

If the argument in your last paragraph were true, you would not be able to argue for it.

Stephan Ware
Jan 23 2020 at 5:06pm

Exactly. “A man convinced against his will [still has the same opinion].” It seems rare that someone changes their mind through “intelligent” argument, and it perhaps gets worse the more “intelligent” your interlocutor. Russ seems to at least try, but still, in the EconTalk drinking game I doubt anyone gets very drunk with the rule: if Russ says “you might have a point,” conceding to a guest that some long-held view of his might not be right, you take a drink. If a guest does the same, you have to fill and empty your glass.

Kent Lyon
Jan 12 2020 at 9:58pm

I will add that no one seems to consider the possibility that human consciousness, or intelligence, represents a singularity, perhaps even a “naked singularity”. Human intelligence or consciousness may indeed not even be explainable mathematically or physically.  Indeed, it is not beyond possibility that every given individual human being represents a singularity…

Marilyne Tolle
Jan 21 2020 at 3:34pm

Melanie Mitchell says around the 1:10 mark:

“So, the idea there is that we have a superintelligent AI, it’s superintelligent, but at the same time it doesn’t figure out that human life is something we might want to preserve. That just seems crazy to me and it just seems like a misconstrual of the word ‘intelligent.'”

I think it depends on whether the AI system has epistemic rationality i.e. it holds “true” (to humans) beliefs about the world (e.g. human life is worth preserving) or instrumental rationality i.e. it makes sensible decisions given its goals/values, whether these are rational or not. That’s where value alignment kicks in.

Humans themselves display both types of rationality, and not always simultaneously.





AUDIO TRANSCRIPT
Podcast Episode Highlights
0:33

Intro. [Recording date: November 19, 2019.]

Russ Roberts: Today is November 19th, 2019 and my guest is computer scientist and author Melanie Mitchell. She is Professor of Computer Science at Portland State University and External Professor and Co-chair of the Science Board at the Santa Fe Institute. Her latest book and the subject of today's episode is Artificial Intelligence: A Guide for Thinking Humans. Melanie, welcome to EconTalk.

Melanie Mitchell: Thanks for having me.

0:56

Russ Roberts: So, this is a really superb overview of the history of artificial intelligence [AI] which doesn't take up too much of the book, but it is in there, which is very nice. More importantly, it's an overview of the current level of the capabilities of AI. It teaches the reader how artificial intelligence is actually used in many of its applications today, and along the way we learn about your assessment of where you think AI is going and how that might affect our lives. So, it's really--it's a wonderful book. I want to start off with a lecture that you referred to from Douglas Hofstadter when he was at Google and he, at that point--when was that lecture roughly?

Melanie Mitchell: I think it was around 2013 or so.

Russ Roberts: Okay. So, he was worried about the progress that AI had made in chess and in music, two areas that he had underestimated, he confessed, when he had written his very influential book, Gödel, Escher, Bach. And, he was terrified. He said that AI will make humans obsolete; we'll become relics, our children will be relics. And the part that was interesting about the story was two things. One, that Hofstadter felt that way; and second that the engineers at Google that he was talking to were puzzled. So, talk about those two reactions and what you make of them.

Melanie Mitchell: Yeah. So, the engineers--the meeting was really featuring Doug Hofstadter. They were coming to see him and hear what he had to say about AI. A lot of the engineers at Google went into the field because they had read his book when they were in high school, like many of us. That was an extremely influential book in AI. He was really a hero of many people. But, he got up and started talking about his fears about AI and his terror, not that we would have some malevolent, super-intelligence running the world and enslaving us, but more that intelligence itself would not be as profound as he thought it was.

He was worried that intelligence, AI might be achieved in computers via cheap tricks. And he, as you said, was very disturbed by how far AI had come starting maybe with IBM's [International Business Machines] Deep Blue system which beat Garry Kasparov at chess, and then progressing through Watson playing Jeopardy, and self-driving cars and speech recognition, all of that, everything. And it terrified him because AI was doing so well at these tasks. But the Google engineers, they got into AI because they were inspired by Hofstadter. They loved his books. And here he was saying AI is terrifying and that was exactly what they were trying to achieve. So, they didn't really understand at all.

Russ Roberts: And it's a very deep question that I'm sure we'll maybe dance around and maybe go delve into it, which is: How should we feel about that? Would it be a good thing or a bad thing if computers could write music that was better than Mozart, better than Beethoven, if it could write poetry that made us cry and create movies that excited us and inspired us right? Like you said, that's what these engineers are trying to do, that's their job; and yet somehow--maybe it's because of age or a different temperament or sometimes it's a religious outlook--there's something disturbing to some people about that.

Melanie Mitchell: Yeah, absolutely. I mean, what we value most about ourselves as humans, sort of what makes us special is our intelligence, our creativity, our ability to create music and literature and so on. So, I have mixed feelings about this. For one thing, I'm like those Google engineers. I got into AI because I was excited about the ideas in Gödel, Escher, Bach. I read it in college or just after college and thought 'I want to understand what intelligence is. That's the most fascinating question of all.' So, I actually went to work with Doug Hofstadter. I was a Ph.D. student in his group.

Back then he wasn't afraid of AI because AI wasn't doing very well. It wasn't threatening. Although we rejoiced in both that the programs that we built--we would rejoice in both their creativity and their dumbness, because their dumbness really showed how challenging the problem was.

So, I guess one of the reasons I wanted to write this book was to make sense of what was going on, because here was Doug Hofstadter, my former mentor saying how terrified he was: that was very surprising to me. Here were the Google engineers saying, 'I think we're going to have human-level intelligence within the next 30 years or so.' And me thinking, 'What? How could that possibly be true?'

So, I started looking into AI more broadly. I've been doing research in this field for decades but I'm in my own little silo of research, narrow research. And, I started looking more broadly and trying to figure out exactly what was going on in the field. So, that was really the impetus for writing this book.

6:24

Russ Roberts: Yeah. There's a sense at the beginning of the book, I really related to which was you confess that maybe you were too sanguine, too optimistic about the prospects of humanity in the future and you decided you'd look into it. And a lot of what I do on the program is interview people, and I've talked to Nick Bostrom, Rodney Brooks, Gary Marcus, Pedro Domingos, people who have differing views about where this future might go, and I'm trying to figure it out myself and you're going to help me today and your book helped me, and you'll help our listeners.

At one extreme, let's put on the table the idea of the singularity which we've mentioned before, associated with Ray Kurzweil. So, what is the--depending on your perspective it's an extreme view. For some, it's a beautiful thing; others, it's a dark thing. But, what is it?

Melanie Mitchell: The idea of the singularity is that, once AI reaches human-level intelligence, that because it's a computer--and computers are extremely fast and they can process data, huge amounts of data much faster than humans can and all that--that the AI will get smarter than humans very quickly. They'll be able to digest all of human knowledge and will be able to create even better AIs or maybe improve itself. I'm not exactly sure which. But, in this cascading effect where it gets smarter and smarter and smarter, and as Kurzweil predicts, he predicts that by 2045, we'll have intelligences that are a billion times smarter than humans.

Russ Roberts: Only a billion. He's so cautious.

Melanie Mitchell: Right. I think most people who are actually serious AI researchers roll their eyes when they hear about that kind of thing and they say you know, 'You know, first of all, Kurzweil's reasoning all has to do with his idea of exponential growth--that we have Moore's Law which says that computers are getting exponentially smaller and exponentially more powerful. But for one thing, software does not show an exponential trend in any way that you want; and software is where AI is sort of at right now.'

And also it's not clear that we do have exponential trends anymore in these hardware areas. And when he says 'a billion times smarter than humans,' he's implying that there's some intelligence metric that can be multiplied by a billion. And, intelligence to me is not just a single thing.

Russ Roberts: It's not a scalar, a one-dimensional digit. You do have to give him credit: he at least didn't use decimal points. He didn't say, 'It'll be 1,000,372,643.6 times bigger than human intelligence,' which would be really offensive.

Melanie Mitchell: Yeah. Well, I'm sure he's not done forecasting yet.

So, I think most people who know anything about AI look at the current state of AI and say that this idea of the singularity is nonsense, to put it bluntly.

But, there are people who believe it, or there's people who believe in slightly less inflammatory versions: that we're really getting closer and closer to human-level AI--whatever we mean by that--and that we will be there within the next 20 to 30 years. The people that I talk to in AI don't believe that. But, I know that the field is kind of split. And as you said, you mentioned Bostrom, you mentioned Gary Marcus, you mentioned Rod Brooks, Pedro Domingos--and it's strange when you talk to all these different people, they have vastly different views. Not only of where the field is going but actually where the field is right now.

10:48

Russ Roberts: And we're going to talk about that. Why don't we start with that--as a background observation, though, that your book doesn't literally delve into this, but it made me think about it, it's I think a crucial point. I think there's a fundamental misunderstanding of what knowledge is. I'm going to put two types of knowledge on the table and see what you think of this distinction. So, if I ask Siri, which I have to be careful because--

Melanie Mitchell: Don't mention that name--

Russ Roberts: I'm in the airplane mode on my phone right now, but I've noticed sometimes even in airplane mode, she's responsive. I don't want to interrupt the flow, here. But, if I ask her 'How tall is the Eiffel Tower?' or 'What's the capital of Brazil?' or 'How many home runs did Stan Musial hit in 1962,' she's fantastic at that. And it's instant. And she's probably almost never wrong. I don't know what she does about some facts that are slightly ambiguous; but most facts are just facts.

If you ask her--and I haven't asked her this, but if I did--if I asked her 'Does the minimum wage reduce employment?' she would not answer that question. She would pull up a bunch of websites and say, 'Here are some things I found.' Because it's not a yes-or-no question. It's not something you can have--you can have an understanding of it, but you can't have knowledge of the kind like the height of the Eiffel Tower.

And I think--just one crude way to make the distinction is data versus wisdom. The idea that a smart machine could cure poverty, that the problem with our attempts to cure poverty is we're not smart enough, I think is a fundamental misunderstanding of what the nature of poverty is. It is not something that is amenable to intelligence. It requires something much more complicated.

It's what Robin, I think it's Hogarth, I learned from David Epstein in his book Range and in our interview: It's a wicked problem. It's not a kind problem. It's got too much complexity around it to be--there's too many trade-offs. There's too much uncertainty.

So, I just think that, when people talk about the singularity--that machines will cure all disease and cure poverty and we'll live forever and that we'll be incredibly happy because they'll know what happiness is too--it's as if that's somehow a knowledge problem. I think it's just a fundamental misunderstanding. What do you think of that?

Melanie Mitchell: Well, I agree with you. I mean, you're talking about problems that even humans can't solve. So, that's even one step ahead of what I'm thinking about because I'm thinking about problems that humans can solve, but machines--because they have knowledge, let's say--but machines can't solve because data is not knowledge.

So, one example is, if I'm driving and I see something ahead of me in the road, how do I decide whether I should stop for it or not? Let's say it's a floating paper bag, or a herd of ducks--a flock of ducks, I guess--or a cardboard box, or a child's Lego set, or, you know, whatever it is. I have knowledge about those things. I know how they interact in the world. I know what would happen if my car crashed into them.

I have the ability to predict the future, the likely future, just of these very mundane kinds of things.

I'm not even talking about poverty, happiness, etc. But one of the problems with AI is that it doesn't have that broad knowledge of the world.

And I learned in writing this book, about self-driving cars that one of the problems they have is: What should they stop for? And the biggest source of accidents in self-driving cars, the experimental ones that are driving around, are people rear-ending them.

And, the reason for that is that they stop unexpectedly. They slam on the brakes, because they think there's something there that they should stop for that no human would stop for. So they're unpredictable. So, of course the human is at fault. You know--you are not supposed to follow that close. But people do. People expect cars to drive in a certain way. And, these self-driving cars don't have enough common sense, if you will. They don't have enough in the sense we're talking about to know what to do in these kinds of situations that are different from, say, what they've been explicitly trained on.

15:45

Russ Roberts: You have an example--you have a couple of examples in the book we'll go into. One of them, I love: it's a photograph. It's a soldier returning. I don't know what it is exactly, right? That's what's beautiful about it: it's just a photograph. Next, the soldier is down on one knee. She's got her hair tied back, so at first you might not notice that it's a woman, but it is a woman. She's in camo gear, but when I showed it to my wife it was a little dark in the room and she thought it was a dress. But it's camouflage gear--so she's clearly, she has a big military backpack on. She's clearly a soldier. She's stooping down to pet a dog. She has a lot of emotion on her face--to my eye, but it's a little hard to read that emotion. It's not--there aren't tears. It's not obvious. She's in profile. It's a little hard to see it, but I immediately see emotion, whether it's there or not.

The dog, you can see its tail is blurry. So the dog is wagging its tail, and next to the two of them is a balloon that says 'Welcome home.' So, as you point out we immediately see that, 'Oh, soldier coming back from war or back from duty seeing her dog.' And, it's a--we make an immediate emotional connection. How does the computer see it in today's level of AI?

Melanie Mitchell: Well, if the computer has been trained to recognize objects, it probably will recognize a person. It might say it's a man; it appears it's not that great about gender. It will recognize a dog. It might recognize a balloon. I'm not sure. But it won't be able to put pieces together. It certainly is not anywhere near where humans are at recognizing, like, emotions or the dog wagging its tail, kind of. It won't be able to put together the kind of story we put together when we're looking at visual data or hearing about something from a written story, because it doesn't have that world knowledge or that wisdom, if you will, about how the world works.

Russ Roberts: So, what I learned from your book--and we'll try to get into it because it's a little hard to do in a podcast without visuals--but the way that a computer could learn, and I'm going to put "learn" in quotes and we'll come back to that, too--but the way a computer could learn about that is it would look at a lot of photographs of faces in that similar setting. Look for things around the shape of the mouth of the person returning, maybe the eyebrows, maybe tears or other things. And then associate that with photographs that humans have labeled as sad, longing. We could imagine a bunch of adjectives.

So that eventually it could "learn" the right way to caption that photo--which currently, at the current level, is more like 'a man and a dog,' or maybe 'a soldier and a dog,' or maybe 'a woman soldier and dog'--into something more human as the description, which would be: soldier returns home and re-encounters something that she loves and misses.

But the way it would learn about that isn't by reading Jane Eyre or a better example, I guess would be the Odyssey by Homer where Odysseus encounters his dog after, whatever it is, 20 years. It would be through a very mechanical process of association.

Melanie Mitchell: Yes, that's right. That would--using today's technology it would have to have maybe millions of photos, faces with different emotions. And they would all have to be labeled by humans as to what their emotions are. And then the computer would look at the pixels of the image. And, right now the most common approach is using these so-called deep neural networks, which learn from these labeled images, and then the input is an image, the output is some kind of classification out of some fixed number like sad, happy, longing. You could decide what your categories are.

And as you say, it's very mechanical. But I think that brings up another question which is: What are we humans doing to learn that, learn what we see? Are we not mechanical in some sense? What else is there besides our neurons, our neurons firing, our memories?

I think in some sense we are mechanical. And I've actually had a lot of arguments with people, including my own mom about this--who doesn't buy that. But that we're just--it's a matter of complexity. You know? It's that we are so much more complex and evolved, if you will--evolved to have certain kinds of emotions or faces be very salient to us, because that's so important in our lives, and this sort of social, sociality of humans: that there are certain things that we are in some sense evolved to learn.

Russ Roberts: I think you quote, is it Mitch Kapor? I don't know how to pronounce his last name.

Melanie Mitchell: Yeah.

Russ Roberts: Mitch Kapor says, basically artificial intelligence will never be, quote, "intelligent," until it goes through the life experiences that a human brain experiences and categorizes those.

And so it is possible that what we're really doing when we look at that photograph is exactly what you said: It's a mechanical process, some neurons fire, I remember the last time I saw someone who looks something like this. We don't understand that process very well yet. We might get better at it. And the interesting question to me is whether that's going to help make AI better or just tell us something about our brains. But, why don't you respond to that actually?

Melanie Mitchell: Yeah, it's interesting because going back to Ray Kurzweil and his singularity, if you actually read his books carefully, you see that he actually agrees with that statement. He says, 'Yes, you do have to be able to experience all these things.' But his solution is that within 20 years, we're going to have virtual reality that's indistinguishable from real reality and that's going to be used to train AI, so the AIs will actually go through a process of development in the way that we do, but perhaps using virtual reality to speed it up.

22:41

Russ Roberts: Let's go back and talk about Deep Neural Networks, because I hear that phrase a lot. It doesn't mean what it sounds like, as it turns out. It's a very clever marketing phrase, though, because 'neural' makes it sound like my brain and 'deep' makes it sound profound. So, what is it literally?

Melanie Mitchell: So, a neural network is a computer program that's inspired by the brain. Particularly, most neural networks these days are inspired by the way the visual system works where the visual system gets input--you know, light falls on your retina and then is processed in the brain through a series of layers of neurons. So the visual system is layered in a hierarchical way. A 'deep neural network' is a simulated, simplified version of that with these layers and 'deep' refers to how many layers there are.

So, a shallow network has a small number of layers; a deep neural network has multiple layers. And that's all that 'deep' means--is sort of how many layers of simulated neurons there are in the network.

So, the reason why 'deep'--I mean, deep neural networks, the idea has been around since the 1960s or 1970s, and people are experimenting with these things for a long time. But, people never had enough data to train them, and they never had enough compute power to make that training possible.

So, in the last decade, we have both huge amounts of data because of the World Wide Web and so on, and very fast parallel computers. So, it's come together to allow these networks to actually start to shine in certain tasks--in vision, in speech, in language processing--because of this convergence of big data and fast computer, computing power.
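A minimal, illustrative sketch of what those "layers" look like in code, for readers who want something concrete; the framework (PyTorch), the layer sizes, and the category names are assumptions chosen for illustration, not anything specified in the book or the episode:

```python
# A minimal sketch of a "deep" neural network classifier (illustrative only).
import torch
import torch.nn as nn

# "Deep" just means several stacked layers of simulated neurons.
model = nn.Sequential(
    nn.Flatten(),                         # turn a 28x28 grayscale image into 784 pixel values
    nn.Linear(28 * 28, 128), nn.ReLU(),   # layer 1
    nn.Linear(128, 64), nn.ReLU(),        # layer 2
    nn.Linear(64, 3),                     # output: scores for 3 categories, e.g. sad/happy/longing
)

# Training re-weights the connections between simulated neurons
# to reduce mistakes on human-labeled examples.
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.rand(8, 1, 28, 28)          # a stand-in batch of 8 images
labels = torch.randint(0, 3, (8,))         # stand-in human-provided labels
for _ in range(10):                        # a few training steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong the network currently is
    loss.backward()                        # compute how to adjust each connection weight
    optimizer.step()                       # re-weight the connections
```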

Russ Roberts: So, using that, and let's talk about the examples like recognizing handwriting or recognizing objects and identifying them correctly in a broad sense, like dog versus cat. One of the ways that happened was people had, as you say, access to lots of photographs all of a sudden through Flickr or Google photos or other databases of photos. But then we had to get them labeled so that the neural network could practice, learn when it made a mistake, and go back and re-weight--basically, fundamentally what the so-called learning that goes on is it's re-weighting the--

Melanie Mitchell: The connections between the simulated neurons.

Russ Roberts: That were given a certain shape of darkness of a pixel. It decides that that was more likely to make it a dog rather than a cat in this particular region, right? And so, that required a lot of those millions of photographs to be labeled. And that was done through--a lot of that was done through Amazon's Mechanical Turk which is something I'd heard of, I didn't know about. So, explain what that is and how that played a role, because it's really an amazing thing, bizarro thing.

Melanie Mitchell: Yeah. So, Amazon created this web-based platform where people who had some job they needed to be done that couldn't be easily automated were able to hire people online to do these jobs. Like, 'Here's a photograph. Tell me is it a dog or a cat? I'll give you a penny for that.'

Russ Roberts: And we're really good at that.

Melanie Mitchell: We're really good at that.

Russ Roberts: Yeah, no problem.

Melanie Mitchell: So, they called it Mechanical Turk, and this is a little bit obscure, but back many hundred years ago there was an AI hoax where somebody had built this chess-playing machine that had a puppet that would move the pieces and the puppet was dressed as a Turkish-Ottoman, I don't know what but some kind of Turk. And so Amazon--and it was actually a person inside, hiding inside. So it was a hoax. So, this is with Amazon, some genius at Amazon came up with this analogy which is, 'Okay, we have things, people that are doing these tasks that are too hard for AI and we're paying--anybody, can hire them. You can pay small amounts of money for simple tasks,' and they call it artificial intelligence because it's humans, right?

And this platform has grown. It's huge now. And in fact researchers use it all the time for getting people to label data, getting people to be in psychology experiments or social science experiments--all kinds of different tasks. So, you know, this idea that AI is going to put people out of a job is actually a little more complicated because AI now, or the lack thereof, has created this huge set of very low-paying jobs for people who are on this platform.

Russ Roberts: What kind of money would a person make on this? You say it's a penny a label.

Melanie Mitchell: Or 10 cents. You know, I don't know exactly.

Russ Roberts: Right. So, you don't know. So, anyway, they make some kind of money. And some people find this offensive because they don't make very much, but for some people it's a nice way to do something relatively mindless that brings in a little more money. For some, this is an indictment of the world we live in and for others it's like, 'Wow, this is cool.' We're going to leave that to the side. My question is: How do we know they're labeling them correctly? We can't use the computer to check them because that's the whole idea.

Melanie Mitchell: Right. So, this is a problem.

Russ Roberts: Honor system?

Melanie Mitchell: No. The honor system doesn't work. I think most people don't maliciously mislabel them, but sometimes they get lazy or they make mistakes because they're trying to do too many too fast. So, there's a lot of methods people look at having, like, an image being labeled by multiple people and taking a majority vote of the category. There's different methods for trying to verify these labels. But, now people are trying to do even more complicated tasks, for instance with natural language using Mechanical Turk. So, I might ask you, 'Here's two sentences. Tell me if the first one entails the second one or contradicts the second one.' This is like a task that they want computers to do so they need data; and it turns out that people will get those wrong quite a bit. The more complicated the task, the more tricky the whole Mechanical Turk thing is.
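A tiny sketch of the majority-vote idea mentioned above, with made-up worker labels for illustration:

```python
# Majority vote over multiple Mechanical Turk labels for one image (illustrative sketch).
from collections import Counter

labels_from_workers = ["dog", "dog", "cat", "dog", "dog"]   # hypothetical labels from 5 workers
winner, count = Counter(labels_from_workers).most_common(1)[0]
agreement = count / len(labels_from_workers)

print(winner, agreement)   # "dog", 0.8 -- keep the label only if agreement is high enough
```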

29:42

Russ Roberts: Yeah. I think that--of course, that's one of the challenges. But, I think it'll be useful at this point if you could summarize--and I apologize for making you do this on the spot--give us a somewhat summary of where we stand. So, what are the great--let me try to make the list from what I read in your book and tell me if I leave anything off. So, a computer has beaten the best chess player in the world. A computer has beaten the best Go player in the world which was--

Melanie Mitchell: One of the best.

Russ Roberts: One of the best, which was a game that people thought might not be amenable to a computer doing because it's so open-ended. It's really gotten better at voice recognition, so I can talk to my assistant on my phone. It's pretty good at handwriting recognition. It's really good at certain crude image identifications that we're talking about. The self-driving car thing, it's really good--it's 90% of the way there, but as you point out, 90% of the way takes 10% of the time and the last 10% takes 90% of the time. So, we're not really close despite what I was reading two or three years ago that it was imminent--we're not really close to autonomous driving at what has been called Level Five where you could sit in the back and read a book and enjoy your music and have a glass of wine. Am I missing anything important that AI has accomplished in the last 20 years, say?

Melanie Mitchell: Oh. Well, I think there's a lot of rather specific tasks. I mean, one thing is machine translation--

Russ Roberts: Good one. I forgot that--

Melanie Mitchell: between languages. We also have a lot of applications in medicine with medical image analysis, getting medical data and trying to make sense of or make diagnoses from it. There's been a lot of applications in scientific data analysis. Yeah; I mean, it's kind of all over the place but each application is somewhat narrow and it's a particular--you have to sort of start from scratch with building a system that will do that application rather than having some more general AI that would be able to do many different things.

Russ Roberts: Yeah, none of it--correct me if I'm wrong--none of it is transferable. So, the computer that can play Go can't play checkers.

Melanie Mitchell: That's correct.

Russ Roberts: It hasn't, like, figured out board games.

Melanie Mitchell: Right. And it can't even play a variation on Go. I mean, there is some very small transfer[?]: There's a lot of work on transferring AI tasks, but I'd say for the most part the state-of-the-art systems are not very transferable.

Russ Roberts: So, let's talk about Watson for a minute. Watson which is the IBM computer that played Jeopardy and beat Ken Jennings, the longtime champion, and somebody else whose name I don't know. But that gives the impression that it knows a lot of things. It doesn't just know one thing. But, of course, it knows a lot of things very narrowly.

Melanie Mitchell: It has a lot of databases. Rather than saying it knows a lot of things, I would say it has the ability to look up things very quickly on Wikipedia and other big database sites. And, it was able to use some natural language processing to make sense of Jeopardy questions. It did really well.

Russ Roberts: It can make some jokes and puns.

Melanie Mitchell: Yeah, understand puns. But, it didn't seem like its knowledge was transferable in the way IBM touted it to be. Now they said, 'We're going to send Watson to medical school,' which is kind of a--people took that seriously, but it's just really a kind of a quip, and, 'We're going to have it'--

Russ Roberts: But they meant it seriously in the sense that it would--they didn't put a lot of medical knowledge into the database. That's just not what medical school is, unfortunately. Oh, that it were!

Melanie Mitchell: Right. Right. So, now Watson has lots of medical data, and it's supposed to be able to answer questions about the domain of medicine; but turns out that's very different than answering Jeopardy questions, and it didn't do as well.

It's a little bit hard to get out exactly what Watson can do now, but my understanding is that Watson that played Jeopardy no longer exists. That program has been completely changed into using deep learning and other current modern AI tools. Just the way that Google and Microsoft and all the other companies do. So, IBM now has what it calls 'Watson' which is just a platform of computing tools.

Russ Roberts: It's sad to think that the entity that--I'm being facetious here--the entity that defeated Ken Jennings is no more. And if you saw the movie Ex Machina--no spoilers here--it plays on this human relationship to AI. I don't know if Ken Jennings is sad that Watson--we give it a name, a human-ish name, right, Watson?

Melanie Mitchell: Right.

Russ Roberts: It's named after the founder of IBM, but you might think it's Sherlock Holmes's partner--which is ironic given that he was sort of the naive, not so smart one. He was the straight-line guy. But I don't think Ken Jennings--do you think he's sad? I don't think he's sad.

Melanie Mitchell: I don't know.

Russ Roberts: He wants a rematch, and he's gone.

Melanie Mitchell: Yeah. Yeah. I mean, I don't know if that Watson could be resurrected or not. Maybe it could. But it's not the same Watson that is being marketed for healthcare, for tax preparation, for legal advice, and so on. That's a completely different set of tools.

Russ Roberts: Ideally, it'd be smarter, because time has passed and it's had a chance to get smarter.

Melanie Mitchell: Right. But as you say, data is not knowledge. Data is not intelligence.

36:01

Russ Roberts: So, in all these examples--and I think that was a fair summary of where we're at--here's my take, and I suspect it's yours too; I'll give you a chance to respond. My take is that almost none of that is what we would call, as human beings, 'intelligence.'

Melanie Mitchell: 'Intelligence' is one of those words that means different things in different contexts. It means different things to different people. Here we are sitting in Washington, DC, and I think a lot of people in the country think, 'Oh, Congress--there's no intelligence there.' But, when I go around giving talks about AI and I say, 'Well, computers aren't very intelligent yet,' people tell me, 'Well, human beings aren't very intelligent, either.' But they're using the term just very differently. Intelligence isn't just one thing. It's not a yes-or-no thing, either. And I think one of the problems is we don't have a good sense of what intelligence is. We don't understand our own intelligence very well.

Our state of understanding the brain is still quite limited. Our understanding of human psychology is still rather limited. And I think intelligence is one of those terms that's a placeholder for things we don't understand yet. It's a phenomenon that we have a general idea of, but we don't know specifically what it is, and it's just waiting for more scientific advances to replace it with something more useful.

Russ Roberts: I think it was Rodney Brooks here on the program who quoted, I think it's Marvin Minsky, saying that these are things called 'suitcase words.' And 'intelligence' would be one of those things. We put some things in that suitcase when it's convenient. If it's not, we take it out.

But I guess what I had in mind is this idea of transference or connection. What I think of as human. Or, better yet, something beyond what was programmed into it. That would be an even narrower, more straightforward thing. As far as I can tell from your overview of the field in your book, computers can't teach themselves anything except what they've been programmed to learn--quote, "to learn"--programmed to translate an input into an output. They can't then add something to it. That would be one measure of intelligence. Or at least I think it's a measure of intelligence.

Melanie Mitchell: Right. So, yeah; I mean, it's tricky to talk about this because, of course, if I train a computer program to recognize dogs in images, it can recognize dogs in images that I've never shown it. Right?

Russ Roberts: Fair enough.

Melanie Mitchell: So, that's sort of a generalization. But, probably if they haven't ever seen it, they can't recognize a dog in a cartoon, or they can't recognize a painting of a dog.

So, in AI, people talk about this notion of distribution, which is kind of a statistics idea that your data has a certain distribution. The dogs in your training data have a certain range of features that your system learns and if you show it a new thing that is within that range of features then it can recognize it, but if it's outside of that distribution, it won't be able to transfer its knowledge to that.

And that's something that we humans are able to do. One of the things that kind of surprised me: there's a huge focus in AI on this thing called 'transfer learning,' which is exactly what we're talking about. That is: learn one thing, learn to play chess, be able to transfer your knowledge to variations of chess or to checkers.

Russ Roberts: Read a CT [computed tomography] scan, then you can read an x-ray, then you could know what--

Melanie Mitchell: Yeah. And this is called transfer learning, and it's huge. But, transfer learning is exactly what we humans call learning. So, what these systems are doing is not learning in the human sense, because we assume that if you've learned something you can use that knowledge in a very new situation. That's still a challenge for AI.
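
To make concrete what the field means by 'transfer learning' in the mechanical sense Mitchell describes, here is a minimal sketch in Python, assuming PyTorch and a recent torchvision (0.13 or later for the weights API). The ten-class target task and the name NUM_NEW_CLASSES are hypothetical, not anything from the book or the episode:

```python
# Minimal transfer-learning sketch: reuse features learned on ImageNet,
# train only a new final layer for a different (hypothetical) task.
import torch
import torch.nn as nn
from torchvision import models

NUM_NEW_CLASSES = 10  # hypothetical new task, e.g., a small custom image set

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

for param in model.parameters():          # freeze the pretrained backbone
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, NUM_NEW_CLASSES)  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A training loop over the (hypothetical) new dataset would go here:
# for images, labels in new_task_loader:
#     optimizer.zero_grad()
#     loss_fn(model(images), labels).backward()
#     optimizer.step()
```

Even in this narrow sense, what gets transferred is a set of learned visual features for a closely related task, not the open-ended generalization Mitchell is contrasting it with.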

40:14

Russ Roberts: We're in DC [Washington, D.C.] at the DC office of the Hoover Institution. But when I'm out at Stanford, I inevitably get sucked into the tech world. And they're very utopian there. They think tech can solve all problems--a lot of people do think that. And I find that very seductive, because I like to believe that to be true. I find it--there's something comforting about that.

But what I've learned in the last five years is how often the claims of these tech evangelists are overstated. I mentioned autonomous self-driving cars--way overstated. An extreme example would be Theranos, which turned out to be a fraud, but the idea was: 'One drop of blood, we're going to diagnose 70 diseases.' Or that machine learning, or artificial intelligence, is going to solve all problems.

There's an enormous amount of hype. Now, some of that hype comes from the media and you give a lot of examples in the book of headlines that were misleading or misdescribing studies that were much more modest. It's an inevitable human problem. And one of the things I learned from your book is just, again, to be so sensitive to that because I'm so--it's so suggestive.

Melanie Mitchell: Sure. I think hype has been a problem in AI since the very beginning of the field. Intelligent computers--or not even just computers, but intelligent machines in general--that's such a long-held goal for humanity.

And I think the hype has gotten almost worse, the better AI works. There are a couple of reasons. One is just that whenever a technology becomes commercialized, there's this need to sell it; and so people sell it. I mean, that's just the nature of marketing. So, we've gotten hype from the companies: IBM advertising Watson is one very salient example of that.

But also, I think, when we see an AI system like Siri, for example, we tend to anthropomorphize it. It has a name, it has a voice. It almost has a personality. We tend to give it more credit than it actually deserves for thinking or being intelligent or understanding, and that's a very human reaction. And that's also led to some of the hype, I think: people actually believing what's said about the intelligence of these systems.

Russ Roberts: Yeah. I gave my parents an Alexa to help them listen to music, which was a great decision. It beat the other solutions my brother and sister and I tried to come up with for them. But, it's like they have a boarder in their house. They're very close to her. They'll say things like, 'I couldn't believe it. Alexa knew about--'

Melanie Mitchell: Yeah. I mean, we saw this back in the 1960s when Joseph Weizenbaum created Eliza, which was a psychotherapist chatbot essentially. And it was the most simple program. It had a few templates. It had some keywords. It was supposed to be a particular kind of psychotherapist. So, if you said something about your mother, it would say, 'Tell me more about your mother.' And it had little templates like that.

And, people wanted to talk to it. People wanted to tell their deepest secrets. They really believed that here's finally something that, somebody that understands me, and is willing to listen to me, and really listen to me, because it would take what you said and like play it back and say, 'Tell me more about that?' and, 'What do you think about that?' and 'How do you feel about that?' And, Weizenbaum was horrified, and in fact he became an activist, an anti-AI activist, because of the way that people interacted with this program.
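
Mitchell's description of Eliza--a few keyword-triggered templates plus a default reflection--can be captured almost entirely in a few lines. A toy sketch of that mechanism, with invented rules rather than Weizenbaum's originals:

```python
import re

# Toy, Eliza-style responder: keyword rules and canned reflections,
# with no understanding of any kind. The rules here are made-up examples.
RULES = [
    (r"\bmother\b", "Tell me more about your mother."),
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
]

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, user_input, flags=re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default when no rule matches

print(respond("I feel nobody listens to me"))
# -> Why do you feel nobody listens to me?
```

That the output of something this simple struck users as a sympathetic listener is exactly the anthropomorphizing tendency discussed above.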

44:29

Russ Roberts: One of the challenges of the depth of these neural networks and other techniques is that there's a certain black-box aspect to some of the knowledge, the answers, that come out of these systems. Sometimes we might not care; we just want the answer. We just want to know whether it's a tumor that's benign or malignant, or whether there's a tumor at all. And how it got to it, we don't need to understand.

But a lot of people are very troubled by this. One obvious example that you discuss in the book is bias that's built into AI answers because the data that the AI has been learning on is biased, because it comes from a set of human sources, whether it's those people categorizing the photos or just human language. There's a lot of issues about sexism in particular that I've seen and there's hope we can de-bias some of the stuff.

What's your feeling on whether we should be concerned about that, and are there going to be ways around that to help people understand? Like, we talked about this with Cathy O'Neil in our episode with her--issues of sentencing people to jail. This is not like, 'Oh, I want to know the height of the Eiffel Tower.' It's very serious stuff.

Melanie Mitchell: The idea of explainability is very tricky in AI. There's almost an inverse relationship between the success of a system and its explainability at least with these deep neural networks. The deeper--and that is like the more layers, the more neurons, the more connections in these networks--the better they tend to do because they're able to model the data more successfully. But, then it's hard to figure out what they did. You have millions of weights or billions even now, and no high level insight into why the machine is making the decisions it does.

So, that's a big problem, and a lot of people are working on ways to make the machines more explainable--almost virtual microscopes that let you go in, or, if you want to make an analogy with neuroscience, little probes that can go in and figure out what this artificial brain is doing. But it's certainly an unsolved problem. And I think there's also the question of, 'What is an explanation? What counts as an explanation?' It's a philosophical problem, but it's also very real.

So, for instance, the European Union has this GDPR [General Data Protection Regulation] law on data, and one part of the law is that algorithms that make decisions that affect people's lives have to be able to explain their decision-making. But, what does that mean exactly? Does it mean I have to tell you all the values of the weights? Is that an explanation? Well, no. Of course not. No human understands that. But explanation is subjective. It depends on what the goal is and who I'm explaining it to and so on. So, that's, I think, a very unsolved issue.
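
One concrete flavor of the 'probes' Mitchell mentions is occlusion sensitivity: hide one patch of the image at a time and see how much the network's confidence drops. A rough sketch, assuming a PyTorch image classifier and an input tensor supplied from elsewhere:

```python
import torch

def occlusion_map(model, image, target_class, patch=16, stride=16):
    """Slide a gray square over the image and record how much the model's
    confidence in target_class drops; large drops mark regions the network
    is relying on. `image` is assumed to be a 1xCxHxW tensor in [0, 1]."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image), dim=1)[0, target_class].item()
        _, _, H, W = image.shape
        heat = torch.zeros((H // stride, W // stride))
        for i, y in enumerate(range(0, H - patch + 1, stride)):
            for j, x in enumerate(range(0, W - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, :, y:y + patch, x:x + patch] = 0.5  # gray patch
                prob = torch.softmax(model(occluded), dim=1)[0, target_class].item()
                heat[i, j] = base - prob
    return heat
```

Whether a heat map like this counts as an 'explanation' in the GDPR's sense is, as she says, exactly the unsolved part.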

Russ Roberts: That's interesting. I never thought about it, but of course human beings can't explain why they do what they do. We lie, we fool ourselves, we self-deceive. Right? The idea of saying, 'No; I know why I gave you this gift, paid you this compliment, shunned you'--there's a thousand reasons. And I don't know if we'll ever understand that about ourselves. But we could understand something about like you say the data, the weights of the mechanical system, [?].

Melanie Mitchell: You know, a lot of people have said, 'Well, humans can't explain their thinking either, so why should we make AI explain its thinking?' But I think that's actually a false argument. Because, humans can explain their thinking. We definitely aren't perfect at it, but we certainly have--you know, when a judge makes a ruling, they don't just say yes or no, but they write a long explanation for their ruling. They talk about how they took into account all the evidence, and all of that. And, when something really matters--like if I say, 'I'm going to sentence you to 20 years in prison because this algorithm said I should'--I think there must be a way, or there has to be a requirement that there's a way, to explain what evidence was taken into account.

Russ Roberts: Well, I think the challenge--you mentioned Washington, DC and the--if I was uncharitable, I'd call it a sausage factory--you know, government creating legislation. We don't literally get to see all of it. We get to see quite a bit of it now. We get to see votes: that's a start, which would be like the weights. But we understand those weights. We understand that this person voted for that and is accountable at the ballot box.

The AI is not accountable in the same way. I think it's the need for transparency and what transparency would mean is something we're going to have to talk about and figure out later.

Melanie Mitchell: Yeah. It's absolutely--it's on everyone's mind. But again, like all the other things it's not a solved problem yet.

Russ Roberts: Some of it, though, as you're suggesting, may not be solvable--using the current techniques--in a way that would be amenable to evaluating whether it's, quote, "fair," or whether it's biased, or those kinds of issues.

Melanie Mitchell: Yeah, I think that's right. You know, I--it's--how to say this? The whole issue of bias is now very--worrying a lot of people. And, our world is biased. We know that and therefore the data that we produce and that we train the machines on is biased. It goes very deep.

So, for example, facial recognition performs worse on darker skin than on lighter skin, and that's partially because of the biased data it's given, but also, I found out, it's the cameras themselves: the electronics in cameras are tuned better for lighter skin than for darker skin. So, it's really going to be hard to de-bias algorithms in a biased society where the bias is so deep and almost invisible.

51:14

Russ Roberts: I want to take one more example from the book that I just loved, and we'll talk in a minute about how thinking about AI helps you with thinking about human beings. It was one of the examples that did that for me. You say, "It's time for a story," and the story is called "The Restaurant." It's a very short story. I'm going to read it:

A man went into a restaurant and ordered a hamburger, cooked rare. When it arrived, it was burned to a crisp. The waitress stopped by the man's table. "Is the burger okay?" she asked. "Oh, it's just great," the man said, pushing back his chair and storming out of the restaurant without paying. The waitress yelled after him, "Hey, what about the bill?" She shrugged her shoulders, muttering under her breath, "Why is he so bent out of shape?"

So, the question--it's a great story, and you riff on it quite a bit in a very effective way to talk about how subtle language is: 'bent out of shape' doesn't mean he's physically contorted; the bill that he didn't pay is not a reference to the beak of a bird or legislation passed by a parliament or congress, etc. And when he said, 'Oh, it's just great,' he was being sarcastic. We know all those things instantly when we read the story. The way you summarize it, which I love, is: "Did the man eat the hamburger?"

Melanie Mitchell: Yeah. That's actually--I kind of stole that idea from Roger Schank, the old time AI natural language person who had these little stories like that and asked those kinds of questions. And also John Searle, a philosopher who used that example of eating the hamburger to talk about whether machines could really understand anything.

So, right: I mean, that's kind of the idea that knowledge about--back to knowledge, and our knowledge about the world. We know the man probably didn't eat the hamburger. Even though it's not said in the story explicitly, we can read between the lines. But how do you get machines to do that? How do you get them to have the kinds of knowledge about the world that they could use to make sense of such a story? And it's very hard.

Russ Roberts: And of course there are people who misinterpret stories, don't get jokes. Language is hard for us, too. We're really good at a lot of it, most of us; but not all of us are, and all of us struggle at various times and misunderstand stuff.

Melanie Mitchell: Well, for instance, here's a self-referential issue. I was recently talking to the person who is coordinating the Chinese translation of my book. Okay? And--

Russ Roberts: Just use Google Translate.

Melanie Mitchell: Exactly. So, I talk in the book about translating this story into Chinese; and now--I'm not sure the Chinese translators are going to understand that story even though they know English pretty well. I mean, it's very idiomatic. So, how do you actually translate something like that without having that sort of cultural knowledge? Translation is really complicated.

Russ Roberts: So, it's easy. Just have the translators or the machine watch a couple million American movies and they'll know. They'll just know.

Melanie Mitchell: There you go. That's pretty funny, because that's actually a strategy for AI common sense that is being undertaken: there's a common-sense competition based on watching movie clips and answering common-sense questions about the clips.

Russ Roberts: And are they going to get better, you think?

Melanie Mitchell: I don't know.

Russ Roberts: You have some very amusing examples in the book of where you take that story, translate it into a different language, then turn it back into English using the same technology. And of course things get mangled, things get burnt to a crisp in the translation.

Melanie Mitchell: Yeah.

55:04

Russ Roberts: One of the things that your book forced me to do--and I'm curious how you felt writing it--is think a lot about what is distinctive about humans. And in the course of our conversation I've given a couple of examples where I said, 'Well, humans can't do that perfectly either.' But we do a lot of things shockingly well. You make a big distinction, which I like, between easy things being hard and hard things being easy. Chess seems hard, but brute force and lots of computing power made some real progress there; but easy things like common sense are really hard.

Melanie Mitchell: Yeah.

Russ Roberts: Talk about what you've learned about yourself, about humans, in the course of writing the book.

Melanie Mitchell: So, I've learned how much of our intelligence is invisible to us: how all the time we're able to make generalizations and transfer what we've learned to new situations, and make abstractions, and metaphors, and so on in a kind of invisible way. We don't even know we're doing it.

And this, I think, is one of the reasons that people misjudged how hard AI would be. We had all these people like Marvin Minsky back in the 1960s predicting that we'd have human-level machines within 15 years, and it's still going on; and I think that we still don't recognize how much of our intelligence is below the surface.

There's been some approaches to common sense reasoning in machines by building in all the common sense. I talked about one example in the book called Cyc, C-Y-C, [pronounced like 'psych'] by Doug Lenat, where the idea was just to have this huge database of common-sense knowledge. Like, 'You can't be in two places at one time.' Things like that. But the problem is--

Russ Roberts: 'A penny saved is a penny earned.'

Melanie Mitchell: There you go. The problem is that we can't write it down because so much of it is unconscious.

So, now there's this big Grand Challenge from DARPA, the Defense Department agency that funds a lot of AI research, and the Grand Challenge is to create a machine that has the common sense of an 18-month-old baby by going through all the developmental stages that babies go through. And this is the perfect example of easy things being hard, because we have machines that can do all these fancy things like translate between languages and play Go, etc., but one of the biggest grand challenges, with huge amounts of money being put into it, is: create something like an 18-month-old baby.
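
For flavor, the 'write it all down' strategy Mitchell describes for Cyc can be imitated in miniature: hand-written facts and rules, plus a forward-chaining loop that applies them. The facts and rules below are invented for illustration (riffing on the restaurant story) and bear no resemblance to Cyc's actual representation:

```python
# Toy common-sense knowledge base with naive forward chaining.
FACTS = {"ordered_rare", "burger_burned", "left_without_paying"}

RULES = [  # (premises, conclusion)
    ({"ordered_rare", "burger_burned"}, "order_was_wrong"),
    ({"order_was_wrong", "left_without_paying"}, "customer_was_angry"),
    ({"customer_was_angry"}, "probably_did_not_eat"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all already known."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print("probably_did_not_eat" in forward_chain(FACTS, RULES))  # True
```

The catch is the one Mitchell names: almost none of the rules we actually rely on can be written down, because we don't know we have them.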

Russ Roberts: You gave a great example in the book of Charades, a game that a six-year-old plays effortlessly but that would be incredibly hard for a computer.

Melanie Mitchell: Yeah. I'll give credit to Gary Marcus for that one. That's his example.

Russ Roberts: Here's a quote that you have in the book from Geoffrey Jefferson, a neurosurgeon, and I want to hear your reaction to it. You probably reacted to it in the book, but do it here. He wrote,

Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain--that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance), pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.

What do you think of that quote?

Melanie Mitchell: I love that quote; and in fact that quote comes from Alan Turing's paper where he proposes what's now called the Turing Test. And I think he brings up an interesting question, which is: How would we know? Say we're talking to a machine, and it's talking just like a human--we can't tell the difference. How would we know if it had all these qualities, that it was charmed by sex and all of that stuff? It's really difficult. How do I even know that you've gone through all these experiences or that you have an inner life?

Russ Roberts: You don't.

Melanie Mitchell: I don't. It's the same old question. So, that's sort of--the Turing Test tries to get around that. But it turns out to be a little too easy: because of our human propensity to anthropomorphize, it becomes too easy to pass the Turing Test.

Russ Roberts: Yeah. My example is the robot that is a vacuum cleaner, but regrets it never was a self-driving car. We would call that a human experience. Of course, we're not going to be [?]--are emotions relevant at all? Some would say it has nothing to do with it.

Melanie Mitchell: Right. I think emotions are fundamental to thought: that's my opinion. But, I don't think we know enough to say. There's this idea, kind of a classic trope in philosophy of mind, of the brain in the vat. And this brain in the vat, it has input, it has output. It's exactly like a brain. But, it doesn't have any experiences in the world. What's the difference between that and us? Are we just brains in vats in some simulation that's playing out, that seems like reality? I mean, all of this stuff, all of these old, old philosophical questions, are still here. We still don't have good answers to them.

Russ Roberts: Yeah. I tried to write an essay on this and I'll link to it--I'm not going to try to get all the pieces right--but I think I'm channeling Harry Frankfurt and Rabbi James Jacobson-Maisels in this thought. The idea is that we have wants; but we have wants about our wants, as well. So, I have a desire for ice cream, but I also have a desire not to want it too much.

We could program a computer to be rewarded by ice cream. Could it ever get to the point where it felt guilty about that, or uneasy about it, etc.? And isn't that consciousness, at some level--that level of desiring? Not just desiring, but having desires about our desires.

Melanie Mitchell: Yeah. I think that's a big part of consciousness: having awareness of our own awareness, having desires about our desires, having emotions about our emotions, etc.--all that meta- kind of stuff. Our intelligence, in humans, has evolved for specific purposes, I think. And it's not necessarily true that any intelligence would have the same kind of purpose that we have, the same kind of architecture, desires, or goals or whatever. But, anything that's going to be in our human world, that's going to be driving around in our human world with other humans or being our virtual assistant or what have you, that thing is going to have to have some human-like qualities to be able to deal with us. So I think that whatever we build, if we want it to be useful to us, is going to have to have the kind of human understanding that we have.

1:02:43

Russ Roberts: Early on in your book you say something, it's almost--you say in passing, and it actually jarred me. It made me think a lot. You said, 'Google is an applied AI company.' And I was thinking: What an extraordinary thing this is, that this company that started off as a, quote, "search engine," a particularly lovely and useful thing, has transformed itself in this either ultimately terrifying way or extraordinarily exciting way into a million other things. Right? And that is exactly what it is: It's an applied AI company.

And it's unusual how much research is going on right now inside profit-driven companies, which I think is glorious--mostly. I'm a little uneasy about it because I'm worried about the feedback loops; but listeners know about that and we'll leave that alone. But the point is that a lot of fundamental work--Ray Kurzweil works at Google. Which is nuts. It's not obvious that there's anything profitable about his vision. But he's a really smart guy, so they put him on the payroll. And I have a lot of friends who work there who are just really smart, and they do sort of think-tank things within this unimaginably profitable company, because it can afford to have folks like that whose work might not turn out to have practical applications. They don't really care; they like to be around them.

So, here's this strange company, and my question is--and they're not alone. They're not close to alone: What's going on in China? We had Amy Webb talking about that. It's going on at Apple. It's going on at Facebook. Etc. Is this--how scared are you about the implication of this for humanity? Is this a somewhat troubling thing, deeply troubling, or are you not troubled at all?

Melanie Mitchell: I would say somewhere between somewhat troubled and deeply troubled. There's just a few companies that have so much power--because they have so much data--that's one thing--and they have so much kind of control over what we see, what we think, what we do. I find that really troubling. But on the other hand, as you say, in the past it was unusual for big companies to participate in basic research, and that we're seeing a lot of that at these companies. And they're doing great work.

Russ Roberts: Yeah. Good for you, good for your students.

Melanie Mitchell: Yeah. My students all are working at big companies and, you know, working on really interesting problems, doing things they want to do and solving important problems, I think.

But, you know--I don't know. I don't really trust these big companies, whose motive is profit, to do the right thing. And a lot of them are doing what I think is not the right thing. I mean, there's a lot of argument about what is the right thing.

But I think there's a lot of potential danger from AI--not that we're going to get super-intelligent AI, a billion times smarter than we are, that's going to enslave us, but more that we're deploying AI that is not up to the task, that is not general or smart enough to be autonomous.

So, Pedro Domingos, who I think you said you interviewed, had a great quote that I put in the book, which is that, 'It's not that AI is too smart and going to take over the world. It's that it's too dumb and it's already taken over the world.' And I totally agree with that.

Russ Roberts: You give the example--it's really chilling--of two photographs. You look at it; and literally to the human eye--not just like, 'Oh, there's a hidden thing that you have to look for a while to see it,'--you can't see anything, at first glance for sure, maybe many glances. It looks like the same photograph. But a handful of pixels have been altered, and the algorithm misidentifies the photograph radically. It calls the school bus an ostrich.

Melanie Mitchell: Yeah.

Russ Roberts: The opportunity for human beings to use AI maliciously, malevolently--forget the kind of bias issues we talked about that are worrisome and troubling--but the opportunity for people to deliberately steer things in ways that would be destructive. I worry a lot about the next Presidential election and the one after that where the ability to create video and photographs that will be indistinguishable from other actual news[?] footage is going to be hard to resist for folks. Both the creators and the viewers.

Melanie Mitchell: Right. So, you're talking about a couple of things there. One is the sort of deep fake or fake media like videos. And even language now. We have these language generators that are quite convincing. And how to detect that something is real or fake. That's going to be harder and harder. So, that's one problem.

The other problem is the ability that humans have, especially if you know something about AI, to fool AI systems--like facial recognition systems or object recognition systems or even language interpretation systems--by subtly changing their inputs in targeted ways. Those are called adversarial attacks, because an adversary can attack an AI system. And people have shown that it's actually not that hard to do. So, the systems are not reliable in that sense if humans are out to get them.

And the other thing you're talking about is the systems can fool us. So it kind of goes both ways. So, yeah: I think that the potential for malicious uses of AI systems is the thing that Hofstadter should be terrified about, not that AI systems are going to take away our humanity.
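
The 'subtly changing their inputs' that Mitchell describes can be as simple as a one-step gradient trick; one of the simplest published attacks is the fast gradient sign method, which she does not name here. A minimal sketch, assuming a differentiable PyTorch classifier:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge every pixel a tiny amount in the direction that increases the
    classifier's loss. A small epsilon keeps the change imperceptible to a
    person while often flipping the predicted class."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage, given a trained `model`, an `image` tensor of shape
# [1, 3, H, W] scaled to [0, 1], and its integer `label` tensor:
# altered = fgsm_perturb(model, image, label)
# print(model(altered).argmax(dim=1))  # often no longer the true class
```

The school-bus-to-ostrich example mentioned above comes from the same family of findings: pixel-level changes invisible to a person can move an input across the model's decision boundary.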

Russ Roberts: A lot of people--I think it's absurd and I find it almost offensive--they say things like 'Well, we just need to teach AI researchers more ethics. Make them take a course in ethics so that they'll know about the right thing to do.' That strikes me as the wrong way to solve this problem.

Melanie Mitchell: Yeah. Wrong in so many ways. For one thing, whose ethics? For another thing: ethics is a very complex conceptual thing, and computers have no concepts. They don't have--understanding ethics and being ethical is maybe equivalent to being intelligent, in a way. I don't think that you can learn ethics on the side at the same time as you're learning self-driving--learning how to drive on a highway.

Russ Roberts: I was thinking more about your students--that, when you teach them, you should make sure that they know to do the right thing. The sort of Google 'Do no evil.' And, there's a billboard on the 101 in the Bay Area which I love. It has the Google slogan, 'Do no evil,' and they've crossed out that word 'Do.' It says--no, I'm not quoting the slogan right. What's the Google slogan?

Melanie Mitchell: 'Don't be evil.'

Russ Roberts: 'Don't be evil.' Don't be evil, as if that's enough. 'Just tell them and explain it to them, that it's bad to be evil.' And of course that's not enough.

Melanie Mitchell: Right.

Russ Roberts: But the billboard, actually, they've crossed out the word 'Don't' and put 'Can't'--'Can't be evil.' That strikes me--and it's obviously an ad for some piece of software or project that is going to put some kind of ethics built into it rather than just relying on the good-natured training of the programmers.

Melanie Mitchell: Yeah. Now, in academic computer science to be accredited as a computer science program you have to offer an ethics class. It's required.

Russ Roberts: Oh, phew! Now, we don't have to be worried anymore.

Melanie Mitchell: Right. So, I don't think that's going to solve many problems. It's more of a systemic issue about how our society is organized.

1:10:57

Russ Roberts: So, three of the smartest people in the world, at least on paper--Stephen Hawking, Elon Musk, and Nick Bostrom, a guest on this program--are worried about some aspect of AI run amok. Your book and all of our conversation so far have suggested either that they're just simply wrong, or that it's so far away in the future that it's not the thing we need to be worried about. How would you react to their level of anxiety?

Melanie Mitchell: Probably both of those things. I think even though they're among the smartest people in the world, probably a lot smarter than I am, they don't understand intelligence. And I don't claim that I understand intelligence either, so maybe I'm wrong; but the idea that they have is that you can have super-intelligent AI that is just missing--all it's missing is the sort of alignment, as they call it, with our values.

And so what we need to do is make sure that it's aligned with our values, as if the alignment with values is just this malleable switch that you can turn on and off, or the system could learn about values, rather than thinking that an intelligent system is actually a very complicated thing that sort of develops in a society, in a culture, that isn't just sort of created de novo, and would develop values through being embedded in the culture.

I mean, that's kind of my view, and I think that they have too simplified an idea of intelligence. I wrote a New York Times op-ed about this recently, and in there I quoted Stuart Russell, who wrote a book called Human Compatible about aligning AI's ethics, or values, with ours. And he said, 'Well, what if we had a super-intelligent AI and we charged it with the problem of solving climate change, and it decided that the best way to reduce carbon would be to kill off all the humans.' So, the idea there is that we have a superintelligent AI--it's superintelligent--but at the same time it doesn't figure out that human life is something we might want to preserve. That just seems crazy to me, and it just seems like a misconstrual of the word 'intelligent.'

1:13:39

Russ Roberts: Let's close and talk about the anthropology of your field. I'm an outsider. You know, I like talking to you. I like talking to people in the field. It's clearly an important thing. It's growing. For all of the pessimism that I've talked about that I've gotten out of your book, it's an incredible human achievement, what we've been able to do with AI to date. So, I certainly don't want to denigrate that. I think most of what it does is wonderful. But in your field, my perception--so tell me if I'm right or wrong--is that there's this sort of bifurcation between these optimists and--maybe you call them realists or pessimists? I don't know--about what's potentially going to happen and when.

Melanie Mitchell: Skeptics.

Russ Roberts: Skeptics, okay. So, give me a little bit of the lay of that land and what kind of reaction your book is getting because of that.

Melanie Mitchell: Yeah. I think there is kind of a split, and it's a little more nuanced than that. I think most people in AI would agree that we're pretty far from what we might call human-level AI, but disagree on what that actually is. What is human-level intelligence? That's a big, big question. And also disagree on, like, what should the field be aiming towards. Should it be aiming towards some general AGI, Artificial General Intelligence, or should it be focusing more on the kinds of narrow AI we have now? And also just how to get there.

There's this big fight that's been going on for at least the last 40 years probably about innateness versus learning. So, it's like the nature versus nurture debate. How much should we build into an AI system in terms of programming and knowledge, versus how much should we let it learn on its own? And there's problems with each approach. But that debate just goes on and on. So, I think the field is quite diverse in people's opinions. There's been some attempts to survey people in the field as to, like, when are we going to have human-level AI; and it's just like a uniform distribution essentially.

Russ Roberts: Whatever that means. What I love about this conversation is that I now know that that's not even a meaningful question, and that that survey is just a clickbait opportunity for somebody.

Melanie Mitchell: Yeah. That's right. That's right. We just don't have a clue.

So, to me what's exciting about AI is the insight that it can give us about our own intelligence and about sort of the nature of intelligence in general. So, I get as excited by the failures as I do by the successes, because I think we can learn something from them--maybe even more; we learn more from the failures than we do from the successes. I love being able to talk to my phone and have it transcribe what I say. I love getting into my car and having it figure out a route for me to take. There's a lot of things that I really--I use Google Translate.

Now, Douglas Hofstadter--I hope he's not listening to this, because if he is he'd be furious at me, because he hates all these things--but I benefit a lot from AI, and I think a lot of people do. And there's a potential to benefit even more. But there are also all these trade-offs and risks. So, it's like any other technology that's very successful. I think of, like, genetic engineering. There are some incredible potential benefits, but there are also huge risks; and we need people--not just in AI or executives at big companies, but in a lot of different fields--thinking about this and talking about it, and giving different perspectives. So, I'm hoping that will happen.

On my book, I've gotten a lot of good feedback. I've gotten a few people disagreeing with little things in the book, but overall I think people are saying, 'Well, thanks. It's really opened up my mind about what AI is and how it works and what its limitations and prospects are.' So, that was my goal.

Russ Roberts: My guest today has been Melanie Mitchell. Her book is Artificial Intelligence. Melanie, thanks for being part of EconTalk.

Melanie Mitchell: Thanks so much.