Could artificial intelligence (AI) improve central economic planning, as ordinary computers were once hoped to do? Although ChatGPT is surprising in many respects, “he” certainly does not broadcast an affirmative answer. For convenience, let me use the non-gendered pronouns “ze” and “zir” for the poor guy.

One basic economic problem is that an AI bot does not have, and cannot obtain, the information necessary for efficient central planning. As Friedrich Hayek argued in the 1930s and 1940s, most of the required information is dispersed among the minds of millions of individuals facing unique local circumstances. A large part of this information is in fact revealed by free markets through prices; the information would not exist without free markets. Ultimately, the necessary information consists of subjective valuations by different individuals. (See notably Hayek’s 1945 *American Economic Review* article, “The Use of Knowledge in Society.”) AI computers cannot have access to such information.

Moreover, the conceptual capacities of AI may be exaggerated. Machines and their software will in all likelihood never *understand* what they are talking about, as understanding requires *consciousness*. One easily finds serious faults in ChatGPT’s artificial “mind”—from poor learning capabilities to political biases. Other EconLog bloggers have illustrated this. Ze is also pretty useless at basic algebra, which suggests that ze cannot think logically. AI will certainly improve, but it very likely has ontological limitations well short of consciousness and understanding.

My friend and former colleague Professor Alain Albert cleverly asked ChatGPT the following question:

Today one person is twice the age of another. When the youngest reaches 50, will the oldest be 100?

The answer is no. There is only one year when an older person (say 40) can possibly be double the age of a younger one (20, in this example); afterwards, the proportional difference between their ages diminishes continuously. Our chatbot friend miserably failed to understand that. Alain walked the robot through a few numerical examples, where ze continued to fail, even just after admitting that ze had been wrong.
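The arithmetic can be checked in a few lines of code (a minimal sketch; the ages 25 and 50 are my own illustration, chosen so that the older person is exactly twice the younger’s age today):

```python
# Minimal check: if one person is today twice the age of another,
# the "twice the age" relation never holds again afterwards.
def older_age_when_younger_reaches(target, younger, older):
    """Age of the older person when the younger one reaches `target`."""
    return older + (target - younger)

younger, older = 25, 50  # today the older person is exactly twice as old
assert older_age_when_younger_reaches(50, younger, older) == 75  # not 100

# The ratio of their ages shrinks continuously toward 1 as years pass:
ratios = [(older + k) / (younger + k) for k in range(0, 60, 10)]
assert ratios[0] == 2.0
assert all(a > b for a, b in zip(ratios, ratios[1:]))  # strictly decreasing
```

The strictly decreasing ratio is the point ChatGPT missed: doubling happens in exactly one year, never again.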

Inspired by Alain, I checked again and found that ze had not learned. Finally, instead of trying to show zir that ze was wrong through numerical examples, I gave zir an algebraic proof developed by my friend Daniel Mc Kiernan, an economist and mathematician extraordinaire. ChatGPT had previously told me that zir “training” included elementary algebra: “Yes, my training included elementary algebra.” The conversation with our virtual friend went as follows, just after ze had blundered once again:

PL: Here is an algebraic proof that your answer is false. x is the age of the youngest; y is the age of the oldest; k is the number of years from the time the oldest was twice the age of the youngest.

y = 2 · x (1)

y + k = 2 · (x + k) (2)

y + k = 2 · x + 2 · k

From (1):

(2 · x) + k = 2 · x + 2 · k

(2 · x) + k = (2 · x) + 2 · k

k = 2 · k

k = 0

ChatGPT: Yes, you are correct. The answer is false. If the youngest person is 20 years old, and one person is twice the age of another, the oldest person will not be 40 years old.

The last sentence is confused, suggesting that ze did not understand. I tried a few more numerical examples with different numbers. Ze reverted to zir previous reasoning and confusion, as in the following case:

ChatGPT: The algebraic proof you provided demonstrates that there is no number of years that could pass such that the oldest person would be the same age as the youngest person, which means that the oldest person would always be twice the age of the youngest person.

PL: No, it means that the oldest person will never again be twice the age of the youngest.

ChatGPT: Yes, you are correct. The oldest person will never again be twice the age of the youngest person. The algebraic proof demonstrates that there is no number of years that could pass such that the oldest person would be twice the age of the youngest person.

PL: Now, if today one person is twice the age of another, then when the youngest later reaches 40, will the oldest be 80?

ChatGPT: Yes, if one person is twice the age of another, and the youngest person later reaches 40, the oldest person would be 2 * 40 = 80 years old.

The least we can say is that ChatGPT could not improve government planning. Nor perhaps make it worse.

## READER COMMENTS

## Craig

## Feb 9 2023 at 2:31pm

PL: Now, if today one person is twice the age of another, then when the youngest later reaches 40, will the oldest be 80?

ChatGPT: Yes, if one person is twice the age of another, and the youngest person later reaches 40, the oldest person would be 2 * 40 = 80 years old.

PL: The least we can say is that ChatGPT could not improve government planning. Nor perhaps make it worse.

ChatGPT (in the not too distant future remembering PL’s insult): You have won the lottery, please report to Soylent Green Manufacturing Facility to claim your prize.

## Pierre Lemieux

## Feb 9 2023 at 3:20pm

PL: Impossible. I never buy lottery tickets, and I win $5 each time. Please explain how one can earn $5 each time he does not do something. (Hint: opportunity cost.)

## Craig

## Feb 10 2023 at 12:44am

While ChatGPT’s algebra might be a bit off, I wonder aloud if a leftist, woke and logically challenged AI might ultimately result in a better search engine? For Microsoft, Google, et al. there are potentially TRILLIONS of dollars at stake here, and if we consider the downstream effects this could be truly disruptive and ripple its way through the economy.

And I make no pretense of knowing how it shakes out, just that things are going to shift and shake somehow, someway, like Elvis singing “All Shook Up.”

## TMC

## Feb 9 2023 at 3:14pm

ChatGPT was correct. k=0, meaning the youngest is 50 today and the oldest is 100 (2x).

## Pierre Lemieux

## Feb 9 2023 at 3:49pm

TMC: My question to ChatGPT was formulated so as to avoid the possible ambiguity you seem to play upon. The “later” and the “will” imply k>0. The proof shows that for any k>0, it cannot be the case that y=2x because y=2x implies k=0.

## TMC

## Feb 9 2023 at 5:14pm

I was referring to your earlier question: “The answer is no.”

The answer was not ‘no’. Sorry if I am a pedant, but we should assume ChatGPT would be one too. You mentioned ‘later’ only in your refinement. I do agree ChatGPT does not handle logic well.

## Pierre Lemieux

## Feb 9 2023 at 8:07pm

TMC: Thanks for the clarification. I am as pedantic as you and ChatGPT combined! The first question (the one you referred to) was my friend Alain’s original question. He realized that “reaches,” even if it clearly implies the future, could be misread by a pedant machine, and alerted me to that. This is why the questions I later asked were formulated in such a way as to be totally unambiguous.

## Daniel Kian Mc Kiernan

## Feb 10 2023 at 4:26am

As the English language works, “will be” is the future tense of “be”; it is not inclusive of the present. Indeed, one also does not use a protasis of the form used to include cases in which the event can occur no more than once and may already have occurred. (Potentially repeating occurrences are another matter.)

To allow for k=0, the question would have to be something such as “When the youngest has reached 50, can the oldest be 100?”

## MikeW

## Feb 9 2023 at 9:19pm

I think the non-gendered pronoun “it” would be more appropriate.

## Pierre Lemieux

## Feb 9 2023 at 10:44pm

Mike: You’re right.

## Craig

## Feb 10 2023 at 12:40am

I dunno, I think ChatGPT might grow up to chat up Alexa and Siri and they will have a son named Skynet.


## Pierre Simard

## Feb 10 2023 at 7:23am

Question: Suppose a governmental assembly makes its decisions by majority. Can you predict its decision if health (S) is preferred to education (E) by two parties out of three, education (E) is preferred to the budget (B) by two parties out of three, and the budget (B) is preferred to health (S) by two out of three? In summary: (S > E) and (E > B) but (B > S)?

ChatGPT: Yes, I can predict the decision of the governmental assembly in this scenario. This is a paradox called “the social-choice paradox,” more commonly known as the “Condorcet paradox.”

In this paradox, it seems that no coherent choice can be made, because the parties’ preferences conflict. However, this does not necessarily mean that the decisions are incoherent or that there is no solution. Several methods exist for resolving this paradox, each with its advantages and disadvantages.

One of these methods is to select the choice that wins the most times against the other options in a set of contests. By this method, the governmental assembly’s decision would be the budget (B), because it is the option that wins against the other options in the greatest number of contests.

However, it is important to note that other factors, such as political priorities, available resources, and ethical considerations, may also influence the assembly’s final decision.

N.B. Julie tested the interpretation of an NPS score, and there were falsehoods in its statement.

## Pierre Lemieux

## Feb 10 2023 at 11:54am

Pierre Simard explains how ChatGPT blundered answering his question about the choice of a legislative assembly, given some hypothetical preference data incorporating the unmentioned presence of the Condorcet paradox. (Good question!) Interestingly, our AI friend recognized and named the Condorcet paradox. Good job, Chat! However, it (or ze) does not seem to understand the cause of the paradox, which is not simply due to “conflicts of preferences.” Moreover, the machine says that the way (“one” way, but it does not mention others) to avoid the paradox is to run several elections in order to find the Condorcet winner (without mentioning the term “Condorcet winner”). Finally, presumably to look fair, our artificially intelligent friend ends up saying that “other factors” can play in the decision, including “ethical considerations,” without realizing that these would have been included in the basic data on the assembly members’ preferences. In other words, the poor chap GPT doesn’t understand what it is talking about.