
Those who read Tyler Cowen’s and Alex Tabarrok’s Marginal Revolution blog regularly, as I do, know that Tyler is a big fan of artificial intelligence (AI). Partly due to his posts and partly due to rave reviews by friends on Facebook, I’m realizing that I need to use it more.
Having said that, I want to comment on a recent post by Tyler in which he linked to an analysis, done by OpenAI’s Deep Research tool, of the costs and benefits of USAID. Here’s what Tyler asked it to do:
What are the best sources to read on US AID, and its costs and benefits? I want serious analyses, based on evidence, data, and possibly economic models. How does the program fare in cost-benefit terms? Please try to look past the rhetoric on both sides of the debate, pro and con, and arrive at an actual assessment of the agency in net cost-benefit terms. A five to ten page paper should be fine, with full citations, in any style.
Notice that Tyler asked it to “look past the rhetoric.” Since rhetoric, as Deirdre McCloskey often reminds us, is “the art of effective or persuasive speaking or writing,” that is, the art of arguing, it doesn’t make sense to ask AI to avoid rhetoric. To avoid rhetoric is to avoid making an argument. To assess costs and benefits is to argue.
Maybe, though, Tyler expects that AI will have the same mistaken idea about what rhetoric is that most of the public has. So maybe it’s not a problem, although I think it is; see below.
Here’s what I noticed. Deep Research’s answer is an argument. And not only an argument but also one that is somewhat one-sided. Here for instance is how it deals with the idea that there may be downsides to some of USAID’s subsidies and interventions:
Democracy and Stability: The absence of USAID’s democracy programs is harder to game out, as changes in governance are path-dependent. In some cases, local forces for democracy might have prevailed even without external help (e.g. Eastern Europe’s desire to join the EU was a strong motivator). However, it is likely that progress would have been slower. Without technical support for elections and civil society, nascent democracies might have faltered or seen more contested processes. In places like Kenya in 2013, for instance, U.S. support to election commissions and peacebuilding helped avoid violence; without that, a repeat of the 2007 post-election violence could have occurred. On the other hand, one could argue that in certain countries, absence of U.S. political aid might have reduced suspicion of foreign influence and could have led to more organic change (a point critics raise, though evidence is scant either way). By and large, the counterfactual suggests that the world would not be more democratic had USAID never engaged – in fact, some gains in freedom and rights would likely be absent.
Notice that it doesn’t discuss the idea that USAID might have been used to overthrow governments. I don’t know if it’s true that USAID money was used to help overthrow Bangladesh’s government. This piece in The Times of India says that it might be true. But notice that Deep Research doesn’t even raise the issue.
Its treatment of other issues is similar. It takes a charge against USAID, vaguely suggests how it might be true, and then says that things are improving.
Also, it literally doesn’t mention some of the misuses of the money that the DOGE people have highlighted. Maybe the direction to avoid rhetoric was taken as a direction to avoid mentioning criticisms for which the critics stated their case passionately. So maybe Tyler shouldn’t have asked it to avoid rhetoric.
I’m not saying that the Deep Research approach is totally wrong. I’m simply pointing out the limits and expressing my skepticism. To his credit, Tyler’s mention of other sources means that he is not taking Deep Research as the last word on the subject either.
READER COMMENTS
steve
Feb 11 2025 at 7:45pm
The trans opera was funded by the State Department and not USAID. I think some of your comments about the AI are valid, but you seem to be taking stuff said by DOGE/Trump as completely factual. I am always suspicious when a group that clearly has a political bias is doing an investigation and selectively leaks findings so have found it worthwhile to confirm them. That goes for liberals, libertarians, church officials, military, whoever. YMMV.
https://www.factcheck.org/2025/02/sorting-out-the-facts-on-waste-and-abuse-at-usaid/
Steve
Monte
Feb 11 2025 at 8:42pm
In summary, ChatGPT stated: “Based on the analysis above, the net assessment leans toward the conclusion that USAID’s benefits outweigh its costs on the whole, though with important qualifiers by sector and context.”
There have been numerous reports of misallocation or diversion of USAID funds and resources. Consequently, I asked ChatGPT (based on its analysis) if it had cross-referenced the available data to ensure the accuracy of its conclusions. It gave the following response:
Craig
Feb 11 2025 at 10:55pm
“To his credit, Tyler’s mention of other sources means that he is not taking Deep Research as the last word on the subject either.”
That’s good because while useful to point the way at times, they do seem to hallucinate.
Daniel Kuehn
Feb 13 2025 at 1:23pm
I have my own doubts about AI for these purposes, but the fact that it provides you one answer that rules out other answers and the fact that the answers it rules out include things you might be persuaded by is not in itself a mark against AI. The whole point of AI is to be discriminating in its assessment. Unless you’ve established that it has made an error, I don’t see why the mere fact that it discriminates between different potential answers is a problem.
Let’s take your DOGE example. First, perhaps that is not a “misuse” of funds at all. You think it is because, I guess, you don’t like how the money was used, but that doesn’t mean it’s misuse; it just means David doesn’t like it. Arguably, these arts outreach efforts support democracy and stability. Perhaps AI is correctly discriminating between a wrong view (your view) and a correct view.

But even more importantly, AI might be ruling your example out and not reporting it because it is correctly identifying the fact that you are factually wrong to even call your example an example of USAID funding. After all, it was reported immediately after DOGE mentioned this that the grant did not come from USAID at all; it came from the Department of State’s diplomatic and consular programs. You’re just one man, so maybe you can’t follow up on all these things to figure out which claims are true and which are not, but that’s precisely why it might be nice to have an AI tool that can aggregate all of that information from sources with more correct information than your sources.

This is a general point beyond your Colombian musical example. With every DOGE announcement of wasteful spending we get a subsequent barrage of fact checking showing how DOGE is misrepresenting and distorting facts. Gaza condoms, Politico subscriptions, etc., etc. — all of these have been shown to be misrepresentations of the facts by DOGE. Perhaps AI (like most Americans, I think) is correctly identifying DOGE announcements as an unreliable source.
You might not like any of this because you may assess all of this differently, but if you are going to criticize AI simply because it gives you a different answer or because it does not weight all answers equally, why seek out an independent perspective in the first place?
Daniel Kuehn
Feb 13 2025 at 1:26pm
Another way to put my point is that if you don’t want AI excluding things that it doesn’t think are reliable but that you think are reliable, then the end of the prompt should be something more like “give me answers that David Henderson thinks are reliable.” And then it would be a sort of filtered search that wouldn’t frustrate you so much. But I think the whole point is that you want to aggregate assessments of people that might know better than you, right?