Bryan asked his PhD students how the government should spend $1 billion most efficiently (in the Kaldor-Hicks sense). He posted the best answers here. I generally agree that subsidizing the decision to have kids would be a good use of the money. There were great comments, too, on subsidizing prediction markets for various questions.
I would modify Fabio Rojas’s answer ever so slightly: the subsidy should probably go to child-rearing rather than child-bearing. My impression is that the marginal non-parent is worried not about hospital costs but about the cost of child care. Child care subsidies would probably be more effective than lower out-of-pocket hospital expenses, but I could be wrong (if you’re a grad student thinking about what to study, notice that there’s a paper here).
So here’s my answer: subsidize the development of quantum and biomolecular computing, perhaps with large prizes (as Daniel Kuehn suggests). Why? Computing is a general-purpose technology that would presumably make just about everything we do much cheaper, including health care, energy development, retail, and all sorts of other things. It’s currently being developed by commercial enterprises, but even a speed-up of just a couple of years, or even a few months, might be more than worth the paltry sum of $1 billion. That would be about 1/7 of the NSF’s 2013 appropriation.
Kaldor-Hicks: it’s close to a general-purpose “public good” technology. Readers of Kahneman and others know that our information-processing capacity is not so great. More information-processing power means better decisions.
Utilitarian: Bryan points out that we should consider the preferences of people who will appreciate the endeavor, perhaps in an aesthetic sense. Pure science offers a lot of nonpecuniary “isn’t this awesome?!” benefits. Bryan also mentions a preferential option for the poor, given the diminishing marginal utility of wealth. This is where much more rapid, much cheaper information-processing technology really benefits the poor. People are poor in part because of very bad decisions. At the margin, better information-processing technology should benefit the poor more than it benefits the rich, since it might help them better understand and overcome the costs of impulsiveness and lack of conscientiousness. I admit this remains speculative, but better-developed quantum computing would, I think, bring us closer to the day when we have apps for Bayesian inference, continuous-time data analysis, and continuous-time updating of a dizzying array of probabilities.
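To make the “continuous updating” idea concrete, here is a minimal sketch of a single Bayes-rule update, the computation such an app would run over and over as new evidence arrived. The prior and likelihood numbers are made up purely for illustration.

```python
# Minimal Bayes-rule update: the kind of computation a "Bayesian app"
# would run continuously as evidence arrives. All numbers are made up.

def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior P(H | E) from prior P(H) and the two likelihoods."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start with a 10% prior and fold in three pieces of evidence,
# each twice as likely if the hypothesis is true as if it is false.
belief = 0.10
for _ in range(3):
    belief = bayes_update(belief, p_evidence_if_true=0.8,
                          p_evidence_if_false=0.4)
    print(f"Updated belief: {belief:.3f}")
```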
Here’s one example. Sites like Match.com and eHarmony.com will develop better matching algorithms, leading to better matching on preferences and characteristics that matter and, presumably, less wasteful signaling in the marriage market and lower divorce rates. Less divorce means less poverty, less emotional devastation, and fewer social problems if divorce has negative effects on the next generation.
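What would “better matching algorithms” look like under the hood? The sites’ actual algorithms are proprietary, so as a purely illustrative sketch, here is a toy version of Gale and Shapley’s deferred-acceptance procedure, the classic stable-matching algorithm from the economics literature (Ken P mentions it in the comments below); the names and preferences are invented.

```python
# Toy Gale-Shapley deferred acceptance: produces a stable matching,
# i.e., no two people would both rather be with each other than with
# their assigned partners. Purely illustrative; not any site's code.

def gale_shapley(proposer_prefs, receiver_prefs):
    """Each argument maps a name to that person's ranked list of names."""
    # rank[r][p] = how receiver r ranks proposer p (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)            # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                           # receiver -> proposer

    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]  # best receiver not yet asked
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                 # r was free; provisionally accept
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])        # r trades up; old match is freed
            engaged[r] = p
        else:
            free.append(p)                 # r rejects p; p tries again later
    return engaged

matches = gale_shapley(
    {"al": ["ann", "bea"], "bob": ["ann", "bea"]},
    {"ann": ["bob", "al"], "bea": ["al", "bob"]},
)
print(matches)  # {'ann': 'bob', 'bea': 'al'}
```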
If you’ve gotten this far, you probably have all sorts of criticisms. So here’s one question for readers: why won’t this work?
And here’s a bonus question for readers, based on a conversation I had with Bryan about a year ago: In the short run, Facebook will probably lead to an increase in the divorce rate. In the long run, it will probably lead to a reduction in the divorce rate. Why?
READER COMMENTS
Daniel Kuehn
May 29 2013 at 3:22pm
“In the short run, Facebook will probably lead to an increase in the divorce rate. In the long run, it will probably lead to a reduction in the divorce rate. Why?”
1. Spouses will learn more about each other
2. Spouses will learn more about each other
GregS
May 29 2013 at 3:24pm
“In the short run, Facebook will probably lead to an increase in the divorce rate. In the long run, it will probably lead to a reduction in the divorce rate. Why?”
My short answer: In the short run, Facebook increases the chances that cheaters will be caught by their spouses. In the long run, this effect will be anticipated and will deter cheating in the first place.
Pre-social media “events” may eventually be disclosed; post-social media “events” will be discouraged.
Daniel Kuehn
May 29 2013 at 3:24pm
Now there’s a study in that too – duration of relationships as a function of the year that Facebook was available at the alma mater (it rolled out gradually at first).
Daniel
May 29 2013 at 3:32pm
Here’s one answer to the bonus. In the short run, people will connect or reconnect with people they wouldn’t otherwise have met, people with whom they have more in common than their current spouses, and that will break up existing marriages. In the long run, people will end up with partners they are more compatible with (because of more shared interests, discovered through things like Facebook, Match.com, eHarmony, etc.), and thus won’t divorce as often.
Jason
May 29 2013 at 3:42pm
Pre-modern marriage was likely about system redundancy and reducing complexity.
In pre-modern marriage, random matching and fixed arrangements could work; individuals’ social bubbles might never meet.
With easily distributed communication and easy access to necessities, both system redundancy and the need for simplicity are dying.
Modern marriage is about love.
Love may be a necessity, but it may come from many small sources. Specialized love may be the future.
I love to play cards, but not with my wife. I love to cook, but not with my poker buddies.
Marriage may not last through specialization.
Politics Debunked
May 29 2013 at 6:04pm
re: “So here’s my answer: subsidize the development of quantum and biomolecular computing,”
It isn’t clear that the field currently has objective prize criteria that could be set (and you suggest none). E.g., speed in solving some particular problem isn’t yet a good indicator of whether a given approach will succeed in the long run, even if it does in the short run. In fact, technology built just to win the prize may take a detour to hit that short-term goal rather than follow the appropriate long-term path.
Without objective criteria, the prize risks going to the most politically connected groups, the ones that can capture the prize committee. I’d suggested prizes too, but tried to structure them to minimize the potential for political distortions. The halo of government endorsement of an approach may steer some in the private sector along that path regardless of its technical superiority, in part hoping for more government largesse in the future, and in part because some managers and investors have misguided faith in government.
That research is already being done, as you note, and companies will make a lot of money on it. As an entrepreneur who has also worked in tech research at big companies: why the heck do you want to waste money and politicize tech research? There are other areas of research; perhaps there is some hot, little-known area you aren’t aware of that is likely to have more impact.
re: “if you’re a grad student thinking about what you’re going to study, notice that there’s a paper here”
I’d suggest a better paper (though I admit this is a topic I haven’t looked into, so it may already exist) would consider whether those who wouldn’t have children unless paid to do so are likely to have offspring who also end up consumers of government subsidies rather than innovators. I’d suggest there are better ROIs.
re: “Why?”
The “why” seems to be that you gave in to the human tendency toward central planning: you seem to think you can judge which technology is best to pursue better than distributed experts in the private sector could if you instead found a way to outsource such decisions to them. Although the question is in some sense “what would you do if you were a central planner,” many answers tried to skirt that premise by finding ways to use the money to expand the role of the marketplace and minimize the role of central planning rather than taking it for granted.
The goal should have been to find ways to use the money to minimize central planning rather than to give in to its tempting siren song. It is natural for sharp people to have confidence in their own ability to decide something rather than letting a market do it.
As I noted in a post on this topic, Ronald Coase recently wrote in Harvard Business Review about many economists being out of touch with entrepreneurs and the business world. Free-market tech entrepreneurs want the government kept well out of anything related to technology. It has accidentally produced some wins (wins that researchers at private labs would likely have produced anyway), but there is no reason to increase its involvement.
Mark Bahner
May 29 2013 at 11:17pm
Here’s the way I look at it:
In 2015, computers will add the equivalent of 1 million human brains to the world. Nobody will be able to see the economic effects.
In 2024, computers will add the equivalent of 1 billion human brains to the world. World per-capita GDP will be increasing every year at a faster rate than has ever been achieved in human history (world per-capita GDP will be increasing by more than 7% per year).
In 2032, computers will add the equivalent of 1 TRILLION human brains to the world. World per-capita GDP will be increasing at 15% per year? 20%? More than 30%?
Why near-future economic growth will be spectacular
Your $1 billion is not going to change that basic scenario. The world (though not necessarily the U.S.) would be better off if the $1 billion were spent on the sorts of things that benefit the very poorest people in the world right now, like reducing malaria and diarrheal diseases.
David C
May 30 2013 at 1:11am
It’s hard for me to think of any industries where lack of computing power is a major constraint on productivity. In the areas where it is, the primary reason is that the computers are quite out of date. I’d say budget computers ($500-$1000) became powerful enough for most people’s everyday tasks about 10 years ago. For most people, the greatest processing power goes to video. Video game graphics are the biggest area where more processing power could be used, but only because developers keep increasing the complexity of their games’ graphics, not because that complexity is really crucial to the games’ enjoyment. Drug manufacturing has benefited greatly from increased computing power, but more power simply won’t help it very much from here. There are a few scientific experiments that require vast quantities of computation, but most of those have limited impact on our day-to-day lives.
David C
May 30 2013 at 1:41am
Your example shows how better computing algorithms could improve our daily lives. That’s a separate issue from processing power. I’m pretty sure Match.com and eHarmony have plenty of processing power to run their algorithms, which are probably very simple.
Ken P
May 30 2013 at 2:51am
“In the short run, Facebook will probably lead to an increase in the divorce rate. In the long run, it will probably lead to a reduction in the divorce rate. Why?”
Short run: Spouses will see they have more options.
Long run: Spouses will have made choices from a wider range of options.
This fits with the Nobel-winning work of Shapley and Gale. A major weak point in this perspective is that it assumes human mate preferences are static and that aggregate preferences suffice.
In the real world, “things change” (we’ve all given or been given that speech). Also in the real world, the heterogeneity of mate preferences is important. A spouse’s weakness in a particular area may detract from long-term sustainability despite an apparent higher short-term aggregate value.
A major drawback with Match and other online dating sites is that they get courtship out of order by starting with higher brain compatibility.
Ken P
May 30 2013 at 3:10am
The problem with prize money is that it would not be immune from cronyism and groupthink. Breakthroughs typically emerge from perspectives not shared by the status quo.
Floccina
May 31 2013 at 10:20am
In the area of divorce, Facebook provides access to more potential partners, which could lead to more divorce among the already married but better selection and matches for future marriages.
Mark Bahner
Jun 1 2013 at 12:12am
I predict the first mass-produced, completely computer-driven cars will be on the road within the next 10 years, and that within 30 years virtually every vehicle on every road will be computer-driven.
Just in that one area, I think the changes will be so profound that the world will be hard to predict. I predict that virtually every Walmart, Target, Kroger, Food Lion, etc. etc. will be shut down, as goods will be delivered by automated delivery trucks from warehouses that are entirely automated (and probably unlighted and unheated):
The Future of Transportation
If the trends of the last 40+ years continue, within a decade a computer with the processing power of a human brain (1 petaflop) will cost $1000. And a decade after that, it will cost $1. That, as they say, will change everything.
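A quick back-of-the-envelope check of that arithmetic: a 1,000x fall in cost per decade, roughly the trend Bahner invokes, implies computing gets about 2x cheaper every year. A sketch (the starting price and the petaflop-per-brain equivalence are Bahner's assumptions, not established facts):

```python
# Back-of-the-envelope: a 1,000x cost decline per decade implies
# 1000 ** (1/10) ≈ 2x cheaper every year. Starting price hypothetical.

factor_per_decade = 1_000
annual = factor_per_decade ** (1 / 10)          # ≈ 1.995

cost = 1_000.0                                  # $ per petaflop, year 0
for year in range(0, 21, 5):
    print(f"Year {year:2d}: ${cost / annual**year:>12,.2f} per petaflop")
# Year 10 lands near $1, matching the $1000 -> $1 claim above.
```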
No more people owning riding lawnmowers (because one computer-driven mower will mow all the lawns in the whole neighborhood). Houses will be built in a couple of days, or even hours, by dozens or even hundreds of robots running 24/7. All these things are just decades away.
P.S. And my Future of Transportation post didn’t even cover the distinct possibility of personal aircraft. A small airport near where I am right now probably has at most 50-100 flights a day taking off. I could easily see such an airport handling 1000s of flights a day with computer-driven aircraft. (Something that the people who bought houses near that airport never even dreamed of when they bought their houses.)
David C
Jun 1 2013 at 1:32pm
Mark, the Google car is another example of a device where additional processing power would provide little benefit. The vehicle costs around $150,000, and $70,000 of that goes to a laser radar (LIDAR) sensor system, not the computer. The primary difficulty with such systems is writing the code to interpret all that sensor information and determine actions. As the code gets better, they’ll be able to reduce the number of sensors, and the cost of the car will go down.
Mark Bahner
Jun 2 2013 at 10:47pm
Humans don’t have LIDAR, but they manage to drive all the time, so LIDAR isn’t absolutely essential to driving. Computers aren’t nearly as good at pattern recognition as we are, but they will be soon.
And the advantage of computers over humans is that the computer never gets drunk, distracted, impulsive, or sleepy.
Here’s a Wired article claiming that Cadillac will come out with a mass-produced, fully automated car by the end of this decade.
Wired article on autonomous cars
That same article has an IEEE prediction that 75% of the cars on the road by 2040 will be automated. My guess is that the percentage of miles driven by autonomous cars will be higher than that.
P.S. Also from that article: “The Google cars are based on very precise maps and they have sensing primarily based on a LIDAR technology,” he told Wired. “The cars that we tested on the route from Parma to Shanghai had no maps, and had sensing primarily based on cameras. In both cases, the cars have no help from the infrastructure.”
P.P.S. It’s easy to imagine the near future when there’s a video camera in every street light, feeding data on movement near the road to all cars within a mile or two away. That would mean that the car itself wouldn’t necessarily need to see the object that moved into the road. Instead, the information would be fed to the car when it was still a mile or more away.
David C
Jun 6 2013 at 1:16am
@Mark, I don’t understand the purpose of your comments. I argued that increased computer processing power won’t have a significant impact on our lives. You then pointed to the Google car as something that would have a substantial impact. I then argued that computer processing power is not a significant reason the Google car is not yet widespread. Now you seem to be arguing that the Google car will one day be widespread, and that this will have a significant impact. I never challenged either point.
jpa
Jun 12 2013 at 4:12pm
If you wanted to increase computing power, you should focus your R&D budget on IO, not CPU. Most computing problems are IO-bound (by cost or speed), not CPU-bound. The best use of the billion dollars would be for the government to pre-commit to buying the first storage technology that reaches certain specs on the far left of the technology adoption curve. The reason this beats the status quo is that investors and storage vendors know the real money is in the fat part of the adoption curve, so everyone under-invests in commercializing new storage tech until mainstream adoption is within a five-year window.
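A minimal sketch of the CPU-bound versus IO-bound distinction jpa is drawing; the workload sizes are arbitrary and the timings are machine-dependent, but the point is that the two bottlenecks are measured, and attacked, separately:

```python
import os
import tempfile
import time

# Illustrative only: a CPU-bound task versus an IO-bound task.
# Absolute timings vary by machine; the two bottlenecks are distinct.

N = 10_000_000
start = time.perf_counter()
total = sum(i * i for i in range(N))   # CPU-bound: pure arithmetic
cpu_s = time.perf_counter() - start

payload = b"x" * (100 * 1024 * 1024)   # 100 MB of dummy data
with tempfile.NamedTemporaryFile(delete=False) as f:
    start = time.perf_counter()
    f.write(payload)                   # IO-bound: push 100 MB to disk
    f.flush()
    os.fsync(f.fileno())               # force it past the OS cache
    io_s = time.perf_counter() - start
os.remove(f.name)

print(f"CPU-bound loop: {cpu_s:.3f} s")
print(f"IO-bound write: {io_s:.3f} s")
```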