Tom Slee writes,

what I get from the Netflix prize is that there are probably significant limits to recommender systems. Even the smartest don’t do a whole lot better than the simple approaches, and a lot of work is required to eke out even a little more actual information from the morass of data.

This is from a much longer post, which I got to via Brad DeLong, by way of Tyler Cowen.

The gist of it is that Netflix offered a prize to anyone who could come up with a better algorithm for predicting which movies people will like. Slee gives all sorts of examples of suspect data, including users who have rated thousands of movies.

Slee and Cowen see this as a sign that recommender systems will fail. I totally disagree. My guess is that statistical models that recommend movies will succeed extremely well, but not when they are based on survey data.
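To make concrete what a "simple approach" looks like here, consider a minimal sketch of a baseline predictor: take the overall average rating and adjust it by how generous a given user is and how well-liked a given movie is. This is not Netflix's actual algorithm, and the names and ratings below are invented for illustration; it is just the kind of model that, per Slee, the fancier entries struggled to beat by much.

```python
# A minimal baseline recommender sketch (hypothetical data, not Netflix's
# method): predict a rating as the global mean plus a per-user offset
# plus a per-movie offset.
from collections import defaultdict

ratings = [  # (user, movie, stars) -- made-up survey-style ratings
    ("ann", "alien", 5), ("ann", "amelie", 2),
    ("bob", "alien", 4), ("bob", "amelie", 1), ("bob", "brazil", 3),
    ("cal", "amelie", 5), ("cal", "brazil", 4),
]

global_mean = sum(stars for _, _, stars in ratings) / len(ratings)

# Collect each user's and each movie's deviations from the global mean.
user_dev, movie_dev = defaultdict(list), defaultdict(list)
for user, movie, stars in ratings:
    user_dev[user].append(stars - global_mean)
    movie_dev[movie].append(stars - global_mean)

def predict(user, movie):
    """Baseline prediction: global mean + user offset + movie offset."""
    u = user_dev.get(user, [])
    m = movie_dev.get(movie, [])
    u_off = sum(u) / len(u) if u else 0.0
    m_off = sum(m) / len(m) if m else 0.0
    return global_mean + u_off + m_off

print(predict("ann", "brazil"))  # Ann has not rated "brazil" yet
```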

Suppose that instead of ratings, you asked consumers to vote with dollars. For any movie, a consumer who owns the movie can post an "ask" price, and a consumer who does not own the movie can post a "bid" price. To ensure that these prices are real, every once in a while you would fill all the orders for a particular customer; that is, you would buy the customer's DVDs at her ask prices and sell her DVDs at her bid prices.
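Here is a sketch of that mechanism, with customers, titles, prices, and the fill probability all invented for illustration. The one design point it captures is the incentive: because any posted quote might actually be executed, a customer has a reason to quote prices that reflect what the movie is really worth to her.

```python
# A sketch of the bid/ask elicitation described above (all data invented).
# Each customer posts ask prices on DVDs she owns and bid prices on DVDs
# she wants; with small probability her whole order book is executed,
# which is what keeps the quotes honest.
import random

random.seed(0)

# customer -> {"asks": {movie: price}, "bids": {movie: price}}
books = {
    "ann": {"asks": {"alien": 6.00}, "bids": {"brazil": 4.50}},
    "bob": {"asks": {"amelie": 2.00}, "bids": {"alien": 8.00}},
}

FILL_PROBABILITY = 0.05  # "every once in a while"

def maybe_fill(customer):
    """With small probability, execute every order this customer posted:
    buy her DVDs at her ask prices, sell her DVDs at her bid prices."""
    if random.random() >= FILL_PROBABILITY:
        return None
    book = books[customer]
    bought = dict(book["asks"])  # titles we buy from her, at her asks
    sold = dict(book["bids"])    # titles we sell to her, at her bids
    return bought, sold

# The posted prices then double as revealed-preference data: Bob's $8 bid
# on "alien" says more than a five-star survey click would.
for customer in books:
    result = maybe_fill(customer)
    print(customer, "filled" if result else "not filled this round")
```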

My guess is that this approach would generate better data than Netflix's current process. Perhaps I am wrong about that. But the point is that with good data, meaning data based on revealed preference rather than survey ratings, recommender systems would likely be quite powerful. It's the garbage data, not the concept of statistically based recommendations, that limits the ability of the Netflix system.

The larger point, of course, is that subjective survey data tends to be garbage, which is one of my issues with happiness research.