A comment from Bill made me recall how I'd looked at markets as an algorithm to solve a compressed sensing problem before starting this blog. This is mostly a placeholder post for some musings.
The idea behind "compressed sensing" is that if the signal you are trying to sense is sparse, then you don't need as many measurements to "sense" it, provided you measure it in the right basis¹. A typical example is a sparse image that looks like a star field: a few points (k = 3) in mostly blank space (first image above). If you were incredibly lucky, you could measure exactly the three points (m = k = 3) and reproduce the image. In general, however, information theory tells us that we need (see e.g. here [pdf]):
m > k log(n/k)
measurements. As what you are trying to measure gets more complex, you start to need all of the points (m ~ n), which is the regime behind the Nyquist sampling theorem. You can think of the economic allocation problem as fairly sparse: most of the time, any one person is not buying bacon (note the diagram on the upper right if you are viewing this on a desktop browser). And the compressed measurement (the m measurements) happens when you read off the market price of bacon².
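To get a feel for how favorable that bound is, here is a quick back-of-the-envelope calculation. The image size n and sparsity k are toy numbers I'm choosing for illustration, not values from the post:

```python
import math

# Toy "star field": k nonzero pixels in an n-pixel image
# (illustrative numbers, not from the post).
n = 64 * 64   # total pixels in the image
k = 3         # number of bright points

# Information-theoretic bound on the number of measurements:
# m > k log(n / k)
m_min = k * math.log(n / k)
print(f"n = {n}, k = {k}: need m > {m_min:.1f} measurements")
```

About 22 measurements instead of the 4096 pixel samples a Nyquist-style approach would demand.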
There are different algorithms that take advantage of knowing your image is sparse. One of the least efficient is linear programming. Sound familiar? That's also a very inefficient way to solve the market allocation problem.
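The linear programming approach is usually called basis pursuit: minimize the L1 norm of the signal subject to matching the measurements. Here is a minimal sketch using scipy's LP solver, with toy problem sizes (n, k, m) that I've picked for illustration:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n, k, m = 100, 3, 25            # signal length, sparsity, measurements (toy sizes)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                             # the m compressed measurements

# Basis pursuit: min ||x||_1 s.t. A x = y, written as an LP by splitting
# x into nonnegative parts x = x_plus - x_minus.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With m = 25 measurements of a 3-sparse length-100 signal, the LP recovers the signal essentially exactly, despite m being far below n.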
The algorithms that solve the sparse problem also have a tendency to fail if m is too low or if you add noise to the image (second image above). Additionally, the transition from failure to success can be fairly sharp, referred to by Igor Carron as a Donoho-Tanner phase transition. Does this tell us something about market behavior? I don't know. As I said, these are just some musings.
¹ For natural images, this basis tends to be a wavelet basis. For something like our star field above, the Fourier basis works.
² Does a market create a basis for sparsifying an economic allocation problem?