Auction theory: Vickrey and early literature

Continued from the previous post, let me quote interesting parts from the editors' introductory summary. The following nicely illustrates the contribution of the pioneer of auction theory, William Vickrey.
Vickrey's seminal paper (Vickrey, 1961), mentioned in his 1996 Nobel Prize in economics, introduced the independent private value model, demonstrated equilibrium bidding behavior in a first-price auction, and then showed that truthful bidding could be induced as a dominant strategy by modifying the pricing rule: let each bidder pay the social opportunity cost of his winnings, rather than his bid. Finally, he showed in an example what would later be proven generally as the revenue equivalence theorem: different auction mechanisms that result in the same allocation of goods yield the same revenue to the seller.
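To make the two results above concrete, here is a small sketch of my own (not from the book): a single-item second-price auction, plus a Monte Carlo check of revenue equivalence for two bidders with i.i.d. uniform values, where the symmetric first-price equilibrium bid is v(n-1)/n = v/2. The numbers are made up for illustration.

```python
import random

def vickrey_auction(bids):
    """Highest bidder wins and pays the second-highest bid,
    i.e., the social opportunity cost of her winning the item."""
    ranked = sorted(bids, reverse=True)
    return ranked[0], ranked[1]  # (winning bid, price paid)

# Monte Carlo illustration of revenue equivalence with two bidders whose
# values are i.i.d. uniform on [0, 1].  In the first-price auction the
# symmetric equilibrium bid is v * (n - 1) / n = v / 2.
random.seed(0)
n_draws = 100_000
rev_first = rev_second = 0.0
for _ in range(n_draws):
    v1, v2 = random.random(), random.random()
    rev_first += max(v1, v2) / 2                # winner pays her equilibrium bid
    rev_second += vickrey_auction([v1, v2])[1]  # winner pays the losing bid
print(rev_first / n_draws, rev_second / n_draws)  # both close to 1/3
```

Both averages come out near 1/3, the expected revenue in this example, which is the pattern Vickrey's example anticipated and the revenue equivalence theorem later generalized.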
The authors then explain a few important papers in the early auction-theory literature after Vickrey. The following is my summary.

Wilson (1969)
  • (pure) common value
  • first analysis of equilibrium bidding with common values
  • demonstrated the importance of avoiding (what would later be called) the winner's curse

Milgrom (1981)
  • common + private values
  • discovered the importance of monotone likelihood ratio property (MLRP)
  • showed that MLRP + conditional independence implies that
  1. bidders use monotonic bidding strategies
  2. a monotonic strategy satisfying the first-order condition constitutes an equilibrium

Milgrom and Weber (1982)
  • affiliated values: if one bidder has a high signal of value, it is more likely that the signals of the other bidders are high
  • showed that under affiliated values
  1. Vickrey's revenue equivalence result no longer holds when we introduce a common value element
  2. ascending auctions yield higher revenues than sealed-bid auctions

Milgrom, "Rational Expectations, Information Acquisition, and Competitive Bidding," Econometrica, 1981.
Milgrom and Weber, "A Theory of Auctions and Competitive Bidding," Econometrica, 1982.
Vickrey, "Counterspeculation, Auctions, and Competitive Sealed Tenders," Journal of Finance, 1961.
Wilson, "Competitive Bidding with Disparate Information," Management Science, 1969.


Combinatorial Auctions: Introduction

This book is a great collection of papers on a rapidly growing field, "combinatorial auctions."

Let me quote a couple of useful sentences below, taken from the Introduction written by the editors, Peter Cramton, Yoav Shoham, and Richard Steinberg.
  • The study of combinatorial auctions thus lies at the intersection of economics, operations research, and computer science.
  • There are numerous examples of combinatorial auctions in practice. As is typical of many fields, practice precedes theory. Simple combinatorial auctions have been used for many decades in, for example, estate auctions.
  • Recently, a variety of industries have employed combinatorial auctions. For example, they have been used for truckload transportation, bus routes, and industrial procurement, and have been proposed for airport arrival and departure slots, as well as for allocating radio spectrum for wireless communications services.
  • Auction theory is among the most influential and widely studied topics in economics over the last forty years. Auctions ask and answer the most fundamental questions in economics: who should get the goods and at what prices? In answering these questions, auctions provide the micro-foundation of markets. Indeed, many modern markets are organized as auctions.


Frontiers of Science

I visited Potsdam, Germany, on Nov. 11 - 14 to attend the 7th Japanese-German Frontiers of Science Symposium 2010 (link). It's a truly interdisciplinary conference jointly organized by the Alexander von Humboldt Foundation and the Japan Society for the Promotion of Science.

I was an invited speaker in the social science session titled "New Methods in Decision Making" (session list), and talked about "Recent Developments in Market Design and its Applications to School Choice" (slide). It was quite exciting to give a presentation to researchers from completely different fields, mainly from natural science. Although I didn't have enough time to cover the details of my own studies, many of them seemed surprised to see how powerful and useful game-theoretical tools are.

I also very much enjoyed the talks and discussions in the other sessions. Most of the topics were of course unfamiliar to me, but their frontier work looked truly exciting. This was a wonderful opportunity indeed! Many thanks to the organizers and participants :)


Kandori (1991)

Original article (link) posted: 01/10/2005

Kandori (1991) "Correlated Demand Shocks and Price Wars During Booms" RES, 58

The paper extends the analysis of Rotemberg and Saloner (1986) to the case of serially correlated demand shocks and derives the same counter-cyclical movement as in their (i.i.d.) case, provided the discount factor and the number of firms satisfy a certain relationship.
The key observation in Rotemberg and Saloner (1986) was that, if the sum of future profits is unaffected by today's demand, firms must set the price relatively low when demand is high. The premise is clearly satisfied when the demand shocks are i.i.d. This paper shows that introducing Markov demand shocks also creates the same situation in the following two cases. The first case is when the discount factor delta exceeds, but is close to, (N-1)/N, where N is the number of firms. It is shown that firms maintain a constant profit (which equals the monopoly profit in the worst state) under all demand conditions. Therefore, the extent of the correlation in demand is irrelevant.
The second case arises when delta tends to unity while (1-delta)N is held constant. In this case, firms are enormously forward-looking and total future profit is mostly determined by the stationary distribution, which is independent of today’s demand position.
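The threshold (N-1)/N in the first case is the standard Bertrand collusion bound, which can be sketched in a toy calculation of my own (not from the paper): with N symmetric firms sharing the monopoly profit pi_m under grim-trigger strategies, a deviator grabs the whole pi_m once, so collusion is sustainable iff (pi_m / N) / (1 - delta) >= pi_m, i.e. delta >= (N-1)/N.

```python
# Toy check of the standard Bertrand collusion bound behind the
# threshold (N-1)/N (an illustrative assumption, not Kandori's model):
# sharing the monopoly profit pi_m among N firms under grim-trigger
# strategies is sustainable iff (pi_m/N)/(1-delta) >= pi_m.

def collusion_sustainable(delta, n_firms):
    """Collusion on the monopoly price with N symmetric Bertrand
    firms is an equilibrium iff delta >= (N - 1) / N."""
    return delta >= (n_firms - 1) / n_firms

print(collusion_sustainable(0.9, 5))   # True:  0.9 >= 4/5
print(collusion_sustainable(0.9, 20))  # False: 0.9 <  19/20
```

Note how the bound tightens toward 1 as N grows, which is why the paper's first case pairs "delta close to (N-1)/N" with a fixed number of firms.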

The result itself is not that surprising (compared to Kandori's other papers, at least). However, he is amazingly good at selling his work, especially in the following two respects:
First, he stresses both the importance of Rotemberg and Saloner (1986) and its drawbacks. The motivation for extending their paper becomes very clear, and the reader naturally becomes interested in HIS work.
Second, his way of illustrating the results is quite lucid and rigorous. Although the results can be more or less expected to hold, it is always difficult to prove them rigorously.
These techniques should be useful for us. Let's learn them from Kandori's papers!

Interesting Papers that cite Kandori (1991)

Bagwell (2004) "Countercyclical Pricing in Customer Markets" Economica, 71
Bo (2001) "Tacit Collusion under Interest Rate Fluctuations" Job Market Paper
Harrington (2004) "Cartel Pricing Dynamics in the Presence of an Antitrust Authority" Rand


Decision Theory 301

This is complementary to the previous post, "Decision Theory 101 (link)." In Appendix A: Optimal Choice, the author (Professor Gilboa) concisely explains the flexibility of the rational choice framework. I think his argument is really important, especially when we evaluate the recent developments in behavioral economics and consider their relationship with the traditional (or rational) approach.
For our purposes, it is worthwhile highlighting what this model (the consumers' problem: by yyasuda) does not include. Choices are given as quantities of products. Various descriptions of the products, which may be part of their frames, are not part of the discussion. The utility function measures desirability on a scale. We did not mention any special point on this scale, such as a reference point. Further, choices are bundles of products to be consumed by the consumer in question at the time of the problem. They do not allow us to treat a certain bundle differently based on the consumer's history of consumption, or on the consumption of others around them. Hence, the very language of the model assumes that the consumer does not care what others have, they feel no envy, nor any disappointment in the case when their income drops as compared with last period, and so on.
It is important to emphasize that the general paradigm of rational choice does not necessitate these constraints. For instance, instead of the n products the consumer can consume today, we may have a model with 2n products, reflecting their consumption today and their consumption yesterday. This would allow us to specify a utility function u that takes into account considerations such as aspiration levels, disappointment, and so forth. Or, we can use more variables to indicate the average consumption in the consumer's social group, and then the utility function can capture social considerations such as the consumer's ranking in society and so forth. Indeed, such models have been suggested in the past and have become more popular with the rise of behavioral economics. These models show that the paradigm of rational choice is rather flexible. Yet, the specific theory restricts the relevant variables to be independent of history, others' experiences, emotions, and other factors which might be among the determinants of well-being.
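As a toy illustration of the 2n-goods trick described above (my own sketch, not Gilboa's), a utility function defined over today's and yesterday's consumption can encode reference dependence; the `loss_aversion` parameter and the numbers are hypothetical.

```python
# Sketch of the 2n-goods trick: by letting utility depend on both
# today's and yesterday's consumption, a "rational" utility function
# can capture disappointment relative to a reference point.

def utility(today, yesterday, loss_aversion=2.0):
    """Plain consumption utility plus a gain-loss term: falling short
    of yesterday's consumption (the reference point) hurts more than
    an equal-sized gain helps."""
    gain = today - yesterday
    return today + (gain if gain >= 0 else loss_aversion * gain)

print(utility(10, 8))  # 12:  consuming more than yesterday feels good
print(utility(8, 10))  # 4.0: an equal-sized drop is penalized twice as hard
```

The point is exactly the one in the quoted passage: nothing here leaves the rational choice paradigm, since we have only enlarged the space of variables the utility function is defined on.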


Decision Theory 101

Let me continue to quote some basics of decision theory (or economics) from Gilboa's recent book, "Making Better Decisions."
In Appendix A: Optimal Choice, the author nicely illustrates the framework of decision theory and its key concepts, such as axioms and the utility function. The following might be especially helpful for those who are against, or suspicious of, the fundamental tool of economics: utility maximization.
A fundamental of optimal choice theory is the distinction between feasibility and desirability. A choice is feasible if it is possible for the decision maker, that is, one of the things that she can do. An outcome is desirable if the decision maker wishes to bring it about. Typically, feasibility is considered to be a dichotomous concept, while desirability is continuous: a choice is either feasible or not, with no shades in between; by contrast, an outcome is desirable to a certain degree, and different outcomes can be ranked according to their desirability.
We typically assume that desirability is measured by a utility function u, such that the higher the utility of a choice, the better will the decision maker like it. This might appear odd, as many people do not know what functions are and almost no one can be observed walking around with a calculator and finding the alternative with the highest utility. But it turns out that very mild assumptions on choice are sufficient to determine that the decision maker behaves as if she had a utility function that she was attempting to maximize. If the number of choices is finite, the assumptions (often called axioms) are the following:
1. Completeness: for every two choices, the decision maker can say that she prefers the first to the second, the second to the first, or that she is indifferent between them.
2. Transitivity: for every three choices a, b, c, if a is at least as good as b, and b is at least as good as c, then a is at least as good as c.
It turns out that these assumptions are equivalent to the claim that there exists a function u such that, for every two alternatives a and b, a is at least as good as b if and only if u(a) ≥ u(b). (...) Any other algorithm that guarantees adherence to these axioms has to be equivalent to maximization of a certain function, and therefore the decision maker might well specify the function explicitly.
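The finite case of this representation result can be made concrete with a small sketch of my own (the fruit ranking is a hypothetical example, not from the book): counting how many alternatives each choice weakly beats yields a utility function that represents any complete and transitive preference.

```python
# Sketch of the representation result for finitely many alternatives:
# given a complete, transitive "at least as good as" relation, setting
# u(a) = number of alternatives that a weakly beats gives a utility
# function with  a at-least-as-good-as b  iff  u(a) >= u(b).

def utility_from_preferences(alternatives, weakly_prefers):
    """weakly_prefers(a, b) encodes 'a is at least as good as b'."""
    return {a: sum(weakly_prefers(a, x) for x in alternatives)
            for a in alternatives}

# Hypothetical preferences encoded by a ranking (higher = better);
# banana and cherry are indifferent.
rank = {"apple": 3, "banana": 2, "cherry": 2, "durian": 1}
prefers = lambda a, b: rank[a] >= rank[b]
u = utility_from_preferences(list(rank), prefers)
print(u)  # apple gets the highest utility; banana and cherry tie
```

So the decision maker never needs a calculator: as long as her choices satisfy completeness and transitivity, some such u describes her behavior, which is exactly the "as if" claim in the quoted passage.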