Computational Social Choice: Autumn 2010


Social choice theory is the study of mechanisms for collective decision making, such as election systems or protocols for fair division, and computational social choice addresses problems at the interface of social choice theory with computer science. This course will introduce some of the fundamental concepts in social choice theory and related disciplines, and expose students to current research at the interface of social choice theory with logic and computation. This is an advanced course in the Master of Logic programme. Students from AI, Mathematics, Economics, Philosophy, etc. are also very welcome to attend (contact me if you are unsure about your qualifications).

This page will be updated throughout the semester, with links to the slides used in class, homework assignments, and information on possible changes in the schedule. Please check it regularly.

Focus 2010: For this edition of the course, I plan to focus on voting theory, and I shall only give a fairly brief overview of other topics (such as fair division, judgment aggregation, mechanism design, and matching). Interested students will be able to further explore some of these other topics later on during (individual) projects with members of the COMSOC Group.

See also: previous editions of the course and the Computational Social Choice Seminar at the ILLC

Background: I'm planning to offer two 90-minute tutorials in the first couple of weeks of the course, one on complexity theory and one on game theory, for those not familiar with these topics. Other than that, no specific prerequisite knowledge is required, except for "general mathematical maturity".

Schedule (for each week: date, content covered, and homework):
Week 1: 7 September 2010

Introduction (4up). This introductory lecture was about illustrating some of the problems we are going to address in the course, mostly by way of example. To find out more, you should also look through this paper:

You could also have a look at the COMSOC Website, or you could browse through the websites of the COMSOC Workshops held in Amsterdam, Liverpool and Düsseldorf.

On Friday there was a tutorial on Complexity Theory (4up).

[see slide 27]
Week 2: 14 September 2010

CLASS CANCELLED (due to COMSOC-2010 in Düsseldorf)

 
Week 3: 21 September 2010

Voting Procedures (4up). In the first part of the lecture everyone presented one of the many voting procedures around. We managed to cover Plurality with Run-off, Veto, k-approval, Coombs, Cumulative Voting, Range Voting, Majority Judgment, Nanson, Copeland, Young, Dodgson, Slater, Kemeny, Schwartz, Ranked Pairs, and Bucklin.

In the second part of the lecture (covered by the slides) we have also seen STV, the family of positional scoring rules, and the Banks rule. We have discussed several desirable and undesirable properties of voting procedures, in particular the Condorcet principle and the complexity of computing the winner of an election. The following paper is very useful for looking up the definition of a fair number of voting procedures:

Here are the two papers discussing the computational complexity of recognising and computing Banks winners:

On Friday there was a tutorial on Game Theory (4up).
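
To make two of the rules mentioned above concrete, here is a small Python sketch (my own, not part of the course materials; the toy profile and function names are made up) that computes the set of Borda winners and checks for a Condorcet winner.

    # A profile is a list of ballots; each ballot ranks the candidates
    # from most preferred to least preferred.
    profile = [
        ["a", "b", "c"],
        ["b", "c", "a"],
        ["b", "a", "c"],
    ]
    candidates = {c for ballot in profile for c in ballot}

    def borda_winners(profile, candidates):
        """With m candidates, a ballot gives m-1 points to its top candidate, m-2 to the next, and so on."""
        m = len(candidates)
        scores = {c: 0 for c in candidates}
        for ballot in profile:
            for position, c in enumerate(ballot):
                scores[c] += (m - 1) - position
        best = max(scores.values())
        return {c for c in candidates if scores[c] == best}

    def condorcet_winner(profile, candidates):
        """The candidate beating every other candidate in pairwise majority contests, if there is one."""
        def beats(x, y):
            return sum(1 for b in profile if b.index(x) < b.index(y)) > len(profile) / 2
        for x in candidates:
            if all(beats(x, y) for y in candidates if y != x):
                return x
        return None  # no Condorcet winner, e.g. in case of a Condorcet cycle

    print(borda_winners(profile, candidates))     # {'b'}
    print(condorcet_winner(profile, candidates))  # b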

Homework #1
(Deadline: 05/10/2010)
Week 4: 28 September 2010

Impossibility Theorems (4up). This lecture was an introduction to the axiomatic method in social choice theory. We have first reviewed a number of basic axioms, i.e., natural and desirable properties that a voting procedure should satisfy, and then discussed two important impossibility theorems showing that certain combinations of these axioms cannot all be satisfied at the same time: Arrow's Theorem and the Muller-Satterthwaite Theorem. We have seen how to exploit the notion of a "decisive coalition" to prove theorems such as these. Have a look at the first proof in the paper by Geanakoplos for an alternative proof of Arrow's Theorem using a technique based on "pivotal voters" (note that Geanakoplos proves the classical variant of Arrow's Theorem for preference aggregation, rather than for voting as done in class). Myerson gives a proof of the Muller-Satterthwaite Theorem using the same technique as we have used to prove Arrow's Theorem.
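
For reference, here is a compact statement of the classical variant of Arrow's Theorem for preference aggregation (the voting variant discussed in class is formulated slightly differently; the notation here is mine):

    Let $A$ be a set of alternatives with $|A| \geq 3$ and let $F : \mathcal{L}(A)^n \to \mathcal{L}(A)$
    be a social welfare function, mapping profiles of linear orders to a social linear order.
    If $F$ satisfies the Pareto condition (if $x \succ_i y$ for every voter $i$, then $x \succ y$ socially)
    and independence of irrelevant alternatives (the relative social ranking of $x$ and $y$ depends only
    on the voters' relative rankings of $x$ and $y$), then $F$ is dictatorial: there is a voter $i$ such
    that, for every profile and all alternatives $x$ and $y$, $x \succ_i y$ implies $x \succ y$ socially.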

While the above are archetypal examples of work in classical social choice theory, we have also briefly discussed questions in computational social choice raised by these examples. One of them is the fact that new applications for social choice mechanisms (e.g., in multiagent systems) will often call for changes to the basic formal framework; for instance, we may not only want to investigate the case where individuals vote by supplying a linear ordering of the set of alternatives. The other is the idea that it can be fruitful to try to fully model a social choice problem in a suitable logic. This is of interest in its own right, and it may also pave the way towards automatic verification of results in social choice theory, and possibly even towards the discovery of new such results. Here is an example of a recent Master's thesis that achieves the latter for an area of social choice theory concerned with the problem of ranking sets of objects given a ranking of individual objects:

 
Week 5: 5 October 2010

Characterisation Results (4up). In this lecture we have seen three different approaches to characterising voting procedures.

The first approach is the axiomatic method, which we have already used to derive impossibility theorems. Here the aim is to identify a set of axioms (i.e., properties) that uniquely identify a voting procedure (or possibly a family of voting procedures). As examples, we have seen May's characterisation of the plurality rule and Young's characterisation of the positional scoring rules.

The second approach is based on, first, a notion of consensus (specifying a set of ballot profiles in which there is a clear winner) and, second, a notion of distance between profiles. The winners of an election are then the winners in the consensus profiles that are closest to the actual profile. We have seen two voting procedures that are most conveniently defined in this framework (Dodgson and Kemeny) and two others that can be "rationalised" in terms of consensus and distance (Borda and Plurality).

The third approach, finally, is based on the idea that voting can be a mechanism for tracking the truth: we assume that there is an objectively correct choice to be made and each voter receives a distorted signal regarding this choice; the voting procedure then has to estimate the most likely correct choice given the observed ballots provided by the voters. The classical Condorcet Jury Theorem fits into this framework, and we have also seen how to characterise the Borda rule by choosing an appropriate "noise model". Here are two representative papers for each of the three approaches:
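
As a small illustration of the distance-based approach, here is a Python sketch (my own toy example, not taken from the slides) that computes Kemeny rankings by brute force: the rankings minimising the sum, over all ballots, of the number of pairwise disagreements (the Kendall tau distance).

    from itertools import permutations, combinations

    profile = [
        ("a", "b", "c"),
        ("b", "c", "a"),
        ("c", "a", "b"),
    ]
    candidates = ("a", "b", "c")

    def kendall_tau(r1, r2):
        """Number of pairs of candidates on which the two rankings disagree."""
        return sum(
            1
            for x, y in combinations(candidates, 2)
            if (r1.index(x) < r1.index(y)) != (r2.index(x) < r2.index(y))
        )

    def kemeny_rankings(profile):
        """All rankings minimising the total Kendall tau distance to the ballots."""
        cost = {r: sum(kendall_tau(r, b) for b in profile) for r in permutations(candidates)}
        best = min(cost.values())
        return [r for r, c in cost.items() if c == best]

    # The profile above is a Condorcet cycle, so the three "cyclic" rankings tie:
    print(kemeny_rankings(profile))  # [('a','b','c'), ('b','c','a'), ('c','a','b')]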

Homework #2
(Deadline: 19/10/2010)
Week 6: 12 October 2010

Strategic Manipulation (4up). In this lecture we have introduced the problem of strategic manipulation: sometimes a voter can improve the outcome of an election for herself by misrepresenting her preferences. While in earlier lectures we had treated voting procedures merely as mechanisms that map a profile of ballots supplied by the voters to election winners, the notions of truthful voting and strategic manipulation provide the missing connection between actual preferences and ballots cast in an election. A voting procedure is called strategy-proof if it never gives any voter an incentive to not vote truthfully. The main theorem in this area of social choice theory is the Gibbard-Satterthwaite Theorem, which shows that, for three or more alternatives, no resolute voting procedure (that does not exclude some alternatives from winning to begin with) can be strategy-proof without also being dictatorial. We have also seen the Duggan-Schwartz Theorem, which establishes a similar result for irresolute voting procedures. Consult the following papers for accessible proofs of these results:
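
Here is a small hand-made example (not from the lecture; the profile and function names are mine) of strategic manipulation under the Borda rule: by reporting an insincere ballot, voter 1 turns her most preferred alternative into the winner.

    def borda_winner(profile, candidates):
        """Borda winner; ties (none occur in this example) would be broken alphabetically."""
        m = len(candidates)
        scores = {c: 0 for c in candidates}
        for ballot in profile:
            for position, c in enumerate(ballot):
                scores[c] += (m - 1) - position
        best = max(scores.values())
        return sorted(c for c in candidates if scores[c] == best)[0]

    candidates = ["a", "b", "c", "d"]
    voter1_truthful = ["a", "b", "c", "d"]                 # voter 1's true preference
    others = [["b", "a", "c", "d"], ["d", "b", "a", "c"]]  # the other two ballots are fixed

    print(borda_winner([voter1_truthful] + others, candidates))       # b
    print(borda_winner([["a", "c", "d", "b"]] + others, candidates))  # a, which voter 1 truly prefers to b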

To adequately model incentives for potential manipulators in elections run using irresolute voting procedures we had to make assumptions on how to extend the preferences of a voter from a relation defined over individual alternatives to a preference order declared over nonempty sets of alternatives. In the second part of the lecture we have seen that this is an interesting problem in its own right, which has given rise to a small subarea of social choice theory, known as ranking sets of objects. We have covered some of the fundamental axioms formulated for this problem and proved the Kannai-Peleg Theorem, the seminal impossibility theorem in the field. The following survey article gives a very good overview of the area:
  • S. Barberà, W. Bossert, and P.K. Pattanaik. Ranking Sets of Objects. In S. Barberà et al. (eds.), Handbook of Utility Theory, volume 2. Kluwer Academic Publishers, 2004.
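
For reference, a rough statement of the Kannai-Peleg Theorem (this paraphrase is mine; consult the survey above or the slides for the exact formulation used in class):

    Let $X$ be a set of at least six objects, linearly ordered by $\succ$. Then there is no weak order
    $\succeq$ on the nonempty subsets of $X$ satisfying both the Gärdenfors (dominance) principle
    (adding an object better than everything in $A$ yields a set strictly better than $A$, and adding
    an object worse than everything in $A$ yields a set strictly worse than $A$) and independence
    (if $A \succ B$, then $A \cup \{x\} \succeq B \cup \{x\}$ for any $x$ outside $A \cup B$).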

 
Week 7: 19 October 2010

Circumventing Manipulation (4up). In this lecture we have looked into ways of trying to circumvent the problems raised by the impossibility theorems discussed in the previous week, which show that any reasonable voting procedure may be subject to strategic manipulation. We have discussed three approaches.

The first approach is to limit the range of profiles on which we expect our voting procedure to perform well. The best known such domain restriction is single-peakedness, which is a reasonable assumption in some cases and under which attractive strategy-proof procedures (such as electing the peak of the median voter) do exist.

The second approach is to alter (possibly only in very subtle ways) the formal framework in which we define voting procedures and the notion of strategy-proofness. Any combination of ballot language and class of preference structures gives rise to a new framework that may be worth analysing in detail. We have seen that if we assume that preferences can be modelled as (certain types of) utility functions, then we can obtain strategy-proofness for reasonable mechanisms (e.g., the Vickrey auction); and we have seen that for approval voting (which does not fit into the standard voting framework, as ballots are not linear orders), we can at least guarantee a weak form of strategy-proofness.

The third approach is one of the, by now, classical ideas in computational social choice: to look for voting procedures that, while not immune to manipulation, are resistant to it in the sense that manipulation is computationally intractable to perform. We have seen that several standard voting procedures do not enjoy this property, but STV does. We have also discussed the coalitional manipulation problem, in which computational intractability can often be achieved even for elections with small numbers of candidates (we have seen this for the Borda rule). Finally, we have discussed the issue of whether such worst-case hardness results provide sufficient protection in practice. Here are a few of the papers that were cited in class:
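
As a small illustration of the first approach (not shown in class; the example is mine), here is a Python sketch of the median-peak mechanism for single-peaked preferences: each voter reports only her peak on a fixed left-to-right axis, and the peak of the median voter is elected. With an odd number of voters, no voter can obtain an outcome closer to her own peak by misreporting.

    # Candidates are positions 0..m-1 on a fixed left-to-right axis; single-peaked
    # preferences are summarised by each voter's peak (most preferred position).
    def median_peak_winner(peaks):
        """Elect the median of the reported peaks (assumes an odd number of voters)."""
        return sorted(peaks)[len(peaks) // 2]

    peaks = [1, 3, 4]                 # truthful peaks of three voters on the axis 0..4
    print(median_peak_winner(peaks))  # 3

    # Voter 0 (peak 1) cannot profit from misreporting: any report of at most 3 keeps
    # the median at 3, and any report above 3 only moves the median further from 1.
    print(median_peak_winner([4, 3, 4]))  # 4, which is worse for voter 0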

Homework #3
(Deadline: 02/11/2010)
Week 8: 26 October 2010

NO CLASS (EXAM WEEK)

Fix the topic of your paper by the end of the week.
Week 9: 2 November 2010

Voting in Combinatorial Domains (4up). The alternatives to vote for in an election often have a combinatorial structure: a collective decision regarding the instantiation of several variables needs to be made. Examples include elections of committees and referenda over possibly interrelated propositions. Voting for combinations of values explicitly is problematic, because of the prohibitively large number of alternatives, while voting issue-by-issue can easily lead to unrepresentative results. In this lecture we have explored some of the approaches that have been suggested to address this problem. This includes, in particular, the ideas of combinatorial vote and of sequential vote. Here are some of the papers cited during the lecture:
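
Here is a tiny Python example (my own) of what can go wrong with issue-by-issue voting: deciding each binary issue separately by majority can produce a combination of values that not a single voter voted for.

    # Three voters, three binary issues; each ballot is the voter's preferred combination.
    ballots = [
        (1, 1, 0),
        (1, 0, 1),
        (0, 1, 1),
    ]

    def issue_by_issue_majority(ballots):
        """Decide each issue separately by majority."""
        n = len(ballots)
        return tuple(int(sum(b[i] for b in ballots) > n / 2) for i in range(len(ballots[0])))

    outcome = issue_by_issue_majority(ballots)
    print(outcome)             # (1, 1, 1)
    print(outcome in ballots)  # False: nobody voted for the winning combination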

An important ingredient for most approaches to voting in combinatorial domains is the language used to represent the ballots (and/or the preferences) of the voters. We have discussed languages based on weighted goals as well as CP-nets. Here are some pointers for further reading on compact preference representation languages:

Homework #4
(Deadline: 16/11/2010)
Week 10: 9 November 2010

Information and Communication in Voting (4up). In this lecture we have discussed a number of problems arising in situations in which only partial information regarding the ballots of the voters is available. We have modelled this partial information as a profile of partial orders over the set of alternatives. The main problem we discussed is the possible winner problem, in which we ask, for a given alternative x, whether it is possible to refine the given partial ballots to yield complete linear ballots such that x is a winner for the election. We have also seen how this problem, as well as the necessary winner problem, relate to questions in preference elicitation, voting with general ballot languages, and election manipulation. On the technical side, we have discussed various complexity results for deciding whether a given alternative is a possible winner, for different voting procedures and different assumptions regarding the nature of the missing information. Next, we have discussed the problem of compiling the information inherent in a partial ballot profile so as to still be able to compute the election winners once the remaining ballot information arrives. The minimum number of bits required to do this, for a given voting procedure, is called the compilation complexity of that procedure.
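
To make the possible winner problem concrete, here is a brute-force Python sketch (my own, and exponential in the worst case, so only suitable for tiny instances): each partial ballot is given as a set of known pairwise comparisons, and we ask whether the partial ballots can be completed into linear orders that make a given candidate a plurality (co-)winner.

    from itertools import permutations, product

    candidates = ("a", "b", "c")

    # Each partial ballot is a set of pairs (x, y) meaning "x is known to be ranked above y".
    partial_profile = [
        {("a", "b")},               # a above b, position of c unknown
        {("b", "c"), ("b", "a")},   # b on top, order of a and c unknown
        set(),                      # nothing known about this voter
    ]

    def extensions(partial):
        """All linear orders consistent with the known pairwise comparisons."""
        return [
            order for order in permutations(candidates)
            if all(order.index(x) < order.index(y) for (x, y) in partial)
        ]

    def plurality_cowinners(profile):
        scores = {c: sum(1 for ballot in profile if ballot[0] == c) for c in candidates}
        best = max(scores.values())
        return {c for c in candidates if scores[c] == best}

    def possible_winner(x, partial_profile):
        return any(
            x in plurality_cowinners(profile)
            for profile in product(*(extensions(p) for p in partial_profile))
        )

    print(possible_winner("c", partial_profile))  # True: voters 1 and 3 may both put c on top
    print(possible_winner("b", partial_profile))  # True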

Finally, we have briefly discussed the problem of measuring the communication complexity of voting procedures (the amount of information that voters need to communicate to allow us to compute the election winners) and how this problem relates to the complexity of compilation.

 
Week 11: 16 November 2010

Further Topics in Voting (4up). In this final lecture on voting, we have briefly reviewed a number of additional topics, in particular weighted voting games, proportional representation, and electronic voting.
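
As a pointer for experimentation (this sketch is mine and was not shown in class): the following Python code computes the normalised Banzhaf index of a weighted voting game, i.e., for each player, the share of all "swings" in which that player is critical (a winning coalition that becomes losing when the player leaves).

    from itertools import combinations

    def banzhaf(weights, quota):
        """Normalised Banzhaf index of the weighted voting game [quota; weights]."""
        n = len(weights)
        players = range(n)
        raw = [0] * n
        for size in range(n + 1):
            for coalition in combinations(players, size):
                total = sum(weights[i] for i in coalition)
                if total >= quota:
                    for i in coalition:
                        if total - weights[i] < quota:  # player i is critical in this coalition
                            raw[i] += 1
        overall = sum(raw)
        return [r / overall for r in raw]

    # Example [quota 51; 49, 49, 2]: the "small" player is just as powerful as the others.
    print(banzhaf([49, 49, 2], 51))  # [0.333..., 0.333..., 0.333...]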

Homework #5
(Deadline: 30/11/2010)

Meet me by the end of the week to discuss your paper.

Week 12: 23 November 2010

Guest Lecture on Judgment Aggregation by Umberto Grandi.

 
Week 13: 30 November 2010

Two-sided Matching (4up). This lecture was a brief introduction to two-sided matching: we have discussed several variants of the stable marriage problem, the Gale-Shapley "deferred acceptance" algorithm, properties such as fairness, strategy-proofness, and algorithmic efficiency, as well as applications of matching theory.
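
For those who would like to see the deferred acceptance algorithm spelled out, here is a Python sketch of the proposer-oriented version of Gale-Shapley (the toy instance at the bottom and the function names are my own).

    def deferred_acceptance(proposer_prefs, reviewer_prefs):
        """Gale-Shapley deferred acceptance; returns a stable matching as {proposer: reviewer}."""
        free = list(proposer_prefs)                   # proposers without a partner
        next_choice = {p: 0 for p in proposer_prefs}  # index of the next reviewer to propose to
        engaged = {}                                  # reviewer -> current proposer
        rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in reviewer_prefs.items()}

        while free:
            p = free.pop()
            r = proposer_prefs[p][next_choice[p]]
            next_choice[p] += 1
            if r not in engaged:
                engaged[r] = p                        # r accepts the first proposal it receives
            elif rank[r][p] < rank[r][engaged[r]]:
                free.append(engaged[r])               # r trades up; its old partner becomes free again
                engaged[r] = p
            else:
                free.append(p)                        # r rejects p; p stays free

        return {p: r for r, p in engaged.items()}

    proposer_prefs = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
    reviewer_prefs = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
    print(deferred_acceptance(proposer_prefs, reviewer_prefs))  # {'m1': 'w2', 'm2': 'w1'}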

We have also discussed how to write a paper.

 
Week 14: 7 December 2010

Fair Division (4up). This lecture was an introduction to fair division. We have defined a number of criteria for assessing the fairness and efficiency of an allocation of goods, we have briefly discussed methods for solving the combinatorial optimisation problem of allocating indivisible goods, and we have discussed cake-cutting algorithms (i.e., methods for allocating portions of a single divisible good) in somewhat more detail. You can find more information and bibliographic references in my lecture notes on fair division.
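
As a tiny illustration of the cake-cutting part (a sketch of my own, assuming valuations are given as density functions over the interval [0,1]): in cut-and-choose, agent 1 cuts the cake into two pieces she values equally, agent 2 takes the piece he prefers, and both end up with a piece worth at least 1/2 to them, so the division is proportional.

    def piece_value(density, a, b, steps=1000):
        """Numerically integrate a valuation density over the interval [a, b]."""
        width = (b - a) / steps
        return sum(density(a + (i + 0.5) * width) for i in range(steps)) * width

    def cut_and_choose(density1, density2):
        """Agent 1 cuts [0,1] into two pieces she values equally; agent 2 picks his favourite."""
        lo, hi = 0.0, 1.0
        for _ in range(50):                  # bisection to find agent 1's halving point
            mid = (lo + hi) / 2
            if piece_value(density1, 0, mid) < piece_value(density1, mid, 1):
                lo = mid
            else:
                hi = mid
        cut = (lo + hi) / 2
        left, right = (0, cut), (cut, 1)
        if piece_value(density2, *left) >= piece_value(density2, *right):
            return {"agent2": left, "agent1": right}
        return {"agent1": left, "agent2": right}

    uniform = lambda x: 1.0                      # agent 1 values the cake uniformly
    right_heavy = lambda x: 2 * x                # agent 2 cares more about the right end
    print(cut_and_choose(uniform, right_heavy))  # agent 1 keeps (0, ~0.5), agent 2 takes (~0.5, 1)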

We have also discussed how to give a talk.

 
Week 15: 14 December 2010

Student presentations will take place on Tuesday (room G0.05) and Thursday (room A1.14). Please be on time (speakers should arrive 15 minutes in advance). Each speaker has 20 minutes, followed by questions. Guests are welcome. Here is the schedule (the papers on which these talks are based can be found at the bottom of this page):

Time          Tuesday    Thursday
09:30-10:00   Kasper
10:00-10:30   Tjeerd     Maria
10:30-11:00   Holger     Tarek
11:00-11:15   break      break
11:15-11:45   Leo        Annemieke
11:45-12:15   Adil       Halldóra
12:15-12:45   Irma       Sylvia

Ilan's talk will take place on Wednesday 12 January 2011 at 15:00 in room A1.06.

 
Week 16: 21 December 2010

NO CLASS (EXAM WEEK)

Final papers will be due on 7 January 2011.

Papers: During the second block you are supposed to write a paper about a recent result from the literature and present your findings in a talk at the end of the semester. The main aim of your paper (and of your talk) should be to make an important recent result, related to the topics covered in the course, accessible to your classmates (and to me). Apart from that, your paper should also have some modest original content. This could, for instance, be a new proof of an existing result, an improved presentation or formalisation of a particular model from the literature, pointing out an unexpected connection between two different lines of work, or casting an economics-related result in a fashion that is more accessible to "us" (people with a background in logic and/or AI). In case your chosen paper is about classical (as opposed to computational) social choice, you may also explore adding a computational perspective to that work. Any of the following papers would be a good starting point (and I'd also be happy to hear your own proposals). You should have settled on a topic by the end of October. So do discuss your interests with me early on.