Ariel Procaccia

I am an Associate Professor in the Computer Science Department at Carnegie Mellon University. I am also affiliated with the Machine Learning Department, and a member of the theory group and the ACO program.

I usually work on problems at the interface of computer science and economics. I write papers, teach courses, serve on various editorial boards and program committees, and, best of all, advise wonderful students and postdocs.

Current Highlights

Spliddit: Provably Fair Solutions

RoboVote: AI-Driven Collective Decisions

Handbook of Computational Social Choice

News

I am the PI on a new grant from the National Science Foundation, entitled “Algorithms and Mechanisms for Kidney Exchange,” with co-PIs Avrim Blum and Tuomas Sandholm. Details are available through the award page on the NSF website.

Abstract:

Severe cases of renal failure require kidney transplantation. But the demand for kidneys is huge while the supply is quite limited. Even when a willing donor is found, several hurdles must be cleared before transplantation can take place. Enter kidney exchange, the idea that patients can exchange willing but incompatible donors. In its most basic form, the 2-way exchange, two patient-donor pairs swap kidneys: the first donor donates to the second patient and the second donor to the first patient. However, exchanges along longer cycles, and even chains, are also taking place. In recent years several kidney exchange programs have become operational, building on the work of economists and computer scientists. And while significant progress has already been made, computer science has an even bigger role to play in kidney exchange research. Indeed, the theme of this proposal is that challenges in kidney exchange give rise to a wealth of exciting theoretical questions. Moreover, solving these problems matters: they are central to the design and optimization of real-world kidney exchange programs.
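As a purely illustrative aside (not part of the proposal), the most basic variant described above, finding a maximum set of simultaneous 2-way swaps, can be phrased as a matching problem on a compatibility graph. The compatibility data below is hypothetical; real exchange programs also optimize over longer cycles and altruist-initiated chains, typically with integer programming.

```python
# Toy sketch: maximum number of simultaneous 2-way kidney exchanges.
# Nodes are patient-donor pairs; an edge between pairs i and j means the donor
# of pair i is compatible with the patient of pair j AND vice versa, so the two
# pairs can swap kidneys. All compatibility data below is hypothetical.
import networkx as nx

donor_compatible_with = {
    "P1": {"P2", "P3"},  # pair P1's donor can give to the patients of P2 and P3
    "P2": {"P1"},
    "P3": {"P4"},
    "P4": {"P3"},
}

# Build the swap graph: an edge exists only if compatibility holds both ways.
G = nx.Graph()
pairs = list(donor_compatible_with)
G.add_nodes_from(pairs)
for i in pairs:
    for j in pairs:
        if i < j and j in donor_compatible_with[i] and i in donor_compatible_with[j]:
            G.add_edge(i, j, weight=1)

# With unit weights, a maximum-weight matching is a largest set of 2-way swaps.
swaps = nx.max_weight_matching(G)
print(swaps)  # e.g., {("P1", "P2"), ("P3", "P4")}
```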

An overarching goal of this proposal is to narrow the gap between the theory and practice of kidney exchange. The proposed research will incorporate elements such as 3-way exchanges, weighted edges, and chains initiated by altruistic donors into existing work and develop new models that, while still abstractions of reality, are able to distill the essence of practical kidney exchange challenges. The project can therefore impact the evolution of kidney exchange programs, which are still in their infancy.

In more detail, the project focuses on two main research directions:

1. Dealing with incentives: Transplant centers care foremost about their own patients. Thus, if individual transplant centers cannot be confident that their own patients will fare at least as well by participating in the exchange as by staying out, they may not join. Even more subtly, transplant centers may join but “hide” their easier-to-match patients. The proposed research aims to tackle both of these challenges. The goal is to develop algorithms and analyses for exchanges that produce optimal or near-optimal solutions, while providing strong incentive guarantees for transplant centers to join.

2. Dealing with crossmatches: Crossmatch tests require mixing blood samples from potential donors and patients, and hence are only performed after a matching is computed. Unfortunately, crossmatch tests are quite likely to fail, leading to the collapse of large portions of supposedly optimal exchanges. Optimization that takes crossmatches into account offers significant gains compared to current common practice. The proposal contains plans to develop, more broadly, a full theoretical and algorithmic understanding of the integration of crossmatch tests into the optimization, and of the fundamental tradeoffs involved.

 

I am the PI on a new grant from the National Science Foundation, entitled “Computational Social Choice: For the People.” Details are available through the award page on the NSF website.

Abstract:

The field of social choice theory deals with aggregating the preferences or opinions of individuals towards a collective decision; voting is a paradigmatic example. This rich space of problems has long been studied in economics and mathematics, leading to a slew of striking theoretical results, but real-world applications have been sparse. From the AI viewpoint, many researchers see computational social choice as a central component in the effort to build the foundations of multiagent systems. This project extends that theoretical work to enable people to make joint decisions, aiming to put human decision making at the center of computational social choice while leveraging the very approaches and techniques developed for voting in multiagent systems. The research plan is directly motivated by the not-for-profit website RoboVote.org, which enables people to implement whichever voting methods appear to be best based on analysis and empirical evidence. This project will realize the full potential of RoboVote for education, outreach, and societal impact, with the aim of transforming the way people make group decisions in a wide range of applications.

The specific challenges that will be tackled are divided into two subsets, corresponding to the two fundamentally different types of polls currently modeled on RoboVote.

1. Subjective preferences: Preferences are subjective when the desirability of each alternative is a matter of taste. RoboVote aggregates subjective rankings by assuming that voters have latent utilities for the alternatives, and selecting an outcome that maximizes the sum of utilities, using the reported rankings as a proxy for those utilities (a toy sketch of this proxy idea appears after this list). An immediate gap that must be addressed is that the approach does not extend to the case where the outcome is a ranking. A second, far-reaching challenge is to rethink the way voters express their preferences, in order to obtain more useful information about their actual utility functions while keeping the cognitive burden low. Finally, the project includes a study of the axiomatic properties of optimal aggregation methods in the foregoing framework.

2. Objective opinions: In this scenario, some alternatives are objectively better than others, but this objective comparison is not known to voters. The solutions deployed on RoboVote aim to handle worst-case noise. The research aims to create and test algorithms that improve upon naïve approaches, especially by building on synergistic advances in the design of fixed-parameter tractable algorithms.
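As a toy illustration of the proxy idea in item 1 above (hypothetical rankings, and a simple positional-score proxy that merely stands in for RoboVote's actual worst-case optimization), one can convert each reported ranking into proxy utilities and choose the alternative with the highest total:

```python
# Toy sketch: pick a single winner from rankings by treating positional scores
# as a proxy for the voters' latent utilities. The profile is hypothetical and
# the harmonic proxy is an illustrative choice, not RoboVote's deployed method.

rankings = [              # each voter ranks alternatives from best to worst
    ["a", "b", "c", "d"],
    ["b", "a", "d", "c"],
    ["a", "c", "b", "d"],
]

def proxy_utility(position: int) -> float:
    """Harmonic proxy: higher-ranked alternatives get more proxy utility."""
    return 1.0 / (position + 1)

scores = {}
for ranking in rankings:
    for position, alternative in enumerate(ranking):
        scores[alternative] = scores.get(alternative, 0.0) + proxy_utility(position)

winner = max(scores, key=scores.get)
print(winner, scores)  # "a" wins under this proxy
```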

 

New Scientist published a story about our EC’16 rent division paper, which will also be presented at the IJCAI’17 Sister Conference Track. (The story was subsequently picked up by the UK tabloid Metro.)

New Scientist just published a nice story about our IJCAI’17 paper with the catchy title “Why You Should Charge Your Friends for Borrowing Your Stuff” (short answer: because in a game where non-rivalrous goods are bought and shared on a network, equilibria are more efficient, in theory and experiments, when access costs are imposed).

Here are some other random topics I’ve weighed in on recently (not covered by other news):

* Aziz and Mackenzie’s breakthrough result on bounded envy-free cake cutting, in the Sydney Morning Herald and Quanta Magazine.

* AI and online dating, in the Boston Globe.

 

Congratulations to Swaprava Nath, who will join the Indian Institute of Technology, Kanpur, as Assistant Professor in July. I’ve hosted Swaprava as a postdoc since 2015; he is supported by the Fulbright-Nehru postdoctoral fellowship.

I am co-PI on a new 3-year grant from the Office of Naval Research, entitled “Provably Impartial Peer Assessment for Expert Hiring,” with PI Chinmay Kulkarni.

 

My former PhD student Nisarg Shah won the Victor Lesser Distinguished Dissertation Award, given annually for the best thesis in the area of autonomous agents and multiagent systems. His thesis is entitled “Optimal Social Decision Making.” He will present his thesis research in a plenary talk at AAMAS-17 in Brazil.

An opinion piece that I wrote was published today in Wired. It’s about computational social choice (although I don’t call it that) and a bit of politics.

I was interviewed by Tim Grant from the Pittsburgh Post-Gazette about RoboVote and voting more generally. Worth checking out, if only for the bizarre portrait photo with the hovering phones!

RoboVote (http://robovote.org), a not-for-profit voting website that provides optimization-driven methods for making collective decisions, has launched today! Check out RoboVote’s about page for (somewhat AI-centric) information about the website, its algorithms, and the theory underlying them, as well as info about the amazing team.

Press coverage (newest to oldest): Wired, Pittsburgh Post-Gazette, ExtremeTech, Daily Mail, Digital Trends, Marketplace, CMU homepage

I am the PI on a new 3-year grant from the Office of Naval Research, entitled “Robust Aggregation of Noisy Estimates.”

Abstract:

The goal of the proposed research is to develop a theoretically grounded, robust approach to the problem of aggregating the (objective) opinions of small groups of people.

For example, consider a team of engineers who are evaluating a slate of prototypes, and must select one prototype for further development. Each alternative has a true quality — in this case, the number of units that would be sold if that prototype were selected.

The participants (who are referred to as voters) express their opinions about the relative quality of the alternatives, typically by ranking them. Crucially, these rankings are noisy; that is, the voters may not be perfectly informed about the true quality of each alternative. Indeed, in the foregoing example, the engineers can only make educated guesses about the quality of each prototype. The challenge is, therefore, to uncover the ground truth based on the noisy estimates provided by the voters.

Addressing this challenge is a key component in the PI’s long-term agenda of transforming the way people make group decisions. While AI researchers have long argued that the field of computational social choice can be applied to the design and implementation of multiagent systems, it is quickly becoming apparent that the field’s “killer app” is helping people — not software agents — make joint decisions. To this end, the PI has been leading the development of a not-for-profit social choice website called RoboVote. The website’s primary mission includes the aggregation of objective opinions, but existing solutions are not completely satisfactory.

The proposed research investigates a new approach to the design of robust aggregation methods, which draws on the notion of minimax estimators from statistics. In a nutshell, the main idea is to consider a collection of models that govern how random noise is generated, and to construct aggregation methods whose output is as close as possible to the ground truth, with respect to the given collection of noise models and a given distance function.
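The following is a toy numerical sketch of that minimax idea. The Mallows-style noise models, uniform prior, and Kendall tau distance are illustrative choices of mine, not the estimators the proposal develops: for each candidate output ranking, compute its expected distance to the ground truth under each model's posterior given the observed votes, and return the candidate whose worst case over models is smallest.

```python
# Toy sketch of a minimax-style aggregation rule over a small set of noise models.
# All modeling choices (Mallows-style noise, uniform prior, Kendall tau distance)
# are hypothetical and for illustration only.
from itertools import combinations, permutations

ALTERNATIVES = ("a", "b", "c")
RANKINGS = list(permutations(ALTERNATIVES))

def kendall_tau(r1, r2):
    """Number of pairs of alternatives on which the two rankings disagree."""
    return sum(
        (r1.index(x) < r1.index(y)) != (r2.index(x) < r2.index(y))
        for x, y in combinations(ALTERNATIVES, 2)
    )

def mallows(phi):
    """Noise model: the probability of an observed ranking decays with distance."""
    def model(observed, truth):
        weights = {r: phi ** kendall_tau(r, truth) for r in RANKINGS}
        return weights[observed] / sum(weights.values())
    return model

NOISE_MODELS = [mallows(0.3), mallows(0.7)]  # two hypothetical dispersion levels

def posterior(votes, model):
    """Posterior over ground-truth rankings, assuming a uniform prior."""
    likelihood = {}
    for truth in RANKINGS:
        p = 1.0
        for vote in votes:
            p *= model(vote, truth)
        likelihood[truth] = p
    total = sum(likelihood.values())
    return {truth: p / total for truth, p in likelihood.items()}

def minimax_output(votes):
    """Return the ranking minimizing the worst-case expected distance to the truth."""
    posteriors = [posterior(votes, m) for m in NOISE_MODELS]
    def worst_case(candidate):
        return max(
            sum(post[truth] * kendall_tau(candidate, truth) for truth in RANKINGS)
            for post in posteriors
        )
    return min(RANKINGS, key=worst_case)

votes = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]
print(minimax_output(votes))  # likely ("a", "b", "c") for this profile
```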
The thrust of the proposed research is threefold:

1. Characterizing minimax estimators in the social choice setting, and providing analytical bounds on their performance.

2. Designing algorithms to compute the output of minimax estimators on given instances.

3. Using real-world data to guide the design of minimax estimators by identifying collections of noise models that fit the data; and to test the performance of minimax estimators in practice.

 

Our paper “Which Is the Fairest (Rent Division) of Them All” has been selected to receive the best paper award at the 17th ACM Conference on Economics and Computation, out of 242 submissions. The paper is joint work with Kobi Gal and Moshik Mash of Ben Gurion University, and my postdoc Yair Zick.

Abstract: What is a fair way to assign rooms to several housemates, and divide the rent between them? This is not just a theoretical question: many people have used the Spliddit website to obtain envy-free solutions to rent division instances. But envy freeness, in and of itself, is insufficient to guarantee outcomes that people view as intuitive and acceptable. We therefore focus on solutions that optimize a criterion of social justice, subject to the envy freeness constraint, in order to pinpoint the “fairest” solutions. We develop a general algorithmic framework that enables the computation of such solutions in polynomial time. We then study the relations between natural optimization objectives, and identify the maximin solution, which maximizes the minimum utility subject to envy freeness, as the most attractive. We demonstrate, in theory and using experiments on real data from Spliddit, that the maximin solution gives rise to significant gains in terms of our optimization objectives. Finally, a user study with Spliddit users as subjects demonstrates that people find the maximin solution to be significantly fairer than arbitrary envy-free solutions; this user study is unprecedented in that it asks people about their real-world rent division instances. Based on these results, the maximin solution has been deployed on Spliddit since April 2015.
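As a much-simplified, hypothetical illustration of the kind of optimization described in the abstract (not the paper's actual algorithmic framework), maximin prices subject to envy-freeness can be computed for a fixed instance with a small linear program: first assign rooms to maximize total value, then solve for prices.

```python
# Toy sketch of maximin rent division subject to envy-freeness.
# The valuations below are hypothetical; each housemate's values sum to the rent.
import numpy as np
from scipy.optimize import linear_sum_assignment, linprog

values = np.array([          # values[i][r]: housemate i's value for room r
    [400.0, 300.0, 300.0],
    [350.0, 350.0, 300.0],
    [300.0, 300.0, 400.0],
])
total_rent = 1000.0
n = values.shape[0]

# Step 1: a welfare-maximizing assignment of rooms (needed for envy-free prices).
rows, cols = linear_sum_assignment(values, maximize=True)
room_of = dict(zip(rows, cols))

# Step 2: LP over prices p_0..p_{n-1} and the minimum utility z; maximize z.
c = np.zeros(n + 1)
c[-1] = -1.0  # linprog minimizes, so minimize -z

A_ub, b_ub = [], []
for i in range(n):
    ri = room_of[i]
    # z <= values[i][ri] - p[ri]
    row = np.zeros(n + 1)
    row[ri], row[-1] = 1.0, 1.0
    A_ub.append(row)
    b_ub.append(values[i, ri])
    # Envy-freeness: values[i][ri] - p[ri] >= values[i][rj] - p[rj] for all j != i
    for j in range(n):
        rj = room_of[j]
        if rj != ri:
            row = np.zeros(n + 1)
            row[ri], row[rj] = 1.0, -1.0
            A_ub.append(row)
            b_ub.append(values[i, ri] - values[i, rj])

A_eq = [np.append(np.ones(n), 0.0)]  # prices sum to the total rent
b_eq = [total_rent]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=[(None, None)] * (n + 1))
prices = result.x[:n]
for i in range(n):
    print(f"housemate {i}: room {room_of[i]}, price {prices[room_of[i]]:.2f}")
```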

Congratulations to Nisarg Shah, who has accepted a tenure-track faculty position in the Department of Computer Science at the University of Toronto! He will join U of T in fall 2017, after spending a year as a postdoc at Harvard’s Center for Research in Computation and Society. I have had the privilege of advising Nisarg since he joined our PhD program in fall 2011 (which is the same time I myself joined CMU); he is defending on June 28.

The Handbook of Computational Social Choice, which I have co-edited together with Felix Brandt, Vince Conitzer, Ulle Endriss, and Jérôme Lang, has been officially published.

Description: The rapidly growing field of computational social choice, at the intersection of computer science and economics, deals with the computational aspects of collective decision making. This handbook, written by thirty-six prominent members of the computational social choice community, covers the field comprehensively. Chapters devoted to each of the field’s major themes offer detailed introductions. Topics include voting theory (such as the computational complexity of winner determination and manipulation in elections), fair allocation (such as algorithms for dividing divisible and indivisible goods), coalition formation (such as matching and hedonic games), and many more. Graduate students, researchers, and professionals in computer science, economics, mathematics, political science, and philosophy will benefit from this accessible and self-contained book.

A PDF of the book is freely available online. To open the document, please enter the password cam1CSC (strangely enough, this password can be disseminated legally, so you are encouraged to shout it from the rooftops). The link is here:

http://www.cambridge.org/download_file/898428

Alternatively, the book can be purchased through the Cambridge University Press website (http://www.cambridge.org/9781107060432), Amazon, and other retailers.

Congratulations to Yair Zick for accepting a faculty position at the National University of Singapore! He will join NUS as Assistant Professor of Computer Science in the summer. Yair has been a postdoc at CMU since April 2014, jointly hosted by Anupam Datta and me. Before that he completed his PhD at Nanyang Technological University, Singapore, where he worked with Edith Elkind.

Congratulations to Nika Haghtalab for winning a 2016-2018 Microsoft Research PhD fellowship. Nika is co-advised by Avrim Blum and me.

Together with Yonathan Aumann, Steven Brams, and Jérôme Lang, I am organizing a Dagstuhl workshop on Fair Division. The workshop will take place on June 6-10, 2016.

New Scientist published a “one minute interview” with me about AI and fair division.

I received the IJCAI Computers and Thought Award at IJCAI’15 in Buenos Aires. A narrated video of my presentation at the conference is now available online.

I am the PI on a new NSF grant, entitled “Fair Division at Scale.” Details are available through the award page on the NSF website.

Abstract:

The mathematical study of fair division dates back to the 1940s. Nowadays the literature encompasses provably fair solutions for a wide variety of problems, many of them relevant to society at large. But, to date, very few fair division methods have been implemented. Building on its rich history, the field of fair division is poised to impact society on a grander scale, for example, via Spliddit (www.spliddit.org), a fair division website created by the PI. The overarching theme of this proposal is that the advent of fairness at scale gives rise to novel algorithmic challenges, revolving around the design of methods that are practical, widely applicable, and provably fair. Solutions to these challenges call for a synthesis of ideas from theoretical computer science and microeconomic theory.

The evolution of Spliddit plays a key role in the project: Insights arising from the research will be implemented to improve Spliddit’s publicly accessible methods. Moreover, new applications will be added in order to further widen the website’s reach. The project also includes plans to enhance Spliddit’s educational content through the creation of instructive videos, thereby making the website a valuable learning and teaching resource. Finally, the project contains plans to make a societal impact that extends beyond Spliddit. In particular, the PI will work with California school districts to develop a fair method for allocating unused space.

On a technical level, the thrust of the project is twofold. (i) Dividing indivisible goods: One of five applications currently available on Spliddit, the division of indivisible goods is a notoriously difficult problem from a fair division perspective. To obtain a provably fair method, Spliddit relies on the notion of maximin share (MMS) guarantee. While exact MMS allocations may be infeasible, the project explores several notions of approximation. The project also explores the feasibility boundary of MMS allocations. Foundational algorithmic challenges lie at the heart of these research directions. (ii) Sharing credit: Spliddit’s credit calculator fairly determines the contribution of each individual to a group project, using a method developed by de Clippel et al. Perhaps the method’s most compelling guarantee is impartiality: a player’s report does not affect her own share of the credit. Dividing credit for a scientific paper, with the goal of fairly ordering the authors by contribution, is an especially attractive potential application; but there is no guarantee that players will not be able to affect their position in the resultant ranking. The project aims to circumvent this obstacle via randomization and approximation.
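To make the maximin share (MMS) guarantee mentioned above concrete, here is a brute-force sketch of the definition, with hypothetical valuations: an agent's MMS value is the largest value she can guarantee herself by partitioning the goods into n bundles and receiving the least valuable one. The enumeration is exponential in the number of goods, so this is illustrative only.

```python
# Toy sketch: compute one agent's maximin share (MMS) value by brute force.
# The MMS value is the maximum, over all partitions of the goods into n bundles,
# of the value of the least valuable bundle under this agent's (additive) valuation.
from itertools import product

def maximin_share(valuation: dict, n: int) -> float:
    goods = list(valuation)
    best = float("-inf")
    # Each tuple in the product assigns every good to one of the n bundles.
    for assignment in product(range(n), repeat=len(goods)):
        bundle_values = [0.0] * n
        for good, bundle in zip(goods, assignment):
            bundle_values[bundle] += valuation[good]
        best = max(best, min(bundle_values))
    return best

# Hypothetical valuation over five goods to be divided among three agents.
valuation = {"g1": 6.0, "g2": 5.0, "g3": 4.0, "g4": 3.0, "g5": 3.0}
print(maximin_share(valuation, n=3))  # 6.0, e.g., bundles {g1}, {g2, g4}, {g3, g5}
```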

A new version of Spliddit, featuring two new apps and improvements to the old ones, has officially launched!

Press coverage (newest to oldest): Metro, New Scientist #2, Quanta Magazine, The New York Times #2, New Scientist #1, Pittsburgh Magazine, GeekWire, NSF podcast, Fast Company, Pittsburgh Post-Gazette, Lifehacker, Slashdot, Gizmodo, Nautilus Magazine, The New York Times #1.

Congratulations to Nika Haghtalab for winning a 2015-2016 IBM PhD fellowship, with a statement entitled “Learning Theory for Societal Good”. Nika is co-advised by Avrim Blum and me.

I was named a Sloan Research Fellow, together with 125 other young researchers in chemistry, computer science, economics, mathematics, computational and evolutionary molecular biology, neuroscience, ocean sciences, and physics.

Congratulations to Junxing Wang for winning the best student paper award at EC-14, for our joint paper “Fair Enough: Guaranteeing Approximate Maximin Shares”.

The New York Times published a story about the rent division problem — they even implemented the algorithm of Francis Su, and illustrated Sperner’s lemma! The story includes a paragraph about Spliddit, and a rather inarticulate quote from me.

Congratulations to Nisarg Shah for winning a 2014-2015 Facebook PhD fellowship, with a statement entitled “Economic Foundations of Social Computing”.  There are 11 winners overall, 3 of whom are from CMU!

Please check back later for more fascinating news!

Contact Information

  •    arielpro@cs.cmu.edu
  •    Computer Science Dept.
  •    Carnegie Mellon University
  •    Pittsburgh, PA 15213
  •    Gates-Hillman Center
  •    Office #7002