The Moral Machine (2015-)
The adoption of self-driving autonomous vehicles (AVs) promises to dramatically reduce the number of traffic accidents. But some unavoidable accidents will require AVs to make tradeoffs about potential risk, such as risk to pedestrians on the road versus risk to passengers in the car. Even if these "moral dilemmas" are rare, defining algorithms to make such decisions is a challenge, since people may be uncomfortable with the idea of artificial intelligence making life-and-death decisions without human oversight. Experimental ethics can help manufacturers and regulators understand the psychological challenges that may undermine trust in driverless cars, and our ability to exercise oversight over their behavior. Overcoming these challenges may be a necessary precondition for the wide adoption of autonomous transportation.
Interactive: Moral Machine web site
Interactive: Results from the Moral Machine experiment
E. Awad, S. Dsouza, R. Kim, J. Schulz, J. Henrich, A. Shariff, J.-F. Bonnefon, I. Rahwan (2018). The Moral Machine experiment. Nature. 562(7729)
[Free view-only version; data and code] [Video: Nature summary] [Selected media: The New Yorker, Washington Post, The Economist, BBC, Nature News, Fast Company, Motherboard / Vice, Business Insider, The Guardian, Scientific American, WIRED, The Verge, Spiegel, Le Monde, Prospect]
J. F. Bonnefon, A. Shariff, I. Rahwan (2019). The trolley, the bull bar, and why engineers should care about the ethics of autonomous cars. Proceedings of the IEEE, 107, 502-504.
E. Awad, S. Levine, M. Kleiman-Weiner, S. Dsouza, J. B. Tenenbaum, A. Shariff, J.-F. Bonnefon, I. Rahwan (2018). Blaming humans in autonomous vehicle accidents: Shared responsibility across levels of automation. arXiv preprint
R. Kim, M. Kleiman-Weiner, A. Abeliuk, E. Awad, S. Dsouza, J. Tenenbaum, I. Rahwan (2018). A Computational Model of Commonsense Moral Decision Making. In Proc. AAAI / ACM Conference on Artificial Intelligence, Ethics and Society (AIES).
A. Shariff, J. F. Bonnefon, I. Rahwan (2017). Psychological roadblocks to the adoption of self-driving vehicles. Nature Human Behaviour. Vol 1, October, 694–696.
[View-only open access version] [Selected Media: Summary by APA, MIT News Q&A, New York Times Magazine]
R. Noothigattu, S. S. Gaikwad, E. Awad, S. Dsouza, I. Rahwan, P. Ravikumar, A. D. Procaccia (2018). A Voting-Based System for Ethical Decision Making. Proc. 32nd AAAI Conference on Artificial Intelligence (forthcoming)
[Free pre-print] [Selected Media: The Outline, France 24, IB Times, Bloomberg]
J. F. Bonnefon, A. Shariff, I. Rahwan (2016). The Social Dilemma of Autonomous Vehicles. Science. 352(6293):1573-1576.
[Free pre-print] [Selected Media: New York Times (1), New York Times (2), Washington Post (1), Wall Street Journal, Time, Independent (1), Guardian, CBS News, LA Times, Forbes, Newsweek, CBC (Canada), ABC (Australia), Le Monde (FR), El Pais (ES), Science, Scientific American, New Scientist, PBS NOVA Next, PBS, Wired, Huffington Post, MIT News, Independent (2), Washington Post (2), Washington Post (3), BBC World Service (live interview), Huffington Post (live interview), New York Magazine, Popular Science, MIT Technology Review, MIT Spectrum, The Atlantic]
Machine behavior (2017-)
Machines powered by artificial intelligence increasingly mediate our social, cultural, economic and political interactions. Understanding the behaviour of artificial intelligence systems is essential to our ability to control their actions, reap their benefits and minimize their harms. Here we argue that this necessitates a broad scientific research agenda to study machine behaviour that incorporates and expands upon the discipline of computer science and includes insights from across the sciences. We first outline a set of questions that are fundamental to this emerging field and then explore the technical, legal and institutional constraints on the study of machine behaviour.
I. Rahwan, M. Cebrian, N. Obradovich, J. Bongard, J.-F. Bonnefon, C. Breazeal, J. W. Crandall, N. A. Christakis, I. D. Couzin, M. O. Jackson, N. R. Jennings, E. Kamar, I. M. Kloumann, H. Larochelle, D. Lazer, R. McElreath, A. Mislove, D. C. Parkes, A. Pentland, M. E. Roberts, A. Shariff, J. B. Tenenbaum, M. Wellman (2019). Machine Behaviour. Nature. 568, pages 477–486
[paper, free read-only version] [Video interviews with some of the authors] [Selected Media: Quanta (interview), Forbes, Tech Target, Media Lab on Medium, Technology Review, Information Age, ZDNet, Süddeutsche Zeitung, The Telegraph, Spiegel, El Pais]
Z. Epstein, B. H. Payne, J. H. Shen, A. Dubey, B. Felbo, M. Groh, N. Obradovich, M. Cebrian, I. Rahwan (2018). Closing the AI Knowledge Gap. arXiv:1803.07233 [cs.CY].
B. H. Payne, Z. Epstein, J. Shen, C. J. Hong, B. Felbo, A. Dubey, M. Groh, N. Obradovich, M. Cebrian, I. Rahwan (2018). TuringBox: An Experimental Platform for the Evaluation of AI Systems. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), demo track.
human-machine cooperation (2013-)
Since Alan Turing envisioned Artificial Intelligence (AI), a major driving force behind technical progress has been competition with human cognition (e.g. beating humans at Chess or Jeopardy!). Less attention has been given to developing autonomous machines that learn to cooperate with humans. Cooperation does not require sheer computational power; it relies on intuition and pre-evolved dispositions toward cooperation, common-sense mechanisms that are difficult to encode in machines. We develop state-of-the-art machine-learning algorithms that cooperate with people and other machines at levels that rival human cooperation in two-player repeated games.
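The algorithms reported in the paper below are not reproduced here; as a minimal sketch of the two-player repeated-game setting they operate in, here is an illustrative Python implementation of a repeated prisoner's dilemma with the classic tit-for-tat strategy (payoffs and names are illustrative assumptions, not the study's design):

```python
# Illustrative payoff matrix for the prisoner's dilemma:
# each entry maps (row move, column move) to (row payoff, column payoff).
PAYOFFS = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run a repeated game; each strategy sees its own history
    as a list of (my_move, their_move) pairs."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))   # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))
```

Mutual tit-for-tat sustains full cooperation; a defector gains a one-round advantage and then forfeits the cooperative surplus.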
J. W. Crandall, M. Oudah, Tennom, F. Ishowo-Oloko, S. Abdallah, J.-F. Bonnefon, M. Cebrian, A. Shariff, M. A. Goodrich, I. Rahwan (2018). Cooperating with Machines. Nature Communications. Vol. 9, Article No. 233.
future of work: automation and labor (2015-)
As advances in robotics and artificial intelligence revive concerns about the impact of automation on jobs, a question looms: How will automation affect employment in different cities and economies? We use tools from complex systems and urban science to explore this question.
Video: EmTech Talk
Interactive web site: http://skillscape.mit.edu/
M. R. Frank, D. Autor, J. E. Bessen, E. Brynjolfsson, M. Cebrian, D. J. Deming, M. Feldman, M. Groh, J. Lobo, E. Moro, D. Wang, H. Youn, I. Rahwan (2019). Toward Understanding the Impact of AI on Labor. Proceedings of the National Academy of Sciences. 116(14):6531–6539
[Selected media: Forbes]
A. Alabdulkareem, M. R. Frank, L. Sun, B. AlShebli, C. Hidalgo, I. Rahwan (2018). Unpacking the polarization of workplace skills. Science Advances, 4(7):eaao6030
[Selected Media: Summary on Medium, AAAS, WBUR, Axios, Nature Middle East]
M. R. Frank, L. Sun, M. Cebrian, H. Youn, I. Rahwan (2018). Small cities face greater impact from automation. Journal of the Royal Society Interface. 15(139).
[Selected Media: Wired, MIT Technology Review, New Scientist]
deepmoji: machines guessing human emotion (2016-)
Can we improve the ability of machines to understand human emotions by using the information conveyed in emojis? We use Deep Learning techniques, with a novel pre-training approach using emojis, to outperform many state-of-the-art techniques in sentiment, emotion and sarcasm detection.
Interactive: DeepMoji web site
B. Felbo, A. Mislove, A. Søgaard, I. Rahwan, S. Lehmann (2017). Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
MyGoodness: a game for Effective Altruism (2017-)
There are over one million registered charities in the United States alone, and many more worldwide. How do you choose among them? MyGoodness is a simple game that helps you understand how you give. You make 10 giving decisions. Each decision is between two choices, and you tell us which you prefer. At the end of the game, we give you a summary of your ‘goodness’ and how it compares to others. You can share that feedback with whomever you would like.
This project is in collaboration with The Life You Can Save, a charity founded by philosopher Peter Singer to promote effective charitable behavior.
Interactive: MyGoodness Game
society-in-the-loop: the Algorithmic Social Contract (2016-)
Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. This is not about individual gadgets, but about complex, networked systems of humans and algorithms making decisions in business, government and the media. We need conceptual frameworks for designing new governance architectures for these human-machine social systems. In doing so, it is helpful to learn lessons about human cooperation and governance from political philosophy and cultural anthropology. It is also important to understand the limits of regulation, and whether over-regulation of adaptive systems may even backfire.
W. Shen, A. A. Khemeiri, A. Almehrezi, W. A. Enezi, I. Rahwan, J. W. Crandall (2017). Regulating Highly Automated Robot Ecologies: Insights from Three User Studies. In: Proc. Int. Conf. on Human-Agent Interaction (HAI).
I. Rahwan (2016). Towards Scalable Governance: Sensemaking and Cooperation in the Age of Social Media. Philosophy & Technology. 30(2), 161-178
[View-only open access versions]
Selected Media: New York Times.
societal impact of climate change (2016-)
Public servants are often first responders to disasters, and the day-to-day completion of their jobs aids public health and safety. However, with respect to their individual psychological and physiological responses to environmental stressors, public sector workers may be harmed in much the same way as other citizens in society. We find that exposure to hotter temperatures reduces the activity of two groups of regulators—police officers and food safety inspectors—at times when the risks they are tasked with overseeing are highest.
N. Obradovich, R. Migliorini, M. P. Paulus, I. Rahwan (2018). Empirical evidence of mental health risks posed by climate change. Proceedings of the National Academy of Sciences. (in press)
[Selected media: Psychology Today, CNN, Pacific Standard, LA Times, Salon, Inverse]
N. Obradovich, D. Tingley , I. Rahwan (2018). Effects of environmental stressors on daily governance. Proceedings of the National Academy of Sciences. (in press)
[Selected media: CNN, US News]
BeeMe (2018)
BeeMe is a massive immersive social experiment that draws inspiration from popular culture, performing arts, gaming, and YouTube streaming culture. Many people have played an augmented reality game, but BeeMe is reality, augmented. In BeeMe, an agent gives up their free will to save humanity, or perhaps to find out whether humanity can be saved at all. This brave individual will agree to let the Internet pilot their every action. From their browser, the public will control the human avatar by suggesting actions and upvoting or downvoting others' suggestions. The goal of the crowd is to save humanity from an evil AI by the name of Zookd.
deep angel: Human detection of machine manipulated media (2018-)
Recent advances in neural networks for content generation enable artificial intelligence (AI) models to generate high-quality media manipulations. Here we report on a randomized experiment designed to study the effect of exposure to media manipulations on over 15,000 individuals' ability to discern machine-manipulated media. We engineer a neural network to plausibly and automatically remove objects from images, and we deploy this neural network online with a randomized experiment where participants can guess which image out of a pair of images has been manipulated. The system provides participants feedback on the accuracy of each guess. In the experiment, we randomize the order in which images are presented, allowing causal identification of the learning curve surrounding participants' ability to detect fake content. We find sizable and robust evidence that individuals learn to detect fake content through exposure to manipulated media when provided iterative feedback on their detection attempts. Over a succession of only ten images, participants increase their rating accuracy by over ten percentage points. Our study provides initial evidence that human ability to detect fake, machine-generated content may increase alongside the prevalence of such media online.
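As a rough illustration of the learning-curve computation described above, one might aggregate raw guess logs by trial position; the sketch below is a simplified Python example with hypothetical field names, not the paper's causal analysis:

```python
from collections import defaultdict

def learning_curve(records):
    """Mean detection accuracy at each trial position.

    records: iterable of (participant_id, trial_index, correct)
    tuples, where trial_index is the position of the image pair in
    the participant's sequence and correct marks a right guess.
    Returns {trial_index: mean accuracy}.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for _pid, trial, correct in records:
        totals[trial] += 1
        hits[trial] += int(correct)
    return {t: hits[t] / totals[t] for t in sorted(totals)}

# Toy log: two hypothetical participants whose accuracy improves
# with feedback over successive image pairs.
log = [
    ("p1", 1, False), ("p1", 2, True), ("p1", 3, True),
    ("p2", 1, True),  ("p2", 2, False), ("p2", 3, True),
]
print(learning_curve(log))  # {1: 0.5, 2: 0.5, 3: 1.0}
```

A rising curve across trial positions is the pattern the experiment reports: accuracy improving with repeated exposure and feedback.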
Web site: http://deepangel.media.mit.edu/
Groh, M., Epstein, Z., Obradovich, N., Cebrian, M., & Rahwan, I. (2019). Human detection of machine manipulated media. arXiv preprint arXiv:1907.05276.
quantifying the evolution of artificial intelligence (2018-2019)
In this work, we use bibliometric analysis to quantify the evolution of citation patterns in AI research. We investigate how AI researchers build on research from other fields, but also how other fields build on AI research. The trends highlight interesting patterns, such as: the diminishing reliance on psychology and cognitive science, the growing role of industrial research labs, and the observation that the social and behavioral sciences are not keeping up with the volume of AI research.
M. R. Frank, D. Wang, M. Cebrian, I. Rahwan (2019). The evolution of citation graphs in artificial intelligence research. Nature Machine Intelligence. 1, pages 79–85
[paper, free read-only version] [Media: Kellogg Insight]
evolution of the social contract (2016-2018)
Political constitutions describe the fundamental principles by which nation-states are governed, the political and legal state institutions, the powers, procedures, and duties of those institutions, and the rights and responsibilities of individuals. How do these constitutions develop over long periods of time? What is the interplay between colonial history and global, time varying trends in determining the characteristics of a country's constitution? We explore these questions using new techniques of computational social science.
Companion Web site: http://constitutionology.unicefstories.org/
A. Rutherford, Y. Lupu, M. Cebrian, I. Rahwan, B. LeVeck, M. Garcia-Herranz (2018). Inferring Mechanisms for Global Constitutional Progress. Nature Human Behaviour, Vol 2, pages 592–599
[paper, free read-only version, early preprint]
measuring cooperation at scale (2016-2018)
We understand how cooperation emerges at the level of microbes all the way up to humans interacting in small groups. But do the same mechanisms explain cooperation between larger human agglomerations, such as nation states? Quantifying cooperation mechanisms at the global scale is fundamental to our ability to maintain peace and achieve international cooperation.
M. R. Frank, N. Obradovich, L. Sun, W. L. Woon, B. L. LeVeck, I. Rahwan (2018). Detecting Reciprocity at Global Scale. Science Advances, Vol. 4, no. 1, eaao5348.
Selected Media: Nature Middle East
Norman AI (2018)
A central idea in machine learning is that the data we use to teach an algorithm can significantly influence its behavior. So when we talk about AI algorithms being biased or unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. This project, launched on April Fools' Day, highlights the role of data in algorithmic bias by pushing the idea to the extreme: when given an inkblot image (a kind of psychoanalytic test), a Deep Learning-based image captioning algorithm can see very different things, even disturbing things, if trained on the wrong data set. The project aims to stimulate public awareness and discussion of these issues.
Web site: http://norman-ai.mit.edu/
Selected media: BBC, CNN, NY Post, The Times, La Repubblica, Fortune, Fast Company, Wired (Germany), Rolling Stone, Vice, USA Today, Tech Crunch, The Telegraph
FACEBOOK GENDER DIVIDE (2017-2018)
We present the Facebook Gender Divide, an inexpensive, real-time instrument for measuring gender differences in Facebook access and activity in 217 countries. The Facebook Gender Divide captures standard indicators of Internet penetration and gender equality indices in education, health, and economic opportunity. We find that countries' progress toward economic gender equality is negatively associated with a high Facebook Gender Divide. Our results suggest that online social networks, while suffering from gender imbalance, may lower information-access barriers for women and narrow the economic gender gap.
D. Garcia, Y. M. Kassa, A. Cuevas, M. Cebrian, E. Moro, I. Rahwan, R. Cuevas (2018). Analyzing gender inequality through large-scale Facebook advertising data. Proceedings of the National Academy of Sciences. 115 (27) 6958-6963;
SHELLEY: Human-AI Collaborative Horror Stories (2017)
To mark Halloween 2017, we presented Shelley: the world's first collaborative AI horror writer! Shelley is a deep-learning-powered AI trained on 140,000 eerie stories from r/nosleep. Like her namesake Mary Shelley, she takes a bit of inspiration, in the form of a random seed or a short snippet of text, and starts creating stories. But what Shelley truly enjoys is working collaboratively with humans. Starting October 25 and leading up to Halloween, she responded on her Twitter account to the stories she started every hour, and she wrote the first AI-human horror short story anthology ever put together! Check out the stories here.
Deep Empathy (2017)
Deep Empathy is a collaboration between the Scalable Cooperation group and the UNICEF Innovation Office to pursue a scalable way to increase empathy. We use deep learning algorithms to learn the characteristics of Syrian neighborhoods affected by the war (for example, Aleppo, Syria), and use these features to transform images of cities all over the world, simulating how they would look if they suffered similar disasters. We then use these simulated images to investigate whether AI can induce more empathy.
Interactive: Deep Empathy web site
The Nightmare Machine (2016)
To mark Halloween 2016, we present the Nightmare Machine. For centuries, and across geographies, religions, and cultures, people have invented ways of scaring each other. Eliciting a visceral emotion such as fear remains one of the cornerstones of human creativity. This challenge is especially interesting in an age when we wonder what the limits of artificial intelligence are: in this case, can machines learn to scare us? Toward this goal, we present Haunted Faces and Haunted Places: computer-generated scary imagery powered by deep learning algorithms.
Interactive: Nightmare Machine web site
crowdsourcing a manhunt (2012-2013)
People often say that we live in a small world. In a brilliant experiment, legendary social psychologist Stanley Milgram tested the six degrees of separation hypothesis: that everyone is six or fewer steps away, by way of introduction, from any other person in the world. But how far are we, in time, from anyone on earth? Our team won the Tag Challenge, showing it is possible to find a person, using only his or her mug shot, within 12 hours.
A. Rutherford et al (2013). Targeted social mobilization in a global manhunt. PLOS ONE 8 (9): e74628.
I. Rahwan et al (2013). Global Manhunt Pushes the Limits of Social Mobilization. Computer, vol. 46, no. 4, pp. 68-75.
Time-critical social mobilization: DARPA Red Balloon Challenge (2009-2016)
In 2009, DARPA launched the Network Challenge, to explore the roles the Internet and social networking play in the timely communication, wide-area team-building, and urgent mobilization required to solve broad-scope, time-critical problems. The challenge was to be the first to locate 10 moored, 8-foot, red weather balloons at 10 random locations in the continental United States. A team from MIT won by locating all balloons in under 9 hours. We helped analyze the factors behind the team's success. We then quantified the limits of this kind of mobilization, and introduced techniques for improving information verification in mass collaboration.
G. Pickard, W. Pan, I. Rahwan, M. Cebrian, R. Crane, A. Madan, A. Pentland (2011). Time-Critical Social Mobilization. Science. Vol. 334 no. 6055 pp. 509-512.
A. Rutherford, M. Cebrian, S. Dsouza, E. Moro, A. Pentland, and I. Rahwan (2013). Limits of Social Mobilization. Proceedings of the National Academy of Sciences, vol. 110 no. 16 pp. 6281-6286
V. Naroditskiy, I. Rahwan, M. Cebrian, N. R. Jennings (2012). Verification in Referral-Based Crowdsourcing. PLOS ONE 7(10): e45924.
H. Chen, I. Rahwan, and M. Cebrian (2016). Bandit strategies in social search: the case of the DARPA red balloon challenge. EPJ Data Science, 2016 5:20
DARPA Shredder Challenge (2012-2014)
The Internet has unleashed the capacity for planetary-scale collective problem solving (also known as crowdsourcing). However, the very openness of crowdsourcing makes it vulnerable to sabotage by rogue or competitive actors. To explore the effect of errors and sabotage on the performance of crowdsourcing, we analyze data from the DARPA Shredder Challenge, a prize competition for exploring methods to reconstruct documents shredded by a variety of paper shredding techniques.
N. Stefanovitch, A. Alshamsi, M. Cebrian, I. Rahwan (2014). Error and attack tolerance of collective problem solving: The DARPA Shredder Challenge. EPJ Data Science. vol 3, no 13, pages 1-27.
Emotions & Influence in Social Networks (2012-2016)
Social networks shape our mood, emotions, and behavior. New unobtrusive sensing techniques (wearable sensors, mobile phones, social media), allow us to study these social dynamics in the real world with unprecedented detail, complementing old methods like lab studies and surveys. We discovered that while some behaviors spread on social networks like a virus (you catch the blues from your friends), other behaviors work in the opposite direction, where our behavior changes to complement the behavior of others. Moreover, our individual personality traits affect how these processes work. These findings inform potential interventions designed to improve societal well-being.
A. Alshamsi, F. Pianesi, B. Lepri, A. Pentland, and I. Rahwan (2016). Network Diversity and Affect Dynamics: The Role of Personality Traits. PLOS ONE, 11(4), e0152358.
A. Alshamsi, F. Pianesi, B. Lepri, A. Pentland, and I. Rahwan (2015). Beyond Contagion: Reality Mining Reveals Complex Patterns of Social Influence. PLOS ONE 10(8), e0135740.
A. Alshamsi, E. Awad, M. Almehrezi, V. Babushkin, P.J. Chang, Z. Shoroye, A.P. Tóth, and I. Rahwan (2015). Misery loves company: happiness and communication in the city. EPJ Data Science. 4(1), pp.1-12.
Cognitive Limits of Social Networks (2012-2016)
There is a wide cultural belief in the power of the Web and social media as enablers of collective intelligence. They help us spread information rapidly, and learn useful information from each other. But there are fundamental limits to the capabilities of those networks. Understanding these limits is essential to improving social media and allowing society to make the most of it.
M. Cebrian, I. Rahwan, A. Pentland (2016). Beyond Viral. Communications of the ACM. 59(4):36-39.
I. Rahwan, D. Krasnoshtan, A. Shariff, J. F. Bonnefon (2014). Analytical reasoning task reveals limits of social learning in networks. Journal of the Royal Society Interface. 11(93).
Promoting Cooperation (2012-2013)
How to best govern society and promote cooperation is a centuries-old debate: is cooperation best maintained by a central authority, or is it better handled by more decentralized forms of governance? Using mathematical models, we explore the tradeoffs between different mechanisms for cooperation. We studied how centralized cooperation-enforcement mechanisms may be undermined by corruption, and how decentralized enforcement can provide viable alternatives. We also showed that it is possible to promote cooperation by inducing peer-pressure in social networks, using interventions that amplify the power of peer enforcement mechanisms.
A. Mani, I. Rahwan, and A. Pentland (2013). Inducing Peer Pressure to Promote Cooperation. Scientific Reports. 3(1735) doi:10.1038/srep01735.
S. Abdallah, R. Sayed, I. Rahwan, B. LeVeck, M. Cebrian, A. Rutherford, J. Fowler (2014). Corruption Drives the Emergence of Civil Society. Journal of the Royal Society Interface. 11(93).
Mass-Scale Argumentation (2005-2011)
This research program aimed to lay the foundations for a world-wide web of annotated arguments, which would allow people to navigate arguments all over the Web. For instance, if you are unsure of the assumptions behind a statement made in an article or piece of news, you would be able to navigate evidence supporting it, as well as the counter-arguments that undermine it, even if these are located elsewhere on the Internet. We articulated this vision, and proposed some semantic annotation technologies that would facilitate it. Building on these ideas, others have grown a full research program to realize this "worldwide argument web" (see this recent survey for an overview).
I. Rahwan, B. Banihashemi, C. Reed, D. Walton and S. Abdallah (2011). Representing and Classifying Arguments on the Semantic Web. The Knowledge Engineering Review. Volume 26, Issue 4, pp 487-511
I. Rahwan (2008). Mass Argumentation and the Semantic Web. Journal of Web Semantics. Vol 6, No 1, pages 29-37.
I. Rahwan, F. Zablith and C. Reed (2007). Laying the Foundations for a World Wide Argument Web. Artificial Intelligence, Vol 171, No 10-15, pages 897-921
I. Rahwan, F. Zablith, and C. Reed (2007). Towards Large Scale Argumentation Support on the Semantic Web. In: Proceedings of 22nd Conference on Artificial Intelligence (AAAI). AAAI Press, California, USA, pages 1446-1451.
I. Rahwan, P. McBurney (2007). Guest Editors' Introduction: Argumentation Technology. IEEE Intelligent Systems, November/December, Vol. 22, no. 6, pages 21-23.
C. Chesñevar, J. McGinnis, S. Modgil, I. Rahwan, C. Reed, G. Simari, M. South, G. Vreeswijk and S. Willmott (2006). Towards an Argument Interchange Format. The Knowledge Engineering Review, Vol 21, No 4, pages 293-316.
Opinion aggregation (2007-2017)
Opinion aggregation on social media uses various mechanisms, such as "Likes" or thumbs-up/-down, which handle a single item at a time. In many domains (e.g. political discussion or AI ethics), we need to consider a very large number of possibilities, or to take into account the relationships between different claims and how they rebut one another through complex webs of arguments and counter-arguments. We study methods for aggregating opinions about such complex domains, the quality of the outcomes of different opinion aggregation methods, and whether strategic agents can manipulate those outcomes.
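The webs of arguments and counter-arguments described above are typically modeled as abstract argumentation frameworks: directed graphs of arguments and attacks, evaluated under semantics such as the grounded semantics studied in several of the papers below. As a minimal self-contained sketch (not code from these papers; the helper name is illustrative), the grounded extension, the most skeptical set of collectively defensible arguments, can be computed by iterating Dung's characteristic function from the empty set:

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework:
    iterate the characteristic function from the empty set until a
    fixed point is reached.

    arguments: set of argument labels
    attacks: set of (attacker, target) pairs
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        # An argument is defended if each of its attackers is itself
        # attacked by some member of the current extension.
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers[a])
        }
        if defended == extension:
            return extension
        extension = defended

# "a" attacks "b", "b" attacks "c": "a" is unattacked, and "a"
# defends "c" against "b", so both are collectively acceptable.
args = {"a", "b", "c"}
atk = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atk)))  # ['a', 'c']
```

Note how the outcome depends on the attack structure, not on per-item vote counts; this is the kind of relationship that single-item "Like" mechanisms cannot capture.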
M. R. Frank, M. Cebrian, G. Pickard, I. Rahwan (2017). Validating Bayesian truth serum in large-scale online human experiments. PLOS ONE. 12(5): e0177385.
E. Awad, J.-F. Bonnefon, M. Caminada, T. Malone, I. Rahwan (2017). Experimental Assessment of Aggregation Principles in Argumentation-enabled Collective Intelligence. ACM Transactions on Internet Technology. 17:3.
[Paper] [arXiv preprint]
E. Awad, M. Caminada, G. Pigozzi, M. Podlaszweski, I. Rahwan (2017). Pareto Optimality and Strategy Proofness in Group Argument Evaluation. Journal of Logic and Computation. (in press)
[Paper] [Free link to access article]
E. Awad, R. Booth, F. Tohme, I. Rahwan (2017). Judgment Aggregation in Multi-Agent Argumentation. Journal of Logic and Computation. 27(1): 227-259.
R. Booth, E. Awad, and I. Rahwan (2014). Interval Methods for Judgment Aggregation in Argumentation. In: Proc. 14th Int. Conference on Principles of Knowledge Representation and Reasoning (KR), Vienna.
R. Booth, M. Caminada, M. Podlaszewski, and I. Rahwan (2012). Quantifying Disagreement in Argument-based Reasoning. In: Proc. 11th Int. Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), Valencia, Spain.
I. Rahwan and K. Larson (2011). Logical Mechanism Design. The Knowledge Engineering Review. Volume 26, No 1, pages 61-69.
S. Pan, K. Larson and I. Rahwan (2010). Argumentation Mechanism Design for Preferred Semantics. In: Proceedings of the 3rd International Conference on Computational Models of Argument (COMMA), Italy, pages 403-414.
I. Rahwan and F. Tohmé (2010). Collective Argument Evaluation as Judgement Aggregation. In: Proceedings of the 9th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), Toronto, Canada.
I. Rahwan, K. Larson and F. Tohmé (2009). A Characterisation of Strategy-Proofness for Grounded Argumentation Semantics. In: Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI), Pasadena, California, USA.
I. Rahwan and K. Larson (2008). Pareto Optimality in Abstract Argumentation. In: Proceedings of 23rd Conference on Artificial Intelligence (AAAI). AAAI Press, California, USA, pages 150-155.
I. Rahwan and K. Larson (2008). Mechanism Design for Abstract Argumentation. In: Proceedings of 7th International Conference on Autonomous Agents and Multiagent Systems (AAMAS). Estoril, Portugal, pages 1031-1038.
Computational models of argumentation & negotiation (2003-2013)
Computer scientists are interested in developing models of argumentation and negotiation processes, in order to facilitate conflict resolution and deliberation among machines and between humans and machines. These models are often formalized using logical theories or computer simulation, and can be tested in human-subject experiments.
S. Dsouza, Y. Gal, P. Pasquier, S. Abdallah, and I. Rahwan (2013). Reasoning about Goal Revelation in Human Negotiation. IEEE Intelligent Systems. vol. 28, no. 2, pp. 74-80.
P. Pasquier, R. Hollands, I. Rahwan, F. Dignum and L. Sonenberg (2011). An Empirical Study of Interest-based Negotiation. Autonomous Agents and Multiagent Systems. Volume 22, Number 2, 249-288
I. Rahwan, M. I. Madakkatel, J. F. Bonnefon, R. N. Awan and S. Abdallah (2010). Behavioural Experiments for Assessing the Abstract Argumentation Semantics of Reinstatement. Cognitive Science. 34(8):1483–1502
F. Grasso, I. Rahwan, C. Reed, and G. R. Simari (2010) Introducing Argument & Computation, Argument & Computation, Vol 1, No 1, pages 1-5.
A. Belesiotis, M. Rovatsos and I. Rahwan (2010). Agreeing on Plans through Iterated Disputes. In: Proceedings of the 9th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), Toronto, Canada.
N. C. Karunatillake, N. R. Jennings, I. Rahwan and P. McBurney (2009). Dialogue Games that Agents Play within a Society. Artificial Intelligence. Vol 173, No 9-10, pages 935-981.
I. Rahwan, P. Pasquier, L. Sonenberg and F. Dignum (2009). A Formal Analysis of Interest-based Negotiation. Annals of Mathematics and Artificial Intelligence. Vol 55, Issue 3-4, pages 253-276.
P. Pasquier, L. Sonenberg, I. Rahwan, F. Dignum, R. Hollands (2007). An Empirical Study of Interest-based Negotiation. In: Proceedings of 9th International Conference on Electronic Commerce (ICEC). ACM Press.
I. Rahwan, L. Sonenberg, N. R. Jennings and P. McBurney (2007). STRATUM: A Methodology for Designing Heuristic Agent Negotiation Strategies. Applied Artificial Intelligence, Vol 21, No 6, pages 489-527.
I. Rahwan, L. Amgoud (2006). An Argumentation-based Approach for Practical Reasoning. Proceedings of the 5th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), Hakodate, Japan
N. C. Karunatillake, N. R. Jennings, I. Rahwan and S. D. Ramchurn (2006). Managing Social Influences through Argumentation-based Negotiation. Proceedings of the 5th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), Hakodate, Japan
P. Pasquier, I. Rahwan, F. Dignum and L. Sonenberg (2006). Argumentation and Persuasion in the Cognitive Coherence Theory. In P. Dunne and T. Bench-Capon (Eds.), Proceedings of the 1st International Conference on Computational Models of Argument (COMMA), IOS Press, Amsterdam.
N. C. Karunatillake, N. R. Jennings, I. Rahwan and T. Norman (2005). Argument-based Negotiation in a Social Context. Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS)
I. Rahwan, F. Koch, C. Graham, J. A. Kattan and L. Sonenberg (2005). Goal-directed Automated Negotiation for Supporting Mobile User Coordination. In Modeling and Using Context: 5th International and Interdisciplinary Conference, CONTEXT, LNCS, Vol 3554, Springer
I. Rahwan, L. Sonenberg and F. Dignum (2003). Towards Interest-Based Negotiation. Proceedings of the 2nd International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), Melbourne, Australia
I. Rahwan, S. D. Ramchurn, N. R. Jennings, P. McBurney, S. Parsons and L. Sonenberg (2003). Argumentation-Based Negotiation. The Knowledge Engineering Review, Volume 18, No. 4, pages 343-375.