On greater participation in voluntary democratic processes by the privileged:
Smith, A., Schlozman, K.L., Verba, S., & Brady, H. (2009, 1 September). The Internet and Civic Engagement. Pew Research Center. https://www.pewresearch.org/internet/2009/09/01/the-internet-and-civic-engagement/.
Kasara, K., & Suryanarayan, P. (2015, July). When Do the Rich Vote Less Than the Poor and Why? Explaining Turnout Inequality across the World. American Journal of Political Science, 59(3), 613-627.
McBride, A.M., Sherraden, M.S., & Pritzker, S. (2006). Civic Engagement among Low-Income and Low-Wealth Families: In Their Words. Family Relations, 55(2), 152-162. 10.1111/j.1741-3729.2006.00366.x.
Uslaner, E.M., & Brown, M. (2005). Inequality, Trust, and Civic Engagement. American Politics Research, 33(6). 10.1177/1532673X04271903.
On weighting in polling:
Mercer, A., Lau, A., & Kennedy, C. (2018, 26 January). How different weighting methods work. Pew Research Center. https://www.pewresearch.org/methods/2018/01/26/how-different-weighting-methods-work/.
On social welfare functions:
I dig into this topic in more detail in the “Philosophical questions” section; it is also a rich area of research in economics and political philosophy. For a primer, see: Social welfare function. (2023, 17 March). In Wikipedia. https://en.wikipedia.org/wiki/Social_welfare_function.
On simulated deliberation:
Leike, J. (2023, 9 March). A proposal for importing society’s values: Building towards Coherent Extrapolated Volition with language models. Musings on the Alignment Problem (Substack).
Prior work similar to AI-as-representative:
Hidalgo, C. Augmented Democracy. https://www.peopledemocracy.com/.
Schneier, B. (2023, 10 March). Rethinking Democracy for the Age of AI. Schneier on Security (originally a speech at RSA San Francisco). https://www.schneier.com/essays/archives/2023/05/rethinking-democracy-for-the-age-of-ai.html.
Grandi, U. (2018, 19 June). Agent-Mediated Social Choice. ArXiv. https://arxiv.org/abs/1806.07199.
Kahng, A., Lee, M.K., Noothigattu, R., Procaccia, A., & Psomas, C.-A. (2019). Statistical Foundations of Virtual Democracy. Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3173-3182. https://proceedings.mlr.press/v97/kahng19a.html.
The “Synthetic Party” in Denmark – background here: Xiang, C. (2022, 13 October). This Danish Political Party Is Led by an AI. Vice. https://www.vice.com/en/article/jgpb3p/this-danish-political-party-is-led-by-an-ai.
Hilbert, M. (2009, 8 May). The Maturing Concept of E-Democracy: From E-Voting and Online Consultations to Democratic Value Out of Jumbled Online Chatter. Journal of Information Technology & Politics.
On the potential for finding better equilibria through rich understanding of participants’ preferences and dedicated research time: a quote from a friend who used to work at HM Treasury – ‘I think many of the places where I had the most “value-add” in government was where I was simply the one person putting in the time to do this search/optimization. Because I did so, I was able to find a solution that satisfied everyone’s preferences (in this case, “everyone” being the different government departments with equities in the policy area). If I hadn’t, the policy would simply have been dropped (or rather, internally vetoed)’.
On privacy-preserving machine learning/structured transparency as a path to reducing trade-offs between privacy and performance:
Trask, A., Bluemke, E., Garfinkel, B., Ghezzou Cuervas-Mons, C., & Dafoe, A. (2020, 15 December). Beyond Privacy Trade-offs with Structured Transparency. ArXiv. https://arxiv.org/abs/2012.08347.
Bluemke, E., Collins, T., Garfinkel, B. & Trask, A. (2023, 15 March). Exploring the Relevance of Data Privacy-Enhancing Technologies for AI Governance Use Cases. ArXiv. https://arxiv.org/abs/2303.08956.
On the importance of deliberation to democracy:
Habermas, J. (1996). Between Facts and Norms. MIT Press.
Cohen, J. (1989). Deliberation and Democratic Legitimacy. In A. Hamlin and P. Pettit, eds., The Good Polity. New York: Basil Blackwell, 17–34.
More general contemporary accounts of democracy which emphasize elements other than aggregation of individual preferences:
Kolodny, N. (2014, 16 December). Rule Over None I: What Justifies Democracy? Philosophy & Public Affairs. https://www.ceu.edu/sites/default/files/attachment/event/12390/kolodny-rule-over-none.pdf.
Kolodny, N. (2014, 17 December). Rule Over None II: Social Equality and the Justification of Democracy. Philosophy & Public Affairs. https://www.ceu.edu/sites/default/files/attachment/event/12567/kolodny-rule-over-none-social-equality-and-justification-democracy.pdf.
Allen, D. (2023). Justice by Means of Democracy. University of Chicago Press.
Existing work on tech-enabled deliberation:
Pol.is. https://pol.is/home.
Bakker, M. et al. (2022, 28 November). Fine-tuning language models to find agreement among humans with diverse preferences. ArXiv. https://arxiv.org/abs/2211.15006.
Small, C. et al. (2023, 20 June). Opportunities and Risks of LLMs for Scalable Deliberation with Polis. ArXiv. https://arxiv.org/abs/2306.11932.
Landemore, H. Can AI bring deliberative democracy to the masses? https://www.law.nyu.edu/sites/default/files/Helen%20Landemore%20Can%20AI%20bring%20deliberative%20democracy%20to%20the%20masses.pdf.
Ovadya, A. (2023, 1 February). 'Generative CI' through Collective Response Systems. ArXiv. https://arxiv.org/abs/2302.00672.
Velikanov, C. & Prosser, A. (2017, April). Mass Online Deliberation in Participatory Policy-Making—Part I. In Beyond Bureaucracy. pp. 209-234. https://www.researchgate.net/publication/316433338_Mass_Online_Deliberation_in_Participatory_Policy-Making-Part_I.
OECD. (2020, 10 June). Innovative Citizen Participation and New Democratic Institutions. https://www.oecd.org/gov/innovative-citizen-participation-and-new-democratic-institutions-339306da-en.htm.
Lee, D., Goel, A., Aitamurto, T. & Landemore, H. (2014). Crowdsourcing for Participatory Democracies: Efficient Elicitation of Social Choice Functions. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing.
Aitamurto, T. & Landemore, H. (2016, May). Crowdsourced Deliberation: The Case of the Law on Off-Road Traffic in Finland. Policy & Internet 8(2): 174-196.
Argyle, L.P. et al. (2023, 14 February). AI Chat Assistants can Improve Conversations about Divisive Topics. ArXiv. https://arxiv.org/abs/2302.07268.
On the impact of choosing language carefully in political discussion (“viewpoint translation” or “political translation”)
Doerr, N. (2018). Political Translation: How Social Movement Democracies Survive. Cambridge University Press.
Levy, R. (2021). Social Media, News Consumption, and Polarization: Evidence from a Field Experiment. American Economic Review, 111(3): 831–870.
Broockman, D. & Kalla, J. (2016, 8 April). Durably reducing transphobia: A field experiment on door-to-door canvassing. Science. Vol 352, Issue 6282, pp. 220-224. https://www.science.org/doi/10.1126/science.aad9713.
Bridging-based ranking
Ovadya, A. (2022, 17 May). Bridging-Based Ranking: How Platform Recommendation Systems Might Reduce Division and Strengthen Democracy. The Belfer Center (Harvard Kennedy School). https://www.belfercenter.org/publication/bridging-based-ranking.
General background on recommendation systems:
Narayanan, A. (2023, 9 March). Understanding Social Media Recommendation Algorithms. Knight First Amendment Institute at Columbia University. https://knightcolumbia.org/content/understanding-social-media-recommendation-algorithms.
Milano, S., Taddeo, M., & Floridi, L. (2020, 27 February). Recommender systems and their ethical challenges. AI & Society. https://link.springer.com/article/10.1007/s00146-020-00950-y.
On vNM (rather than hedonic) utility:
Von Neumann–Morgenstern utility theorem. (2023, 17 August). In Wikipedia. https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem.
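For readers who want the statement itself rather than the Wikipedia entry, a compact textbook formulation of the theorem (my paraphrase, not drawn from any source above) is:

```latex
% vNM utility theorem (standard statement): if a preference relation
% $\succeq$ over lotteries satisfies completeness, transitivity,
% continuity, and independence, then there exists a utility function $u$
% over outcomes, unique up to positive affine transformation, such that
% for lotteries $L = (p_1,\dots,p_k)$ and $M = (q_1,\dots,q_k)$ over
% outcomes $x_1,\dots,x_k$:
L \succeq M
  \iff
\sum_{i=1}^{k} p_i\, u(x_i) \;\ge\; \sum_{i=1}^{k} q_i\, u(x_i)
```

Note that $u$ here encodes attitudes toward risk revealed by choices over lotteries, not hedonic intensity — which is the contrast the header is drawing.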
Impossibility results in social choice
The “Philosophical questions” section will explore this in greater detail. For now, two of the landmark results are:
Gibbard–Satterthwaite theorem. (2023, 12 August). In Wikipedia. https://en.wikipedia.org/wiki/Gibbard%E2%80%93Satterthwaite_theorem.
Arrow's impossibility theorem. (2023, 19 June). In Wikipedia. https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem.
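For quick reference, standard textbook statements of the two results (my paraphrase, not taken verbatim from the entries above):

```latex
% Arrow: for a set of alternatives $A$ with $|A| \ge 3$, any social
% welfare function $F : \mathcal{L}(A)^n \to \mathcal{L}(A)$ satisfying
% unrestricted domain, weak Pareto, and independence of irrelevant
% alternatives is dictatorial — some voter $i$'s strict preferences
% always prevail:
\exists\, i \;\; \forall (\succ_1,\dots,\succ_n) \;\; \forall a,b \in A:
  \quad a \succ_i b \implies a \; F(\succ_1,\dots,\succ_n) \; b

% Gibbard–Satterthwaite: any onto, non-dictatorial social choice function
% $f : \mathcal{L}(A)^n \to A$ with $|A| \ge 3$ is manipulable — some
% voter $i$ can do better by misreporting $\succ_i$ as $\succ_i'$:
\exists\, i,\; (\succ_1,\dots,\succ_n),\; \succ_i' :
  \quad f(\succ_1,\dots,\succ_i',\dots,\succ_n)
        \;\succ_i\;
        f(\succ_1,\dots,\succ_i,\dots,\succ_n)
```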
On connections between the alignment of government and the alignment of AI systems:
At some level, all principal-agent problems share core alignment challenges, and many commentators have noted the parallels between aligning AI and aligning various social systems (companies, governments, etc.). A few sources:
Danzig, R. (2022, January). Machines, Bureaucracies, and Markets as Artificial Intelligences. Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/machines-bureaucracies-and-markets-as-artificial-intelligences/.
Davidad. (2022, 20 December). An Open Agency Architecture for Safe Transformative AI. LessWrong. https://www.lesswrong.com/posts/pKSmEkSQJsCSTK6nH/an-open-agency-architecture-for-safe-transformative-ai.
Yudkowsky, E. (2004). Coherent Extrapolated Volition. Machine Intelligence Research Institute. https://intelligence.org/files/CEV.pdf.
Impossibility results in fairness
Miconi, T. (2017, 11 September). The impossibility of “fairness”: a generalized impossibility result for decisions. ArXiv. https://arxiv.org/abs/1707.01195.
Hsu, B., Mazumder, R., Nandy, P., & Basu, K. (2022, 24 August). Pushing the limits of fairness impossibility: Who’s the fairest of them all? ArXiv. https://arxiv.org/abs/2208.12606.
Saravanakumar, K.K. (2021, 29 January). The impossibility theorem of machine fairness, a causal perspective. ArXiv.
Bias and fairness issues with machine learning systems. There is way too much literature here to do justice to, as evidenced by the several conferences and journals dedicated to the topic. A few especially relevant pieces:
Specifically on LLMs: Weidinger, L. et al. (2021, 8 December). Ethical and social risks of harm from Language Models. ArXiv. https://arxiv.org/abs/2112.04359.
On political bias: Feng, S., Park, C.Y., Liu, Y., & Tsvetkov, Y. (2023). From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, 1: 11737–11762. https://aclanthology.org/2023.acl-long.656.pdf.
Some famous practical failures:
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, 23 May). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
Google apologises for Photos app's racist blunder. (2015, 1 July). BBC News. https://www.bbc.co.uk/news/technology-33347866.
On internet voting
A few of the many pieces warning about harms:
Dill, D. et al. (2020, 19 March). Verified Voting Puerto Rico Veto Letter P.S. 1314. https://verifiedvoting.org/verified-voting-puerto-rico-veto-letter-p-s-1314/.
Schneier, B. (2017, 10 March). Online Voting Won’t Save Democracy. The Atlantic. https://www.theatlantic.com/technology/archive/2017/05/online-voting-wont-save-democracy/524019/.
Appel, A. (2022, 27 June). How to Assess an E-voting System. Freedom to Tinker. https://freedom-to-tinker.com/2022/06/27/how-to-assess-an-e-voting-system/.
Park, S., Specter, M., Narula, N., & Rivest, R.L. (2021, 16 February). Going from Bad to Worse: From Internet Voting to Blockchain Voting. Journal of Cybersecurity. https://academic.oup.com/cybersecurity/article/7/1/tyaa025/6137886. https://people.csail.mit.edu/rivest/pubs/PSNR20.pdf.
Munroe, R. Voting Software. xkcd. https://xkcd.com/2030/.
A less pessimistic take: Buterin, V. (2021, 25 May). Blockchain voting is overrated among uninformed people but underrated among informed people. https://vitalik.ca/general/2021/05/25/voting2.html.
New identity verification systems:
India’s Aadhaar: Aadhaar. (2023, 24 August). In Wikipedia. https://en.wikipedia.org/wiki/Aadhaar.
A New Identity and Financial Network (Worldcoin whitepaper). Worldcoin. https://whitepaper.worldcoin.org/.
Buterin, V. (2023, 24 July). What do I think about biometric proof of personhood?. https://vitalik.eth.limo/general/2023/07/24/biometric.html.
Background on reflective equilibrium:
Quick introduction: Daniels, N. (2003, 28 April). Reflective Equilibrium. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/reflective-equilibrium/.
Longer: Daniels, N. (1996). Justice and Justification: Reflective Equilibrium in Theory and Practice. Cambridge University Press.
Specifically on the ability of AI to assist in seeking reflective equilibrium: The Human-AI Reflective Equilibrium. LessWrong. https://www.lesswrong.com/posts/W7sEv69cQzW8D8SMr/the-human-ai-reflective-equilibrium.
An example of a trusted voting guide: Angelos, J. (2021, 19 September). Germany’s no-emotion voting guide surges despite campaign of personalities. Politico. https://www.politico.eu/article/germany-election-2021-rational-voting-wahl-o-mat-survey/.
Liquid democracy
A general overview: Liquid democracy. (2023, 24 July). In Wikipedia. https://en.wikipedia.org/wiki/Liquid_democracy.
An example in practice: Hardt, S. & Lopes, L.R. (2015). Google Votes: A Liquid Democracy Experiment on a Corporate Social Network. https://www.semanticscholar.org/paper/Google-Votes%3A-A-Liquid-Democracy-Experiment-on-a-Hardt-Lopes/9daab5ea181f3ec2ef13de85c3fdae238b15dfdc.
Futarchy
Hanson, R. (2013). Shall We Vote on Values, But Bet on Beliefs? Journal of Political Philosophy. http://hanson.gmu.edu/futarchy2013.pdf.