Democracy on Mars 5: Philosophical Questions
Note: this is just a short preview of a full-length post that remains under construction. It is also the fifth post in a series. The earlier posts are here, here, here and here.
In this section, I will explore how the capabilities described in the previous posts [TK links] intersect with considerations from political and moral philosophy. These issues include:
The limits of popular sovereignty. New tools could enable an unprecedented level of informed public input. But most conceptions of democracy also emphasize elements other than popular sovereignty, such as universal human rights, minority rights and individual freedoms (speech, assembly, etc.). How, if at all, should new capabilities alter the interplay between these desiderata?
The relationship between interpretability and performance. A linear regression is generally easier to understand than a neural network, but often offers worse predictive performance. Similarly, simple, legible mechanisms such as ballots and laws written in natural language are easier for participants to grasp than the more technologically sophisticated proposals that the Martians might use, but they may perform far worse with regard to attributes such as government efficiency, accountability and responsiveness.1 What sort of trade-offs should we be willing to accept between these traits?
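To make the trade-off concrete, here is a toy sketch (the synthetic data and models are invented for illustration, not anything specific to the Martian case): a linear model whose single slope and intercept a reader can inspect directly, compared against a k-nearest-neighbours predictor that fits the data better but offers no compact, human-readable summary.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = x * np.sin(x) + rng.normal(0, 0.1, 200)  # nonlinear ground truth

x_tr, y_tr = x[:150], y[:150]
x_te, y_te = x[150:], y[150:]

# Legible model: one slope and one intercept a person can read off.
slope, intercept = np.polyfit(x_tr, y_tr, 1)
lin_pred = slope * x_te + intercept

# Opaque model: k-nearest-neighbours; accurate here, but there is no
# short human-readable summary of what it "believes".
def knn_predict(query, xs, ys, k=5):
    idx = np.argsort(np.abs(xs - query))[:k]
    return ys[idx].mean()

knn_pred = np.array([knn_predict(q, x_tr, y_tr) for q in x_te])

def mse(pred):
    return float(np.mean((pred - y_te) ** 2))

print(f"linear MSE: {mse(lin_pred):.3f}, kNN MSE: {mse(knn_pred):.3f}")
```

On this deliberately nonlinear data, the legible model loses badly on accuracy, which is the essence of the trade-off described above.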
The ideal degree of statism/delineating public and private spheres. Improvements to the subtlety and scope with which governments can pursue their mandate, paired with more reliable ways of tying goals to the public will, open the door to a much more expansive state – one that mediates and sanctions many transactions we currently consider private. What forms of this, if any, are desirable?
The balance between universalism and localism. Historically, which powers rested with which level of government (national, regional, local, etc.) largely depended on logistical limitations – such as how large an organization could grow while still functioning – and the contingent evolution of institutions. The Martians will have access to systems that enable consistent, centralized enforcement of norms at unprecedented scale and/or much more fine-grained, hyper-local decision-making. They could also reinterpret “local” to refer to communities defined by factors other than geography, enabling people to opt into different social contracts. AI systems could represent political identity in overlapping, continuous, high-dimensional ways, allowing each individual to be governed by a unique combination of rights and obligations.2 Some moral factors, such as the desire to enforce equal access to human rights, pull towards universalism, while others, such as allowing self-determination, pull towards localism. What balance might the Martians choose?
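As an illustrative sketch of what “overlapping, continuous, high-dimensional” membership could mean computationally (the communities, issue dimensions, and weights below are all invented): each person holds fractional membership in several communities, and the rules that apply to them are a weighted blend of those communities’ policies.

```python
import numpy as np

# Invented example: three "communities," each with a policy stance
# on four issue dimensions (rows: communities, cols: issues).
policies = np.array([
    [0.9, 0.1, 0.5, 0.2],   # geographic district
    [0.2, 0.8, 0.4, 0.9],   # professional guild
    [0.5, 0.5, 0.9, 0.1],   # cultural association
])

# A person holds continuous, overlapping membership weights rather
# than belonging to exactly one community.
membership = np.array([0.6, 0.3, 0.1])
membership = membership / membership.sum()  # normalise to sum to 1

# The rules governing this person are a blend of the communities'
# policies, weighted by their degree of membership in each.
personal_policy = membership @ policies
print(personal_policy)
```

Two people with different membership vectors would end up governed by different blends, which is one concrete reading of “a unique combination of rights and obligations.”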
Enfranchisement and equality. Systems that don’t require voters to have a firm grasp of policy issues open the door to the enfranchisement of people below voting age (a common rationale for not granting these people the right to vote is that they do not yet have sufficient knowledge to participate in an informed way). Similar tools could represent the interests of animals, natural features (rivers, forests, etc.) and future generations.3 These systems could also expand enfranchisement possibilities in other ways, such as enabling people who have partial membership in a community, such as non-citizen residents, to practice “scoped voting” (submitting input that influences some but not all policy decisions). These paths create tricky questions about how to measure political equality.
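A minimal sketch of what “scoped voting” could look like as a data structure (the voter names, policy domains, and weights are invented for illustration): each voter carries a per-domain weight, and the tally scales each vote by that weight, so partial members influence some decisions but not others.

```python
from collections import defaultdict

# Invented scope weights: citizens vote everywhere; a non-citizen
# resident's input counts only for domains like local schooling.
voters = {
    "citizen_a":  {"schools": 1.0, "zoning": 1.0, "foreign_policy": 1.0},
    "resident_b": {"schools": 1.0, "zoning": 1.0, "foreign_policy": 0.0},
}

def tally(ballots, weights):
    """Sum votes per domain (+1 yes, -1 no), scaled by scope weight."""
    totals = defaultdict(float)
    for voter, choices in ballots.items():
        for domain, vote in choices.items():
            totals[domain] += vote * weights[voter].get(domain, 0.0)
    return dict(totals)

ballots = {
    "citizen_a":  {"schools": +1, "foreign_policy": -1},
    "resident_b": {"schools": +1, "foreign_policy": +1},
}
result = tally(ballots, voters)
print(result)  # resident_b's foreign_policy vote is scaled to zero
```

The tricky measurement questions mentioned above show up immediately: is resident_b “half enfranchised”? Does equality require equal weight per domain, or equal total weight across domains?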
Interests and representation. Should systems advocating on a person’s behalf optimize for the user’s immediate desires or their long-run well-being? For their experiencing or their remembering self? For what they want based on their current level of knowledge, or what a more enlightened version of them would want? These questions relate to well-established issues in moral philosophy and philosophy of identity, but may take on a new practical relevance with the advent of personalized, automated representation.
Overall, the elimination of prior logistical constraints means that the Martians will have to be deliberate about choices that the world previously made for us, and often there is no obviously correct answer.
1. The claim that ML systems are less interpretable is less clear-cut than it might initially seem, because interpretability comes in many varieties. Better interfaces might make ML tools feel far less alien, and improved testing and benchmarking could increase reliability and user confidence.
2. To some extent, we have this already: when people accept employment or join a co-op, they take on new rights and obligations. But emerging technology could enable this at a far larger scale.
3. Obviously, identifying the interests of animals, people who don’t yet exist, and inanimate objects would be a noisy process involving a lot of extrapolation and assumptions – and, unlike human children, these parties cannot express themselves in natural language – but it might be better than nothing. Projects such as “Say Hi to the River” are exploring the use of LLMs for these purposes.