by Theodore Scaltsas
We are all accustomed to thinking of wellbeing in Aristotelian terms, assuming that the agent’s own choice (proairesis) determines the preferences and actions that constitute their wellbeing. The agent chooses what is good for them and performs the relevant actions. Accordingly, autonomy is built into our conception of wellbeing.
However, AI Governance cannot promise wellbeing in this sense. AI itself will make holistic choices about what is good for every agent in society, decide on the agent’s behalf, and leave the agent to pursue AI’s decisions. If AI is well designed for humans, its management of society will result in every human having access to health services, education, work and accommodation. In this respect, although AI will take away the agent’s choice, it will offer in return what we might call social prosperity: access to personal and social goods in society.
Two questions arise:
- Can we justify sacrificing wellbeing, with its autonomy of choice, for social prosperity?
- Will we have the opportunity to decide democratically whether we prefer prosperity to wellbeing?
Learn more about Ancient Philosophy Today
Ancient Philosophy Today: DIALOGOI provides a forum for the mutual engagement between ancient and contemporary philosophy. The journal connects interpretative work in ancient philosophy to current discussions in metaphysics, epistemology and ethics, and assesses the continuing relevance of ancient theories to current philosophical interests and debates.
About the author
Theodore Scaltsas is Professor Emeritus of Ancient Philosophy at the University of Edinburgh. In recent years, he has been studying AI and its promise of Superintelligence in the AI Governance of society. His research interests lie in understanding human Wellbeing and human Wisdom.