Artificial Intelligence has made the leap from theoretical research to a practical field of study linked to engineering, computer science and robotics, with applications in scientific discovery that could help tackle critical global issues. It has become more prominent in our everyday lives too: from chatbots on websites to the idea of Insight as a Service (IAAS) offered by libraries, AI has become a buzzword.
We talked to Dr Mariarosaria Taddeo about the evolution of AI research and the influence philosophical study has on the development and governance of AI technologies.
There are various definitions of AI. How do you define it?
Luciano Floridi defines AI as a growing resource of interactive, autonomous, and self-learning agency, which can be used to perform tasks that would otherwise require human intelligence to be executed successfully.
I agree completely with this definition. There are two important elements to it. The first is that AI is a form of agency that can learn and act autonomously in its environment. The autonomous and self-learning aspects are crucial when we think about the ethical implications of AI.
The second part concerns the nature of the intelligence we are referring to when we talk about Artificial Intelligence. It has nothing to do with feelings, emotions, intuitions and intentionality; these are characteristics of human intelligence. AI is intelligent insofar as it performs tasks that would require intelligence if a human were to perform them. Artificial intelligence imitates intelligence, or acts as if it were intelligent, but no human-like intelligence should be assumed in the machines themselves.
Why are we seeing a revival of AI research now?
Recent work in AI, machine learning and even some models of artificial neural networks builds on research done in the late 1950s, 1960s and 1970s. The scientific community had the conceptual understanding back then but lacked the computational power and data needed to develop and train the models. Over the past decades, computers have become much more powerful. At the same time, in recent years, the amount of data needed to train AI has become available; we all know about the Big Data revolution. These data are the ‘source’ that AI needs to learn and improve.
Computational power and Big Data are the developments that have reignited research in AI in the past few years.
What are the societal implications of AI?
AI is going to become a key element in how societies function. We already see AI being used to perform tasks that can be tedious or dangerous, but that is only a marginal part of its potential areas of deployment. More and more, AI is used for job recruitment and medical diagnosis, to support education, and to improve cyber defense and the security of nationally critical infrastructures. AI will be completely embedded in the reality in which we live. This has key implications for our societies, at both the individual and the group level.
First, AI is autonomous and can make decisions that have an impact on individuals and groups of individuals. We need to make sure that we understand how these decisions are made, for example to ensure there are no biases in the process and that there is accountability when AI makes or suggests a decision. If something goes wrong, who is going to be held responsible for the suggestions that AI gave us? We also need to decide which decisions we want to delegate to AI. Is it fair, for example, to delegate the decision to grant parole in a criminal process to AI? Is it acceptable to delegate offensive actions on the battlefield to an AI system?
Second, the more we delegate tasks to AI, the more likely it is that we give up expertise. We might want to rely on AI for medical diagnosis because it makes the process more precise and efficient. However, we still need doctors who can perform a diagnosis, to read a CT scan, for example, so that we can understand when AI makes a mistake and take over when it doesn't work.
Third, AI is a transformative technology for our societies. It will be distributed horizontally, in households and in every institution, for as many services and tasks as you can imagine. When considering this scenario, we need to keep in mind that AI is not just a reactive technology; it is proactive. It will collect data about us and our environment and digest these data to extrapolate information, but also to nudge our decisions. Imagine a scenario in which AI systems suggest which book to read, what food to eat, with whom we interact, where to go on holiday, and which job to apply for. This constant suggesting and nudging might support human flourishing, but it also risks hindering our self-determination. Crucially, AI will have a direct impact on the people we become. This also means that, when AI becomes sufficiently pervasive, it could shape public opinion and the kind of societies we develop. It is important to ensure that we understand whether, and in what way, it is best to trust AI systems. Trusting AI, as well as digital technologies, correctly will be key to developing governance of these technologies without hampering innovation.¹
¹ Taddeo, Mariarosaria. 2017. ‘Trusting Digital Technologies Correctly’. Minds and Machines, November. https://doi.org/10.1007/s11023-017-9450-5
How are AI and philosophy intertwined?
In the early days, we undertook research on what is known as Symbolic AI. So, there was a lot of work in philosophy of mind and cognitive science that was strongly related to research on AI. This remains an important area of research.
These days, philosophy is arguably even more crucial, because it sheds light on the conceptual changes that AI prompts. This is essential for identifying the ethical and governance problems that AI poses and working out how to solve them.
Consider, for example, the need to understand in what way AI is autonomous. How do we assign responsibility for actions performed by machines when machines are as complicated and distributed as AI systems? Do we need new conceptual models for understanding moral responsibility when it comes to AI? What are the values that should shape the governance of this technology? These are theoretical questions, which require philosophical analyses, and whose answers will have a direct impact on the way we design, develop, and use AI in our societies.
Philosophical research has already had an impact on the governance of technologies. A good example is the set of articles published in the Springer journal Minds and Machines, which have shaped the European Ethics Guidelines for Trustworthy AI. The nature of AI is a fundamental topic of research which has had, and will continue to have, an impact on the development of this technology.
Relevant articles: