The poster child of AI—an autonomous system that can flexibly participate in a human-like way across a variety of contexts, talking, reacting, planning actions, and especially engaging in ethical reflection about its own behaviour—is a long way off, if it is achievable at all. Nevertheless, there is an increasing reliance today on some limited form of AI to support social, commercial, and political decision-making. Half of all patents related to AI have been filed in the last five years.[i] It is important to ask whether this turn to AI is a good thing.
The question can clearly be considered from different points of view. Ellen Broad’s book is most helpful for drawing attention to the sheer range of issues involved. It assumes no prior understanding of AI technologies, but looks at many facets of the overall question, particularly the moral side: is this good for society as a whole, and not just for governments, business, or other elite groups? It acknowledges that the matter is very complex and still very much undecided, as there are positive and negative features of AI to balance. Indeed, it does not make a hard boundary around AI, but covers systems of automation more generally. It is a down-to-earth introduction to an area of applied ethics, and since it doesn’t get bogged down in theorising, might be a useful point of entry for readers new to the area.
The book is in three parts. The first looks at issues around collecting data about people—how data uses people. What is collected? Why? How accurate are the assessments based on it? Who authorises it? Are there biases in the collection or use of the data? What does it really tell you? Are there things it is better not to know? These questions are all oriented towards the ethics of ‘measuring’ people, and raise matters of transparency, bias, privacy, authority and control, and visibility. None of the questions is easily answered.
The next part flips the perspective and considers humans as collectors and processors of data (how people use data), addressing issues of openness and fairness, intelligibility and reliability, and design and diversity. It looks at the roles and attitudes of programmers and users in applying software to automation systems, particularly data-intensive ones. The difficulty of even explicating some of these concepts, such as fairness, in the information technology domain shows how urgent a society-wide debate about the application of algorithmic decision systems really is. Surprisingly, the book makes little reference to the field of information and computer ethics, exemplified by the Information Ethics Group at Oxford University, which has been taking a more theoretical approach to these questions that could prove conceptually enlightening.[ii] Theory is not the focus of Broad’s book, but it would have been helpful to mention such work in a list of resources for further reading, a feature the book unfortunately lacks, though the notes provide some incidental pointers.
The final part of the book enters the realm of policy and legislation. The role of government is examined from two perspectives: government as regulator, and government as example. The government can make rules about how automation is to be used or constrained, and it can adopt (ideally) good practices in its own implementation of automated systems. Good and bad examples of each are discussed, but Broad argues that the difficulties are hard to overcome, because in the end the real problem with automating parts of our social, economic and political lives is that it at best reflects, and at worst amplifies, existing problems.
The serious biases found in automated systems do not emerge because of those systems, but because of real features of the environment into which the systems are supposed to fit. Machines that learn readily absorb human biases and values, good and bad, but they are not in a position to engage in ethical reflection about them. They are not that sort of automaton: the kind of general AI that remains a dream.
[i] https://www.wipo.int/edocs/pubdocs/en/wipo_pub_1055.pdf. Accessed February 2019.