Computer Automation: Political and Ethical Concerns for the 21st Century

Photo: Then One/WIRED

In the early 20th century, one of the great unsolved mathematical problems of the time was whether there could exist an algorithm which, given any statement of formal logic as input, could determine in a finite amount of time whether that statement is provable (the so-called Entscheidungsproblem, or "decision problem"). In 1936, a young British mathematician named Alan Turing published a landmark paper concluding, to the dismay and surprise of his contemporaries, that no such algorithm could exist. While mathematicians recognized Turing's negative answer as monumental, what proved more significant for society as a whole was that his proof introduced the theoretical basis for what we now understand as the modern-day computer.
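The core of Turing's argument can be sketched in a few lines of modern code. The sketch below is illustrative, not from Turing's paper: it assumes a hypothetical halting oracle `halts(program)` and shows that any concrete candidate for it can be defeated by a program built to do the opposite of whatever the oracle predicts.

```python
def paradox_maker(halts):
    """Given a candidate halting oracle `halts(program)` -- a hypothetical
    function, since Turing proved no correct one can exist -- build a
    program that does the opposite of whatever the oracle predicts."""
    def paradox():
        if halts(paradox):
            while True:   # the oracle said we halt, so loop forever
                pass
        # the oracle said we loop forever, so halt immediately
    return paradox

# Any concrete oracle is refuted by its own paradox program. An oracle
# that claims nothing halts is contradicted the moment paradox() returns:
never_halts = lambda program: False
p = paradox_maker(never_halts)
p()  # halts immediately, so the oracle was wrong about p
# Symmetrically, an oracle claiming everything halts would send its
# paradox program into an infinite loop, and so would also be wrong.
```

Either way the oracle errs, which is why no general algorithm of the kind the Entscheidungsproblem asked for can exist.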

Nowadays, computer algorithms dominate contemporary life in the form of computer automation. Trains, airlines, banking, food services, and even jails have implemented automated systems to increase overall efficiency and remove humans from mundane and even dangerous tasks. Global Trading Systems LLC's (GTS) January 26, 2016 purchase of Barclays Plc's business at the New York Stock Exchange was not only a major corporate acquisition; it also meant that nearly all securities transactions there would be governed by automated computer programs. The days of the Chicago Board of Trade's open-outcry stock trading are gone, supplanted by the decisions of commodity, trade, and risk-management systems of the kind offered by Bloomberg or GlobalView.

Drones, meanwhile, have played an increasingly prominent role in the military, the service industry, and recreational use. While many of the drones used by, for instance, the US military still have a human making the final decisions, it will soon be technologically feasible for drones to operate entirely autonomously. Amazon is considering using drones to deliver all items under five pounds (the majority of Amazon's catalogue). In short, drone technology could cut transportation costs, allow us to photograph and scientifically study geographical locations otherwise unreachable, and even save lives.

It seems, then, that the utility of computers is unquestionable: they are reliable, immune to the cognitive biases that plague ordinary human reasoning, and capable of organizing and analyzing far larger bodies of data than any human could. The 17th-century philosopher Gottfried Leibniz even wished for societies to be governed by calculation, hoping that such reasoning machines would resolve any disputes about knowledge and ensure intelligent, unbiased judgements about legal and social issues.

Not everyone shares Leibniz's optimism. Many see the growing movement toward the automation of society as a cause for deep concern. Movies such as Terminator and 2001: A Space Odyssey cast a bleak portrait of a world governed by computers, reflecting the worry that such algorithms could lead to unwieldy and possibly even apocalyptic results. Others, such as the philosophers Nick Bostrom and David Chalmers, and researchers at the Machine Intelligence Research Institute, have taken these concerns seriously and devoted considerable time to developing ways to prevent AI from harming us. Many see the 2008 financial crisis as the result not only of reckless market speculation and poor regulation but also of an unreasonable trust in mathematical models, and argue that it would be even more foolhardy to entrust those models to computers. Others, such as the philosopher John Searle, argue that computers cannot properly be considered intelligent at all, so we have every reason to doubt their ability to solve problems requiring human-level intelligence.

James Weatherall thinks otherwise, arguing in his book The Physics of Wall Street that "models are not to blame for our current economic ills...ideas that could have helped avert the recent financial meltdown were developed years before the crisis occurred" (p. 7). Weatherall thinks that mathematical models, such as the Black-Scholes option-pricing formula, combined with computer automation, will in the long term enable us to make economic decisions that are not only profitable but beneficial to society as a whole. In a 2012 paper, Armstrong and Sotala note that academic opinion diverges on when, and whether, computer automation will reach the point of human-level intelligence (i.e. artificial intelligence). Their paper surveys 257 predictions about artificial intelligence made by experts between the 1950s and 2012, and concludes that the predictions of professional computer scientists and philosophers were ultimately no better than those of laypersons: “expert predictions are not only indistinguishable from non-expert predictions, they are also indistinguishable from past failed predictions” (p. 17). Perhaps, then, there is not as much cause for concern as the pace of technological progress may suggest.
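To make concrete the kind of model at issue, here is a minimal sketch of the standard Black-Scholes formula for pricing a European call option. The function and its parameter values are illustrative, not drawn from Weatherall's book: S is the current stock price, K the strike price, r the risk-free interest rate, sigma the annualized volatility, and T the time to expiry in years.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative inputs: an at-the-money call, 5% rate, 20% volatility, one year.
price = black_scholes_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(round(price, 2))  # about 10.45
```

A formula this compact is exactly why it is so easily automated, and also why critics worry: the model's assumptions (constant volatility, log-normal returns) are baked invisibly into every price it produces.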

However one sees the debate, there are a few things to consider. Firstly, if functionalism is true, and it is possible to program computers, in the form of robots, to be conscious and have experiences, then to what extent are we willing to grant them equal rights as autonomous beings? Secondly, to what extent is the automation of tasks by computer ethical? Lastly, how should governments create legislation to respond to the increased use of self-driving cars, drones, and stock-trading algorithms?

An answer to the first will depend on the extent to which we are willing to believe our scientific theories: if they do end up suggesting that computers can have experiences such as pain and happiness, then we may need to take seriously the ethical demands that arise. An answer to the second question will depend on how much we trust our ability to adequately formalize, in computer code and architecture, human reasoning itself. And an answer to the third will ultimately hinge on how we educate future generations of policymakers and citizens in computer science and in the various ethical theories. In any case, solving the problems of computer automation will require distinctly philosophical discussion, since these problems demand clarification and analysis of our concepts of justice, morality, and consciousness rather than merely scientific experimentation.

More importantly, it will require an abandonment of the naive Enlightenment ideal that technological innovation, such as the advent of general artificial intelligence, will unconditionally liberate us from our moral and social ailments.

Adrian Yee

Adrian is a fourth year studying Philosophy at the University of British Columbia.
