Maya Ganesh – An Ethnography of Error

This post has been written in relation to, and as a subset of, a body of work – an ‘ethnography of ethics’ – that follows the emergence of the driverless car in Europe and North America. An ethnography of ethics is an acknowledgment of the need for a “thick” reading of what ethics means – and does not mean – in the context of big data: how it is constituted in relation to, and by, social, economic, political and technical forces; how it is put to work; and what its place is at a moment when autonomous vehicles and artificially intelligent computing receive significant interest and support. I argue that ethics is not necessarily an end-point or outcome, but a series of individual and system-level negotiations involving socio-technical, technical, human and post-human relationships and exchanges. This includes an entire chain encompassing infrastructure, architectures, actors and their practices, but is more than its constituent parts. Thus, what emerges as ethics is a culture around the maintenance, role and regulation of artificial intelligence in society.

There are 48 synonyms for error, according to Roget’s Thesaurus. Error, as a category, is as big as its opposite, and keeps defining it; indeed, it is perhaps not an opposite at all, but another part of the same thing. Error is a twin, the Cain to the Abel of accuracy and optimisation. Rather than cast error out, or write it off, I want to write it in, and not just as a shadow, or in invisible ink, as a footnote, or an awkward afterthought.

Lucy Suchman is a feminist theoretician who thinks about what it means to be with and alongside technologies. She asks about “the relation between cultural imaginaries – that is, the kind of collective resources we have to think about the world – and material practices. How are those joined together?” (2013). In that vein, I want to think about what it means to be in close relationship with, and to work alongside, machines that, in a sense, rely on human judgment and control for optimisation.

I believe it may be important to think through error differently because responsibility and accountability are increasingly difficult to locate in quantified systems that are artificially intelligent.[i] How do you assign accountability for errors in complex, dynamic, multi-agent technical systems?

Take the case of the recent Tesla crash, the first known death of a human being in a driverless car context. In May 2016, a US Navy veteran was killed when the Tesla he was driving in semi-autonomous Autopilot mode drove into a long trailer truck whose height and white surface the software misread as the sky; he was reportedly watching a Harry Potter movie at the time. The fault, it seemed, was the driver’s, for trusting Autopilot. The company’s condolence statement clarifies the nature of Autopilot (Tesla 2016):

When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it. Additionally, every time that Autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.” The system also makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.
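The statement describes, in effect, a supervision loop: periodic hands-on checks, escalating alerts, and then a gradual slowdown. A minimal sketch of such a loop might look like the following; the timing threshold, names and deceleration step are invented for illustration and are not Tesla’s implementation.

```python
# A minimal sketch of the kind of supervision loop the statement describes:
# periodic hands-on checks, visual and audible alerts, then a gradual
# slowdown until hands-on is detected again. Thresholds, names and the
# deceleration step are invented for illustration; this is not Tesla code.

import time

HANDS_OFF_LIMIT_S = 15   # assumed: seconds of hands-off before escalation
SLOWDOWN_STEP_KMH = 5    # assumed: how much to slow the car per cycle

def supervise_autopilot(hands_on_detected, alert_driver, reduce_speed_kmh):
    """Run one check per second; escalate while the driver's hands stay off the wheel."""
    seconds_hands_off = 0
    while True:
        if hands_on_detected():
            seconds_hands_off = 0                        # driver engaged; reset
        else:
            seconds_hands_off += 1
            escalate = seconds_hands_off > HANDS_OFF_LIMIT_S
            alert_driver(visual=True, audible=escalate)  # warn the driver
            if escalate:
                reduce_speed_kmh(SLOWDOWN_STEP_KMH)      # gradually slow down
        time.sleep(1)
```

Even in this toy form, the loop makes the division of labour explicit: the machine monitors, warns and decelerates, while responsibility stays with the human hands it keeps checking for.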

In Tesla’s statement lies a key idea that runs like a deep vein through the history of machine intelligence: that machines are more accurate and more capable than humans in a wide variety of mechanical and computational tasks, but that humans must have overall control and responsibility because of their (our) superior abilities, because of something ephemeral, disputed, and specific that we believe makes us different. Yet we are allowed, and even expected, to make mistakes.

For machines, error comes down to design and engineering, at least according to Google. Early in its history, the Google driverless car was a little too perfect for humans: it followed the rules exactly, which is precisely what it was programmed to do. Humans, however, make mistakes, take short cuts, and break the rules (Naughton, 2015):

Google is working to make the vehicles more “aggressive” like humans — law-abiding, safe humans — so they “can naturally fit into the traffic flow, and other people understand what we’re doing and why we’re doing it,” Dolgov said. “Driving is a social game.”

“It’s a sticky area,” Schoettle said. “If you program them to not follow the law, how much do you let them break the law?”

The Tesla crash outcome follows a certain historical continuity. American scholars Madeleine Elish and Tim Hwang (2014) show that in the history of cars and driving in America, human error tends to be cited as the most common reason for accidents: the machine is not flawed, it is the human who errs in managing the machine. In the 1920s and 1930s, when a number of crashes occurred, ‘reckless driving’ rather than poor design (of which there was a lot back then) was blamed for accidents (Leonardi 2010). There has been a tendency to “praise the machine and punish the human”, say Elish and Hwang. So the machine is assumed to be smart but not responsible, capable but not accountable; machines are “almost minds”, as Donna Haraway famously said of children, AI computer programs and non-human primates (1990).

One of the other ways in which error and accountability are being framed can be understood through the deployment of the “Trolley Problem” as an ethical standard for driverless car technology. In this framing, responsibility for accuracy and errors is seen to lie with software programming. The Trolley Problem thus also determines what appropriate driving is, in a way that has never quite been outlined for human drivers.

The Trolley Problem is a classic thought experiment developed by the Oxford philosopher Philippa Foot, originally to discuss the permissibility of abortion. It is presented as a series of hypothetical, inevitably catastrophic situations in which consequentialist (or teleological) and deontological ethics must be reconciled in order to select the lesser of two catastrophes. In the event of catastrophe, should more people be saved, or should the most valuable people be saved? In short: how can one human life be valued over another?

Making this difficult decision is presented as what artificial intelligence will have to achieve before driverless cars can be considered safe for roads; the problem is that software has not yet been programmed to tackle this challenge. If machine learning intelligence is to be relied on to solve it, it first needs a big enough training database to learn from, and such a database of outcomes from various work-throughs of the Trolley Problem has not yet been built. Initiatives such as MIT’s new Moral Machine project are possibly building a training database of human judgments about appropriate action in such scenarios.

However, the Trolley Problem has since fallen out of favour in discussions of ethics and driverless cars (Davis 2015). Scholars such as Vikram Bhargava, working with Patrick Lin, have already identified limitations in the Trolley Problem and are seeking more sophisticated approaches to programming decision-making in driverless cars (2016). The Trolley Problem, and other ethical tests based on logical reasoning, has been one of the ways in which ethics has been framed: first, as a mathematical problem, and second, as something that lends itself to software programming.
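To make that framing concrete, here is a deliberately naive sketch of the kind of reduction it implies: a catastrophic scenario collapsed into a cost function that a program can minimise. The scenario, names and weights below are invented for illustration and do not come from any real vehicle software.

```python
# A deliberately naive illustration of the Trolley Problem reduced to a
# cost-minimisation choice. All names, scenarios and weights are invented.

from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    lives_lost: int


def choose_action(outcomes):
    """Return the action whose predicted outcome costs the least.

    A purely consequentialist rule: minimise lives lost. A deontological
    constraint (e.g. 'never actively swerve into a bystander') has no
    place in this one-line objective, which is part of what the framing
    leaves out.
    """
    return min(outcomes, key=lambda action: outcomes[action].lives_lost)


# A stylised trolley scenario: stay on course or swerve.
scenario = {
    "stay_on_course": Outcome("hit five pedestrians", lives_lost=5),
    "swerve": Outcome("hit one pedestrian", lives_lost=1),
}

print(choose_action(scenario))  # -> "swerve"
```

The point of the sketch is not that engineers actually code ethics this way, but that once the problem is posed this way, “ethics” becomes a parameter to be optimised rather than a negotiation.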

There has been a call to look at the contexts of production of technology for greater transparency and understanding of how AI will work in the world (Crawford, 2016; Elish and Hwang 2016). Diane Vaughan’s landmark investigation and analysis of the 1986 Challenger space shuttle tragedy gives us some indication of what the inside of technology production looks like in the context of a significant error. In it, Vaughan names the normalisation of deviance, rather than mala fide intent, as the culprit behind the design flaw (Vaughan, 1997).

The normalisation of deviance refers to a slow and gradual loosening of standards for the evaluation and acceptance of risk in an engineering context. The O-rings on Challenger’s rocket boosters that broke on that unusually cold January morning at Cape Canaveral, Florida, did so despite considerable evidence of their questionable performance in low-temperature conditions; the shuttle’s launch date had also been repeatedly delayed. Yet how was this vital information overlooked at NASA, possibly one of the best-resourced space research organisations in the world? The normalisation of deviance is as much an organisational-cultural issue as a matter of technical detail. Vaughan’s detailed ethnography of the managerial, technical and organisational issues that led up to the Challenger disaster is a valuable precedent and inspiration for the study of high-end technology production cultures and of how errors, crises and mistakes are managed within engineering.

Design or use-case? Intuition or bureaucracy? Individual or organisation? The sites of being and error-ing only multiply.

This ethnography of error comes up against a planetary-scale error that queers the pitch. Australia sits on a tectonic plate that is moving about seven centimetres north every year; since the country’s coordinate datum was last fixed in 1994, the entire continent has drifted roughly 1.5 metres, about five feet, out of alignment with its own maps. This may not mean much for human geography, but it means something for the shadow world of machine-readable geography: the maps used by driverless cars, or driverless farm tractors, now have inexact data to work from (Manaugh 2016). It is difficult to say how responsibility will be assigned for errors resulting from this shift.
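As a back-of-the-envelope check on those figures (the seven-centimetres-a-year drift and the 1994 datum are from the reporting around Manaugh’s piece; the function below is an invented illustration, not any real geodesy API):

```python
# Rough illustration: how far a map coordinate referenced to Australia's
# 1994 datum (GDA94) has drifted from present-day ground truth, assuming
# a constant northward plate motion of about 7 cm per year.

DRIFT_METRES_PER_YEAR = 0.07
DATUM_YEAR = 1994


def northward_drift_metres(current_year):
    """Total northward drift accumulated since the datum was fixed."""
    return DRIFT_METRES_PER_YEAR * (current_year - DATUM_YEAR)


print(round(northward_drift_metres(2016), 2))  # ~1.54 metres, roughly five feet
```

A machine-readable map that applies no correction of this kind is, in effect, navigating a country that is no longer quite where the map says it is.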

References

Bhargava, V. (forthcoming) ‘What If Blaise Pascal Designed Driverless Cars? Towards Pascalian Autonomous Vehicles’, in Patrick Lin, George Bekey, Keith Abney, and Ryan Jenkins (eds), Roboethics 2.0. MIT Press.

Crawford, K. (2016) ‘Artificial Intelligence’s White Guy Problem’, The New York Times. http://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html?_r=0 Retrieved July 25, 2016.

Crawford, K. and Whittaker, M. (2016) The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term. Symposium report. https://artificialintelligencenow.com/media/documents/AINowSummaryReport_3.pdf Retrieved October 2, 2016.

Davis, L.C. (2015) ‘Would You Pull the Trolley Switch? Does It Matter?’, The Atlantic, October 9, 2015. http://www.theatlantic.com/technology/archive/2015/10/trolley-problem-history-psychology-morality-driverless-cars/409732/ Retrieved October 1, 2016.

Elish, M. and Hwang, T. (2014) Praise the Machine! Punish the Human! The Contradictory History of Accountability in Automated Aviation. Comparative Studies in Intelligent Systems, Working Paper #1, Intelligence and Autonomy Initiative, Data & Society, February 24, 2015. http://www.datasociety.net/pubs/ia/Elish-Hwang_AccountabilityAutomatedAviation.pdf Retrieved September 23, 2015.

Elish, M. and Hwang, T. (2016) An AI Pattern Language. Intelligence & Autonomy Initiative, Data & Society. http://autonomy.datasociety.net/patternlanguage/ Retrieved October 5, 2016.

Foot, P. (1967) ‘The Problem of Abortion and the Doctrine of the Double Effect’, Oxford Review, No. 5. Reprinted in Foot (1978/2002) Virtues and Vices and Other Essays in Moral Philosophy.

Haraway, D. (1990) Primate Visions: Gender, Race and Nature in the World of Modern Science. Routledge.

Leonardi, P. (2010) ‘From Road to Lab to Math: The Co-evolution of Technological, Regulatory, and Organizational Innovations for Automotive Crash Testing’, Social Studies of Science 40(2): 243–274.

Manaugh, G. (2016) ‘Plate Tectonics Affects How Robots Navigate’, Motherboard. http://motherboard.vice.com/en_uk/read/plate-tectonics-gps-navigation Retrieved October 2, 2016.

Naughton, K. (2015) ‘Humans Are Slamming Into Driverless Cars and Exposing a Key Flaw’, Bloomberg Technology News, December 17, 2015. https://www.bloomberg.com/news/articles/2015-12-18/humans-are-slamming-into-driverless-cars-and-exposing-a-key-flaw Retrieved February 5, 2016.

Orlikowski, W.J. (2000) ‘Using Technology and Constituting Structures: A Practice Lens for Studying Technology in Organizations’, Organization Science 11(4): 404–428.

Spector, M. (2016) ‘Obama Administration Rolls Out Recommendations for Driverless Cars’, Wall Street Journal, September 19, 2016. http://www.wsj.com/articles/obama-administration-rolls-out-recommendations-for-driverless-cars-1474329603 Retrieved October 1, 2016.

Suchman, L. (2013) ‘Traversing Technologies: Feminist Research at the Digital/Material Boundary’. Video and transcript of a talk at the University of Toronto, colloquium series Feminist and Queer Approaches to Technoscience. http://sfonline.barnard.edu/traversing-technologies/lucy-suchman-feminist-research-at-the-digitalmaterial-boundary/

Tesla (2016) ‘A Tragic Loss’. Blog post, Tesla website. https://www.teslamotors.com/blog/tragic-loss Retrieved June 2016.

Vaughan, D. (1997) The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. University of Chicago Press.

[i] I follow the definition of artificial intelligence proposed by Kate Crawford and Meredith Whittaker: a “constellation of technologies comprising big data, machine learning and natural language processing”, as described in the report of the recent symposium AI Now: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term (Crawford and Whittaker 2016). Symposium report available here: https://artificialintelligencenow.com/media/documents/AINowSummaryReport_3.pdf Retrieved October 2, 2016.

Comments

  1. Thanks for this!

    I was particularly interested in the final comment about the Earth itself, or rather the Australian plate, moving out of sync with the data on it. Seems unbelievable that such an oversight would not be planned for – but then this data and the maps drawing on it are not centralised, which is interesting in itself.

    I wonder if you had thought to – or plan to – engage with the theories of error from media studies and the arts? The book ERROR, edited by Mark Nunes, has a range of interesting perspectives on ‘writing into/with error’.

    It’s interesting how the valorisation of error in new media cultures – in its form as ‘revealing error’, which shows the operational structure of machines – does not flow through to the context of driverless cars. Driverless cars are precisely a technology we would want to be error-free.

    Just writing this now, I think about driverless *drones*, and how the ethics you’re exploring here applies there. But there is a difference: when a car goes wrong, the person in charge of it is a public person we can blame, whereas the operator of a driverless drone is protected by the state’s secrecy.


  2. Hi Maya, I remember reading an article about human error in the Santiago de Compostela train crash in 2013. I’m not sure if it was this one – http://www.ciras.org.uk/case-studies/case-studies-container/case-study-4-chris-langer-santiago-de-compostela/ – but it makes a very similar point.

    Blame tends to be placed on the last link in a chain of errors, and this last link is almost always human. The 2013 case is interesting because a decision had been made to turn off one of the automatic safety systems, allowing the driver error to become fatal, yet that decision is not considered the ultimate cause of the crash.

    The ways that risk and responsibility are distributed, individualised and collectivised in technical and artificially intelligent systems are really interesting. For example, could blame be passed to the individual or individuals who provided the training examples that a machine decision is based on?
