Nicolas Malevé – Machine pedagogies

As a starting point, I would like to describe a few steps of the concrete process of training in a typical machine learning task [1]: the creation of annotations to be used by a computer program that will learn to classify images. A worker connects to Amazon Mechanical Turk (AMT) [2] and selects a task. In our example, she selects an image annotation task [3]. She faces a screen where a label and its definition are displayed. When she confirms she has read the definition, she is shown another screen where the label is followed by several definitions. The workflow is regularly interrupted by such control screens because her requester suspects her of working without paying enough attention. When she clicks on the right definition, a list of 300 square images is displayed, from which she has to select the ones corresponding to the label. When she decides she has selected all the appropriate images, she clicks “next” and continues to her next task. The list of images she has to choose from contains “planted” images: images that the requester already knows to correspond to the label. If the worker misses the planted images, her task will be refused and she won’t receive the 4 cents the requester pays for it. At least three workers will review the same 300 images for the same label, and the images selected by a majority of them will be included in the dataset. The worker is not notified whether her selection matches another worker’s. She works in isolation and anonymously.
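
To make the quality-control mechanics concrete, here is a minimal sketch of the gold-image check and majority vote described above. The image ids, the gold set and the function names are hypothetical illustrations, not AMT’s actual implementation.

```python
# Each worker submits the set of image ids she selected for a label.
# Submissions that miss a "planted" (gold) image are rejected and unpaid;
# only images selected by a majority of the accepted workers survive.

GOLD = {"img_017", "img_223"}   # images the requester knows match the label

def accepted(selection):
    """A submission is rejected (and unpaid) if any planted image is missed."""
    return GOLD.issubset(selection)

def consensus(selections):
    """Keep the images selected by a strict majority of accepted workers."""
    kept = [s for s in selections if accepted(s)]
    counts = {}
    for s in kept:
        for img in s:
            counts[img] = counts.get(img, 0) + 1
    return {img for img, n in counts.items() if n > len(kept) / 2}

workers = [
    {"img_017", "img_223", "img_305"},  # paid 4 cents: found both gold images
    {"img_017", "img_223", "img_411"},  # paid 4 cents
    {"img_017", "img_305"},             # rejected: missed img_223, earns nothing
]
print(consensus(workers))  # {'img_017', 'img_223'}
```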

The images and their labels are then grouped into classes of objects. A learning algorithm is fed these data and trained to associate a label with a series of images. It is shown a series of images containing both matching and non-matching objects, and it is “rewarded” whenever it appropriately detects the object corresponding to the label in an image and “penalized” whenever it does not. Every interpretation that doesn’t correspond to the truth stated in the training set is considered an error. The algorithm is retrained multiple times until it finally matches the images most successfully according to the ground truth [4]. It is a very mechanistic approach to training. The machine is rewarded when behaving properly [5] and reinforces the kinds of associations that lead it to produce the satisfying answer. It is expected to exhibit the proper behavior, not to create a rich internal representation of the problem it needs to solve.
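
A rough sketch of this mechanistic loop, using a perceptron-style toy classifier rather than the large networks actually trained on such datasets; the feature vectors, labels and learning rate are invented for illustration.

```python
# Each "image" is reduced to a feature vector; the labels are the annotators'
# ground truth. Every mismatch counts as an error and nudges the weights
# toward the expected answer; training repeats until the model reproduces
# the annotations exactly (or the retry budget runs out).
ground_truth = [([1.0, 0.2], 1), ([0.1, 0.9], -1),
                ([0.8, 0.3], 1), ([0.2, 0.7], -1)]

w, b, lr = [0.0, 0.0], 0.0, 0.1

for epoch in range(100):                    # retrained multiple times
    errors = 0
    for x, label in ground_truth:
        score = w[0] * x[0] + w[1] * x[1] + b
        predicted = 1 if score >= 0 else -1
        if predicted != label:              # "penalized": adjust toward truth
            errors += 1
            w = [w[0] + lr * label * x[0], w[1] + lr * label * x[1]]
            b += lr * label
    if errors == 0:                         # "rewarded": behavior now matches
        break                               # the ground truth exactly

print(f"converged after {epoch + 1} epochs: w={w}, b={b}")
```

Nothing in the loop asks the model to understand the label; convergence simply means its behavior has stopped contradicting the annotations.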

The more the algorithm behaves as expected, the more it is granted a human quality. It becomes intelligent, a “thinking machine”. The surge of neural-network-based algorithms over the last decade reinforces this tendency. The neural net model is inspired by the communication between neurons through synapses observed in the brain. The algorithm doesn’t merely show “intelligent” behavior; it also works in the image of the human brain. The greater its success, the greater the demand for more data and therefore for more human annotations. While the algorithm acquires the status of an intelligent entity, the AMT worker is increasingly assimilated to the machine. Frantically responding to the platform’s requests, she routinely executes the tasks that are too costly to implement algorithmically. Cheaper than an algorithm, she becomes a process available through an API.
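
The phrase is close to literal: through the requester API, a batch of human attention is provisioned like any other web service. A hedged sketch using Amazon’s boto3 MTurk client follows; the calls are real, but the question file, the parameter values and the sandbox setup are illustrative, and AWS credentials would be required for this to run.

```python
import boto3

# Connect to the MTurk requester sandbox (requires AWS credentials).
mturk = boto3.client(
    "mturk",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Publish a HIT: human labor requested with the same verbs as any service.
response = mturk.create_hit(
    Title="Select all images matching the label",
    Description="Choose every image that shows the labeled object",
    Reward="0.04",                            # the 4 cents mentioned above
    MaxAssignments=3,                         # at least three workers
    LifetimeInSeconds=86400,
    AssignmentDurationInSeconds=600,
    Question=open("annotation_task.xml").read(),  # hypothetical question form
)
print(response["HIT"]["HITId"])
```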

What strikes me in this process is the relationship between learning and alienation. The agencies of the human worker and of the algorithmic agent are both reduced and impoverished. The human worker is isolated (from her co-workers and from the algorithm whose “intelligence” she is preparing), her margin of interpretation is narrowly defined, and the indecent wage forces her into an exhausting rhythm of work. The algorithm is trained like an animal in a lab, receiving signals to be interpreted unequivocally and being rewarded or punished according to an established ground truth it cannot challenge. If the training/teaching of machines implies a reflection on liberating practices of pedagogy, where should we look for inspiration?

This question led me to examine a series of principles expressed in Pedagogy of the Oppressed, Paulo Freire’s seminal book. Freire, trained as a lawyer, chose to work as a secondary school teacher, later directed education programs in Pernambuco, and had to flee Brazil after the 1964 military coup. The book was written in Chile in 1968, a few years before the election of Salvador Allende.

For Freire, it only makes sense to speak of pedagogy if it includes the perspective of the liberation of the oppressed (Freire, 1970). As a Marxist, Freire sees his pedagogical method as a way for the oppressed to learn how to change the conditions under which they can transform a world made by and for their oppressor. A first very important concept developed by Freire is what he calls “banking” pedagogy. The oppressor imposes a world in which only the members of a certain class have access to knowledge or are born to acquire it [6]. The others merely have the right to passively assimilate a never-ending recital: Lima is the capital of Peru, two and two make four, etc. The learners are treated as empty entities in which their masters make “deposits” of fragments of knowledge. The empty oppressed is filled with the oppressor’s content. But the master has no interest in the oppressed making productive use of this knowledge to improve his/her condition. What the learner learns in such a scheme is to repeat and reproduce. The knowledge “deposited” by the oppressor remains the oppressor’s property. The pedagogy proposed by Freire stands in total opposition to this idea. For him, the oppressed never comes “empty” of knowledge, and the first stage of the educational process is to make learners realize they have already produced knowledge, even if (and even more so when) this knowledge doesn’t count as such in the traditional pedagogical framework.

This leads to a second point. The humanity of the subject who enters a pedagogical relationship is not taken for granted. The subject arrives alienated and dehumanized. The category “human” is a problematic one, and it is only through the process of learning that humanization takes place. And what counts in the process of humanization is precisely getting rid of the oppressor the oppressed hosts inside him/herself. The oppressed is made of the oppressor and has internalized his world view. Freire insists repeatedly that a teaching that fails to help learners free themselves from the oppressor’s world view, and merely lets them acquire more power through knowledge, will ultimately fail to create revolutionary subjects. It would risk creating better servants of the current oppressor or, worse, new and more efficient oppressors.

The book’s third striking point is the affirmation that nobody is a liberator in isolation and that nobody liberates him/herself alone. Liberation through pedagogy always happens when the learner and the “teacher” are mutually liberating each other. There is no a priori idea of what a pedagogy of liberation should be. Both parties learn the practices that will lead to freedom from the relationship itself.

I would now like to use these three principles (“banking” pedagogy, the internalized oppressor and mutual liberation) to revisit the methods of learning used in machine learning, and to articulate prospective questions.

For Freire, the relationship between the learner and the teacher is a situation of mutual liberation. If we apply this to machine learning, we first need to acknowledge that both the people who teach machines and the machines themselves are trapped in a relationship of oppression in which both are losing agency. To free algorithms and trainers together, both need to engage in a relationship where an iterative dialog is possible and where knowledge can circulate. This should lead us to examine with great scrutiny how this relationship is being enframed and scripted. Usually, for instance, the collection of the data and their “ingestion” by the algorithm are two distinct processes, separated in time and space, making it impossible for a dialogical relationship to happen. How, then, to reconnect the two processes and make machine learning a dialogical process from the start?
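
Active learning offers one modest, existing template for such a reconnection: instead of ingesting a finished dataset, the model queries the human about the examples it is least sure of, so annotation and learning happen in the same loop. A minimal sketch, in which the pool, the stand-in ask_human function and the update rule are all assumptions for illustration:

```python
def ask_human(x):
    """Stand-in for a real dialog with the annotator."""
    return 1 if x[0] > x[1] else -1

pool = [[0.9, 0.1], [0.5, 0.5], [0.2, 0.8], [0.55, 0.45], [0.1, 0.9]]
w, b, lr, labeled = [0.0, 0.0], 0.0, 0.5, []

for _ in range(3):                            # three rounds of "dialog"
    # pick the pool item the current model is least certain about
    margins = [abs(w[0] * x[0] + w[1] * x[1] + b) for x in pool]
    x = pool.pop(margins.index(min(margins)))
    labeled.append((x, ask_human(x)))         # the model asks, the human answers
    for xi, yi in labeled:                    # then retrains on all answers so far
        if (1 if w[0] * xi[0] + w[1] * xi[1] + b >= 0 else -1) != yi:
            w = [w[0] + lr * yi * xi[0], w[1] + lr * yi * xi[1]]
            b += lr * yi

print("labels gathered through dialog:", labeled)
```

This is still far from Freire’s mutual liberation, since the human remains a mere oracle, but it at least collapses the separation in time and space between collection and ingestion.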

For Freire, one should not take for granted that a learner is “human” when s/he enters a pedagogical relationship. S/he follows a process of humanization as the relationship unfolds. This resonates, although in a distorted manner, with a certain discourse in Artificial Intelligence that softly erodes the human/machine divide as the algorithm learns. What is different, though, is that Freire insists on maintaining the human/non-human demarcation. What he proposes is to base the distinction not on an a priori ontological quality of beings but on their trajectory of liberation. What would matter for us, then, is how far humans and machines are able to fight their alienation.

The core of the learning practice should be found in a form of reflexivity through which one follows a process of humanization and manages to extract and get rid of the oppressor inside. We could then ask: what kind of machine reflexivity can trigger human reflexivity, and vice versa? And how can this cross-reflexivity help identify what constitutes the oppressor inside?

This leads us to a third of Freire’s ideas: the banking principle, according to which the oppressed is treated as an empty entity in which knowledge is to be stored and repeated. This represents a complete erasure of what the learner already knows without knowing it. What does the trainer not know s/he knows? What does the algorithm not know it knows? What they both ignore, if we follow Freire, is their own knowledge, and the extent to which this knowledge, unknown to them, is the knowledge of their oppressor or their own.

To answer these questions, they have only one choice: to engage in a dialog in which two reflexivities teach each other the contours of their alienation and, at the same time, how to free themselves from it.

References

Bradski G, Kaehler A (2008) Learning OpenCV. Sebastopol: O’Reilly Media, p. 461.

Freire P (1970) Pedagogia del oprimido. Mexico: Siglo XXI Editores.

Irani L (2015) Difference and dependence among digital workers: The case of Amazon Mechanical Turk. South Atlantic Quarterly 114(1), pp. 225–234.

Irani L, Silberman M S (2013) Turkopticon: Interrupting worker invisibility in Amazon Mechanical Turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 611–620.

Kobielus J (2014) Distilling knowledge effortlessly from big data calls for collaborative human and algorithm engagement. Available from: http://www.ibmbigdatahub.com/blog/ground-truth-agile-machine-learning [accessed 10 October 2016].

[1] The examples in this text focus on supervised learning; see https://en.wikipedia.org/wiki/Supervised_learning. Ideally, the ideas discussed here should be nuanced and extended when applied to other forms of machine learning.

[2] Amazon Mechanical Turk is a “meeting place for requesters with large volumes of microtasks and workers who want to do those tasks” (Irani & Silberman, 2013). A requester, in AMT terminology, is a business that publishes a task for workers (human “providers” in AMT terminology) to complete. See The requester best practice guide, http://mturkpublic.s3.amazonaws.com/docs/MTURK_BP.pdf

[3] This example is inspired by one of the largest image annotation processes, ImageNet, a database of images for vision research that offers tens of millions of sorted, human-annotated images organized in a taxonomy. ImageNet aims to serve the training-data needs of computer vision researchers and developers. See http://image-net.org/

[4] “a baseline set of training data labeled by one or more human experts” (Kobielus, 2014).

[5] “When a mouse is running down a maze to find food, the mouse may experience a series of turns before it finally finds the food, its reward. That reward must somehow cast its influence back on all the sights and actions that the mouse took before finding the food. Reinforcement learning works the same way: the system receives a delayed signal (a reward or a punishment) and tries to infer a policy for future runs (a way of making decisions; e.g., which way to go at each step through the maze).” (Bradski and Kaehler, 2008)
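
A toy illustration of the delayed-reward mechanism the quote describes, using tabular Q-learning on a one-dimensional corridor; the maze, rewards and hyperparameters are invented, and this is not Bradski and Kaehler’s code.

```python
import random

N, GOAL = 5, 4                           # a corridor of 5 cells, food at the end
Q = [[0.0, 0.0] for _ in range(N)]       # Q[state][action]; 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(200):
    s = 0
    while s != GOAL:
        # mostly follow the current policy, sometimes explore
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0   # the reward only arrives at the food
        # the update casts the delayed reward back along the visited path
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])     # values grow toward the goal
```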

[6] See Freire’s insistence on addressing this question as a political problem rather than an ontological one in his discussion with Seymour Papert: http://www.papert.org/articles/freire/freirePart2.html

Comments

  1. I enjoyed the flow of the text – the processual, machine-like beginning and the shifts from ‘she’ to ‘it’.
    I think the question of “How, then, to reconnect the two processes and make machine learning a dialogical process from the start?” is really fascinating, in particular because the notion of a *start* or, for that matter, *end* of anything is complex and entangled. This relates well to your point that “nobody is a liberator in isolation”. How can this sense of start and finish be rethought?… in terms of an intervention or step or fork perhaps? Or how Haraway talks of ‘subject shifters’ and the difference between refraction and diffraction?
    In your discussion of learning, it made me think of something that is looked at in primate studies – how apes can learn to use certain tools, but what they are less good at is teaching and empathising with others about what they know. This creates a kind of pedagogical disconnect. Can this be compared to machine learning at the moment?
    There’s a nice last line in Brecht’s Kriegsfibel or War Primer book (the notion of a primer is also interesting in terms of your discussion) that reads: “Learn to learn and try to learn for what”
    (If you’re interested, there’s a remixed version of War Primer by artists Broomberg and Chanarin that I assisted on; you can download a free epub at: http://www.broombergchanarin.com/war-primer-2/)

  2. hi Nicolas —

    I’m excited that you’ve taken on a gap that I elided in my own paper. boiling it down to the “relationship between learning and alienation” is brilliant, and applying Freire is excellently provocative. but I’m not sure about considering a single machine as equivalent to a human subject. particularly since, as you show, machine intelligence is never really circumscribed at the level of the machine, but is a set of social relations that includes humans. I suppose this ties in to your question for me about the “emptiness” of an untrained algorithm as well as the status of “training” in ML as being a matter of pedagogy or not. but to me it seems more in line with habituation than education. looking forward to further discussion.

  3. Hello Nicolas! Reiterating what others have said: I really enjoyed your use of Freire as a framework to deconstruct machine learning. It raises a series of fascinating questions for me, which I hope we will have plenty of opportunities to discuss over the coming days. I am interested in the continued necessity for manual labour in this system – as a concept, what might occur when this process of training can be automated, and the human trainer is no longer required? Your final point brings to mind the possibility of creating a performative interface that activates this proposed dialog. Being aware of your own work as an artist/developer, I am especially interested to know if/how you might be exploring these ideas in your practice.
