
I gave a talk this morning at Berkeley City College to kick off an event about outsourcing, adjunctification, and higher education culture. I wanted to talk about labor and technology -- not just about MOOCs, essay-grading software, and other education technologies in the news and in classrooms today, but about some of the history of automation in education as well. (Of course, I can't help but invoke literature and film in doing so: the history of robots is there too.) One important observation made (I think) by UC Berkeley's Miguel Altieri: both outsourcing and automation are symptoms of the move to restructure and privatize education so that it more closely serves the interests of corporate capitalism.

Robots and Education Labor


There’s a notion, one that’s becoming more and more pervasive, I think, that the “real reason” you go to school is to gain “skills” so you can get “a good job.” The purpose of education, at both the K-12 and the college level: job training and career readiness.

As such, there’s a growing backlash against the liberal arts and particularly against the humanities. The President sneers at art history majors. Florida governor Rick Scott suggests we charge those who study philosophy or anthropology higher tuition. GOP gubernatorial candidate Neel Kashkari here in California wants to make tuition free for students majoring in science, technology, engineering or math, in exchange for a small — and unspecified — percentage of their future earnings.

My background, I confess, is in the humanities. I write about education technology for a living, but I have no formal academic training from a School of Engineering or a School of Business or a School of Education. What I do have is an interdisciplinary undergraduate degree and a master’s degree in Folklore. I’m a PhD Candidate (#4life) in Comparative Literature, an academic dropout. I have a love of storytelling and many years’ training in thinking about narratives, culture, and power. That makes me wildly unemployable in some people’s eyes, I guess; just as likely, it makes me incredibly critical and as such wildly dangerous — in some people’s eyes. (Or at least I like to imagine I'm dangerous.)

And no doubt, it’s the literary scholar, the comparativist in me that compels me to invoke Karel Čapek to open a discussion about education, labor, and technology. 

In 1920, Čapek — or his brother Josef, at least — coined the term “robot” for the play Rossum’s Universal Robots, or R.U.R. The word comes from the Czech “robota,” which meant “serf labor.” “Drudgery,” another translation offers. Or, via Wikipedia, “the amount of hours a serf owed his master in a given day.”

The robots in Čapek's play aren’t the metallic machines that the word conjures for us today. They’re more biological, assembled out of a protoplasm-like substance. 

In the play’s first act, a young woman named Helena, daughter of an industry mogul, comes to the island factory where the robots are built. She’s there as part of the League of Humanity, there to argue that the robots have souls and that they should be freed. The robots, which work faster and cheaper than humans (actually, the robots aren’t paid at all), have quickly taken over almost all aspects of work. They remember everything, but they do not like anything. “They make great university professors,” says Harry Domin, the general manager of the robot factory. 

As the play progresses, as robots dominate all aspects of the economy, the human birth rate falls. The robots stage a rebellion, a global rebellion as, unlike humans, robots share a universal language and recognize the universality of their labor struggles. Hoping to stop the economic and political crisis, Helena burns the formula for building robots. The robots kill all the humans, save one man in the factory who still works with his hands. 

But then the robots discover that, without the formula Helena has destroyed, they cannot reproduce. “The machines are turning out nothing but bloody chunks of meat.”

Čapek’s play was translated into over thirty languages and performed all over the world. Its success came, no doubt, because in the 1920s it struck a nerve: fears of automation, industrialization, war, and revolution. The play demands that the audience consider what is happening to our humanity as a result.

Indeed that’s the question that robots always raise: what is happening to our humanity? As we mechanize and now digitize the world around us, what happens to our labor, our love, our soul? Are we poised to find ourselves reduced to "bloody chunks of meat"?

Certainly there are those who argue that automation will make the world more efficient, that automation will save us — or at least save the corporations money. There are those that insist that automation is inevitable. “We are entering a new phase in world history—one in which fewer and fewer workers will be needed to produce the goods and services for the global population,” write Erik Brynjolfsson and Andrew McAfee in their book Race Against the Machine. “Before the end of this century,” says Wired Magazine’s Kevin Kelly, "70 percent of today’s occupations will…be replaced by automation.” The Economist offers a more rapid timeline: “Nearly half of American jobs could be automated in a decade or two,” it contends.

The technologies that Brynjolfsson and McAfee and Kelly and others point to might be new: self-driving cars and military drones and high speed trading and mechanized libraries and automated operating rooms. But their arguments about the coming age of robots are not, as Čapek’s play — almost one hundred years old — reminds us. And as Rossum’s Universal Robots highlights as well, the hopes and fears about the implications of labor-saving devices are in many ways inextricably connected to our hopes and fears of labor itself.

Which jobs will be automated? And why? You can’t answer those questions by simply saying “oh, it’ll be the jobs that are the easiest to automate.” Automating surgery, automating warfare isn’t “easy.” We pursue the automation of certain jobs because they are routine-heavy, sure. We automate tasks at first, and then the work changes around that automation. We pursue the automation of other jobs simply because they are jobs we don’t want to do, or don’t want humans to have to do. We seek to automate some workplace environments because we want efficiency. In other settings, we seek a more programmable, controllable — perhaps a more docile — workforce.

As such, a robot workforce is often embodied and gendered in ways that reflect how we value work, and whose work is valued.

This makes the automation of education particularly interesting — and, I think, particularly troubling. 

You often hear politicians and venture capitalists and education technology entrepreneurs insist that they don’t want to replace teachers; they just want to enhance their capabilities through new technologies. (Well, you do hear some folks explicitly say we’re going to replace teachers with technology, don’t get me wrong. But most don’t state it so boldly.)

Adaptive learning software. Automated grading tools. Proponents of these tools want to be able to assess and engineer the classroom so that it works more efficiently. Even as they decry schools as being based on a “factory model” of production, their efforts sometimes look like an attempt simply to update the pacing of the assembly line, so that students can move through it “at their own pace.”

Again, they might boast cutting-edge technology, but the politics and the business and the ideology that underlie these endeavors are not new. Indeed, the drive to create “teaching machines” occupied researchers and businessmen for much of the 20th century.

We can trace some of the early efforts to automate education back before Čapek coined the term “robot.” To a patent in 1866 for a device to teach spelling. Or a patent in 1909 for a device to teach reading. Or a patent in 1911 awarded to Herbert Aikins that promised to teach "arithmetic, reading, spelling, foreign languages, history, geography, literature or any other subject in which questions can be asked in such a way as to demand a definite form of words ... letters ... or symbols.”

We could trace these efforts to automate education back to a machine developed by Sidney Pressey. Pressey was a psychologist at Ohio State University, and around 1915 he came up with the idea of using a machine to score the intelligence tests that the military was using to determine eligibility for enlistment. Then World War I happened, delaying Pressey’s research.

Pressey first exhibited his teaching machine at the 1925 meeting of the American Psychological Association. It displayed a multiple-choice question with four possible answers in a window, and it had four keys. If the student thought the second answer was correct, he pressed the second key; if he was right, the next question was turned up. If the second answer was not right, the initial question remained in the window, and the student persisted until he found the right one. A record of all the student's attempts was kept automatically.
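Just to make the mechanism concrete, here is a minimal sketch (in Python) of the logic Pressey's device embodied; the questions, key presses, and function names are hypothetical, purely for illustration. The question stays put until the correct key is pressed, and every attempt is recorded automatically.

```python
# A toy simulation of the logic of Pressey's machine (illustrative only):
# one multiple-choice question at a time, four keys, the drum advances
# only on a correct key press, and every key press is recorded.

questions = [
    {"prompt": "2 + 2 = ?", "choices": ["3", "4", "5", "22"], "answer": 1},
    {"prompt": "Capital of France?", "choices": ["Berlin", "Lyon", "Paris", "Rome"], "answer": 2},
]

def run_machine(questions, key_presses):
    """key_presses: an iterator of key numbers (0-3), as if pressed by a student."""
    attempts = []                          # the machine's automatic record
    for q_index, question in enumerate(questions):
        while True:                        # the question stays in the window...
            key = next(key_presses)
            correct = (key == question["answer"])
            attempts.append((q_index, key, correct))
            if correct:                    # ...until the right key is pressed
                break
    return attempts

# Example: the student misses the first question once, then answers both correctly.
for q_index, key, correct in run_machine(questions, iter([0, 1, 2])):
    print(f"question {q_index}: key {key} -> {'correct' if correct else 'try again'}")
```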

Intelligence testing based on students’ responses to multiple-choice questions. Multiple-choice questions with four answers. Automated grading and data collection. Sound familiar?

Harvard professor B. F. Skinner claimed he’d never seen Pressey’s device when he developed his own teaching machine in the mid-1950s. Indeed, he dismissed Pressey’s inventions, arguing they were testing and not teaching machines. Skinner didn’t like the multiple choice questions — his teaching machine enabled students to enter their own responses by pulling a series of levers. A correct answer meant a light would go on.

A behaviorist, Skinner believed that teaching machines could provide an ideal mechanism for what he called "operant conditioning.” Education as mechanized behavior modification. "There is no reason why the schoolroom should be any less mechanized than, for example, the kitchen,” Skinner said.

He argued that immediate, positive reinforcement was key to shaping behavior — and all human actions could be analyzed this way. But Skinner argued that, despite their important role in helping to shape student behavior, "as a mere reinforcing mechanism, the teacher is out of date.”

I’ll ask again: why do we seek to automate certain tasks, certain jobs? Whose work is “out-of-date”? 

For us, Skinner’s teaching machine might look terribly out-of-date, but I’d argue that this is the history that still shapes so much of what we see today, just built with fancier components and newer hardware and software: self-paced learning, gamification, an emphasis on real-time or near-real-time corrections.

No doubt, education and education technology both draw so heavily on Skinner because Skinner and his fellow education psychologist Edward Thorndike have been so influential in how we view teaching and learning, in how we view testing, in how we construct schooling. This isn’t something that affects just K-12 schools, either.

When we talk about the types of jobs that will be automated, you could argue that education will be safe. Education is, after all, about relationships, right? You could maintain that education is based on an ethic of care, and humans remain superior to machines when it comes to caring for other humans.

But as the efforts of Skinner and Thorndike and others would suggest, that isn’t the only way, or even the predominant way, to view the “work” that happens in education. Rather than seeing learning as a process of human inquiry or discovery or connection, the focus instead becomes “content delivery.” And “content delivery,” by George, can surely be automated.

Let me turn again to popular culture and to science fiction. The latter in particular helps us tease out these things: the possibilities, horrors, opportunities of technology and science. I turn to science fiction because novelists and playwrights and researchers and engineers and entrepreneurs construct stories using similar models and metaphors. “Robot” appeared first in science fiction, let’s remember. I refer to science fiction because what our culture produces in the film studio or at the writer's desk is never entirely separable from what happens in the scientist’s lab, what happens at the programmer's terminal. We are bound by culture. And there are some profoundly important — and I would add, terribly problematic — views on teaching and learning that permeate both science fiction and technological pursuits.

The view of education as a “content delivery system,” for example, appears as readily in ed-tech companies’ press releases as it does on the big screen. Take The Matrix, where Keanu Reeves delivers one of his finest lines as a computer injects directly into his brainstem all the knowledge he needs:

"Whoa I know Kung Fu."

This desire — based much more on science fiction than on science — for an instantaneous learning pill is something that many in ed-tech pursue. MIT Media Lab founder Nicholas Negroponte, speaking at this year’s main TED event, predicted that in the next 30 years we’ll literally be able to ingest information — we’ll swallow a “learning pill” that will deliver knowledge to the brain, helping us learn English, for example, or digest all of Shakespeare’s works. That’s some seriously speculative bullshit.

But preposterous or not, this long-running plot line in science fiction dovetails neatly with renewed promises of efficiency through the automation of education. And it connects to a long-standing goal of artificial intelligence in education: the creation of AI-backed machines that claim to automate and “personalize” lesson delivery and assessment. And now, we need look no further than the recent MOOC craze, with its connections to the AI labs at Stanford — that’s where Udacity co-founder Sebastian Thrun and Coursera co-founders Andrew Ng and Daphne Koller work — and at MIT — that was where edX head Anant Agarwal worked.

Of course, the AI in MOOCs remains incredibly crude. These courses use mostly multiple choice assessments, for example. They rarely seem to pay attention to signals from our data — all those emails announcing it’s Week 5 of a MOOC "so keep up the good work," when you’ve never logged in once, all those emails suggesting you sign up for more MOOCs — maybe you’d like a class on sports marketing or horse biology or another topic completely unrelated to any other class you enrolled in.

But in 2012, the Hewlett Foundation sponsored a contest to get some of the best minds in the world of machine learning — a revealingly named subfield of AI — to design a programmatic way to automate, not the grading of multiple-choice tests or the selection of free courses, but the grading of essays.

The foundation offered a $100,000 bounty for an algorithm that would grade as well as humans. And it was a success; the bounty was claimed. Not only is it possible to automate essay grading, researchers contended, but robots do it just as well as human essay-graders do.

Robots grade more efficiently. Robots — unlike those pesky human English instructors and graduate students — remain non-unionized. They do not demand a living wage or health care benefits.

A computer can grade 16,000 SAT essays in 20 seconds, I've heard cited, and if teachers don't have to worry about reading students’ written work, they can assign more of it. They can also have more — massively more — students in their classes. Hundreds of thousands more, even.

Not everyone is sold on the idea that robot graders are the future. Many critics argue that it's actually pretty easy to fool the software. MIT’s Les Perelman, a long-time critic of automated essay grading, has written a piece of software in response, the Basic Automatic B.S. Essay Language Generator (or Babel), designed to auto-generate essays that can fool the robot graders. The graders can be fooled because robots don't “read” the way we do. They do things like assess word frequency, word placement, word pairing, word length, sentence length, use of conjunctions, and punctuation. They award a grade based on the scores that similar essays have received.
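To see how little “reading” is involved, here is a toy sketch (in Python) of surface-feature scoring; the features, the nearest-neighbor averaging, and the data are hypothetical illustrations, not Perelman's target systems or any vendor's actual engine. The point is simply that the software counts; it does not read.

```python
# A deliberately crude sketch of feature-based essay scoring (hypothetical):
# count surface features a machine can extract without understanding, then
# grade a new essay by averaging the human scores of the most similar
# already-scored essays.
import re

CONJUNCTIONS = {"and", "but", "or", "so", "yet", "because", "although"}

def features(text):
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [
        len(words),                                       # word count
        sum(len(w) for w in words) / max(len(words), 1),  # average word length
        len(words) / max(len(sentences), 1),              # average sentence length
        sum(w in CONJUNCTIONS for w in words),            # conjunction count
        text.count(",") + text.count(";"),                # punctuation count
    ]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_score(essay, scored_essays, k=3):
    """scored_essays: a list of (text, human_score) pairs."""
    target = features(essay)
    nearest = sorted(scored_essays, key=lambda pair: distance(features(pair[0]), target))[:k]
    return sum(score for _, score in nearest) / len(nearest)

# Usage (with made-up training data):
# training = [("First graded essay ...", 4), ("Second graded essay ...", 2)]
# print(predict_score("A brand-new student essay ...", training))
```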

When you argue that automated essay-grading software functions about as well as a human grader, you’ve revealed something else, as writing professor Alex Reid has argued: “If computers can read like people it's because we have trained people to read like computers.”

Let's look closely, for example, at the human graders to whom the robots in the Hewlett Foundation competition were compared. The major testing companies hire a range of people to grade essays for them. You’ll often find “help wanted” ads on Craigslist for these sorts of positions. Sometimes you’ll see them advertised as one of those promised “work from home” jobs.

From Todd Farley’s book Making the Grades: My Misadventures in the Standardized Testing Industry:

“From my first day in the standardized testing industry (October 1994) until my last day (March 2008), I have watched those assessments be scored with rules that are alternately ambiguous, arbitrary, superficial, and bizarre. That has consistently proven to be the result of trying to establish scoring systems to assess 60,000 (or 100,000 or 10,000) student responses in some standardized way. Maybe one teacher scoring the tests of his or her own 30 students can use a rubric that differentiates rhetorically (a 3 showing ‘complete understanding,’ a 2 showing ‘partial understanding,’ and a 1 showing ‘incomplete understanding’), but such a rubric simply never works on a project where the 10 or 20 scorers all have different ideas of what ‘complete’ or ‘partial’ means.”

“…I have watched the open-ended questions on large-scale assessments be scored by temporary employees who could be described as uninterested or unintelligent, apathetic or unemployable.”

“That, I’m afraid, is the dirty little secret of the standardized testing industry: The people hired to read and score student responses to standardized tests are, for the most part, people who can’t get jobs elsewhere.”

What might this tell us about the labor — its perceived value, its real value — that goes into the multibillion dollar assessment industry? What might this tell us about the impetus behind its automation? What does this tell us, more broadly, about the everyday labor that goes into grading?

Of course, those hired to grade standardized tests aren’t “the everyday.” They don’t have a personal relationship with the students they're grading. These graders are given a fairly strict rubric to follow -- a rubric that computers, no surprise, follow with more precision. Human graders are discouraged from recognizing "creative" expression. Robot graders have no idea what "creative expression" even means.

But what does it mean to tell our students that we're actually not going to read their papers, but we're going to scan them and a computer will analyze them instead? What happens when we encourage students to express themselves in such a way that appeases the robot essay graders rather than a human audience?  What happens when we discourage creative expression and instead encourage responsiveness to an algorithm?

Robot graders raise all sorts of questions about thinking machines and thinking humans, about self-expression and creativity, about the purpose of writing, the purpose of writing assignments, the purpose of education, and the ethics of education technology and the work that robots are being trained to do in education.

This isn’t just about writing assessment, of course. There’s the Math Emporium at Virginia Tech where some 8000 students take introductory math in a computer-based class with no professors. There are tutors — academic labor of a very different sort.

What happens to pedagogy? What happens to research? What are the implications for academic labor? What are the implications for our students? What are the implications of automating the teaching and learning process? Why do we want the messy and open-ended process of inquiry standardized, scaled, or automated? What sort of society will this engineer? What will all of this artificial intelligence prompt us to do about human intelligence? What will it drive us to do about human relationships?

In 1962, Raymond Callahan published Education and the Cult of Efficiency, a dry book but an important one, an examination of the early twentieth-century obsession with making schools run more like businesses, with bringing “scientific management” — Taylorism — to education. As Callahan makes clear, the process wasn’t really about “science” or about “learning” — instead, when it came to the political pressures to “fix education,” those charged with making schools operate more efficiently were found to “devote their attention to applying the scientific method to the financial and mechanical aspects of education.”

A few years later, Jacques Ellul published The Technological Society, in which he, too, identified efficiency as the all-encompassing, dominant force in our technological age:

“The human brain must be made to conform to the much more advanced brain of the machine.  And education will no longer be an unpredictable and exciting adventure in human enlightenment but an exercise in conformity and an apprenticeship to whatever gadgetry is useful in a technical world.”

What "use" will our technical world have for us, for our students?

One more literary reference, one different from Rossum’s Universal Robots and from The Matrix, one where the machines do not destroy their human creators:

From Isaac Asimov, the “Three Laws of Robotics”:

Rule 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Rule 2: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

Rule 3:  A robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.

The Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

But, see, we have no laws of “ed-tech robotics,” I’d argue. We rarely ask, “What are the ethical implications of educational technologies?” (Mostly, we want to know, “Will this raise test scores?”) We rarely ask, “Are we building and adopting tools that might harm us? That might destroy our humanity?”

That frames things in some nice SF hyperbole, I suppose, and this isn't simply about “the survival of the human race,” with or versus the machine. But these are questions about our humanity. Indeed, humanity and learning are deeply intertwined -- intertwined with love, not with algorithms.

I’m not sure we need to devise laws of ed-tech robotics — not exactly. But I do think we need to deliberate far more vigorously about the ethics and the consequences of the technologies we are adopting.

It’s clear that building teaching machines has been a goal in the past. But that doesn't mean that doing so is “progress.” What is *our* vision for the future? Either we decide what these new technologies mean for us — what our ethical approach to technology will be — or someone else will. And sadly, it’ll probably be someone who wasn’t a literature major...

Audrey Watters


Hack Education
