Here are the notes and the slides from my talk today with Justin Reich’s HGSE class “The Future of Learning at Scale.” Between the jetlag (I got home from England late last night) and the fact that I’ve got way more to say about teaching machines than can fit into a 20-minute talk, I’m not sure I was super coherent. But the students had great questions at the end (I’ve storified some of their tweets). Thanks for listening, folks!


Having spent the last week out of the US talking about education technology, I have been reminded how much context matters — it matters when we talk about education technology's future, its present, and its history. Despite all the talk about the global economy, global communications, global democratization of education — context matters. The business and the politics and the stories of ed-tech are not universal.

I think that’s something for you to keep in mind as you work your way through this course. It’s something to think about when we start to imagine and to build “education at scale.” What happens to context? What happens to local, regional education — its history, its content (the curriculum if you will), its cultural relevance and significance and signaling, its politics, its practices?

How does technology shape this? How might technology erase or ignore context (pretending, perhaps, that such a thing is possible)?

What ideologies does education technology carry with it? Do these extend, reinforce, or subvert existing ideologies embedded in education?

Because of the forward-facing ideology of technology — that is, its association with progress, transformation, “the future” — I think we do tend to forget its history. We tend to ignore its ideology. I think that dovetails quite powerfully too with parts of American ideology and identity — an emphasis on and excitement for “the new”; a belief that this country marked a formal break from other countries, from other histories. A belief in science and business and “progress.”

Yesterday was one of those regularly scheduled moments when the technology industry puts that ideology on full display: an Apple keynote, where new products are introduced that have everyone cooing about innovation, that have everyone prepared to declare last year’s hardware and software obsolete, and that often have education technology writers predicting that this is going to revolutionize the way we teach and learn.

This image is not from the guts of the Apple Watch, of course, or the new iPhone. It is a close-up of the Colossus, the world’s first electronic, programmable computer. The Colossus prototype was built at Bletchley Park, site of the British government’s Code and Cypher School during World War II, where it was used to help decrypt German military communications. Like I said, ideology is embedded in technology. Computers’ origins are wrapped up in war and cryptography and surveillance. How does that carry forward into education technology?

When we talk about “education technology” we do tend to focus on the things that teachers and students can do with computers. But education technology certainly predates the Colossus (1943). And perhaps we could reach as far back as Plato’s Phaedrus to see the sorts of debates about what the introduction of new technologies — in that case, Socrates’ skepticism about the technology of writing — would do to education and, more broadly, to culture.

I’m in the middle of writing a book called Teaching Machines, a cultural history of the science and politics of ed-tech. An anthropology of ed-tech even, a book that looks at knowledge and power and practices, learning and politics and pedagogy. My book explores the push for efficiency and automation in education: “intelligent tutoring systems,” “artificially intelligent textbooks,” “robo-graders,” and “robo-readers.”

This involves, of course, a nod to “the father of computer science” Alan Turing, who worked at Bletchley Park, and his profoundly significant question “Can a machine think?”

I want to ask in turn, “Can a machine teach?”

Then too: Why would we want a machine to teach? What happens, as this course is asking you to consider, when we use machines to teach and learn “at scale”?

And to Turing’s question, what will happen to humans when (if) machines do “think”? What will happen to humans when (if) machines “teach”? What will happen to labor, and what will happen to learning?

And, what exactly do we mean by those verbs, “think” and “teach”? When we see signs of thinking or teaching in machines, what does that really signal? Is it that our machines are becoming more “intelligent,” more human? Or is it that humans are becoming more mechanical?

There’s a tension there between freedom and standardization or mechanization that both technology and education grapple with.

Rather than speculate about the future, I want to talk a bit about the past.

I want to suggest that the history of education in the US (and again, this is why context really matters) — public education in particular, at both the K-12 and university levels — is woven incredibly tightly with the development of education technologies, and specifically with the development of teaching machines. Since the mid-nineteenth century, there has been a succession of technologies that were supposed to improve, if not entirely reform, the way in which teaching happened: the chalkboard, the textbook, radio, film, television, computers, the Internet, Apple Watches, and so on.

There are a number of factors at play here that make education so susceptible to technological influence. US geography, for example: how do you educate across great distances? National identity: what role should schools play in enculturation, in developing a sense of American-ness? Should curriculum be standardized across the country? If so, how will that curriculum be spread? Individualism: how do we balance the desire to standardize education with our very American belief in individualism? How do we balance “mass education” with “meritocracy”? How do we rank and rate students? Industrialization: what is the relationship between schools and business? Should businesses help dictate what students should learn? Should schools be run like businesses? Can we make school more efficient?

These questions, and how we’ve asked and answered them, shape the ways in which education technology has been developed and wielded. Despite what you often hear, that technologies transform teaching and learning, it is more likely that technologies reinscribe what we imagine teaching and learning should look like and what function we believe school must fulfill.

For those interested in the history of education technology, I recommend Larry Cuban’s book Teachers and Machines. He’s better known for his book Oversold and Underused, which looks at computers and schools, but Teachers and Machines is interesting because you see the tension around technology in general — it’s not just a reluctance to adopt computers. The book looks at attempts to bring film (in the 1910s), radio (in the 1920s), and television (in the 1960s) into the classroom. Those are all broadcast technologies, obviously. They’re designed to “deliver educational content” — a phrase I really hate.

But that doesn’t mean that there were not some really fascinating projects and predictions in the twentieth century. Take, for example, the Midwest Program on Airborne Television Instruction, which operated two DC-6 aircraft out of Purdue University Airport, using a technology called Stratovision to broadcast educational television to schools, particularly to schools that couldn’t otherwise pick up a signal.

I think there are some key lessons to be learned from these broadcast technologies. I think they’re lessons that the MOOC providers — whose marketing sounds an awful lot like some of these twentieth century “innovators” — could do well to learn from. If nothing else, how much are we still conceptualizing technologies that “deliver content” and “expand access”? How does “broadcast” shape what we mean when we talk about “scaling” our efforts? How does “broadcast” fit neatly into very old educational practices centered on the teacher and centered on the content?

You could argue that film and radio and airborne television are “teaching machines,” but typically the definition of “teaching machines” involves more than just “content delivery.” It involves an instructional and an assessment component as well. But again, these devices have a very long history, one that certainly predates computers.

The earliest known US patent for an educational device was issued in 1809 to H. Chard for a “Mode of Teaching to Read.” The following year S. Randall filed a patent entitled “Mode of Teaching to Write.” Halcyon Skinner (no relation to Harvard psychology professor B. F. Skinner) was awarded a patent in 1866 for an “Apparatus for Teaching Spelling.” The machine contained a crank, which a student would turn until he’d arranged the letters to spell the word in the picture. The machine did not, however, give the student any feedback as to whether the spelling was right or wrong.

Between Halcyon Skinner’s 1866 teaching machine and the 1930s, there were an estimated 600 to 700 patents filed on the subject of teaching and schooling. The vast majority of these were filed by inventors outside of education. Halcyon Skinner, for example, also filed for a patent for a “motor truck for cars,” “tufted fabric,” a “needle loom,” a “tubular boiler,” and many other inventions.

There’s some debate about whether or not these early devices “count” as teaching machines, as they don’t actually do all the things that education psychologists later decided were key: continuous testing of what students are supposed to be learning; immediate feedback on whether a student has answered correctly; the ability for students to “move at their own pace”; automation.

American psychologist Sidney Pressey is generally credited as the person whose machine first met all these requirements. He displayed a “machine for intelligence testing” at the 1924 meeting of the American Psychological Association. Pressey received a patent for the device in 1928.

His machine contained a large drum that rotated paper, exposing a multiple choice question. There were four keys, and the student would press the key whose number corresponded to the right answer. Pressey’s machine had two modes of operation: one labeled “test” and the other labeled “teach.” In “test” mode, the machine would simply record the responses and calculate how many were correct. In “teach” mode, the machine wouldn’t proceed to the next question until the student got the answer right. The machine did still track how many keys were pressed until the student got it correct. You could also add an attachment to the machine that was essentially a candy dispenser. It allowed the experimenter to set what Pressey called a “reward dial,” determining the number of correct responses required to receive a candy reward. Once the response criterion had been reached, the device automatically delivered a piece of candy to a container in front of the subject.
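It’s striking how directly this logic maps onto a few lines of modern code. Here is a minimal sketch of the “teach” mode as a thought experiment, not a reconstruction of Pressey’s device; the sample questions, the reward-dial setting, and all the names are invented for illustration:

```python
# A minimal sketch of the logic of Pressey's "teach" mode, as a thought
# experiment rather than a reconstruction. The questions, the reward-dial
# setting, and all names here are invented for illustration.

QUESTIONS = [
    # (prompt, four choices, index of the correct choice)
    ("2 + 2 = ?", ["3", "4", "5", "22"], 1),
    ("Capital of Indiana?", ["Gary", "Muncie", "Indianapolis", "Terre Haute"], 2),
]

REWARD_DIAL = 2  # correct answers required before the candy dispenser fires


def teach_mode():
    total_presses = 0   # the machine tracked every key press
    correct_streak = 0
    for prompt, choices, answer in QUESTIONS:
        while True:  # the drum does not advance until the answer is right
            print(prompt)
            for i, choice in enumerate(choices):
                print(f"  [{i}] {choice}")
            pressed = input("press a key (0-3): ").strip()
            total_presses += 1
            if pressed == str(answer):
                correct_streak += 1
                if correct_streak >= REWARD_DIAL:
                    print("* candy dispensed *")
                    correct_streak = 0
                break  # advance the drum to the next question
            print("try again")
    print(f"Total key presses: {total_presses}")


if __name__ == "__main__":
    teach_mode()
```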

For a prototype converted from a sewing machine, we can see in Pressey’s machine so much about 20th century education theory and practice — and so much that is still with us today. There’s the connection to intelligence testing and the First World War, and a desire to create a machine to make that process more standardized and efficient. There’s the nod to the work of education psychologist Edward Thorndike — his laws of recency and frequency that dictated how students were supposed to move through material. There is the four-answer multiple choice question. How much of this is now “hard coded” into our education practices? How much of this is now “hard coded” into our education technology?

Sidney Pressey tried very hard to commercialize his teaching machines, but without much success. It wasn’t until a few decades later that the idea really took off, which is why “teaching machines” are probably most closely associated with the work of B. F. Skinner. (He did not receive the patent for his teaching machine until 1961.)

Skinner came up with the idea for his teaching machine in 1953. Visiting his daughter’s fourth grade classroom, he was struck by its inefficiencies. Not only were all the students expected to move through their lessons at the same pace, but when it came to assignments and quizzes, they did not receive feedback until the teacher had graded the materials — sometimes a delay of days. Skinner believed that both of these flaws in school could be addressed through a machine, and built a prototype which he demonstrated at a conference the following year.

All these elements were part of Skinner’s teaching machines: the elimination of inefficiencies of the teacher, the delivery of immediate feedback, the ability for students to move through standardized content at their own pace.

Today’s ed-tech proponents call this “personalization.”

Teaching — with or without machines — was viewed by Skinner as reliant on a “contingency of reinforcement.” The problems with human teachers’ reinforcement, he argued, were severalfold. First, the reinforcement did not occur immediately; that is, as Skinner observed in his daughter’s classroom, there was a delay between students completing assignments and quizzes and their work being corrected and returned. Second, much of the focus on behavior in the classroom had to do with punishing students for “bad behavior” rather than rewarding them for good.

As Skinner wrote in his book Beyond Freedom and Dignity, “We need to make vast changes in human behavior. . . . What we need is a technology of behavior.” Teaching machines are one such technology.

Skinner’s teaching machine differed from Pressey’s in that it did not have students push buttons to respond to multiple choice questions. Instead, students had to formulate their own answers, something Skinner felt was important. He worried too that selecting the wrong answer was the wrong sort of behavioral reinforcement.
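To make the contrast with Pressey concrete, here is an equally hypothetical sketch of a Skinner-style constructed-response frame: no answer choices are displayed (so the student never rehearses a wrong option), and the machine gives immediate feedback on the answer the student composes. The prompts and names below are invented for illustration:

```python
# A hypothetical sketch of Skinner-style constructed response, invented
# for illustration. No answer choices are shown, so the student never
# rehearses a wrong option; they must compose the answer themselves.

FRAMES = [
    # (prompt with a blank to fill, expected answer)
    ("A device that automates instruction is a teaching _______.", "machine"),
    ("Skinner called the schedule of rewards a contingency of _______.", "reinforcement"),
]


def run_frames():
    for prompt, expected in FRAMES:
        while True:  # as with Pressey, do not advance until correct
            response = input(prompt + "\n> ").strip().lower()
            if response == expected:
                print("correct")  # immediate feedback, the point of the design
                break
            print("not quite; try again")


if __name__ == "__main__":
    run_frames()
```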

As with Pressey’s teaching machines, we can see in Skinner’s machines elements that still exist in our technologies today. Behaviorism in general, for starters: the excitement about gamification and “nudges” and notifications from our apps, all designed to get us to “do the right thing” (whatever that means). And we see too this real excitement about the potential for transforming classrooms with gadgetry.

“There is no reason,” Skinner insisted, “why the schoolroom should be any less mechanized than, for example, the kitchen.” Indeed in the 1960s, there was a huge boom in teaching machines. There were door-to-door teaching machine salesmen, I kid you not, including those who sold the Min-Max made by Grolier, the encyclopedia company.

But alongside the excitement were the fears about robots teaching the children. And the machines were expensive, as was the development of the “programmed instruction” modules.

So the excitement faded, just as new devices started being developed — ones that were computer-based, ones that promised “intelligence.”

Intelligence, along with all the promises that teaching machines have made for a century now: efficiency, automation, moving at your own pace, immediate feedback, personalization.

Thomas Edison predicted in 1913 that textbooks would soon be obsolete. In 1962, Popular Science predicted that by 1965, over half of students would be taught by machines. I could easily find similar predictions made today about MOOCs or adaptive technology or Apple Watches. These themes persist, and it’s worth asking why.

I think you can explain a lot of it when you look at history and think about ideology, what we bring into our technologies, what we ask them to do and how and why.

Audrey Watters

