
This talk was presented (virtually) to Nathan Fisk’s class on digital media and learning at the University of South Florida.

I’m currently working on a book called Teaching Machines, and I think that means my thoughts today are rather disjointed – this talk is one part ideas I’m working through for the book, and another part ideas pertaining to this class. I’m not sure I have a good balance or logical coherence between the two. But if you get lost or bored or whatever, know that I always post a copy of my talks online on my website, and you can read through later, should you choose.

Ostensibly “working on a book” means less travel and speaking this fall, although I seem to have agreed to speak to several classes, albeit virtually. I’m happy to do so (and thank you for inviting me into your class), for as an advisor once cautioned me in graduate school when I was teaching but also writing my dissertation, “students will always be more engaging than a blank page.” Students talk back; the cursor in Word just blinks.

I am doing some traveling too, even though I should probably stay at home and write. I’m visiting the archives of the Educational Testing Service in a week or so – maybe you know the organization for its exams, the GRE and the TOEFL – to continue my research into the history of education technology. The histories of testing and technology are deeply interwoven. I’m there to look at some of the papers of Ben Wood, an educational psychology professor at Columbia and the president of ETS after his retirement from the university. I’m interested, in part, in the work he did with IBM in the 1930s on building machines that could automatically grade multiple choice exams – standardized tests. I don’t want to rehash a talk I gave a couple of weeks ago to a class at Georgetown on why history matters except to say “history matters.” Education technology’s past shapes its present and its future.

Design matters. Engineering matters. But so too does the context and the practices around technology. Culture matters. All of these systems and practices have a history. (That’s one of the key takeaways for you, if you’re taking notes.)

Why does the cursor blink, for example? How does the blink direct and shape our attention? How is the writing we do – and even the thinking we do – different on a computer than on paper, in part because of blinks and nudges and notifications? (Is it?) How is the writing we do on a computer shaped by the writing we once did on typewriters? How is the testing we take, even when on paper, designed with machines in mind?

The book I’m writing is about the pre-history of computers in education, if you will, as I am keen to help people understand how many of the beliefs and practices associated with today’s education technologies predate the latest gadgetry. “Personalized learning,” for example, is arguably thousands of years old, and we can date the idea of individualizing education by using machines back to the 1920s. The compulsion for data collection and data analysis might seem like something that’s bound up with computers and their capability to extract and store more and more personal data. But collecting data about schools and classrooms and measuring student and teacher performance are also practices with very long histories.

A side-note here on the topic of data collection and information accessibility: there’s nothing quite like visiting a museum, an archive, or a library and seeing all the objects that aren’t digitized or even digitize-able to recognize that the people who tell you “you can learn everything on the Internet” are, to put it bluntly, full of shit. Moreover, visiting these institutions and working with their artifacts serves as a reminder of how fragile our records of the past can be. They are fragile on paper, and they are fragile in digital form.

I say this as someone who thinks a lot about her digital profile, about the data that she creates, about what she can control and what she cannot. I pay for my own domains to host my scholarship, my “portfolio” – something I would encourage all of you to do. I try to build and run as much of the infrastructure as I can. (You need not do that.) I do so with a lot of intentionality – I don’t have comments on my site, and don’t track who visits my websites, for example – the latter because I think a lot about security and surveillance, the former due to spam and trolls. I post my thoughts, my writing on my own websites, even though social media has really discouraged us from doing this. (Stop and think about the ways in which this occurs. Much like the blinking cursor, there is always intention to the design.) If you go to hackeducation.com or audreywatters.com, you can see hundreds of essays I’ve written; you can see how my thoughts have changed and developed over time.

So while there’s a record of my writing on my websites, elsewhere I have become a “deleter.” Over the past year or so, it’s become very clear to me that, as a woman who works adjacent to technology, as a woman with strong opinions about technology, as a woman with a fairly high profile in her field, it’s not a bad idea for me to start deleting old social media posts. I delete all my tweets after 30 days; I regularly delete everything I’ve posted to Facebook; I delete old emails. I know full well this doesn’t prevent my data from being used and misused or hacked or stolen; it just makes it harder to take 140 characters I typed in 2011 and rip them from their context. Instructions for making risotto that I sent someone in 2015 – god forbid – will never be part of some Russian conspiracy to alter the course of US politics.

I confess, however, when I visit archives, I do feel bad that I delete things. I feel bad that I haven’t saved a lot of the papers and letters my mum assiduously kept as a record of my childhood and teenage years. When my little brother and I cleaned out my dad’s house a few years ago after he died, I threw a lot of that stuff away. And I worry sometimes about the essays and letters and Very Official Documents that perhaps I should have saved as an adult – not just the papers, but the digital files that are now (or perhaps soon to be) inaccessible because a file format has changed or because a disk became corrupted or because the Internet company that I used to store the stuff has gone out of business.

I felt particularly guilty about all this when I visited the archives of the famed psychologist B. F. Skinner at Harvard University. (Not that I much like the idea of or see the need for having my papers gone through by future scholars. But still.) Skinner’s papers fill 82 containers – almost 29 cubic feet of stuff. Correspondence. Newspaper clippings. Data from his labs. Photographs. Notes. Lectures. An abundance for a researcher like me. I spent a week there in the Harvard University Archives, snapping photos of letters with my iPhone, and I barely scratched the surface.

It would be a mistake to see something like Skinner’s papers as providing unfettered access to his thoughts, his life. An archive is collected and curated, after all. Items are selected for inclusion (either by the individual, by the family, or by the university, for example). These materials tell a story, but that story can give us only a partial understanding.

Even if you, like me, balk at the idea of your papers being housed at a university library, it’s worth thinking about what sort of record you’re leaving behind, what sort of picture it paints of you – of your values, your beliefs, your habits, your interests, your “likes.” I don’t just mean your “papers”; I mean your digital footprint too. I don’t just mean your “legacy”; I mean what data you’re leaving behind now on a day-to-day basis. It’s worth thinking about how digital technologies are designed to glean certain information about you. Not just the letters you’ve written and the data about what, when, where, to whom, and so on, but a whole raft of other metadata that every click generates. It’s worth thinking about how your behavior changes (and does not change) knowing (and not knowing) that this data is being recorded – that someone on Facebook is watching; that someone at Facebook is watching.

We are clicking on a lot of things these days, flashing cursors and otherwise.

There’s a passage that I like to repeat from an article by historian of education Ellen Condliffe Lagemann:


I have often argued to students, only in part to be perverse, that one cannot understand the history of education in the United States during the twentieth century unless one realizes that Edward L. Thorndike won and John Dewey lost.


(I am assuming, I suppose, that you know who these two figures are: Edward L. Thorndike was an educational psychology professor at Columbia University who developed his theory of learning based on his research on animal behavior – perhaps you’ve heard of his idea of the “learning curve,” the time it took for animals to escape his puzzle box after multiple tries. And John Dewey was a philosopher whose work at the University of Chicago Lab School was deeply connected with that of other social reformers in Chicago – Jane Addams and Hull House, for example. Dewey was committed to educational inquiry as part of democratic practices of community; Thorndike’s work, on the other hand, happened largely in the lab but helped to stimulate the growing science and business of surveying and measuring and testing students in the early twentieth century. And this is shorthand for Condliffe Lagemann’s shorthand, I realize, but you can think of this victory in part as the triumph of multiple choice testing over project-based inquiry.)

Thorndike won, and Dewey lost. I don’t think you can understand the history of education technology without realizing this either. And I’d propose an addendum to this too: you cannot understand the history of education technology in the United States during the twentieth century – and on into the twenty-first – unless you realize that Seymour Papert lost and B. F. Skinner won.

(I am assuming here, I admit, that you have done some of what I think is the assigned reading for this course. Namely, you’ve looked at Papert’s The Children’s Machine and you’ve read my article on Skinner.)

Skinner won; Papert lost. Oh, I can hear the complaints I’ll get on social media already: what about maker-spaces? What about Lego Mindstorms? What about PBL?

I maintain, even in the face of all the learn-to-code brouhaha, that multiple choice tests have triumphed over democratically-oriented inquiry. Indeed, clicking on things these days increasingly seems to be redefined as a kind of “active” or “personalized” learning.

Now, I’m not a fan of B. F. Skinner. I find his ideas of radical behaviorism to be rather abhorrent. Freedom and agency – something Skinner did not believe existed – matter to me philosophically, politically. That being said, having spent the last six months or so reading and thinking about the guy almost non-stop, I’m prepared to make the argument that he is, in fact, one of the most important theorists of the 21st century.

“Wait,” you might say, “the man died in 1990.” “Doesn’t matter,” I’d respond. His work remains incredibly relevant, and perhaps insidiously so, since many people have been convinced by the story that psychology textbooks like to tell: that his theories of behaviorism are outmoded due to the rise of cognitive science. Or perhaps folks have been convinced by a story that I worry I might have fallen for and repeated myself: that Skinner’s theories of social and behavioral control were trounced thanks in part to a particularly vicious book review of his last major work, Beyond Freedom and Dignity, a book review penned by Noam Chomsky in 1971. “As to its social implications,” Chomsky wrote, “Skinner’s science of human behavior, being quite vacuous, is as congenial to the libertarian as to the fascist.”

In education technology circles, Skinner is perhaps best known for his work on teaching machines, an idea he came up with in 1953, when he visited his daughter’s fourth grade classroom and observed the teacher and students with dismay. The students were seated at their desks, working on arithmetic problems written on the blackboard as the teacher walked up and down the rows of desks, looking at the students’ work, pointing out the mistakes that she noticed. Some students finished the work quickly, Skinner reported, and squirmed in their seats with impatience waiting for the next set of instructions. Other students squirmed with frustration as they struggled to finish the assignment at all. Eventually the lesson was over; the work was collected so the teacher could take the papers home, grade them, and return them to the class the following day.

“I suddenly realized that something must be done,” Skinner later wrote in his autobiography. This classroom practice violated two key principles of his behaviorist theory of learning. Students were not being told immediately whether they had an answer right or wrong. A graded paper returned a day later failed to offer the type of positive behavioral reinforcement that Skinner believed necessary for learning. Furthermore, the students were all forced to proceed at the same pace through the lesson, regardless of their ability or understanding. This method of classroom instruction also provided the wrong sort of reinforcement – negative reinforcement, Skinner argued, penalizing the students who could move more quickly as well as those who needed to move more slowly through the materials.

So Skinner built a prototype of a mechanical device that he believed would solve these problems – and solve them not only for a whole classroom but ideally for the entire education system. His teaching machine, he argued, would enable a student to move through exercises that were perfectly suited to her level of knowledge and skill, assessing her understanding of each new concept, and giving immediate positive feedback and encouragement along the way. He patented several versions of the device and, along with many other competitors, sought to capitalize on what had become a popular subfield of educational psychology in the 1950s and 1960s: programmed instruction.

We know that story well in education technology. I know I have told it a hundred times. Skinner probably told it many more than that. It’s sort of the archetypal story for ed-tech, if you will. Man sees problem in the classroom; man builds technological solution. Man tries to sell technological solution; schools don’t want it, can’t afford it. The computer comes along; now teaching machines are everywhere. There’s a nice narrative arc there, a nice bit of historical determinism (which, for the record, I do not subscribe to).

The teaching machine wasn’t the first time that B. F. Skinner made headlines – and he certainly made a lot of headlines for the invention, in part because the press linked his ideas about teaching children, as Skinner did himself no doubt, to his research on training pigeons. “Can People Be Taught Like Pigeons?” Fortune magazine asked in 1960 in a profile of Skinner and his work. Indeed, pigeons weren’t the first time Skinner had made the news. The public was arguably already familiar with his name by the time the teaching machine craze occurred in the late 1950s. Although Project Pigeon – his effort during World War II to build a pigeon-guided missile (yes, you heard that right) – wasn’t declassified until 1958, Skinner’s work training a rat named Pliny had led to a story in Life magazine in 1937, and in 1951 there was a flurry of stories about his work on pigeons. (The headlines amuse me to no end, as Skinner was a professor at Harvard by then, and many of them say things like “smart pigeons attend Harvard” and “Harvard Pigeons are Superior Birds Too.” Fucking Harvard.)

Like Edward Thorndike – and arguably inspired by Edward Thorndike (or at least by other behaviorists working in what was, at the time, quite a new discipline) – Skinner worked in his laboratory with animals (at first rats, then briefly squirrels, and then most famously pigeons) in order to develop techniques to control behavior. Using a system of reinforcements – food, mostly – Skinner was able to condition his lab animals to perform certain tasks. Pliny the Rat “works a slot machine for a living,” as Life described the rat’s manipulation of a marble; the pigeons could play piano and ping pong and ostensibly even guide a missile towards a target.

In graduate school, Skinner had designed an “operant conditioning chamber” for training animals that came to be known as the “Skinner Box.” The chamber typically contained some sort of mechanism for the animal to operate – a plate for a pigeon to peck (click!), for example – that would result in a chute releasing a pellet of food.

It is perhaps unfortunate, then, that when Skinner wrote an article for Ladies Home Journal in 1945, describing a temperature-controlled, fully-enclosed crib he’d invented for his and his wife’s second child, the magazine ran it with the title “Baby in a Box.” (The title Skinner had given his piece: “Baby Care Can Be Modernized.”)

Skinner’s wife had complained to him about the toll that all the chores associated with a newborn had taken with their first child, and as he wrote in his article, “I felt that it was time to apply a little labor-saving invention and design to the problems of the nursery.” Skinner’s “air crib” (as it eventually came to be called) allowed the baby to go without clothing, save the diaper, and without blankets; and except for feeding and diaper-changing and playtime, the baby was kept in the crib all the time. Skinner argued that by controlling the environment – by adjusting the temperature, by making the crib sound-proof and germ-free – the baby was happier and healthier. And the workload on the mother was lessened – “It takes about one and one-half hours each day to feed, change, and otherwise care for the baby,” he wrote. “This includes everything except washing diapers and preparing formula. We are not interested in reducing the time any further. As a baby grows older, it needs a certain amount of social stimulation. And after all, when unnecessary chores have been eliminated, taking care of a baby is fun.”

As you can probably imagine, responses to Skinner’s article in Ladies Home Journal fell largely into two camps, and there are many, many letters in Skinner’s archives at Harvard from magazine readers. There were those who thought Skinner’s idea for the “baby in a box” bordered on child abuse – or at the least, child neglect. And there were those who loved this idea of mechanization – science! progress! – and wanted to buy one, reflecting post-war America’s growing love of gadgetry in the home, in the workplace, and in the school.

As history of psychology professor Alexandra Rutherford has argued, what Skinner developed were “technologies of behavior.” The air crib, the teaching machine, “these inventions represented in miniature the applications of the principles that Skinner hoped would drive the design of an entire culture,” she writes. He imagined this in his novel Walden Two, a utopian (I guess) novel in which he envisaged a community that had been socially and environmentally engineered to reinforce survival and “good behavior.” But this wasn’t just fiction for Skinner; he practiced this throughout his science and “gadgeteering,” inventing technologies and applying them to solve problems and to improve human behavior – all in an attempt to re-engineer the entire social order and to make the world a better place.

“The most important thing I can do,” Skinner famously said, “is to develop the social infrastructure to give people the power to build a global community that works for all of us,” adding that he intended to develop “the social infrastructure for community – for supporting us, for keeping us safe, for informing us, for civic engagement, and for inclusion of all.”

Oh wait. That wasn’t B. F. Skinner. That was Mark Zuckerberg. My bad.

I would argue, in total seriousness, that one of the places that Skinnerism thrives today is in computing technologies, particularly in “social” technologies. This, despite the field’s insistence that its development is a result, in part, of the cognitive turn that supposedly displaced behaviorism.

B. J. Fogg and his Persuasive Technology Lab at Stanford are often touted by those in Silicon Valley as “innovators” in this “new” practice of building “hooks” and “nudges” into technology. These folks like to point to what’s been dubbed colloquially “The Facebook Class” – a class Fogg taught in which students like Kevin Systrom and Mike Krieger, the founders of Instagram, and Nir Eyal, the author of Hooked, “studied and developed the techniques to make our apps and gadgets addictive,” as Wired put it in a recent article about how some tech executives now suddenly realize that this might be problematic.

(It’s worth teasing out a little – but probably not in this talk, since I’ve rambled on so long already – the difference, if any, between “persuasion” and “operant conditioning,” and how each imagines leaving space for freedom and dignity. Rhetorically and practically.)

I’m on the record elsewhere arguing that this framing – “technology as addictive” – has its problems. Nevertheless, it is fair to say that the kinds of compulsive behavior we display with our apps and gadgets are encouraged by design. All that pecking. All that clicking.

These are “technologies of behavior” that we can trace back to Skinner – perhaps not directly, but certainly indirectly, thanks to Skinner’s continual engagement with the popular press, his fame and his notoriety. Behavioral management – and specifically operant conditioning – remains a staple of child rearing and pet training. It is at the core of one of the most popular ed-tech apps currently on the market, ClassDojo. Behaviorism also underscores the idea that data about how we behave – about what we click and when – can give programmers insight into what we’re thinking and into how to alter their software.

If we look more broadly – and Skinner surely did – these sorts of technologies of behavior don’t simply work to train and condition individuals; many technologies of behavior are part of a broader attempt to reshape society. “For your own good,” the engineers try to reassure us. “For the good of the world.”
