
I gave the keynote this morning at the EdTechTeacher iPad Summit in San Diego. I wanted to talk about the history of ed-tech -- about how we ended up with technologies that are in many cases simply making old (awful) educational practices more efficient. I asked a room full of educators who'd ever heard of Alan Kay. I think four people raised their hands. I asked who'd ever heard of Seymour Papert. Maybe a dozen had. I asked who'd ever heard of B. F. Skinner. Everyone in the room. Sigh.

Here are the notes and slides from my talk. There's also a Google Doc with links to learn more about the history I mentioned.

The History of the Future of Ed-Tech


A couple of months ago, my brother Fred and I went back to the house we grew up in. We're getting to the age where we have parents that are “that age.” Our dad had fallen, broken his hip, and was in a nursing home. We went to his house to “check on things.”

It’s been over 20 years since either of us lived there. My bedroom has since become the guest bedroom. With the exception of the old bookshelf and bed, there’s nothing there that’s “me.” But my brother’s room remains a shrine to the Fred that was — it’s almost untouched since he moved out to attend the Air Force Academy. And it’s pretty weird to visit the room now. Fred didn’t stay in the Air Force Academy. He dropped out after his sophomore year, became an environmental activist, then an emergency room nurse, and he’s now a nurse practitioner in Maine.

Visiting his old bedroom was like stepping into a past that felt disconnected from the present. But not totally disconnected. You could find glimpses there of the kid he was, of the person he was supposed to become — of my parents’ and grandparents’ vision of his future. A future predicted in the 1970s and 1980s. It’s worth asking, I think, whether this is the future that came to be.

The room was the history of that future.

We found on his shelf another example of this: The Kids’ Whole Future Catalog. Published in 1982, it’s a book I remember Fred and me poring over. He admitted that it had probably greatly shaped his thoughts at the time about who he would become, what his future would look like.

It’s a future that contains “food factories” and “space vacations” and “flying trains.” It’s a future where robots are teachers.

In many cases, it's the future that wasn’t. Or the future that isn’t quite. Or the future that isn’t quite yet.

* * *

I want to talk about the history of the future of ed-tech this morning. The future that wasn’t. The future that isn’t quite. The future that isn’t quite yet. The history of all of this.

I want us to consider: Where have we been? As educators. As technologists. Where are we going? What narratives help us answer questions about the past? What narratives help us shape the future?

As we move forward into a world that is increasingly governed by machines and algorithms, I think we must consider the trajectory of the path we’re on. Because “the future of ed-tech” is shaped by “the history of ed-tech” — whether we realize it or not.

* * *

Last year, the programmer and designer Bret Victor delivered what I thought was one of the smartest keynotes I’ve ever seen.

He came on stage dressed in a button-up shirt and tie, with a pocket protector, and proceeded to use an overhead projector and transparencies for his talk. Three visual cues there about the conceit of his talk: the pocket protector, the overhead projector, and the transparencies.

Victor spoke about the future of programming, but, as those visual cues, those presentation technologies, implied, he spoke of the future as though he were telling it in 1973. “Given what we know now,” he asked, “what might programming be like 40 years from now?” In other words, given what we knew about computing in 1973, what would it be like in 2013?

Victor proceeded to talk about some of the computer science research that had been conducted in the previous decade — that is, in the 1960s — research that he used to inform his predictions about the future:

Gordon Moore, Intel’s co-founder, for example, postulated in 1965 what we now call “Moore’s Law”: the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years — that is, the processing power of computer chips doubles roughly every two years. This is a prediction that has come true. But that’s because it has become a self-fulfilling prophecy: chip manufacturers like Intel have made increased computing power a goal.
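To make the arithmetic concrete, here’s a quick sketch of what that doubling compounds to (Python is my choice here, and the figures are merely illustrative):

```python
# Moore's Law as arithmetic: transistor counts double roughly every two years.
# Illustrative starting point: the 1971 Intel 4004 had about 2,300 transistors.

def projected_transistors(initial_count, years, doubling_period=2):
    """Project a transistor count forward under a fixed doubling period."""
    return initial_count * 2 ** (years / doubling_period)

# Forty years of doubling every two years is 2**20 -- about a million-fold.
print(projected_transistors(2300, 40))  # roughly 2.4 billion
```

Forty years at that pace is a million-fold increase, which is roughly what an industry organized around that goal went on to deliver.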

And ay, there’s the rub.

Much of the research that Bret Victor cites in his keynote was never really adopted by the technology industry. It simply wasn't the focus. It wasn’t the goal. Victor points to a number of incredibly interesting and provocative research efforts that went nowhere. The Future of Programming that wasn’t.

And it isn’t simply that these innovative ideas were rejected or ignored. Worse, they were forgotten. The technology that powers our computing systems today took a very different path than the one that Victor wryly described in his talk. And today, many programmers don’t recognize, let alone teach others, that there could be other ways of doing things.

Take the work of Douglas Engelbart, for example. He passed away last year, an amazing but largely unsung visionary in computer science. Among other things, Engelbart was the first to use an external device, one that rolled around on a flat surface and moved a pointer on a screen so as to highlight text and select options: what we now call “the mouse.”

Engelbart first unveiled the mouse in what technologists fondly refer to as “The Mother of All Demos” — a demonstration in 1968 of the oN-Line System (more commonly known as NLS), a hardware and software package with a number of incredible features demonstrated publicly for the first time. Again, remember, this was the era of the mainframe and the punch card. In the demo: the mouse, “windows,” hypertext, graphics, version control, word processing, video conferencing, and a collaborative real-time editor.

1968.

(Engelbart first envisioned these things even earlier, while driving to work one day in 1950. Another piece of trivia: interestingly, he held the patent for the mouse, but he never earned a dime in royalties from it.)

But many of the features in “the Mother of All Demos” weren’t picked up by the tech industry — not right away at least. And the team that had worked with Engelbart on the NLS soon dispersed from their research program at the Stanford Research Institute, many of them ending up at Xerox PARC (Xerox’s Palo Alto Research Center). Xerox PARC in turn became the site where many more of the computing technologies we now take for granted — Ethernet, laser printing, the personal computer as we know it today — were developed.

But even at Xerox PARC, new technologies were developed that were never adopted. Why? Why, when, as Victor argues, many of these were more interesting and elegant solutions than the ones we actually ended up with?

In part, it’s because computing technologies can be prototyped quite readily. The 1960s and 1970s really marked the beginning of this: computers had become powerful enough to do interesting things, and computer scientists and the computer industry hadn’t yet settled on what those interesting things should be and how they should be done.

Building new technology is easy; changing behaviors, changing culture, is much, much harder.

* * *

What does this have to do with ed-tech?

Well, the tension between new tools and old practices should give you a hint. It’s easy to put iPads in the classroom, for example. It’s much more difficult to use them to do entirely new things.

Watching Victor’s talk, I couldn’t help but wonder how we might have written “The Future of Ed-Tech” in the 1970s.

After all, Bret Victor says that his keynote was inspired by Alan Kay — an important figure not just in programming but in education technology as well.

Kay was a pioneer in object-oriented programming. He actually attended Engelbart’s demo in 1968, and he later worked at Xerox PARC, where he helped develop the programming language Smalltalk. (Scratch, the MIT Media Lab’s introductory programming language for kids, is based in part on Smalltalk.) And Alan Kay designed the prototype for something called the Dynabook: “a personal computer for children of all ages.”

If I were to tell you the story, using the conceit that Bret Victor used in his keynote — that is, if I were to come out here today and tell you about the future of education technology as it might have been seen in the early 1970s — I would ground the talk in Alan Kay’s Dynabook.

And again, let’s recall: in the late Sixties and early Seventies, computers were still mostly giant mainframes, and even the growing market for minicomputers was restricted to scientists and the military. Alan Kay was among those instrumental in pushing forward a vision of personal computing.

We scoff now at the IBM CEO who purportedly said “I think there is a world market for maybe five computers.” But “personal computing” for Kay wasn’t simply that computers would be adopted in the workplace — something you can imagine any IBM executive readily agreeing to.

Kay argued that computers should be commonplace devices, used by millions of non-professional users. And Kay believed this would foster a new literacy, a literacy that would bring about a revolution akin to the changes brought about by the printing press in the 16th and 17th centuries. And key: children would be the primary actors in this transformation.

In 1972 Kay published a manifesto — “A Personal Computer for Children of All Ages” — in which he describes the Dynabook — the underlying vision as well as its technical specifications: no larger than a notebook, weighing less than 4 pounds, connected to a network, and all for a price tag of $500, which Kay explains at length is “not totally outrageous.” ($500 was roughly the cost at the time of a color TV, Kay points out.)

“What then is a personal computer?” Kay writes. “One would hope that it would be both a medium for containing and expressing arbitrary symbolic notations, and also a collection of useful tools for manipulating these structures, with ways to add new tools to the repertoire.” That is, it is a computer, but one that is completely programmable.

“It is now within the reach of current technology to give all the Beths and their dads a ‘Dynabook’ to use anytime, anywhere as they may wish,” Kay writes in his manifesto. Again: 1972, 40 years before the iPad. “Although it can be used to communicate with others through the ‘knowledge utilities’ of the future such as a school ‘library’ (or business information system), we think that a large fraction of its use will involve reflexive communication of the owner with himself through this personal medium, much as paper and notebooks are currently used.” The personal computer isn’t “personal” because it’s small and portable and yours to own. It’s “personal” because you pour yourself into it — your thoughts, your programming.

So, if I were to tell you a story about the future of ed-tech like Bret Victor tells about the future of programming, I’d probably start from there — from the Dynabook’s design in 1972. And it would be a story, like Victor’s, with a subtext of sadness that this is not what history has given us at all.

In some ways, the Dynabook does look a lot like our modern-day tablet computers — a lot like the iPad even. (Kay did work at Apple, I should note, in the 1980s under then-CEO John Sculley.) But as Kay has said in recent interviews, the iPad is not the actualization of the Dynabook.

He told TIME magazine last year that the primary purpose of the Dynabook was "to simulate all existing media in an editable/authorable form in a highly portable networked (including wireless) form. The main point was for it to be able to qualitatively extend the notions of 'reading, writing, sharing, publishing, etc. of ideas' literacy to include the 'computer reading, writing, sharing, publishing of ideas’ that is the computer’s special province. For all media, the original intent was 'symmetric authoring and consuming’.”

Consumption and creation: a tension that has plagued the iPad since it was unveiled.

"Isn’t it crystal clear,” Kay continued, "that this last and most important service [authoring and consuming] is quite lacking in today’s computing for the general public? Apple with the iPad and iPhone goes even further and does not allow children to download an Etoy made by another child somewhere in the world. This could not be farther from the original intentions of the entire ARPA-IPTO/PARC community in the ’60s and ’70s."

For Kay, the Dynabook was meant to help build capacity so that children (and adults too) would create their own interactive learning tools. The Dynabook was not simply about a new piece of hardware or new software — it was about a new literacy, a new way of teaching and learning. And that remains largely unrealized.

Again, as Bret Victor’s talk reminds us: changing technology is easy; changing practices, not so much.

Alan Kay’s work draws heavily on that of Seymour Papert. (Bret Victor’s work does too, I should add. As does mine.) Kay cites one of Papert’s best-known lines in his manifesto: “should the computer program the kid or should the kid program the computer?”

Kay’s work and Papert’s work insist on the latter.

Kay met Papert in 1968 and learned of Papert’s work on the Logo programming language.

As a programming language, Logo not only helped teach children programming concepts but also helped develop their “body-syntonic reasoning.” That is, Logo — and particularly the Turtle with which the language became synonymous — helped give students an embodied understanding of math. There was a Turtle robot and later a Turtle graphic on the screen. Using Logo, students could manipulate these, and this, Papert argued, would help them understand and reason mathematically.
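You can still get a taste of this today: Python’s standard-library turtle module is a direct descendant of Papert’s Turtle. Here’s a minimal sketch (mine, in Python rather than Logo) of the classic exercise of walking the Turtle through a square:

```python
import turtle  # Python's standard-library homage to Logo's Turtle

t = turtle.Turtle()

# "Walk" a square the way a child would: step forward, turn right, repeat.
for _ in range(4):
    t.forward(100)  # take 100 steps in the direction the turtle faces
    t.right(90)     # turn 90 degrees to the right

turtle.done()  # keep the drawing window open
```

The point of body-syntonic reasoning is that the child doesn’t solve for the square’s coordinates; she imagines walking it herself, step and turn, step and turn.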

Computers, argued Papert, should unlock children’s “powerful ideas.” That’s the subtitle to his 1980 book Mindstorms — one that Bret Victor and I both insist you read. That book is “about how computers can be carriers of powerful ideas and of the seeds of cultural change, how they can help people form new relationships with knowledge that cut across the traditional lines separating humanities from sciences and knowledge of the self from both of these. It is about using computers to challenge current beliefs about who can understand what and at what age. It is about using computers to question standard assumptions in developmental psychology and in the psychology of aptitudes and attitudes. It is about whether personal computers and the cultures in which they are used will continue to be the creatures of ‘engineers’ alone or whether we can construct intellectual environments in which people who today think of themselves as ‘humanists’ will feel part of, not alienated from, the process of constructing computational cultures.”

Computers, Papert insisted, will help children gain "a sense of mastery over a piece of the most modern and powerful technology and establish an intimate contact with some of the deepest ideas from science, from mathematics, and from the art of intellectual model building.”

Mindstorms. 1980.

Yet sadly, Papert’s work might be another example of the "Future of Ed-Tech" that hasn’t come to pass. He does address this in part in his 1993 book The Children’s Machine: “Progressive teachers knew very well how to use the computer for their own ends as an instrument of change; School knew very well how to nip this subversion in the bud.”

As Bret Victor argues in his keynote: developing new technologies is easy; changing human behaviors, changing institutions, challenging tradition and power is hard.

“Computer-aided inspiration,” as Papert encouraged, has been mostly trumped by “computer-aided instruction.”

Indeed, computer-aided instruction came under development around the same time as Logo and the Dynabook — even earlier, actually. And the history of the future of CAI may well tell us more about the ed-tech we got. It certainly points to the ed-tech that many people still want us to have.

* * *

The first computer-aided instruction system was PLATO — Programmed Logic for Automatic Teaching Operations — a computer system developed at the University of Illinois. The first version, PLATO I, ran in 1960 on the university’s ILLIAC I computer. Then came PLATO II, PLATO III, and PLATO IV.

The PLATO IV was released in 1972 — the same year as Alan Kay’s manifesto, and roughly the moment in which Bret Victor situates his “Future of Programming” talk.

Early versions of the PLATO system had a student terminal attached to a mainframe. The software offered mostly “drill and kill” and tutorial lessons. But as the PLATO system developed, new, more sophisticated software was added — more problem-based and inquiry-based lessons, for example. A new programming language called TUTOR enabled “anyone” to create their own PLATO lessons without having to be a programmer. The mainframe now supported multiple, networked computers. Students could communicate with one another, in addition to the instructor. Pretty groundbreaking stuff — remember, this is pre-Internet.
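To give a sense of what “drill and kill” courseware boils down to, here’s a minimal sketch of the pattern (my illustration in Python; actual TUTOR lessons looked nothing like this): pose a question, demand an answer, give immediate feedback, and record every attempt.

```python
# A minimal "drill and kill" loop -- my sketch, not actual PLATO/TUTOR code.
# Every attempt is logged, echoing PLATO's record-keeping of student answers.

drills = [
    ("3 + 4 =", "7"),
    ("6 x 7 =", "42"),
]

attempt_log = []  # (question, answer_given, was_correct)

for question, correct in drills:
    while True:
        answer = input(question + " ").strip()
        is_right = answer == correct
        attempt_log.append((question, answer, is_right))
        if is_right:
            print("Right! Moving on.")  # immediate feedback, then advance
            break
        print("No, try again.")         # drill until the answer is correct

print(f"{len(attempt_log)} attempts logged.")
```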

This networked system made PLATO a site for the development of a number of very important innovations in computing technology — not to mention in ed-tech: forums, message boards, chat rooms, instant messaging, screen sharing, multiplayer games, and emoticons. PLATO was, as author Brian Dear argues in his forthcoming book The Friendly Orange Glow, “the dawn of cyberculture.”

And again, PLATO’s contribution to cyberculture is mostly forgotten.

And perhaps PLATO’s contribution to ed-tech is as well. I’m not sure. I think that we can see in PLATO many of the features in ed-tech today, many of the features that would make Alan Kay and Seymour Papert shudder.

One of the features PLATO boasted: tracking every keystroke that a student made, data on every answer submitted — right or wrong. PLATO offered more efficient computer-based testing. It offered the broadcast of computer-based lessons to multiple locations, where students could work at their own pace. Indeed, by the mid-Seventies, PLATO was serving students in over 150 locations — not just across the University of Illinois campus, but in elementary schools, high schools, and on military bases.

Sensing a huge business opportunity, the Control Data Corporation — the company that built the University of Illinois mainframe — announced that it was going to go to market with PLATO, spinning it out from a university project to a corporate one.

This is where that $500 price tag for Alan Kay’s Dynabook is so significant, I think.

CDC charged $50 an hour for access to its mainframe, for starters. Each student unit cost about $1900; the mainframe itself $2.5 million — on the low end — according to estimates in a 1973 review of CAI called The Computer and Education. CDC charged $300,000 to develop each piece of courseware.

Needless to say, PLATO as a commercial computer-aided instruction product was a failure. The main success CDC had with it: selling an online testing system to the National Association of Securities Dealers, a regulatory group that licenses people who sell securities.

CDC sold the PLATO trademark in 1989 to The Roach Organization, which now sells e-learning software under the name Edmentum.

From a machine at "the dawn of cyberculture” to one that delivered standardized testing for stockbrokers. The history of the future of ed-tech. Remember, new technologies are easy to develop; new behaviors and new cultures are not.

* * *

One final piece of ed-tech history, this one a little further back in time than the computer-based innovations of the 1960s and 1970s. But it’s still a machine-based innovation. It’s still an object that enables efficient instruction and efficient assessment.

B. F. Skinner’s “teaching machine.”

I could go back farther than Skinner, admittedly. To a patent in 1866 for a device to teach spelling. Or a patent in 1909 for a device to teach reading. Or a patent in 1911 awarded to Herbert Aikins that promised to teach “arithmetic, reading, spelling, foreign languages, history, geography, literature or any other subject in which questions can be asked in such a way as to demand a definite form of words ... letters ... or symbols.”

I could go back to the machine developed by Sidney Pressey. Pressey was a psychologist at Ohio State University, and he came up with the idea of using a machine to score the intelligence tests that the military was using to determine eligibility for enlistment. Then World War I happened — slight delay. Pressey first exhibited his teaching machine at the 1925 meeting of the American Psychological Association. It displayed a multiple-choice question with four answers in a window, and it had four keys. If the student thought the second answer was correct, she pressed the second key; if she was right, the next question was turned up. If the second was not the right answer, the initial question remained in the window, and the learner persisted until she found the right one. A record of all the student’s attempts was kept automatically.
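That mechanism is simple enough to sketch in a few lines of code. Here’s a hypothetical Python rendering (mine, obviously, not Pressey’s): the question stays put until the right key is pressed, and every attempt is counted.

```python
# Simulating Pressey's 1925 machine -- a hypothetical sketch, not a historical spec.
# Four keys, one per candidate answer; the question advances only on a correct
# press, and the machine keeps a record of every attempt.

question = "Which planet is nearest the sun?"
choices = ["Venus", "Mercury", "Earth", "Mars"]
CORRECT_KEY = 2  # key 2 corresponds to "Mercury"

presses = [1, 4, 2]  # one student's sequence of key presses

attempts = 0
for key in presses:
    attempts += 1  # the attempt count was the machine's automatic record
    if key == CORRECT_KEY:
        print(f"Correct on attempt {attempts}; the next question is turned up.")
        break
    print(f"Key {key} is wrong; the question stays in the window.")
```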

Intelligence testing based on students’ responses to multiple-choice questions. Multiple-choice questions with four answers. Sound familiar?

Harvard professor B. F. Skinner claimed he’d never seen Pressey’s device when he developed his own teaching machine in the mid-1950s. Indeed, he dismissed Pressey’s machines, arguing that they were testing machines, not teaching machines. Skinner didn’t like the multiple-choice questions — his teaching machine enabled the student to compose her own response by pulling a series of levers. If she had the correct answer, a light would go on.

A behaviorist, Skinner believed that teaching machines could provide an ideal mechanism for operant conditioning. "There is no reason why the schoolroom should be any less mechanized than, for example, the kitchen.”

Skinner argued that immediate, positive reinforcement was key to shaping behavior — and that all human actions could be analyzed this way. Teachers had an important role in helping to shape student behavior, he conceded, but “the simple fact is that, as a mere reinforcing mechanism, the teacher is out of date.”

Skinner’s teaching machine might look terribly out of date, but I’d argue that this is the history that still shapes so much of what we see today: self-paced learning, gamification, an emphasis on real-time or near-real-time corrections. No doubt, ed-tech today draws so heavily on Skinner because he and his fellow education psychologist Edward Thorndike have been so influential in how we view teaching and learning, how we view schooling.

So much B. F. Skinner. So little Seymour Papert. So little Alan Kay.

And I’d argue that this isn’t just about education technology either. There’s so much Skinner and so little Kay in “mainstream” technology too. Think Zynga, for example. Click click click. Level up! Rewards! Achievement unlocked!

Even as our society becomes more and more “technological,” this future remains quite burdened by this history.

I’ll quote Papert here, one more time, to close: “One might say the computer is being used to program the child. In my vision, the child programs the computer, and in doing so, both acquires a sense of mastery over a piece of the most modern and powerful technology and establishes an intimate contact with some of the deepest ideas from science, from mathematics, and from the art of intellectual model building.”

May that vision be what guides us forward, what shapes the future of ed-tech.

Audrey Watters

