This talk was delivered today at the Digital Pedagogy Lab Summer Institute at UW Madison
Thank you very much for inviting me here today. Once upon a time, as a graduate student, I imagined the University of Wisconsin Madison to be “a dream job.” And so I chuckle that it likely took me dropping out of a PhD program to end up here, speaking to you today. But I’d be remiss if I didn’t touch on the anger I feel at the attempts to decimate the University of Wisconsin system – not simply because attacks on tenure and academic freedom crush any semblance of “a dream job,” but because I feel that, across the board, academia hasn’t really responded to questions of labor and learning – not really, not well – and now other forces – not just the political forces of education reform or Scott Walker, but the political forces of Silicon Valley and venture capital too – want to reshape systems to their own ideological and financial needs. The losers will be labor and learning.
With that in mind, I’ve changed the title of my talk slightly…
In 1913, Thomas Edison predicted that “Books will soon be obsolete in schools.” He wasn’t the only person at the time imagining how emergent technologies might change education. Columbia University educational psychology professor Edward Thorndike – behaviorist and creator of the multiple choice test – also imagined “what if” printed books would be replaced. He said in 1912 that
If, by a miracle of mechanical ingenuity, a book could be so arranged that only to him who had done what was directed on page one would page two become visible, and so on, much that now requires personal instruction could be managed by print.
Edison expanded on his prediction a decade later: “I believe that the motion picture is destined to revolutionize our educational system and that in a few years it will supplant largely, if not entirely, the use of textbooks.” “I should say,” he continued, “that on the average we get about two percent efficiency out of schoolbooks as they are written today. The education of the future, as I see it, will be conducted through the medium of the motion picture… where it should be possible to obtain one hundred percent efficiency.”
What’s interesting to me about these quotations isn’t that Edison or Thorndike got it wrong. Thorndike’s remarks sound an awful lot like Neal Stephenson’s science fiction novel The Diamond Age; and indeed they’re fairly prescient of what ed-tech marketers now call “adaptive textbooks.” (That education today looks like Thorndike’s vision shouldn’t be a surprise considering that, as historian Ellen Lagemann and others have observed about the history of US education, “Edward Thorndike won and John Dewey lost.”)
History is full of faulty proclamations about the future of education and technology. (No doubt, Silicon Valley and education reformers continue to churn out these predictions – many prophesying doom for universities, many actively working to bring that doom about.)
What’s striking about these early 20th century predictions is that Thorndike set the tone, over one hundred years ago, for machines taking over instruction. And while he was wrong about films replacing textbooks, Edison was largely right that the arguments in support of education technology, of instructional technology, would frequently be made in terms of “efficiency.” Much of the history of education technology, indeed the history of education itself, from the twentieth century onward involves this push for “efficiency.” To replace, to supplant – to move from textbooks to film or from chalkboards to interactive whiteboards or from face-to-face lecture halls to MOOCs or from human teachers to robots – comes in the name of “progress,” where progress demands “efficiency.”
Of course, “efficiency” is the language of business. We – and I use that plural first person pronoun quite loosely here – want education to be faster, cheaper, and less wasteful. As such, we want the system to be measurable, more managed, and in turn, increasingly mechanized. We want education to run more like a business; we want education to run more like a machine.
I use the verb “mechanized” and not “computerized” or “digitized” deliberately – even though, here we are at a Digital Pedagogy Lab – because I want us to think about the history of education technology and, more broadly, about technology itself. This pressure to mechanize isn’t something new, even if “to digitize” is a new way of doing it; as Edison and Thorndike’s quotations remind us, ed-tech predates the iPad, the Internet, the personal computer. We get so caught up, however, in the perceived novelty of the technology; we are so enthralled by its marketing – always “new and improved”; we are so willing to buy into obsolescence – “buy” being the operative behavior; we’re so hopeful about upgrades that we rarely look at the practices that technology does not change, those that it changes for the worse, those that it re-inscribes, and how all of these reflect the demands of politics and power much more often than they reflect progressive pedagogical concerns or teachers’ or learners’ needs.
I want to talk to you today about the history and the future of teaching machines. I want to talk about teaching machines and teaching labor, specifically the belief that machines are necessary because they are efficient, labor-saving. Here at the University of Wisconsin, these questions are all the more imperative: what labor, whose labor is saved, is replaced in this, an age of economic precarity, adjunct-ification, anti-unionism, automation? What is the role of education technology in pedagogy, in scholarly labor, in the labor of learning and of love? And again, whose labor is saved, and whose is replaced?
Many people insist that technology will not replace teachers; indeed they doth protest too much methinks. Often, they re-state what science fiction author Arthur C. Clarke pronounced: that if a teacher can be replaced by a machine, she or he should be. That’s an awfully slippery slope. “Can be replaced” gives politicians and investors and entrepreneurs and administrators a lot of room to maneuver, to re-inscribe or redefine what teaching – by humans or machines – should look like. “Can be replaced” is not a solid place to launch a resistance to machines; it acquiesces to machines – to the ideology of labor-saving devices – from the outset.
Much like the flawed predictions about what education technology will “revolutionize,” the pronouncements about replacing teachers (or not) with machines are also quite old, made by the earliest of educational psychologists, a legacy we can trace back again to Edward Thorndike and the earliest of teaching machines. People have been working to replace, streamline, render more efficient education labor with machines for a century.
Ohio State University professor Sidney Pressey, for example, said in 1933 that
There must be an ‘industrial revolution’ in education, in which educational science and the ingenuity of educational technology combine to modernize the grossly inefficient and clumsy procedures of conventional education. Work in the schools of the future will be marvelously though simply organized, so as to adjust almost automatically to individual differences and the characteristics of the learning process. There will be many labor-saving schemes and devices, and even machines – not at all for the mechanizing of education, but for the freeing of teacher and pupil from educational drudgery and incompetence.
Oh not replace you, teacher. Replace just some of your labor, the bits of it we’ve deemed repetitive, menial, unimportant, and let’s be honest, those bits that are too radical. Not stop students from having to do menial tasks, oh no. Replace the efforts where you, teacher, have been labeled incompetent. Replace not so as to enable a more just and progressive education through technology, but rather to hard-code some high standards of instruction.
As Harvard psychology professor (and the name most readily associated with teaching machines) B. F. Skinner wrote in the 1950s:
Will machines replace teachers? On the contrary, they are capital equipment to be used by teachers to save time and labor. In assigning certain mechanizable functions to machines, the teacher emerges in his proper role as an indispensable human being. He may teach more students than heretofore – that is probably inevitable if the worldwide demand for education is to be satisfied – but he will do so in fewer hours and with fewer burdensome chores. In return for his greater productivity he can ask society to improve his economic condition.
Greater productivity. Larger classes. Global demand met through mechanization. Sound familiar?
Skinner said elsewhere, “There is no reason why the schoolroom should be any less mechanized than, for example, the kitchen.” Ah, the very gendered nature of post-War gadgetry that promised to automate and alleviate the drudgery of “women’s work.” (Women of a certain class, of course. And tasks of a certain kind.)
Keep this in mind: how robots taking over our jobs might be gendered.
The mid–1950s also marked the founding of a new field of research, artificial intelligence, that promised us better machines – not just more efficient machines, but smarter machines. And once again, sweeping predictions were made: Carnegie Mellon professor Herbert Simon boasted in 1965 that “machines will be capable, within twenty years, of doing any work a man can do” (note: a man’s work), and MIT professor Marvin Minsky insisted in 1967 that “within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”
As with Edison, we can chuckle at the foolish bravado of these forecasts; but I think we err to simply laugh off the miscalculations of early AI, because they reveal a great deal about what people want to see come to pass. Then and now. Indeed, we find ourselves again – this happens every decade or so it seems – in the midst of a hype cycle full of promises of smart machines and artificial intelligence and frenzied speculation about the impact they will have on the future of work.
“We are entering a new phase in world history – one in which fewer and fewer workers will be needed to produce the goods and services for the global population,” write Erik Brynjolfsson and Andrew McAfee in their book Race Against the Machine. “Before the end of this century,” says Wired Magazine’s Kevin Kelly, “70 percent of today’s occupations will … be replaced by automation.” The Economist offers a more rapid timeline: “Nearly half of American jobs could be automated in a decade or two,” it contends. Predictably contrarian, Vox’s Matt Yglesias laments that “Robots aren’t taking your jobs – and that’s the problem.” “Be terrified,” he cautions, that robots aren’t replacing you. Humans are just not productive enough. We’re not efficient enough.
The rise of the machines, we are told, is inevitable – as all predictions, whether religious, secular, or scientific, purport to be. It is, to echo the title of Kevin Kelly’s book, “what technology wants.” And there in that title we can see a convenient erasure of the machinations of investors or entrepreneurs or engineers, an ignorance of ideology; instead, in that framing, the machines have the agency – agency and will to which the rest of us should bend.
What will happen when robots replace us looks quite different depending on who is telling the story. And in some ways replacing us is, from their earliest origins, what robots have always been conceptualized to do.
In 1920, the playwright Karel Čapek coined the term “robot” for his play Rossum’s Universal Robots, or R.U.R. The word comes from the Czech robota, which meant “serf labor.” “Drudgery,” another translation offers. Or, according to Wikipedia, “the amount of hours a serf owed his master in a given day.”
The robots in Čapek’s play aren’t the metallic machines that the word likely conjures for us today. They’re more biological, assembled out of a protoplasm-like substance.
In the play’s first act, a young woman named Helena, daughter of an industry mogul, comes to the island factory where the robots are built. She’s there as part of the League of Humanity, there to argue that the robots have souls and that they should be freed. The robots, which work faster and cheaper than humans (actually, the robots aren’t paid at all), have quickly taken over almost all aspects of work. They remember everything, but they do not like anything. “They make great university professors,” says Harry Domin, the general manager of the robot factory.
As the play progresses, as robots dominate all aspects of the economy, the human birth rate falls. The robots stage a rebellion, a global rebellion as, unlike humans, robots share a universal language and recognize the universality of their labor struggles. Hoping to stop the economic and political crisis, Helena burns the formula for building robots. The robots kill all the humans, save one man in the factory who still works with his hands.
But then the robots discover that without the formula that Helena has destroyed, they cannot reproduce. “The machines are turning out nothing but bloody chunks of meat.”
Čapek’s play was translated into over thirty languages and performed all over the world. Its success was no doubt due to the nerve it struck in the 1920s – fears of automation, industrialization, war, revolution. The play demands the audience consider what is happening to our humanity as a result of these forces.
Indeed that’s the question that robots always seem to raise: what is happening to our humanity? As we mechanize and now digitize the world around us, what happens to our labor, our love, our soul? Are we poised to find ourselves exterminated like the humans in Rossum’s Universal Robots, or, like the robots themselves, will we be reduced to “bloody chunks of meat”?
We see headline after headline lately about the coming age of AI, but as Čapek’s work suggests, this has been an ongoing threat – psychological, I would argue, ideological as much as technological.
No doubt, we have seen a major impact on employment due to machines – computers, photocopiers, telephones, ATMs, online commerce. And in the last few decades, the US economy has lost millions of manufacturing jobs – although in fairness, the connection to automation here is likely one of correlation not simply causation. But robots offer a simple explanation: human labor was not sufficiently efficient. Robots increase productivity.
At the same time as the drop in manufacturing, the US economy has seen an explosion in jobs in the service sector, which now amounts to 84% of employment in the country. Although the service sector does include doctors and lawyers and university professors, it is mostly comprised of low-wage workers. Since 2010, the top five fastest growing occupations have been in the service sector, and four of these jobs have a median wage of $21,000 a year or less.
Coincidentally I’m sure, workers today are less organized than they were at the height of manufacturing (and in the US, of course, union members were never close to the majority of the workforce). In 2014, the percentage of workers in unions was 11.1%, down from 20.1% in 1983. Workers in education, as I’m sure this audience knows, now have the highest unionization rate of any profession in the US – 35.3% are union members; the rate of unionization of public sector workers in general, 35.7%. The largest union in the US: the National Education Association. The target of many recent education reforms, backed strongly by many in the technology industry: unions. Something is coming to take unionized teaching jobs: robots or otherwise.
Generally speaking, as employment has moved from manufacturing to service, work has become contingent, “casualized,” “adjunctified.” It has become feminized.
The feminization of teaching labor is, as Dana Goldstein underscores in her recent history of the profession, The Teacher Wars, deeply intertwined with a hope for a placated labor force, one with a missionary zeal that is angelic and loving in its demand for discipline from students – behaviorally, intellectually – but that makes few demands of administrators for things like equal pay, job stability, or academic freedom.
In all things, all tasks, all jobs, women are expected to perform affective labor – caring, listening, smiling, reassuring, comforting, supporting. This work is not valued; often it is unpaid. But affective labor has become a core part of the teaching profession – even though it is, no doubt, “inefficient.” It is what we expect – stereotypically, perhaps – teachers to do. (We can debate, I think, if it’s what we reward professors for doing. We can interrogate too whether all students receive care and support; some get “no excuses,” depending on race and class.)
What happens to affective teaching labor when it runs up against robots, against automation? Even the tasks that education technology purports to now be able to automate – teaching, testing, grading – are shot through with emotion when done by humans, or at least when done by a person who’s supposed to have a caring, supportive relationship with their students. Grading essays isn’t necessarily burdensome because it’s menial, for example; grading essays is burdensome because it is affective labor; it is emotionally and intellectually exhausting.
This is part of our conundrum: teaching labor is affective not simply intellectual. Affective labor is not valued. Intellectual labor is valued in research. At both the K12 and college level, teaching of content is often seen as menial, routine, and as such replaceable by machine. Intelligent machines will soon handle the task of cultivating human intellect, or so we’re told.
Of course, we should ask what happens when we remove care from education – this is a question about labor and learning. What happens to thinking and writing when robots grade students’ essays, for example. What happens when testing is standardized, automated? What happens when the whole educational process is offloaded to the machines – to “intelligent tutoring systems,” “adaptive learning systems,” or whatever the latest description may be? What sorts of signals are we sending students?
And what sorts of signals are the machines gathering in turn? What are they learning to do?
Often, of course, we do not know the answer to those last two questions, as the code and the algorithms in education technologies (most technologies, truth be told) are hidden from us. We are becoming, as law professor Frank Pasquale argues, a “black box society.” And the irony is hardly lost on me that one of the promises of massive collection of student data under the guise of education technology and learning analytics is to crack open the “black box” of the human brain.
We still know so little about how the brain works, and yet, we’ve adopted a number of metaphors from our understanding of that organ to explain how computers operate: memory, language, intelligence. Of course, our notion of intelligence – its measurability – has its own history, one wrapped up in eugenics and, of course, testing (and teaching) machines. Machines now both frame and are framed by this question of intelligence, with little reflection on the intellectual and ideological baggage that we carry forward and hard-code into them.
“Can a machine think?” Alan Turing famously asked in 1950. But rather than answer that question, Turing proposed “the imitation game,” something we’ve come to know since as the Turing Test. Contrary to popular belief, the test as conceived by Turing isn’t really about a machine that can pass based on “thinking.” Rather, Turing’s original contrivance was based on a parlor game – a gendered parlor game involving three people: a man, a woman, and an interrogator.
The game is played as follows: the interrogator cannot see the man or woman but asks them questions in order to identify their sex. The man and woman respond via typewritten answers. The goal of the man is to fool the interrogator. Turing’s twist: replace the man with a machine. “Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?”
The question is therefore not “can a machine think?” but “can a machine fool someone into thinking it is a woman?”
What we know today as the Turing Test is not nearly as fascinating or as fraught – I mean, what does it imply that the bar for intelligent machinery is, for Turing, to be better at pretending to be a woman than a man is? What would it mean for a machine to win that imitation game? What would it mean for a machine to fool us into thinking, for example, that it could perform affective labor not just computational tasks?
Today, the Turing Test is often just a marketing ploy. The Loebner Prize, an annual competition based on the Turing Test, for example, asks entrants to create a chat-bot that can fool the judges, after a five-minute conversation, into thinking that it, and not another human interlocutor, is the human. “The most human chat-bot” is awarded a prize, as is – in an award that should give us all pause as we move into a more mechanized future – “the most human human.”
Turing, for his part, was not seen as fully human because he was gay. Homosexuality was, at the time, against the law in Britain, and Turing was charged with gross indecency in 1952. He was convicted, opting for hormonal “treatment” rather than imprisonment. He committed suicide in 1954.
Identity. Performance. Deception. Passing. Secrets. Labor. Algorithms. These are Turing’s legacy, deeply embedded into the code-breaking, computing machine. We carry these forward into today’s education technologies.
Turing died before the term “artificial intelligence” was coined, but we can see his influence in particular in two areas in which AI has been developed, performed, and judged: chess and the chat-bot.
Turing wrote out by hand – before there was a computer that could run it – an algorithm for one of the very first programs by which a machine could play chess against a human. This was a predecessor of IBM’s Deep Blue, which famously defeated chess champion Garry Kasparov in 1997. That research in turn was the predecessor of IBM’s Watson, which defeated Brad Rutter and Ken Jennings on the TV game show Jeopardy in 2011. Men versus machine. It’s a tangent to a discussion about the future of teaching and learning and labor, you might think – except that these contests reflect what we think of as “smart.” Performances of “smartness” matter to education. Moreover, IBM is now exploring using Watson in schools, specifically to teach teachers. “The classroom will learn you,” the IBM website about the program boasts, perhaps unaware that it’s echoing the speech pattern of a well-known meme: “In Soviet Russia….”
Aspirations for IBM Watson in teacher training aside, it’s the development of chat-bots that has long been most closely associated with AI, modeled in part on what we expect AI to do to pass the Turing Test.
One of the earliest and best known chat-bots was developed by MIT computer scientist Joseph Weizenbaum in the mid–1960s. Its name: ELIZA – yes, named after Eliza Doolittle, the working-class character in George Bernard Shaw’s play Pygmalion who is taught to speak with an upper-class accent so she can “pass.”
ELIZA ran a “doctor” script, simulating a Rogerian psychotherapist. “Hello,” you might type. “Hi,” ELIZA responds. “What is your problem?” “I’m angry,” you type. Or perhaps “I’m sad.” “I am sorry to hear you are sad,” ELIZA says. “My dad died,” you continue. “Tell me more about your family,” ELIZA answers. The script always eventually asks about family, no matter what you type. It’s been programmed to do so. That is, ELIZA was programmed to analyze the input for key words and to respond with canned phrases containing the therapeutic language of care and support – a performance of “smart,” I suppose, but more importantly a performance of “care.”
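To make that mechanism concrete, here is a minimal sketch of ELIZA-style keyword matching – a hypothetical illustration, not Weizenbaum’s actual DOCTOR script; the keywords and canned replies below are invented for the example:

```python
import random

# Hypothetical rules for illustration only; Weizenbaum's DOCTOR script was far
# more elaborate (keyword ranking, pronoun transformation, memory of earlier input).
RULES = {
    "sad":   ["I am sorry to hear you are sad.", "Do you often feel sad?"],
    "angry": ["Why do you feel angry?", "Tell me more about this anger."],
    "dad":   ["Tell me more about your family."],
    "mom":   ["Tell me more about your family."],
}
FALLBACKS = ["Please go on.", "Tell me more about your family."]


def respond(user_input: str) -> str:
    """Scan the input for a known keyword and return a canned, caring-sounding reply."""
    words = user_input.lower().split()
    for keyword, replies in RULES.items():
        if keyword in words:
            return random.choice(replies)
    # No keyword matched: fall back to a generic prompt, eventually about family.
    return random.choice(FALLBACKS)


print(respond("I'm sad"))      # a canned phrase of sympathy
print(respond("My dad died"))  # "Tell me more about your family."
```

The performance of care, in other words, is a lookup table.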
Weizenbaum’s students knew the program did not actually care. Yet they still were eager to chat with it and to divulge personal information to it. Weizenbaum became incredibly frustrated by the ease with which this simple program could deceive people. When he introduced ELIZA to the non-technical staff at MIT, according to one story at least, they treated the program as a “real” therapist. When he told a secretary that he had access to the chat logs, she was furious that Weizenbaum would violate her privacy – doctor-patient confidentiality – by reading them. Weizenbaum eventually became one of the leading critics of artificial intelligence, cautioning about the implications of AI research and arguing that no matter the claims that AI would make about powerful thinking machines, computers would never be caring machines. Computers would never have compassion.
And yet that doesn’t seem to matter. In some ways, we believe – we want to believe – that they are.
Many chat-bots were developed in the late twentieth century for use in education. These are often referred to as “pedagogical agents” – agents, not teachers – programs that were part of early intelligent tutoring systems and that, like ELIZA, were designed to respond helpfully, encouragingly when a student stumbled. The effectiveness of these chat-bots is debated in the research (what do we even mean by “effectiveness”?), and there is little understanding of how students respond to these programs, particularly when it comes to vulnerability and trust, such core elements of learning.
Are chat-bots sophisticated enough to pass some sort of pedagogical Turing Test? (Is that test, like Turing’s imitation game, fundamentally gendered?) Or is it rather, as I fear, that we’ve decided we just don’t care? We do not care that the machines do not really care for students; indeed, perhaps education, particularly at the university level, has never really cared about caring at all. Students do not care that the machines do not really care because they do not expect to be cared for by their teachers. “We expect more from technology and less from each other,” as Sherry Turkle has observed. Caring labor is a vulnerability, a political liability, a weakness, not an efficiency.
So what does it mean then if we offload affective labor – the substantive and the performative – to technology, to ed-tech? It might seem counterintuitive that we’d do so. After all, we’re often reassured that computers will never be able to do that sort of work. They’re better at repetitive, menial tasks, we’re told – at physical labor. “Men’s work.” “The Age of the Robot Worker Will Be Worse for Men,” The Atlantic recently argued.
And yet at the same time, we seem so happy to bare our souls, trust our secrets, be vulnerable with and to and by computers. What does that mean for teaching and learning? What does it mean for the work educators do – work that is, as it has been for centuries, under attack as insufficient and inefficient?
We’re told by some automation proponents that instead of a future of work, we will find ourselves with a future of leisure. Once the robots replace us, we will have immense personal freedom, so they say – the freedom to pursue “unproductive” tasks, the freedom to do nothing at all even, except I imagine, to continue to buy things.
On one hand that means that we must address questions of unemployment. What will we do without work? How will we make ends meet? How will this affect identity, intellectual development?
Yet despite predictions about the end of work, we are all working more. As games theorist Ian Bogost and others have observed, we seem to be in a period of hyper-employment, where we find ourselves not only working numerous jobs, but working all the time on and for technology platforms. There is no escaping email, no escaping social media. Professionally, personally – no matter what you say in your Twitter bio that your Tweets do not represent the opinions of your employer – we are always working. Computers and AI do not (yet) mark the end of work. Indeed, they may mark the opposite: we are overworked by and for machines (for, to be clear, their corporate owners).
Often, we volunteer to do this work. We are not paid for our status updates on Twitter. We are not compensated for our check-ins on Foursquare. We don’t get kickbacks for leaving a review on Yelp. We don’t get royalties from our photos on Flickr.
We ask our students to do this volunteer labor too. They are not compensated for the data and content that they generate that is used in turn to feed the algorithms that run TurnItIn, Blackboard, Knewton, Pearson, Google, and the like. Free labor fuels our technologies: Forum moderation on Reddit – done by volunteers. Translation of the courses on Coursera and of the videos on Khan Academy – done by volunteers. The content on pretty much every “Web 2.0” platform – done by volunteers.
We are working all the time; we are working for free.
It’s being framed, as of late, as the “gig economy,” the “freelance economy,” the “sharing economy” – but mostly it’s the service economy that now comes with an app and that’s creeping into our personal not just professional lives thanks to billions of dollars in venture capital. Work is still precarious. It is low-prestige. It remains unpaid or underpaid. It is short-term. It is feminized.
We all do affective labor now, cultivating and caring for our networks. We respond to the machines, the latest version of ELIZA, typing and chatting away hoping that someone or something responds, that someone or something cares. It’s a performance of care, disguising what is the extraction of our personal data.
A year ago, in the midst of frustration about freelancing and data collection and digital hyperemployment, I wrote an article called “Maggie’s Digital Content Farm.” I borrowed from Bob Dylan’s song, which he’d in turn borrowed from the Bently Boys’ 1929 recording of “Down on Penny’s Farm,” which criticized rural landlords who systematically exploited their day-laborers. I felt as though that’s what we’d become, working away on someone else’s space – someone else’s digital land – for a boss that disrespected us, lied to us, extracted value from us, handed us over to law enforcement if we appeared to have become radicalized.
Dylan’s performance of “Maggie’s Farm” at the Newport Folk Festival in 1965 was, of course, part of his controversial electric set. He plugged in to boos and hisses from the crowd. I asked in my article “Maggie’s Digital Content Farm” if it was time for us to unplug, to refuse to labor on the digital content farms and certainly not to conscript our students to labor there for us and with us.
But I think this refusal cannot simply be one of us, two of us, a handful of us individually unplugging. We have to strike en masse. We have to embrace our radical inefficiencies. We have to re-discover our collectivity in an age that is luring us with an ideology of individualism. We have to tell different stories about what the robots will do.
We are told that the future of machines – teaching machines and intelligent machines – will be personalized. That is the original promise, if we think back to Edward Thorndike in 1912:
If, by a miracle of mechanical ingenuity, a book could be so arranged that only to him who had done what was directed on page one would page two become visible, and so on, much that now requires personal instruction could be managed by print.
Personalization. Automation. Management. The algorithms will be crafted, based on our data, ostensibly to suit us individually – more likely, to suit power structures that are, in turn, increasingly opaque.
Programmatically, the world’s interfaces will be crafted for each of us, individually, alone. As such, I fear, we will lose our capacity to experience collectivity and resist together. I do not know what the future of unions looks like – pretty grim, I fear; but I do know that we must enhance collective action in order to resist a future of technological exploitation, dehumanization, and economic precarity. We must fight at the level of infrastructure – political infrastructure, social infrastructure, and yes technical infrastructure.
It isn’t simply that we need to resist “robots taking our jobs”; it’s that we need to challenge the ideologies, the systems that loathe collectivity, care, and creativity, and that champion some sort of Randian individual. And I think the three strands at this event – networks, identity, and praxis – can and should be leveraged to precisely those ends.
A future of teaching humans not teaching machines depends on how we respond, how we design a critical ethos for ed-tech, one that recognizes, for example, the very gendered questions at the heart of the Turing Machine’s imagined capabilities, a parlor game that tricks us into believing that machines can actually love, learn, or care.