This is part one of my annual review of the year in ed-tech.
In 2005, Joan Didion published The Year of Magical Thinking, which chronicles her husband’s death in December 2003, shortly after their daughter had fallen into septic shock and been placed in an induced coma.
I read the book that very year – and honestly, I don’t often buy or read books in hardcover – shortly after my own husband died.
I needed help in understanding grief – my own grief, my son’s grief. “In time of trouble, I had been trained since childhood, read, learn, work it up, go to the literature. Information was control,” Didion writes. “Given that grief remained the most general of afflictions its literature seemed remarkably spare.” I found her book comforting, even as its material was horrific, in part because it was a book. Didion had found the words to talk about grief and mourning, and she’d written those words down, and she’d published them in a material object I could hold and weep into.
In Didion’s writing, I recognized my own performance of and reliance upon the rituals of “magical thinking,” the omens and interdictions that I believed somehow could undo or stop or assuage the horror of Anthony’s death.
“I opened the door and I seen the man in the dress greens and I knew. I immediately knew.” This was what the mother of a nineteen-year-old killed by a bomb in Kirkuk said on an HBO documentary quoted by Bob Herbert in The New York Times on the morning of November 12, 2004. “But I thought that if, as long as I didn’t let him in, he couldn’t tell me. And then it – none of that would’ve happened. So he kept saying, ‘Ma’am, I need to come in.’ And I kept telling him, ‘I’m sorry, but you can’t come in.’”
This is the rationality of the irrationality of grief. This is the irrationality of the rationality of death.
Mourn and Organize
I want to start this year’s review of education technology acknowledging grief. This has been a terrible, terrible year. I want to start this year’s review of education technology sanctioning, if such a thing is necessary, our mourning. It is not self-indulgent to mourn. We need not hide our feelings.
Until now I had been able only to grieve, not mourn. Grief was passive. Grief happened. Mourning, the act of dealing with grief, required attention. Until now there had been every urgent reason to obliterate any attention that might otherwise have been paid, banish the thought, bring fresh adrenaline to bear on the crisis of the day.
I want to start this year’s review of education recognizing what’s been lost. Not just the loss of Seymour Papert and Prince and David Bowie and Phife Dawg and Harper Lee and Gwen Ifill and Alan Rickman and Gene Wilder and Ursula Franklin and Scott Erik Kaufman and Jerome Bruner and Elie Wiesel and Alvin Toffler and Leonard Cohen (and many more), but the grief and the pain that stems from these and so many other losses. So many losses. It is impossible for me to write about education technology in 2016 without noting their passing, without talking about Brexit, Trump, Duterte, Aleppo, Orlando… I could go on… and acknowledging that many of us have stumbled through this year – from tragedy to tragedy (personal, local, regional, national, global) – in a state of shock, in a state of grief.
Grief clouds your thinking, magical or otherwise.
In a society that considers itself highly rational, highly technological, highly scientific, to call something “magical” often serves to dismiss or diminish it – or to dismiss or diminish those who do not sufficiently understand science or tech. That famous Arthur C. Clarke saying and whatnot. That’s not what Didion meant to do, of course. Indeed, she’s quite methodical with her study – her inspection, introspection – of grief. When she turns to “the literature,” she reads medical and psychiatric texts alongside poetry.
But there’s something about how grief in particular ruptures the rational. It makes us want to believe in – cling to, really – unbelievability. We want to believe that it can’t be true. And in the midst of death and horror and suffering, we often find some small piece of comfort there.
I mean, the Cubs did win the World Series in 2016. Bob Dylan won a Nobel Prize.
Expertise in an Age of Post-Truth
Oxford Dictionaries has declared “post-truth” the word of the year. “Post-truth,” an adjective: “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”
It’s been over a decade since comedian Stephen Colbert introduced the word “truthiness” on The Colbert Report – his attempt to describe political arguments, particularly those made by conservatives, that need no facts or evidence because they “just feel right.” In other words, we did not suddenly enter a period of “post-truth” in 2016. But facts, evidence, and expertise have taken blow after blow in recent years (at least since President George W. Bush), and the invocation of facts, evidence, and expertise in political arguments (particularly those arguments on social media) is now interpreted as bias rather than objectivity.
On the night of the US Presidential election, sociologist Nathan Jurgenson wondered if there wasn’t an equivalent to “truthiness” embraced by those of different political persuasions:
let’s call it “factiness.” Factiness is the taste for the feel and aesthetic of “facts,” often at the expense of missing the truth. From silly self-help-y TED talks to bad NPR-style neuroscience science updates to wrapping ourselves in the misleading scientisim [sic] of Fivethirtyeight statistics, factiness is obsessing over and covering ourselves in fact after fact while still missing bigger truths.
Jurgenson calls “factiness” a belief of the left-wing – one contrary to the “truthiness” on the right. I’m not sure these fit, quite so neatly, into a political binary. I contend that “factiness” is a core belief of technocracy, which finds a foothold in corporations as often as in academia, in bureaucracy far more often than on the barricades. “Factiness” is, no doubt, a core belief of “elites,” and it’s a core belief of “experts,” and like it or not, these two have become intertwined. Intertwined and despised.
“We now operate in a world in which we can assume neither competence nor good faith from the authorities, and the consequences of this simple, devastating realization is the defining feature of American life at the end of this low, dishonest decade,” Chris Hayes wrote in Twilight of the Elites (2012). “Elite failure and the distrust it has spawned is the most powerful and least understood aspect of current politics and society. It structures and constrains the very process by which we gather facts, form opinions, and execute self-governance.”
And here we are. A loss of faith in governments, governance, globalization, pluralism, polling, pundits, public institutions, private institutions, markets, science, research, journalism, democracy, each other.
“If the experts as a whole are discredited,” Hayes cautions, “we are faced with an inexhaustible supply of quackery.”
Education technology faces an inexhaustible supply of quackery.
Education Technology and (Decades and Decades of) Quackery
Education technology has faced an inexhaustible supply of quackery for quite some time – those selling snake oil, magic pills, and enchanted talismans and promising disruption, efficiency, and higher test scores. The quackery in 2016 wasn’t new, in other words, but it was notable. It is certainly connected to the discrediting of “expertise,” whether that’s teachers-as-experts or researchers-as-experts. (Students, of course, have rarely been recognized as experts – unless they fit the model of “roaming autodidacts” that society so readily lauds.)
What do we believe about education? About learning? How do we know, and who knows “knowing” in a world where expertise is debunked?
Psychology, a field of research whose history is tightly bound to education technology, continued to face a “reproducibility crisis” this year, with challenges to research on “ego depletion,” to claims based on fMRI software, and – of course – to the Reproducibility Project itself, the 2015 report which found that fewer than 40% of the results from a sample of 100 psychology journal articles held up to retesting.
So who do you believe? The scientists? The engineers? The advertisers? The media?
In January, “brain-training” company Lumosity agreed to pay $2 million to settle Federal Trade Commission charges that it had deceived customers with its claims that its games improved cognitive functions. But despite the settlement and despite what science journalist Ed Yong politely calls “The Weak Evidence Behind Brain Training Games,” “brain training” remains quite a popular product in education technology, with the phrase “brain-based” used to “scientize” all sorts of classroom practices.
Much like “brain training,” “brain scanning” was repeatedly hyped this year as a possible way to improve the efficacy of education software. Hooking up students to headbands to monitor their brain activity has now left the research lab and entered the exhibit hall. Some of the headbands and helmets now on the market deliver electric shocks and promise to boost “performance” or deliver “instant energy or calm.” Some promise to monitor and measure brain activity. “Brain-zapping” is, according to a story in The Observer this spring, a “nascent industry,” even though there’s really no evidence to support it.
No evidence. But a lot of wild claims made in (ed-)tech journalism nonetheless.
“Researchers Create Matrix-Like Instant Learning Through Brain Stimulation,” Techcrunch announced early this year, a re-write of a press release issued by HRL Laboratories, a research center jointly owned by Boeing and General Motors, regarding an article it had published in the February 2016 issue of Frontiers in Human Neuroscience (a pay-to-publish journal). The press release invoked The Matrix. Of course it did. But the press release was misleading; what the researchers had actually discovered about brain stimulation was that more research is needed. As I wrote then in response,
Whether or not this is science or fiction, let’s consider why “Matrix-style learning” is so compelling. Stories like this seem to emerge with some frequency. (We might ask too, why do neuroscientific claims frequently go unchallenged by the press – but then again, so much education/technology journalism is wildly uncritical. Parroting PR is pretty routine.)
Science aside, let’s think about culture and society. What’s the lure of “instant learning” and in particular “instant learning” via a technological manipulation of the brain? This is certainly connected to the push for “efficiency” in education and education technology. But again, why would we want learning to be fast and cheap? What does that say about how we imagine and more importantly how we value the process of learning?
There’s little evidence of how these products or practices will improve teaching or learning. But there’s a ton of snake oil. And a lot of wishful thinking.
Education Technology and (Decades and Decades of) Wishful Thinking
The promise of education technology, like it or not, is mostly wishful thinking. Proponents of ed-tech insist that ed-tech is necessary; that without ed-tech, schools are outmoded and irrelevant; that “the future” demands it. But as I argued in a talk I gave at VCU in November, “the best way to predict the future is to issue a press release.” That is, the steady drumbeat of marketing surrounding the necessity of education technology largely serves to further ideologies of neoliberalism, individualism, late-stage capitalism, outsourcing, surveillance, speed, and commodity fetishism.
I know many of us wished otherwise.
Arguably the business of “predicting the future” took a bit of a hit this year, what with the failure, as some describe it, of polling in the Presidential election. But “predicting the future” – with or without the mantle of science – is often about pointing to and sanctioning a particular vision of the future. Wishful thinking.
Folks have long made predictions about the future of education and education technology. Such-and-such practice or product will die out. Such-and-such practice or product will disrupt. Such-and-such practice or product will revolutionize. Such-and-such practice or product will soon be adopted and will change everything.
Or as the Education Week headline described Bill Gates’ keynote at this year’s annual venture capital gala, the ASU-GSV Summit, “Ed Tech Has Underachieved But Better Days Are Ahead.” They always are.
Hype as Wishful Thinking
There’s a long list of technology products that I’m sure will appear on many “2016 Top Ed-Tech Trends” lists:
Chatbots: As eCampus News pronounced in November: “How chatbots will change the face of campus technology.” (I wrote about the history of the future of chatbots in education in September.)
Blockchain: “10 amazing ways Blockchain could be used in education” by Donald Clark. (I’ll write more about the blockchain and certification in a forthcoming article in this series.)
Pokemon Go: “Why Pokemon Go shows the future of learning gamification,” according to Education Dive at least. (Bonus: “5.3 Reasons Pokemon Go will Replace the LMS” by Tom Woodward.)
3D Printing: 3D printing is “Revolutionizing Project-Based Learning,” according to Makerbot. Related: “MakerBot will no longer make its own 3D printers.” And “The MakerBot Obituary.” RIP.
Wearables: “Eye-trackers that detect when your mind is wandering. Clothes that let you ‘feel’ what it’s like to be in someone else’s body. Sensors that connect your heart rate to how engaged you are in class. These are the kinds of wearable technologies that could soon impact how we learn,” says Edsurge. Wishful thinking? Quackery? (I’ll be talking more about wearables and surveillance in a forthcoming article in this series.)
Bullshit or not, the marketing of these products continues – often with a breathless description of “revolution” and “transformation” and “disruption” and ever-growing business opportunities – even if few schools or teachers or students buy the product, can afford to buy the product, or want to buy the product.
Fads fade, of course. Hype wanes. Take iPads, for example. Or the flipped classroom. Or MOOCs even.
Fads fade, and then sometimes they re-emerge, sometimes they re-brand. “Zombie ideas,” as I’ve previously called them.
No doubt, the most wishful of the ed-tech zombies this year was VR.
Virtual Reality as Wishful Thinking
In July, I wrote an article called “(Marketing) Virtual Reality in Education: A History.” The opening paragraphs:
Virtual reality is, once again, being heralded as a technology poised to transform education. I say “once again” because virtual reality has long been associated with such promises. VR appeared in some of the earliest Horizon Reports, for example, with the 2007 report positing that virtual worlds would be adopted by higher ed institutions within two to three years' time; funnily enough, the 2016 report offers the same outlook: we’re still two to three years out from widespread adoption of VR.
The history of VR goes back much farther than this – the phrase “virtual reality” was coined in 1987 by Jaron Lanier, but attempts to create the illusion of being somewhere else – through art and/or technology – date back farther still.
“But this time it’s different.” That’s the common response from some quarters to my (repeated) assertion that there’s a substantial history to education technologies – to both the technologies themselves and to the educational purposes for which they’re designed or utilized – that is consistently ignored.
This much is true: augmented reality and virtual reality startups have seen record-setting levels of venture capital in recent years predicated on advancements in the tech (although much of that investment has gone to just a handful of companies, such as Magic Leap). In 2014, Facebook acquired Oculus VR, Google released its Cardboard viewer, and Playstation announced it was working on a VR gaming headset – these have all been interpreted in turn as signs that virtual reality will soon be mainstream.
“Soon.” As the New Media Consortium’s annual reports should serve to remind us, VR has always been “on the horizon.”
In that article – which I’ll refrain from just copy-pasting here – I look at the history of educational uses of stereoscopy, which date back to the Victorian era when a combination of lenses and imagery were used to trick the brain into interpreting two- as three-dimensionality. Many of the products touted as VR today are simply that: stereoscopy with a fancier viewer. (Say, the Android device in Google’s Cardboard Viewer and Expeditions program.)
Virtual reality, at least in its “purest” or strictest sense, requires some very expensive and cumbersome hardware in order to create something more than an “immersive” viewing experience of a 360-degree video. Headsets. Gloves. Sensors. Projectors. Processors. To truly provide a virtual reality, the technology must achieve “sensory immersion in a virtual environment, including a sense of presence,” game developer and VR scholar Brenda Laurel argued in June, listing a series of requisite characteristics almost entirely absent from the multimedia products marketed to schools as VR.
And that marketing, it is worth pointing out, is almost the same as marketing in the early twentieth century urging educators to adopt film as an education technology: “Learn about other cultures.” “Visit faraway lands without leaving the classroom.” “Guided tours of places school buses cannot go.” “Modern pedagogical methods require modern media.” “Pictures speak a universal language.” “This is science.”
(Image credits: Educational Screen, 1924)
So the claims that this is new and revolutionary are dubious. The claims that stereoscopy is VR are dubious. The claims that VR will “reinvent education” are dubious. (According to a report on “The Top 10 Companies Working on Education in Virtual Reality and Augmented Reality,” one of the “top” applications “simulates a lecture hall in virtual reality.”) The claims that VR will expand access to education for everyone are dubious. (Despite headlines claiming, for example, that “Anybody can now buy Microsoft’s $3,000 HoloLens,” not everyone and not every school can afford to do so.) The claims that the technology is finally ready for consumers are dubious. (VR still makes people nauseous.) The claims that the technology will enhance empathy are really dubious, particularly in the light of Oculus Rift founder Palmer Luckey’s financial support for an unofficial pro-Donald Trump group dedicated to “shitposting” and spreading hateful memes about Hillary Clinton. (I’ll have more to say on Oculus Rift and Facebook in the next post in this series – one that will address the politics of ed-tech – as well as in the final post – one that will talk about discrimination by design.)
But the breathlessness about VR persists, much as it has persisted for decades: it will change education forever. It is the next big thing. It’s the future of school (and it’s the future of Facebook). It is a disruptive innovation.
It is all wishful thinking.
Grief and Loss and Education Technology
Perhaps it’s time to ask why – why this is the ritual and the story that education continues to turn to? It has, after all, for at least one hundred years: the promise of teaching machines. What is the loss that we are suffering? What are we grieving? Why are we in this fog of educational make-believe? Why are we so wrapped up in the magical thinking and wishful thinking of education technology? What do we hope the practices of ed-tech will deliver, will relieve? What are we hoping to preserve? What are we hoping to absolve? What might we be afraid to admit has died? Why is wishful thinking, in and through and with education technology, a balm for so many of us?
At what point should we just let go...
Financial data on the major corporations and investors involved in this and all the trends I cover in this series can be found on funding.hackeducation.com. Icon credits: The Noun Project.