Sorry for the light posting here, but I’m immersed in writing my book – Teaching Machines – which is due out late 2013/early 2014 (I hope).

However, the news of this past week has been fairly distracting from that project: revelations about the US government’s massive spying programs that include the monitoring of all our telephony metadata, as well as our usage of many popular technology sites. Verizon. Google. YouTube. Apple. Microsoft. Skype. Yahoo. Facebook.

My worries here aren’t simply about the sanctity of the US Constitution (although, god yes, there’s that). Nor are my education-related concerns that schools have been outsourcing many of their IT functions – hardware and software – to these very companies (although, god yes, there’s that too).

My book examines the history of education technologies and our long-running drive to automate teaching and learning. Pressey’s teaching machines of the 1920s. Skinner’s teaching machines of the 1950s. Radio, television, and YouTube broadcasts of lectures and lessons. Intelligent tutoring systems. Khan Academy and its millions of lessons delivered. Adaptive learning tools. MOOCs. Massive student data collection. Artificial intelligence.

It’s the latter few that were the inspiration for my book – an idea sparked when I sat in one of Google’s self-driving cars with its creator, Sebastian Thrun, now a MOOC startup founder. And these also have me thinking about the relationship between Boundless Informant, the government surveillance program, and the boundless informants/information we’re collecting and developing and analyzing in education.

No doubt, it’s the data collection proposed by the Gates Foundation-funded inBloom that seems to be getting the most scrutiny lately – the non-profit’s plans to create a data storage and analysis infrastructure to, in its words, offer a “more complete picture of student learning and [make] it easier to find learning materials that match each student’s learning needs.” Many people are concerned about the organization’s plans to collect and centralize an unprecedented amount of data about public school children. Baptismal records. Disruptive behavior and disciplinary consequences. Immigration status. Parents’ marital status. Homelessness. Foster care. Learning objectives. Life insurance policies. Pregnancy. Attendance. Grades. Graduation plans. Test scores. And so on.

When it comes to big (education) data collection, inBloom remains mostly a scheme, still in beta and not yet fully implemented even in the states participating in its pilot. But that doesn’t mean that there aren’t other organizations – schools, districts, universities, non-profits, and for-profits – that are amassing huge amounts of student data. Through learning management systems. Student information systems. Digital textbooks. Apps. Websites. Email.

The promise often echoes inBloom’s “vision statement”: more data ostensibly means better products and services, more “personalized,” more “individualized” tools to meet learners’ needs. That “personalization” involves the collection of data – mouse clicks, keyboard strokes, viewing patterns, quiz answers, likes, purchases, course enrollments, reading habits – and the development of algorithms, models, graphs, and profiles to deliver “appropriate” content, assessment, recommendations, tutoring, and so on.

But who decides what is “appropriate”? Who has oversight over these algorithms? What is the profile of a “good student”?

This last question seems particularly relevant in light of the NSA surveillance: who is a “good citizen”? Can the government build a profile of one? Can it identify the patterns “good citizens” make and, with enough data amassed, identify threats or deviance?

I’m not sure that the technologies of surveillance and intelligence analysis really can perform with the precision and omniscience that big data boosters suggest. That doesn’t make the surveillance any less frightening, I should add – in fact, it might make it even more so, as we’re all (wonderfully) aberrant, and as there’s no such thing as having “nothing to hide.” And I should add too: I’m skeptical of the boasts I hear about the potential of data, education analytics, and adaptive learning software. I think our desire to predict and control human behavior and human knowledge exceeds our ability to do so.

What will be the results of these data collection and surveillance initiatives, even if the all-knowing-ness remains mostly fiction? As Timothy Burke writes about surveillance culture in “The Slow Poison of the Covert Imagination,”

The belief that there should be a special advantage, a backdoor, corrodes the ability of both nations and individuals to face the unfolding history of their future with a realistic understanding of their own limits and frailties. There is a fatalism that comes with a belief that we are in everything we might do already known by powers greater than ourselves, known better by invisible and abstract institutions than we know ourselves. But that is the flip side of a grandiose, delusional trust in what that surveillance state will do, a belief that someday we will sit down to the greatest banquet ever of peaceful, democratic omelettes made from a legion of broken eggs. So we neither do the hard work of self-fashioning (what would be the point?) or expect political and social institutions to do their own kind of hard work in fashioning real progress step by painful step, and especially we stop expecting the latter to flow from the former.

Burke writes here about surveillance and political personhood, but I have to wonder more generally about surveillance and selfhood, particularly as it relates to schooling, particularly as it relates to the growing pervasiveness of education technologies. No doubt, the surveillance culture of schools is different from the surveillance culture of the nation-state. The data collection undertaken by education technologies is different from the PRISM program. But I do wonder what happens to personhood here – political personhood, intellectual development, subjectivity, autonomy, and agency – when our institutions pronounce their algorithmic intentions to monitor and track and profile us all.

Image credits: The Noun Project: Eye and Computer

Audrey Watters

