This is the seventh article in my series Top Ed-Tech Trends of 2015

Competing Narratives Surrounding Education Data


“Data” – its collection, analysis, monetization, and usage – might just be one of the most controversial elements of education technology. I say this despite the assuredness of the many in industry and government who continued throughout 2015 to make wild promises about what “data” can do: that the analysis of education data will unlock the “black box” of the human mind and so make learning more efficient; that the analysis of education data will enable better, more informed decision-making.

There has been a growing backlash in recent years against a perceived obsession with data, “seen as a proxy for an obsessive focus on tracking standardized test scores,” The New York Times wrote this spring. “But some school districts, taking a cue from the business world, are fully embracing metrics, recording and analyzing every scrap of information related to school operations. Their goal is to help improve everything from school bus routes and classroom cleanliness to reading comprehension and knowledge of algebraic equations.”

“Anything that can be counted or measured will be,” the newspaper observed.

Measuring and counting anything and everything – this underscores the concerns that many have about education data: that its collection is overzealous and as such a privacy violation; that its storage is a security risk; that its analysis is furthering a culture of algorithmic control and surveillance.

After the disastrous end in 2014 of inBloom, a data infrastructure initiative funded by the Gates Foundation and the Carnegie Corporation, it did appear as though parents and others who were demanding better privacy protections for education data had the upper hand – at least in the court of public opinion. Schools and ed-tech companies found themselves on the defensive about what potentially sensitive student information they were collecting and sharing and why.

So this year, the industry made a concerted effort to push back on the PR front and reframe the issue: “Standardized Tests Suck. But the Fix Is More Data, Not Less,” Wired wrote in March. “Why Opting Out of Student Data Collection Isn’t the Solution,” the Future of Privacy Forum’s Jules Polonetsky and Brenda Leong wrote in March. “Are We Overregulating Student Data Privacy?” Edsurge asked in June. “Privacy Push Must Not Prevent Personalized Learning,” the Clayton Christensen Institute’s Michael Horn wrote in July.

As I will explore in more detail below, probably the main way in which the “data” issue gets reframed by the ed-tech industry is to describe it as a cornerstone of “personalization.”

But “data” also remains a cornerstone of education “accountability” (as I’ve noted in my articles on “the politics of education technology” and “standardized testing”). And when the Gates Foundation announced its new higher ed agenda in March, it included the creation of “a national data infrastructure that enables consistent collection and reporting of key performance metrics for all students in all institutions that are essential for promoting the change needed to reform the higher education system to produce more career-relevant credentials.” inBloom isn’t dead at all, it appears.

Also echoing inBloom, this news from July: “LearnSphere, a new $5 million federally-funded project at Carnegie Mellon University, aims to become ‘the biggest open repository of education data’ in the world, according to the project leader, Ken Koedinger.” Koedinger is the co-founder of Carnegie Learning, a “cognitive tutor” company – that’s 80s-speak for “personalization.” And that’s why I write again and again about “zombie ideas” in ed-tech.

LearnSphere said it would store “No student names, no addresses, no zip codes, no social security numbers… No race, family income or special education designations. ‘The student identifier column, even if yours is already anonymized, we re-anonymize it automatically,’ [Koedinger] added.”

The promise to “anonymize” and “de-identify” was one way in which the industry responded to concerns about privacy and security this year – that is, it didn’t agree to collect less data, but said it would work to obscure the identities of those the data describes. According to a report released by the industry-funded Future of Privacy Forum, “Integrated with other robust privacy and security protections, appropriate de-identification – choosing the best de-identification technique based on a given data disclosure purpose and risk level – provides a pathway for protecting student privacy without compromising data’s value.”
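In practice, “de-identification” of the sort LearnSphere describes often amounts to something as mechanical as replacing the identifier column with salted one-way hashes. Here is a minimal sketch of that technique – the file and column names are hypothetical, and this is an illustration, not LearnSphere’s actual pipeline:

```python
import csv
import hashlib
import secrets

# A random per-dataset salt: without one, anyone with a roster of real
# student IDs could hash them and match against the "anonymized" column.
SALT = secrets.token_hex(16)

def pseudonymize(student_id: str) -> str:
    """Replace an identifier with a salted, one-way hash."""
    return hashlib.sha256((SALT + student_id).encode()).hexdigest()[:12]

with open("clickstream.csv") as src, open("deidentified.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["student_id"] = pseudonymize(row["student_id"])  # only this column changes
        writer.writerow(row)
```

Note what this does not touch: every other column in every row. Which is why the next point matters.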

But completely anonymized data is not really a “thing.” According to research on credit card transactions published early this year by MIT’s Yves-Alexandre de Montjoye, “knowing just four random pieces of information was enough to reidentify 90 percent of the shoppers as unique individuals and to uncover their records.” This means, too, that there’s an important trade-off here, particularly for education researchers: “higher standards for de-identification can lead to lower-value de-identified data.”
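The arithmetic behind de Montjoye’s finding is easy to demonstrate. A minimal sketch with made-up records: for each combination of a few ordinary attributes (“quasi-identifiers”), count what fraction of supposedly de-identified rows that combination renders unique – and therefore re-identifiable by anyone who knows those few facts from some other source.

```python
from collections import Counter
from itertools import combinations

# "De-identified" records: no names, no IDs - just ordinary attributes.
records = [
    {"zip": "97214", "birth_year": 1998, "gender": "F", "major": "Biology"},
    {"zip": "97214", "birth_year": 1998, "gender": "F", "major": "History"},
    {"zip": "97209", "birth_year": 1997, "gender": "M", "major": "Biology"},
    # ...imagine thousands more rows
]

def unique_fraction(rows, fields):
    """Fraction of rows made unique by their combination of `fields`."""
    counts = Counter(tuple(r[f] for f in fields) for r in rows)
    return sum(1 for r in rows
               if counts[tuple(r[f] for f in fields)] == 1) / len(rows)

# How much does each additional attribute narrow people down?
for k in range(1, 5):
    worst = max(combinations(records[0].keys(), k),
                key=lambda fs: unique_fraction(records, fs))
    print(f"{k} attribute(s) {worst}: {unique_fraction(records, worst):.0%} unique")
```

On a real dataset the uniqueness climbs quickly with each added attribute – that is all “four random pieces of information” means.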

There are trade-offs that run throughout the various approaches to education data: more or less privacy, more or less security, more or less transparency, more or less surveillance, more or less agency, and so on.

"Big data sets do not, by virtue of their size, inherently possess answers to interesting questions." - Education researcher Justin Reich on "Rebooting MOOC Research."

Personalization (Whatever That Means)


But it’s not really clear to me that more or less “data” really equals more or less “personalization” – although that’s certainly how the ed-tech industry likes to frame it. Why, if you listen to the way “personalization” gets pitched, it’s almost as if schools had no way of knowing how well students understood a concept or how often they were coming to class or which school lunch kids preferred – and no way to respond to these “personal preferences” – before “learning analytics” and “dashboards” made this possible.

It’s probably worth starting by asking what, exactly, we mean by “personalization.” The technology sector likes to invoke “personalization” too, of course – it’s your friends’ status updates as Facebook’s algorithm deems best to display them; it’s the list of movies available in its current catalog that Netflix recommends you watch next. Other industries have long boasted “personalization”: the anagrammed towel set, your name on a tourist-trap tchotchke (as long as your name is one of the most popular names, of course. It’s pretty rare to find “Audrey” on that crap). But just like that sort of “personalization,” you’re still stuck with industry standards when it comes to towel dimensions. You’re still stuck with the choices of color they make available. Contrary to the Burger King slogan, personalization doesn’t mean you can have it your way. (Oh okay, for an ed-tech-specific definition, see: The Glossary of Education Reform.)

Often when invoked by those in ed-tech, “personalization” is presented as a counter to (the strawman argument) “the factory model of education,” and the notion that, without ed-tech’s interventions and data collection and analysis, classrooms operate on a “one size fits all” model. “Personalization” is often used to explain what various teaching machines offer – both the earliest versions of the 20th century and the more recent VC-funded “adaptive” textbooks and tests: the ability for students to move through a lesson or curriculum at their own pace.

Royal Roads University professor George Veletsianos recently described personalized learning as “the locus of ed-tech debates,” which seems quite right to me. “Personalized learning” – whatever that means – raises all sorts of questions about all those trade-offs I mentioned above: about privacy and security and agency and transparency and surveillance and ethics. And I’d have chosen “personalization” as one of this year’s “Top Ed-Tech Trends” if I didn’t think it was more fitting to head the list of “Best Ed-Tech Buzz and Bullshittery.”

In August, The Chronicle of Higher Education reported that “Researchers at the University of Wisconsin at Madison say they are getting closer to designing a system to deliver the ideal lesson plan for each student, through a process they call ‘machine teaching.’” Inside Higher Ed covered the research a month later: “Imagine if schoolteachers and college professors were immediately able to identify how each of their students learns, what learning style works best for each child and what new topics he or she is struggling with.” Just imagine.

Now imagine if the media approached the claims of ed-tech “personalization” with more scrutiny and skepticism and ceased writing headlines like “This Robot Tutor Will Make Personalizing Education Easy.”

What does research – and there are decades of it – tell us about the effectiveness of computer-assisted instruction, “cognitive tutors,” and the like? (Spoiler alert: it’s actually a mixed bag.)

But you wouldn’t know it from the marketing hype: “Artificially intelligent software is replacing the textbook – and reshaping American education,” Slate argued this fall. “Think of it as a friendly robot-tutor in the sky,” Jose Ferreira, Knewton founder and CEO, said in a press release this summer. “Knewton plucks the perfect bits of content for you from the cloud and assembles them according to the ideal learning strategy for you, as determined by the combined data-power of millions of other students.” “We think of it like a robot tutor in the sky that can semi-read your mind and figure out what your strengths and weaknesses are, down to the percentile,” Ferreira told NPR. Knewton is a “magic pill,” he told Wired.

Dear tech press and politicians and gullible people everywhere: there is no magic pill.

In response to the hype around Knewton (which Edsurge reported was raising a $47 million round of funding this year), MindWires Consulting’s Michael Feldstein was quoted by NPR as saying that Ferreira is “selling snake oil.” He elaborated on his blog:

No responsible educator or parent should adopt a product – even if it is free – from a company whose CEO describes it as a “robot tutor in the sky that can semi-read your mind” and give you content “proven most effective for people like you every single time.” I’m sorry, but this sort of quasi-mystical garbage debases the very notion of education and harms Knewton’s brand in the process.


If you want to sell me a product that helps students to learn, then don’t insult my intelligence. Explain what the damned thing does in clear, concrete, and straightforward language, with real-world examples when possible. I may not be a data scientist, but I’m not an idiot either.

Feldstein and his colleague Phil Hill also took the learning management system Desire2Learn to task for questionable marketing claims. Here’s Feldstein again:

I can’t remember the last time I read one of D2L’s announcements without rolling my eyes. I used to have respect for the company, but now I have to make a conscious effort not to dismiss any of their pronouncements out-of-hand. Not because I think it’s impossible that they might be doing good work, but because they force me to dive into a mountain of horseshit in the hopes of finding a nugget of gold at the bottom. Every. Single. Time.

So here we have two of the most highly funded education technology startups – Knewton, which has raised $147.25 million in venture capital, and D2L, which has raised $165 million – making questionable (at best) claims about what their products can do with the data they collect in order to “personalize” education. But as long as the technology sector builds on those claims and journalists continue to promote the spin and investors continue to fund it…

In May, AltSchool, a “microschool” founded by former Google exec Max Ventilla, announced $100 million in venture capital from a group of investors that Edsurge described as “Silicon Valley’s blue bloods”: Founders Fund, Andreessen Horowitz, Silicon Valley Community Foundation (that’s a fund that Mark Zuckerberg and his wife Priscilla Chan advise), Emerson Collective (that’s the fund run by Steve Jobs’ widow Laurene Powell Jobs), First Round Capital, Learn Capital, John Doerr, Harrison Metal, Jonathan Sackler, Omidyar Network, and Adrian Aoun.

While in previous years it was Khan Academy or MOOCs that were the darlings of a credulous education/technology press, AltSchool was clearly the beloved in 2015, as publication after publication penned veritable love letters to the private school startup. I’ll have more to say about AltSchool below when I look more closely at how startups like this – and those who promote their particular version of “personalization” – are fostering a culture of surveillance.

AltSchool was hardly the only big name boasting that the future of school would draw on tech-heavy “personalization.” In September, Facebook formally unveiled the work it had undertaken with the Summit Public Schools charter chain. It’s called a Personal Learning Platform – linguistically, at least, quite different from a “learning management system,” I suppose. According to Edsurge, “The tool started as a string of Google docs and spreadsheets, but Summit needed help to refine it into a full-scale platform for the seven schools in its network. First it hired a full-time engineer. But to open the tool up for the entire world to use, Summit needed more help. So it called up Facebook.”

Related: in December, Zuckerberg and Chan announced they would donate 99% of their Facebook shares to a new LLC that would invest in, among other things, “personalization” in education – the couple having already invested in the Summit Public Schools system and in AltSchool. I think we can see where this is headed… (If you don’t, know that I’ll cover this more in the final article in this series, “The Business of Ed-Tech.”)

“Personalized education does not mean kids just doing what they want. In fact, quite the opposite.” – AltSchool founder Max Ventilla

(Personalization versus) Privacy


In previous years, when I’ve covered the topic of “data,” I’ve usually also written about how the demands for more data quickly run afoul of students’ and teachers’ privacy. (To recap: “Top Ed-Tech Trends of 2014: Data and Privacy.” “Top Ed-Tech Trends of 2013: Data vs. Privacy.” “Top Ed-Tech Trends of 2012: Education Data and Learning Analytics.” “Top Ed-Tech Trends of 2011: Data (Which Still Means Mostly Standardized Testing).”)

There was quite a bit of coverage of the privacy implications of ed-tech this year – a shout-out to The New York Times’ Natasha Singer, who wrote a number of articles on the topic, and to Stephanie Simon, who left Politico this year after being one of the best education reporters on this beat. A sampling of their work: “Cyber snoops track students’ activity.” “Tools for Tailored Learning May Expose Students’ Personal Details.” “Privacy Pitfalls as Education Apps Spread Haphazardly.” “When a Company Is Put Up for Sale, in Many Cases, Your Personal Data Is, Too.” “Parents Challenge President to Dig Deeper on Ed Tech.”

Kudos should also go to Bill Fitzgerald, who, along with his FunnyMonkey colleague Jeff Graham, joined Common Sense Media this year. Both Common Sense Media and Fitzgerald were fairly tireless in advocating for student privacy – The New York Times called the organization an “advocacy army.” Among the products and services that Fitzgerald analyzed: Facebook and Summit Public Schools’ personalized learning platform; “MySchoolBucks, or Getting Lunch with a Side of Targeted Advertising”; Google Apps for Education Terms of Service (or the lack thereof); Remind’s new Terms of Service; and Clever’s new privacy policy. Thanks in part to Fitzgerald’s urging and support, a number of startups, including Clever, “open sourced” their privacy policies, making them available on GitHub so it would be easy for others to “fork” them. (Here’s a list of those participating in that effort.)

Via Kickboard’s Jennifer Medbery, a good guide on what startups should consider as they think through their privacy policies and security practices. For its part, the US Department of Education also released model Terms of Service guidance “aimed at helping schools and districts protect student privacy while using online educational services and applications.” It’s unfortunate that the department’s “best practice” guidelines suggest that TOS should say schools – not students – own the data, including all IP.

Also unfortunate: as privacy becomes increasingly important for schools and for parents, it then gets used as a marketing claim, as companies promised “kid friendly” products with “private sharing” features and the like.

Once again, Google found itself repeatedly in hot water over its products, its usage of user data, and privacy this year: Consumer groups filed a complaint in April with the FTC over the YouTube Kids app, “claiming it misleads parents and violates rules on ‘unfair and deceptive marketing’ for kids.” In November, the YouTube Kids app faced more complaints, this time over junk food ads. In June, Privacy Online News reported that “Google has been stealth downloading audio listeners onto every computer that runs Chrome, and transmits audio data back to Google. Effectively, this means that Google had taken itself the right to listen to every conversation in every room that runs Chrome somewhere, without any kind of consent from the people eavesdropped on.” In November, Detectify Labs reported that “Popular Google Chrome extensions are constantly tracking you per default, making it very difficult or impossible for you to opt-out. These extensions will receive your complete browsing history, all your cookies, your secret access-tokens used for authentication (i.e., Facebook Connect) and shared links from sites such as Dropbox and Google Drive. The third-party services in use are hiding their tracking by all means possible, combined with terrible privacy policies hidden inside the Chrome Web Store.”

But hey, schools keep on adopting Google Apps for Education, and Google Chromebooks “now make up half of classroom devices sold.”

In December, the EFF filed an FTC complaint against Google, charging that the company “deceptively tracks students’ Internet browsing.” Google disputed the complaint, insisting that it complies with both the law and the Student Privacy Pledge (to which the EFF responded in turn.)

That Student Privacy Pledge was announced late last year, an initiative put forward by the industry groups the Software and Information Industry Association and the Future of Privacy Forum. (Google is a funder of the Future of Privacy Forum, and the latter stepped up to defend Google against the EFF’s complaint.) Among other things, signers of the pledge pinky-swore they would not sell student information and would only use data for “authorized education purposes,” whatever the hell that means. (For what it’s worth, the US Court of Appeals for the Seventh Circuit ruled in November that test takers were not harmed when testing companies sold their personally identifiable information.)

Google was not one of the original signers of the Student Privacy Pledge, but it did add its name in January. There’ve been a lot of questions about whether or not the pledge – with Google’s signature affixed or any of the others – is actually meaningful at all. (Let’s ask the Court of Appeals!)

And that’s a problem, more broadly, not just for privacy pledges but for privacy laws. These are often woefully out-of-date, lagging behind the data-related implications of new technologies, and simply do not protect students.

One of the worst violations of student privacy this year probably isn’t related to education technology, but I’ll note it here nonetheless because it underscores how very little FERPA, the Family Educational Rights and Privacy Act of 1974, actually covers. From Katie Rose Guest Pryal’s story in The Chronicle of Higher Education, “Raped on Campus? Don’t Trust Your College to Do the Right Thing”:

In January, a rape survivor sued the University of Oregon for mishandling her sexual-assault case. Through the campus judicial process, the university found the three male students responsible for gang-raping her (not the technical term). They were kicked off the varsity basketball team and eventually out of school. But there is a lot more to the story, including the ways that the university delayed the investigation of the students long enough so that they could finish up their basketball season.


The story is long, and it might destroy your faith in humanity, even if the university did drop its counterclaim against the survivor last week. In that counterclaim, Oregon had accused her of “creating a very real risk that survivors will wrongly be discouraged from reporting sexual assaults.”


But I want to focus on only one sliver of this case – one ugly, frightening sliver. I guess we can thank the university’s administration for shining some daylight on the legal quirk that I’m about to talk about, because otherwise it might have stayed hidden.


The Oregon administration accessed the rape survivor’s therapy records from its counseling center and handed them over to its general counsel’s office to help them defend against her lawsuit. They were using her own post-rape therapy records against her.


It was a senior staff therapist in the counseling unit who blew the whistle on the administration’s actions. In her public letter, she sounds horrified that the work she thought was protected by medical privilege could be violated in such a fashion.

The university claimed that FERPA gave them a right to access the victim’s mental health records. And while the Department of Education said it might issue new guidance on FERPA after Oregon Senator Ron Wyden and Representative Suzanne Bonamici inquired how the hell this was legal, we saw little change when it came to updating the federal student privacy law, on this or on other matters.

As I chronicled in my post “The Politics of Education Technology,” it’s not that there weren’t legislative proposals. There were plenty. But despite being (or perhaps because it was) a proposal in President Obama’s State of the Union address, privacy bills in Congress went nowhere – at the federal level, at least; states have had more luck.

The European Court of Justice, on the other hand, did push the envelope on some of these legal questions about privacy. It declared in October that EU-US “safe harbor” rules regulating firms’ retention of Europeans’ data in the US were invalid – a ruling that could have implications for US ed-tech companies operating in Europe.

But in the US at least, we end 2015 with that 1974 privacy law still in place. Oh sure, college students did briefly find a way to use it to their benefit, forcing schools to hand over their admissions files and letters of recommendation – something that in turn prompted “elite colleges” to destroy admissions records. So I’m not sure we can declare any of that a victory in privacy or intelligent data usage.

A few notable reports from 2015 on data and privacy: from Education Week: “A Special Report on Student-Data Privacy.” From the National Education Policy Center, a report called “On the Block: Student Data and Privacy in the Digital Age.” From the ACLU of Massachusetts, a report on student data and privacy in K–12.

One question I want us to always keep in mind when we talk about education technology and privacy was asked in May by danah boyd: Which Students Get to Have Privacy?

Wait, I have another question too (this one a headline from The Guardian in April): “Is the online surveillance of black teenagers the new stop-and-frisk?” I’m going to cover social media monitoring, free speech, and social justice in the next article in this series. But I think it’s important to keep in mind that privacy and surveillance are equity issues.

Ed-Tech InfoSec


According to a Campus Technology story in June, “More than one third of all malware events in 2014 happened within the education sector.” And Education Week reported this spring that “Data breaches are costing companies in education up to $300 per compromised record, making it the second most impacted sector – behind only healthcare – for businesses with lost or stolen records globally.” Nonetheless, there was little indication in 2015 that the sector was going to get its act together, and every indication that its obsession with collecting more and more data would make it even more of a target.

It’s an old video now – old in ed-tech startup years, that is. But I like to refer to the comments made by Knewton’s CEO at the Department of Education’s Datapalooza event back in 2012. The video is on YouTube (and was posted to the Department’s website too):

We literally know everything about what you know and how you learn best, everything. Because we have five orders of magnitude more data about you than Google has. We literally have more data about our students than any company has about anybody else about anything, and it’s not even close.

And okay, as I’ve already written above (and elsewhere), I think these claims are dubious at best. But they – quite inadvertently, no doubt – underscore one of the major issues that education technology companies and schools and, hell, the Department of Education itself are still struggling with: the security of all this education data that’s being amassed.

This year, as in years past, there were a number of data breaches at schools: at Cal State (80,000 students affected); in British Columbia (as many as 3.4 million students affected); at Central New Mexico Community College (3,000 students affected); and at UCLA Health (as many as 4.5 million people affected). The University of Connecticut reported a “criminal cyberintrusion” of its engineering college, blaming Chinese hackers. Penn State also said “hackers from China” had infiltrated its computer systems in an attack that had lasted more than two years. Rutgers also said it was under repeated “cyberattacks.” The University of Virginia’s IT system was hit with a “cyberattack.” In March, the University of Chicago admitted it had suffered a breach that affected students and employees and included their Social Security numbers. Minnesota’s Metropolitan State University admitted to a data breach, also involving personnel records and Social Security numbers. Harvard admitted in July that it had suffered a breach, impacting eight colleges and administrations.

And the Department of Education itself – with all its purported guidance for ed-tech companies about Terms of Service and the like – continued to have its own issues with security. It had 91 data breaches in 2015. 91! An employee of the department was also found to have committed identity theft, using information gleaned from students’ loan applications. Awesome leadership.

Some of these security breaches did involve “cyberintrusions,” to be sure, and while blaming “Chinese hackers” seemed to be a ready excuse, many schools’ security problems occurred simply because of human error, were traced to “internal” causes, and/or were carried out by students themselves. The latter typically involved changing grades and attendance records. Many of these students were charged with felonies for “hacking,” even if they’d done nothing more than “shoulder-surf” their teachers’ passwords in order to gain access to their school’s computer systems. With far too much frequency this year, schools were forced to admit that they’d accidentally released confidential records, posted them online, and the like.

State-sponsored sites that promised in 2015 to “keep kids safe” – those in South Korea and in the UK, for example – were also found to have big security holes.

Elsewhere in security breaches and potential security breaches thanks to tech companies (and not schools): a massive breach from the toy-maker VTech (one in which some 4.8 million families and some 6.8 million children could be affected). Pearson VUE experienced a data breach. There were potential flaws with Lenovo devices, with Microsoft’s Windows 10, with Mattel’s Hello Barbie toy, and with Android tablets for kids. Cheat-on-your-spouse site Ashley Madison suffered a data breach, and oh my, there were lots of school-issued email addresses among those leaked. Damn, even Taylor Swift’s website served as an example of poor privacy practices for children.

Oh look: more solid reporting from The New York Times’ Natasha Singer: “Uncovering Security Flaws in Digital Education Products for Schoolchildren.”

Data and Transparency


What data should education technology companies and schools collect? What education-related data should the government (federal, state, and local) collect? How long should this data be stored? What policies and practices should be put in place to ensure the privacy and security of this information? What policies and practices should be put in place to ensure transparency? That is, what data do we want released? What should we know – about schools, about teachers, about students, individually and in aggregate? Whose data is forced to be “transparent”? Teachers? Students? Institutions? Ed-tech companies?

There are things we do want to know about students and about schools. There are things we have the right to know. The rate of campus sexual assaults, the rate of campus suicides, graduation rates, suspension and expulsion rates for Black students, for example.

How can we “check the work” of ed-tech’s zany, often misleading marketing promises? How can we “check the work” of schools’ zany, often misleading marketing promises?

How can we “check the work” that’s performed by algorithms in education? By and large, these are proprietary and hidden from view.

Algorithmic Decision-Making and the Future of Education


This fall, I gave a keynote at NWeLearn on “The Algorithmic Future of Education.” In part, it was a look at the history of “robot tutors” and the claims made about the effectiveness of computer-based instruction. As the title suggests, it was also a look forward at how three trends – automation, algorithms, and austerity – might shape the future of education and ed-tech. Swarthmore’s Timothy Burke also identified “algorithmic culture” as something that drives ed-tech visions like AltSchool’s.

Algorithms drive many “adaptive learning” products, which promise, as I’ve described above, to deliver the “perfect bits of content” for each student. In addition to adaptive testing and adaptive textbooks, algorithms are part of other analytics tools that purport to offer better decision-making for students, teachers, and administrators alike.
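For all the talk of mind-reading, the core of an “adaptive” product is usually a selection rule. Here is a minimal sketch – my own invention for illustration, not any vendor’s actual algorithm – of the basic loop: estimate a student’s ability from right and wrong answers, then serve the item whose difficulty best matches that estimate.

```python
import math

def expected(ability: float, difficulty: float) -> float:
    """Probability of a correct answer under a simple logistic (Rasch-style) model."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def update_ability(ability: float, difficulty: float, correct: bool, k: float = 0.3) -> float:
    """Elo-style update: shift the estimate by how surprising the outcome was."""
    return ability + k * ((1.0 if correct else 0.0) - expected(ability, difficulty))

def next_item(ability: float, item_bank: dict) -> str:
    """Serve the item whose (hypothetical) difficulty is closest to the estimate."""
    return min(item_bank, key=lambda item: abs(item_bank[item] - ability))

# A toy run with an invented three-item bank:
bank = {"q1": -1.0, "q2": 0.0, "q3": 1.2}
ability = 0.0
item = next_item(ability, bank)                              # -> "q2"
ability = update_ability(ability, bank[item], correct=True)  # estimate ticks up
```

Note what is and isn’t in there: no model of a mind, no “learning style,” just a running number and a lookup. That gap between the mechanism and the marketing is worth keeping in view.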

Civitas Learning, which raised $60 million in venture capital this year, “builds tools that analyze different buckets of data collected at colleges and universities to help administrators, faculty and students make better-informed decisions to boost course completion and, by extension, on-time graduation rates.” It’s not the only player in the market. The Washington Post wrote in June about Virginia Commonwealth University which hired the consulting firm EAB to use predictive analytics to identify which students were most likely to drop out of school. Increasingly, this is the sort of thing that learning management system providers sell, using the data collected there to analyze attendance and grades and to send students or staff members “early warnings.”
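The “early warning” systems bundled into learning management systems work on a similar principle. A minimal sketch of the idea – with weights and thresholds I’ve invented for illustration; vendors’ actual models are proprietary, which is part of the problem:

```python
from dataclasses import dataclass

@dataclass
class StudentRecord:
    logins_last_week: int
    assignments_missing: int
    current_grade: float  # on a 0-100 scale

def risk_score(s: StudentRecord) -> float:
    """A toy dropout-risk score: a weighted sum of warning signs."""
    score = 0.0
    if s.logins_last_week < 2:          # disengagement signal
        score += 0.4
    score += min(s.assignments_missing, 5) * 0.08
    if s.current_grade < 70.0:          # struggling academically
        score += 0.3
    return min(score, 1.0)

def early_warnings(roster: dict, threshold: float = 0.5) -> list:
    """Flag student IDs whose score crosses the (arbitrary) threshold."""
    return [sid for sid, rec in roster.items() if risk_score(rec) >= threshold]
```

Every number in that sketch is a judgment call – who counts as disengaged, what grade counts as struggling – which is worth remembering whenever a vendor describes the resulting flags as objective.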

In May, Inside Higher Ed covered a new app built by Dartmouth College researchers that they said “can predict with great precision the grade point averages of students. The app tracks student behaviors associated with higher or lower GPAs. Students don’t need to report their activities, as the app infers what they are doing and can tell when students are studying, partying or sleeping, among other activities.” Gee, no privacy issues there at all.

From an interview between Evan Selinger and Jeffrey Alan Johnson in The Christian Science Monitor published under the headline “With big data invading campus, universities risk unfairly profiling their students”:

Selinger: Is this [profiling] transparent to students? Do they actually know what information the professor sees?


Johnson: No, not unless the professor tells them. I don’t think students are being told about Stoplight at all. I don’t think students are being told about many of the systems in place. To my knowledge, they aren’t told about the basis of the advising system that Austin Peay put in place where they’re recommending courses to students based, in part, on their likelihood of success. They’re as unaware of these things as the general public is about how Facebook determines what users should see.

Related: “How Facebook’s Algorithm Suppresses Content Diversity (Modestly) and How the Newsfeed Rules Your Clicks” by Zeynep Tufekci. “How Companies Turn Your Facebook Activity Into a Credit Score” by Astra Taylor and Jathan Sadowski. “Credit Scores, Life Chances, and Algorithms” by Tressie McMillan Cottom.

Algorithms discriminate.

So when the New York City schools say they’re going to adopt the “data tactics used by NYPD to weed out crime” – part of the city’s “broken windows” policing policy – in order to “fix schools,” we should be concerned. Who gets profiled? Which students are seen as at risk? Which students are deemed a risk? How are students’ choices circumscribed? How are existing educational inequalities encoded into ed-tech’s algorithms?

As Tressie McMillan Cottom writes,

It is the iron cage in binary code. Not only is our social life rationalized in ways even Weber could not have imagined but it is also coded into systems in ways difficult to resist, legislate or exert political power.

Recommended: Frank Pasquale’s 2015 book The Black Box Society.

A Culture of Surveillance


It’s disheartening to recognize how profiling, tracking, and surveilling students with new technologies fit quite neatly into longstanding practices of the education system. Disheartening, but true.

As such, it’s hardly a surprise to see new technologies marketed to schools in order to (purportedly) curb “bad behavior” like cutting class or cheating.

Some schools try to prevent cheating “the old fashioned way” – banning watches during exams, for example. The Chronicle of Higher Education wrote about University of Chicago professor and Freakonomics author Steven Levitt’s idea to prevent cheating by assigning seats algorithmically, profiling students “deemed most suspicious.” The People’s Daily Online wrote about the use of drones in China’s Luoyang City to monitor the gaokao exam.

Cheating happens in face-to-face classes, duh. But moving education online seems to elicit fears of even more of it. “Cheating in Online Classes Is Now Big Business,” The Atlantic reported in November.

And in turn, preventing cheating becomes big business – and becomes a potential privacy violation and a security risk as well.

In April, The New York Times’ Natasha Singer covered a growing controversy at Rutgers over Proctortrack, an “anti-cheating” online testing platform.

“You have to put your face up to it and you put your knuckles up to it,” Ms. Chao said recently, explaining how the program uses webcams to scan students’ features and verify their identities before the test.


Once her exam started, Ms. Chao said, a red warning band appeared on the computer screen indicating that Proctortrack was monitoring her computer and recording video of her. To constantly remind her that she was being watched, the program also showed a live image of her in miniature on her screen.


…Proctortrack uses algorithms to detect unusual student behavior – like talking to someone off-screen – that could constitute cheating. Then it categorizes each student as having high or low “integrity.”

The university actually had no written contract with Verificient Technologies, the maker of Proctortrack, until seven months into a supposed one-year agreement. When Rutgers finally did receive the contract, among its provisions was the erasure of all student data 90 days after each exam. Gawker wrote in September: “Students Wonder When Creepy-As-Hell App That Watches Them During Exams Plans on Deleting Their Data.” And the answer seemed to be: after the media started covering the problem.

Others in the online assessment monitoring business: Software Secure’s RPNow (which edX and ASU said they would use for their newly announced Global Freshman Academy); Proctorio; and ProctorU (one of the signers of the Student Privacy Pledge, LOL).

Another type of technology that schools increasingly turn to, in order to prevent cheating and in order to keep schools “safe,” is social media monitoring. As I noted above, I’m going to cover this in the next article in this series, particularly as it relates to free speech issues. But I’d be remiss if I didn’t mention the dust-up this spring surrounding Pearson’s use of monitoring software during the PARCC exams.

In March, former Newark Star-Ledger reporter Bob Braun posted a photo of an email sent by a school superintendent, revealing that Pearson was actively monitoring students’ social media. Cue: panic and mayhem. (I wrote about this at length here on Hack Education.) In his original story, Braun revealed the full details of this (female) superintendent’s name and contact information (phone number and email address). Later that week, in making very spurious connections between a different NJDOE official, Pearson, and the open source database company MongoDB, Braun “doxxed” that (female) NJDOE official. Ah, the irony of using “doxxing” to defend student privacy.

Or maybe that’s just the culture we have now: one of constant surveillance and sousveillance. Or as Siva Vaidhyanathan observed, it’s “The Rise of the Cryptopticon”:

Unlike Bentham’s prisoners, we do not know all the ways in which we are being watched or profiled – we simply know that we are. And we do not regulate our behavior under the gaze of surveillance. Instead, we seem not to care. The workings of the Cryptopticon are cryptic, hidden, scrambled, and mysterious. One can never be sure who is watching whom and for what purpose. Surveillance is so pervasive, and much of it seemingly so benign (“for your safety and security”), that it is almost impossible for the object of surveillance to assess how he or she is manipulated or threatened by powerful institutions gathering and using the record of surveillance. The threat is not that expression and experimentation will be quashed or controlled, as they supposedly would have been under the Panopticon. The threat is that subjects will become so inured to and comfortable with the networked status quo that they will gladly sort themselves into “niches” that will enable more effective profiling and behavioral prediction.


The Cryptopticon, not surprisingly, is intimately linked to Big Data. And the dynamic relationship between the two has profound effects on the workings of commerce, the state, and society more generally.

The Cryptopticon has profound effects on the workings of education.

Surveillance technologies are becoming ubiquitous at schools and at home, although they’re frequently marketed as “smart” or “connected” – which sounds a lot friendlier than “Cryptopticon,” I reckon. It’s not Google Glass (for now at least), but Google gave Carnegie Mellon $500,000 to install sensors all over the campus – “temperature sensors, cameras, microphones, humidity sensors, vibration sensors, and more in order to provide people with information about the physical world around them. Students could determine whether their professors were in their offices, or see what friends were available for lunch.” A “smart campus.” Edsurge offered “A Peek at a ‘Smart’ Classroom Powered by the Internet of Things” in August, highlighting researchers at the University of Belgrade who “used sensors to measure different aspects of the classroom environment – including temperature, humidity and carbon dioxide levels – and attempted to link these factors to student focus.”

Iris-scanning and other biometric technologies, as well as audio monitoring technologies, were touted for school buses, which are pretty much becoming roving surveillance systems these days.

Districts are adopting body cams for principals and school police officers. The Atlantic reported in July on one district that “spent about $1,100 to purchase 13 cameras at about $85 each. They record with a date and time stamp, can be clipped onto ties or lanyards, and can be turned on and off as needed. For now, they won’t be used to record all interactions with adults.” (That we see a push for body cams on police and body cams on school employees should give us pause about the function of school.)

Video cameras are appearing in more and more classrooms to monitor teachers and students as well. But not just plain ol’ video cameras, oh no: St. Mary’s High School, a Catholic school in St. Louis, said in March it was “upping the game when it comes to school security, becoming one of the first in the nation to install facial recognition cameras.” San Diego Unified School District revealed it was using facial recognition software too. “It’ll Be A Lot Harder To Cut Class With This Classroom Facial-Recognition App,” Fast Company wrote in February in a story that raised zero questions about privacy or ethics but noted the app is “unobtrusive.”

In June, Newark Memorial High School in California became the first high school in the US to install “gunshot-sensing technology” which places microphones and sensors in hallways and classrooms. The $15,000 system isn’t designed to record conversations. Mmhmm. Right.

“Meet the Classroom of the Future.” It’s absolutely terrifying.

Or, I think it’s terrifying. Perhaps surveillance has just become normalized.

The Pew Research Center released data from several surveys this year pertaining to data and surveillance. From March: “17% of Americans said they are ‘very concerned’ about government surveillance of Americans’ data and electronic communication; 35% say they are ‘somewhat concerned’; 33% say they are ‘not very concerned’ and 13% say they are ‘not at all’ concerned about the surveillance. …When asked about more specific points of concern over their own communications and online activities, respondents expressed somewhat lower levels of concern about electronic surveillance in various parts of their digital lives.” (emphasis added)

From a survey in May: “93% of adults say that being in control of who can get information about them is important.”

Even though Pew found that adults value their own privacy, it’s not clear that they believe those same protections should be extended to their children. An op-ed by one parent put it this way: “When it comes to my teens’ online activity, safety trumps privacy every time.” The Wall Street Journal published a first-person account of a father who digitally surveilled his daughter at college. Even more disturbing: “The Technology Transhumanists Want in Their Kids.”

This fall, the Future of Privacy Forum released the results of a survey it conducted with parents. The group said the survey showed that “Parents strongly support schools’ collection and use of information they feel appropriately contributes directly to educational purposes.” But it seems more complicated than that. 87% of parents also said they were concerned about student data security. Just 42% said they were comfortable with ed-tech companies having access to students’ education records. Less than a quarter of parents knew about the laws that protect student privacy and restrict access to student data.

Hey parents. Here’s a good place to start thinking about privacy and security: “Keep Your Kid’s Info Safe: Opt Them Out of School Directory Information Sharing.”

“This Will Go Down On Your Permanent Record”


“This will go down on your permanent record.” That’s long been the threat from schools, hasn’t it. And now, thanks to all manner of new technologies, the threat seems compounded. The disciplinary data trail is more robust and perhaps even more persistent. There are more possible infractions, more ways to be caught. “The terror of the personal, digital archive is not that it reveals some awful act from the past, some old self that no longer stands for us,” writes Navneet Alang, “but that it reminds us that who we are is in fact a repetition, a cycle, a circular relation of multiple selves to multiple injuries.” This terror seems particularly inescapable for those growing up online, those incessantly surveilled by the technologies and systems that purport to be guiding them into adulthood. What space will we leave for growth, for vulnerability, for trust?

In education and in society more broadly, we’ve come to confuse surveillance with care. We confuse surveillance with self-knowledge. As Rob Horning wrote earlier this year:

I don’t think self-knowledge can be reduced to matters of data possession and retention; it can’t be represented as a substance that someone can have more or less of. Self-knowledge is not a matter of having the most thorough archive of your deeds and the intentions behind them. It is not a quality of memories, or an amount of data. It is not a terrain to which you are entitled to own the most detailed map. Self-knowledge is not a matter of reading your own permanent record.

And yet the ed-tech industry – its engineers, its investors, its entrepreneurs, its proponents – wants us to believe that this is the case: that more data will provide efficiency, effectiveness.

There was perhaps no better example of this in 2015 than AltSchool – the investment, the uncritical adulation, the compulsion for data, the philosophy (one that Mike Caulfield has called “Dewey plus surveillance.”) AltSchool highlights how easily we confuse “technological progress” with “progressive education” – and end up with neither. Its founder, Max Ventilla, does not have a teaching background or a research background, but rather a deep faith in the power of data. An NPR story from May described an AltSchool classroom as “outfitted with fisheye-lens cameras, for a 360-degree view at all times, and a sound recorder. And the company is prototyping wearable devices for students with a radio frequency ID tag that can track their movements. Why all the intensive surveillance? Safety and health are two applications, but right now, Ventilla says, it’s mostly R&D. One day, all these data could be continuously analyzed to improve teaching techniques or assess student mastery.” One day…

Surveillance and speculation – the history of the future of education technology.

I'll close with an excerpt from Pinboard founder Maciej Ceglowski’s keynote “Haunted By Data”:

There’s a little bit of a con going on here. On the data side, they tell you to collect all the data you can, because they have magic algorithms to help you make sense of it.


On the algorithms side, where I live, they tell us not to worry too much about our models, because they have magical data. We can train on it without caring how the process works.


The data collectors put their faith in the algorithms, and the programmers put their faith in the data.


At no point in this process is there any understanding, or wisdom. There’s not even domain knowledge. Data science is the universal answer, no matter the question.

Audrey Watters

