This is part eight of my annual review of the year in ed-tech.
Pokémon Go, a free augmented reality game developed by Niantic (a company spun out of Google in 2015), became the most popular mobile game in US history this year. The game launched in July and, despite mixed reviews, was downloaded some 10 million times in its first week. Pokémon Go generated more than $160 million by the end of July, hitting $600 million in revenue within its first 90 days on the market – the fastest mobile game to do so.
Always eager to associate itself with the latest tech craze, education technology embraced Pokémon Go with great gusto: “Why Pokemon Go shows the future of learning gamification.” “The Educational Potential of Pokémon Go.” “Why Pokémon Go marks a new step forward in education.” “14 Reasons Why Pokémon Go Is The Future Of Learning.”
But these wild proclamations – wishful thinking, no doubt – tended to overlook the realities of mobile technology and education. Many students cannot afford the heavy data usage required by geofencing apps, for starters. And Pokémon Go’s PokéStops and Gyms were rarely found in low-income neighborhoods and in areas where the population was not overwhelmingly white. (I’ll look more closely at discrimination by design – in software and in algorithms – in the final article in this series.) Furthermore, at launch, Pokémon Go demanded users sign over a great deal of personal data and grant permissions to the app that, for a time, gave it access to a user’s entire Google account.
Handing over data, often quite thoughtlessly, has become par for the course – in education and in society more generally. Although privacy experts have urged parents and educators to be more proactive about protecting children’s data and privacy – while using Pokémon Go and other data-hungry apps – we now live in a culture of surveillance, where data collection and data extraction have become normalized.
Surveillance starts early. “Quantified babies” and “Surveillance Barbie” and such. Rather than actively opting children out of a world of tracking and marketing, parents increasingly opt them in – almost always without their children’s consent.
Many of us have become quite lackadaisical about the data we share. “It doesn’t matter.” “I have nothing to hide.” Schools, operating under longstanding mandates to track and to measure as much as possible, have been more than willing to expand the amount and types of data they’re collecting on students. Fears of FERPA are frequently stoked to stymie certain projects – perhaps unnecessarily in some cases – but schools have not always been cautious about who has access to student data.
Has our confidence that we or our students have “nothing to hide” changed now under President-Elect Trump?
Under a Trump administration: I very much want ed-tech companies and schools to reconsider collecting so much data about students
— Audrey Watters (@audreywatters) November 10, 2016
What will happen to the massive amounts of data that education has collected? To data about students’ immigration status? To data about students’ sexual identity? To data about students’ intellectual, academic, and political preferences? What sorts of insights can we glean or will we glean from this data anyway? Who, if anyone, might these insights benefit? Is all that data secure? Is students’ privacy being protected? Are students’ privacy and personal integrity being considered at all?
Education’s Obsession with Data
Why collect data? “To keep schools accountable,” in part.
“Accountability rhetoric echoes a broader turn toward data-driven decision-making and resource allocation across sectors. As a tool of power, accountability processes shift authority and control to policymakers, bureaucrats, and test makers over professional educators,” Data & Society’s Claire Fontaine wrote in a working paper released this summer on the use of data in education.
Today, measurements of school performance have become so commonplace that they are an assumed part of education debates. As new forms of data are easier to collect and analyze, drawing on and interacting with information to measure the impact of programs and to inform decision-making and policy has emerged as a key strategy to foster improvement in public schools.
…The accountability movement reflects the application of free market economics to public education, a legacy of the Chicago School of Economics in the post-World War II era. As a set of policies, accountability was instantiated in the Elementary and Secondary Education Act (ESEA) of 1965, reauthorized as the No Child Left Behind Act (NCLB) of 2002, and reinforced by the Every Student Succeeds Act (ESSA) of 2015. ESSA gives more autonomy and flexibility to states than they had under NCLB through competency-based assessments, which could drive the development of personalized learning technologies. ESSA’s accountability processes also require new types of data collection and disaggregation, including of non-academic indicators of school quality. Significantly, ESSA mandates the collection and reporting of per pupil expenditure data at the school level. Teaching and learning are increasingly being measured and quantified to enable analysis of the relationship between inputs (e.g., funding) and outputs (e.g., student performance) with the goal of maximizing economic growth and productivity and increasing human capital.
The accountability movement is built on a long history of standardized testing and data collection that privileges quantification and statistical analysis as ways of knowing. An underlying assumption is that learning can be measured and is an effect of instruction. This is an empiricist perspective descended from John Locke and the doctrine that knowledge derives primarily from experience. Accountability in education also holds that schools are fundamentally responsible for student performance, as opposed to families, neighborhoods, communities, or society at large. This premise lacks a solid evidentiary basis, as research shows that student performance is more closely linked to socioeconomic status. Finally, efforts to achieve accountability presume that market-based solutions can effectively protect the interests of society’s most vulnerable.
While the collection of data – enrollment data, attendance data, graduation data, disciplinary data, standardized test data – has been mandated for some time now, new digital technologies bring with them an expanded capacity for data-mining as well as an underlying ideology that more data necessarily means better measurement, better “outcomes” and better “solutions.”
There is, however, little evidence that collecting more data improves teaching or learning. Nevertheless, education technology continues to insist that its software and algorithms can identify students who are struggling – academically or emotionally.
With all this data collection and analysis come major issues surrounding ethics and privacy and substantial risks surrounding information security. What purports to work in the service of helping students may actually function to shame, demotivate, alienate, and endanger them. It certainly undermines trust. And too often, students do not know the extent to which their data is being mined and their activities surveilled.
Students as Cheats
Much of the surveillance of students – in the classroom, on their computers, on the playground, online, offline, at school, at home – is based on a suspicion that they’re cheating.
(The scrutiny always falls disproportionately on low-income students, students of color, and foreign students. “Foreign Students Seen Cheating More Than Domestic Ones,” The Wall Street Journal reported. “Seen” cheating.)
Students cheat, we’re repeatedly told. They use cheat sheets and Google Search during exams. They cheat with smartwatches. They cheat and they lie. They lie about their skills and lie about their degrees. (That’s part of the rationale for adopting the blockchain for transcripts.) In response to all this deceit, there’s a “new cheating economy,” The Chronicle of Higher Education reported in August. Implied: students are lazy; students are conniving; students are academically unprepared.
As such, there’s a growing market for ed-tech products that assure schools they can curb cheating, particularly during testing. ProctorFree, for example, raised an undisclosed amount of funding in January for “an automated service that uses biometric and machine learning technologies to eliminate the need for human oversight in online exams” which doesn’t sound horrifying at all.
Details about the investments in test monitoring software can be found at funding.hackeducation.com.
No longer is it simply a matter of buying software specifically designed or marketed to stop cheating or plagiarism. The metadata from students’ everyday activities – in online classes and in learning management systems and in personal tech usage – can now be used to investigate those suspected of cheating, as it was, for example, in an NCAA investigation this spring into the former head coach of the men’s basketball team at the University of Southern Mississippi. IP addresses, online aliases, and edits were all analyzed to confirm that graduate assistants were completing athletes’ school work for them.
Sports-related cheating aside, arguably the biggest cheating scandals this year involved two of the most well-known standardized tests – the ACT and the SAT. That’s probably no surprise, considering the significance placed on these exams in college admissions (and in some cases, high school graduation). As Campbell’s Law tells us, “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
Reuters has led the way with investigative reporting on this. From March: “As SAT was hit by security breaches, College Board went ahead with tests that had leaked.” “How Asian test-prep companies swiftly exposed the brand-new SAT.” From April: “U.S. students given SATs that were online before exam.” From July: “Students and teachers detail pervasive cheating in a program owned by test giant ACT.” From August: “‘Massive’ breach exposes hundreds of questions for upcoming SAT exams.” “ACT shakes up security unit, plans audit after cheating reports.” “FBI raids home of ex-College Board official in probe of SAT leak.” From September: “College Board says upcoming SATs won’t contain questions exposed in breach.”
The College Board, which owns the SAT, had a rather anemic response to news of the breaches: “Throughout the 90-year history of the SAT, the College Board has faced the issue of cheating. The sad truth is that cheating is as old as testing. But the internet age brings new challenges that are in no way unique to the College Board. Every testing organization – including all of the major undergraduate and graduate program admissions tests – reuse some test questions or forms. Targeted reuse is one way testing organizations ensure quality and comparability of tests over time.” The College Board urged the media to pay attention to students who “work hard” and “play by the rules” instead of the ongoing security issues.
But these breaches and vulnerabilities – whether part of cheating rings or other sorts of scams – are too significant and too frequent to simply brush off as an “old” problem.
How Secure is Education Data?
A partial list of security breaches at schools this year: Social Security numbers, names and birth dates of some 78,000 students and staff at Katy ISD in Texas. W-2s of about 1400 employees and the direct deposit banking information of another 40 at the University of Virginia. The names, Social Security numbers, and dates of birth of about 1000 students in the Frederick County Public Schools in Virginia. The student identification number, race, age, school, and disabilities of about 12,000 students in the DC school system. The Social Security numbers of about 4100 University of Mary Washington students and staff. Personal information, including Social Security numbers, of about 63,000 student athletes, former student athletes, current employees, and former employees at the University of Central Florida. The personal data of about 2000 students in the Lewis-Palmer School District 38 in Colorado. Social Security and bank account numbers of some 80,000 students, alumni, employees and former employees at the University of California, Berkeley. The personal data and banking information of 51 El Paso Independent School District employees. The banking information of 13 employees of Illinois State University. The personal information of “thousands” of current and former employees of Holley Central School District in New York. The personal information, including Social Security numbers, of about 450 staff and students at Michigan State University.
The latter was discovered when an email was sent to the school attempting to extort money. This is becoming an increasingly common tactic among cyber-criminals: ransomware, malicious software that holds a computer system and all its data “hostage” until a fee is paid. That’s what happened to the Horry County Schools in South Carolina earlier this year; the district had to pay $10,000 to regain access to its system. The University of Calgary paid the $20,000 demanded after an attack this spring on its email system. Bournemouth University, which funnily enough actually boasts a cybersecurity center, was hit by ransomware twenty-one times in the past year.
There were also cyber attacks against testing systems and several examples of data being leaked due to negligence and “human error” – and as a result of all of this, plenty of lawsuits.
Clearly schools and school systems are unprepared for cybercrime and unable to protect the basic personal information kept in student and employee information systems. (This will be an obstacle to ed-tech, Edsurge laments.)
The US Department of Education itself is also incredibly vulnerable to cyber threats. As The Hill reported in January, “House Oversight Committee Chairman Jason Chaffetz (R-Utah) is warning that a hack on the Department of Education would dwarf last year’s massive breach at the Office of Personnel Management. ‘Almost half of America's records are sitting at the Department of Education,’ Chaffetz said. … ‘I think ultimately that’s going to be the largest data breach that we’ve ever seen in the history of our nation.’” In February, the Department of Education’s chief information officer Danny Harris was “hammered” at a committee hearing for the ongoing vulnerabilities in the agency’s information systems. Harris resigned shortly afterwards.
2016 underscored that security flaws aren’t solely a problem of educational institutions and government agencies. In September, the tech giant Yahoo confirmed that some 500 million users’ accounts had been compromised in the largest breach ever. (You know what’s cooler than a million hacked Yahoo accounts?) Education technologies and digital technologies that cater to students, teens, and children experienced several breaches and vulnerabilities of their own: Blackboard, Code.org, i-Dressup, OurTeenNetwork, OKCupid, and UKnowKids.com, for example.
Although it didn’t frame the move as an explicit response to the compromise it experienced earlier in the year, Code.org announced in July that it was changing how students log into its Code Studio, that it was deleting some 10 million student email addresses from its system, and that it would not collect that information any longer. “The data we don’t store cannot be stolen from us,” founder Hadi Partovi wrote in a blog post.
That’s an incredibly important point – “the data we don’t store cannot be stolen from us.” And yet education technology companies, in conjunction with schools, collect so much data – much of it utterly unnecessary.
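To make that point concrete, here is a minimal sketch of data minimization – entirely hypothetical, and not Code.org’s actual implementation – in which the email address is never persisted at all. If an application only needs to recognize a returning student, an opaque keyed hash can stand in for the address itself:

```python
# A hypothetical sketch of data minimization: the email never touches disk.
# Field names and the keyed-hash scheme are illustrative assumptions only.
import hashlib
import hmac

SERVER_SECRET = b"keep-me-out-of-the-database"  # stored separately, e.g. in a vault

def student_record(form: dict) -> dict:
    """Persist only what the feature actually needs."""
    return {
        "display_name": form["display_name"],
        # No email column at all: a breach cannot leak what was never stored.
        # A keyed hash still lets the app recognize a returning student
        # without retaining the address itself.
        "login_token": hmac.new(
            SERVER_SECRET, form["email"].lower().encode(), hashlib.sha256
        ).hexdigest(),
    }

print(student_record({"display_name": "Sam", "email": "sam@example.com"}))
```

The trade-off is real – you cannot email students you cannot identify – but that is precisely the point: every field a company retains should have to justify its risk.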
Just how secure is that data? And even if it is “secure,” are terrible privacy practices still involved? Are the Terms of Service onerous and the data-sharing practices sketchy – absolving a company of all responsibility, for example, for leaking the personal data of some 6.3 million children? Are log-ins encrypted? (A recent study by Common Sense Media found that some 25% of the ed-tech products it evaluated do not support encryption at all, and only a paltry few education organizations and companies offer two-factor authentication to protect log-in credentials.)
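For what it’s worth, “encrypted log-ins” in practice usually means storing passwords only as salted, deliberately slow hashes – never as plaintext. Here’s a minimal sketch using only Python’s standard library; the function names are illustrative, not any particular vendor’s API:

```python
# A minimal sketch of salted password hashing; parameters are illustrative.
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow, to make brute-forcing expensive

def hash_password(password):
    salt = os.urandom(16)  # unique per user, so identical passwords differ
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both; neither reveals the password

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("password123", salt, digest)
```

A product that stores or transmits the password itself – or that doesn’t support encryption at all, as Common Sense Media found in a quarter of the products it evaluated – fails this bar before two-factor authentication even enters the conversation.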
In order to help schools and parents make “informed decisions” about privacy and security, Common Sense Media released an “Information Security Primer for Evaluating Educational Software” this year. With the help of some forty school districts, the organization has also launched a multi-year initiative to evaluate educational software, rating these in terms of privacy, security, and safety.
According to one market research firm, security and cybersecurity are among the products that K–12 schools have expressed the most interest in buying. To clarify, however, what many schools count as “safety” procurements: “gun detectors when you walk into a school, security guards or patrol officers who hang out at schools, or the equipment [schools] buy like security cameras.” Yes, gun detectors are ed-tech.
This interest in cybersecurity has prompted a handful of education technology companies to take steps to improve their own security practices. Instructure, for example, undertook its fifth annual security audit this year to identify possible vulnerabilities in Canvas, its LMS. Cybersecurity for schools will also likely become a product in its own right – poised to be a boon for those companies that charge other companies to verify their privacy and security bona fides in exchange for a “seal of approval.”
Unfortunately, too often education technology companies and educators are willing to trade security and privacy for the sake of convenience in the classroom. One of the best examples of this might be Clever’s release of a QR code badge that students can use to automatically enter their username and password into school software. QR codes are, of course, notoriously weak security and a well-known vector for executing malicious code. Education infosec expert Jessy Irwin examined Clever’s product launch, arguing that “what they’re doing with this new product is not a security upgrade or a major privacy coup for schools.” Indeed, using these badges creates a host of other security problems: what happens when students swap badges, for instance? What sorts of lessons about insecure digital practices are students being taught? Will this be effective in any way in keeping students’ data safe? Or is this simply about classroom management and efficiency – making it as simple as possible to log students into their machines? “There is no basis upon which schools can evaluate Clever’s claims or trust Clever’s product with the data of the young children,” Irwin concludes, “especially given the unsubstantiated claims made in its marketing materials and the insecure practices it has modeled for educators everywhere.”
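The core problem is simple to demonstrate: a badge encoding a static credential is just that credential printed in machine-readable form, so anyone who photographs it can replay it indefinitely. Here’s a hypothetical sketch – for illustration only, not Clever’s actual badge format – contrasting a static badge with a short-lived signed token:

```python
# A hypothetical sketch contrasting a static QR credential with a
# short-lived signed token. The scheme and key names are assumptions.
import base64
import hashlib
import hmac
import time

SCHOOL_KEY = b"per-school signing key"  # would live server-side only

# A static badge is just the credential in machine-readable form.
static_badge = base64.b64encode(b"student42:hunter2")
# Anyone who scans or photographs it can log in today or next semester:
print(base64.b64decode(static_badge))  # b'student42:hunter2'

def issue_token(student_id, ttl=300):
    """A time-limited alternative: expires in five minutes, can't be forged."""
    expires = int(time.time()) + ttl
    payload = f"{student_id}|{expires}"
    sig = hmac.new(SCHOOL_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def check_token(token):
    student_id, expires, sig = token.rsplit("|", 2)
    expected = hmac.new(
        SCHOOL_KEY, f"{student_id}|{expires}".encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, sig) and time.time() < int(expires)

print(check_token(issue_token("student42")))  # True now, False in five minutes
```

Even a scheme like this only narrows the replay window; it does not eliminate the underlying problem of printing credentials on a physical badge that children carry, swap, and lose.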
Privacy Pushback
Federal and state laws do dictate what kinds of data must be collected about students and what kinds of privacy protections must be in place. The best known of these are the Family Educational Rights and Privacy Act (FERPA) and the Children’s Online Privacy Protection Act (COPPA). First passed in 1974 and 1998 respectively, these laws are often criticized for not keeping pace with new digital technologies.
High-profile lawmakers questioned education companies about their privacy practices this year – Senator Al Franken, for example, demanded Google clarify its collection of student data, and California Attorney General Kamala Harris urged ed-tech companies to do more to protect student privacy. According to the Data Quality Campaign, “Student data privacy was a priority issue in state legislatures in 2016.” The DQC reports that 49 states and the District of Columbia have introduced some 410 bills addressing student data privacy since 2013, with 73 of these bills actually becoming law. Eleven states introduced bills modeled on language from the ACLU. Among the provisions in new legislation: Connecticut requires local education agencies to electronically notify parents every time districts sign a new contract, while Colorado requires districts to develop policies for dealing with misuse of data or breaches by contractors.
There were a number of privacy-related lawsuits this year too: In January, students and alumni from the University of California, Berkeley filed a lawsuit against Google, claiming that the company “misled Berkeley and other institutions into believing that school email accounts would not be subject to scanning for commercial purposes.” University of Zurich professor Paul-Olivier Dehaye continued his case against Coursera, questioning the authority that the MOOC startup had to transfer European student data. And in August, following widespread outcry, a federal judge backtracked on an earlier ruling that the California Department of Education would have to release a database containing the records of some 10 million students to plaintiffs suing the state over special education provisions.
Do these laws and lawsuits reflect a shift in the public’s attitude towards privacy?
“Nearly one in two Internet users say privacy and security concerns have now stopped them from doing basic things online – such as posting to social networks, expressing opinions in forums or even buying things from websites,” according to a survey administered by the Department of Commerce and released in May. 86% of Internet users told the Pew Research Center in a survey released this fall that they “have taken steps online to remove or mask their digital footprints.” In that same survey, 74% said it was “very important” to them that they be in control of who can get information about them, and 65% said it was “very important” to them to control what information is collected about them. And yet almost half of those surveyed admitted they weren’t certain what kind of data was being collected about them or how it was being used.
Another survey by Pew earlier in the year also found that the majority of Americans were willing to share their personal data or permit surveillance if they felt they benefited from it. This is the justification for surveillance at school: that the collection of personal data will be beneficial – a decision that schools make for students, often without their knowledge or consent.
Education Technology and School Surveillance
Imagine classrooms outfitted with cameras that run constantly, capturing each child’s every facial expression, fidget, and social interaction, every day, all year long. Then imagine on the ceilings of those rooms infrared cameras, documenting the objects that every student touches throughout the day, and microphones, recording every word that each person utters. Picture now the children themselves wearing Fitbit-like devices that track everything from their heart rates to their time between meals.
Imagine.
“Eye-trackers that detect when your mind is wandering. Clothes that let you ‘feel’ what it’s like to be in someone else’s body. Sensors that connect your heart rate to how engaged you are in class. These are the kinds of wearable technologies that could soon impact how we learn,” Edsurge wrote excitedly in November.
Surveilling students, so we’re told by this sort of ed-tech futurist PR, will help instructors “monitor learning.” It will facilitate feedback. It will improve student health. It will keep students on track for graduation. It will keep schools safe from violence. It will be able to ascertain which student did what during “group work.” It will identify students who are potential political extremists. It will identify students who are suicidal. It will offer researchers a giant trove of data to study. It will “personalize education.” (More on this in the next article in this series.) Tracking biometrics and keystrokes will make education technology more secure. (Spoiler alert: this is simply not true.)
“Big Brother is coming to universities,” The Guardian pronounced in January, although arguably this culture of surveillance has been a part of education for quite some time. But undoubtedly new digital technologies exacerbate this. The monitoring of students is undertaken to identify “problem behaviors” and in turn to provide a revenue source for companies willing to monetize the data they collect about all sorts of student behaviors. “Enabled by Schools, Students Are Under Constant Surveillance by Marketers,” as the National Education Policy Center cautioned in May.
Under surveillance by marketers. Under surveillance by companies. Under surveillance by schools. Under surveillance by police. Under surveillance by governments. Under surveillance by gadgets. Under surveillance when they use school software. Under surveillance when they use social media. And again, it’s all justified with a narrative about “success” and “safety.”
Some 2016 school surveillance highlights:
The Christian college Oral Roberts University required all its incoming students this year to wear a Fitbit. “It appears as though school staff and instructors will be able to access the fitness tracking information gathered by the students’ devices. ‘The Fitbit trackers will feed into the D2L gradebook, automatically logging aerobics points,’ according to the university’s website,” local news reported. The university promised it wouldn’t track students’ sexual activity – but then again, the school’s honor code already prohibits sex.
The University of Michigan signed a $170 million deal with Nike. The contract included a clause that “could, in the future, allow Nike to harvest personal data from Michigan athletes through the use of wearable technology like heart-rate monitors, GPS trackers and other devices that log myriad biological activities.”
It’s not just students who are being surveilled; it’s faculty and staff too. Faculty at Rutgers expressed concern this fall over the monitoring of their email. The University of California system prompted an outcry when it installed a monitoring system on its network and told IT staff not to inform faculty. Western Sydney University was accused of snooping on faculty emails. Faculty in Minnesota protested when a new rule allowed colleges to inspect their cellphones. Faculty at Our Lady of the Lake University discovered they were being monitored by administrators who added themselves to online course rosters. And it’s not just administrative software that’s monitoring what teachers write and say; students are filming teachers as well. (Some professors apparently like this idea of surveilling their classrooms very much.)
This Will Go Down on Your Permanent Record (But Not Ours)
“UC Davis spent thousands to scrub pepper-spray references from Internet,” The Sacramento Bee reported in April. Thousands as in at least $175,000. The university hired “reputation management companies” to improve search results for the school, so that stories and images of Lt. Pike unleashing pepper spray into the face of peaceful protesters in 2011 weren’t the first thing that people found about the school online.
Laugh all you want about the Streisand Effect, but it’s quite chilling that a public institution would do this, removing from its own websites many of the reports that it had commissioned on the incident.
The university defended its decision, of course. Chancellor Linda Katehi lost her job (although she might be re-hired to run the university’s Feminist Research Institute).
What data gets collected? What data gets preserved? What data gets analyzed? What data gets shared? What data gets scrubbed? Whose privacy and reputation are protected?
Resisting the Culture of Surveillance
At home, at work, and at school, we are surrounded by computing devices that track our location, our keystrokes, our browsing habits, our clicks, our shopping preferences, our energy consumption, our conversations. In the last two articles in this series, I’m going to look at two “trends” connected to this obsession with data collection: the promise of “personalization” and the dangers of “discrimination by design.”
Although there have been some efforts to extend privacy protections, there are many voices in education technology that insist data-mining is necessary and that concerns about privacy and security are overblown. “‘Freakouts’ Over Student Privacy Hamper Innovation,” Mindwires Consulting’s Michael Feldstein wrote in February, for example. Tracy Mitrano, Academic Dean of the University of Massachusetts Cybersecurity Certificate Programs, responded that “Vendor motivations for profit hardly qualify as freedom for a student.” But it sure seems to count as “innovation” these days.
For its part, Edsurge has suggested that it might not be possible to balance academic freedom and privacy. Yik Yak, the anonymous messaging network marketed to college students, has proven to be one such challenge to campuses and to free speech protections there, as the app has been used to harass and harangue. (It appears, however, that Yik Yak might be headed to the ed-tech dead pool, laying off 60% of its staff earlier this month.)
How will education and education technology balance data collection – accountability and transparency – and information security? In light of Wikileaks and the DNC hacks – all those who combed through this stolen data looking to confirm, for example, their suspicions about Hillary Clinton and the Common Core – how might education data be further weaponized?
It’s weaponized already, of course. None of this surveillance plays out equitably. None of the surveillance and none of the punishment.
We could ask: is it really necessary to collect all this education data on students? (We might want to rethink how much of our own personal data we store across various companies’ servers as well.)
What can schools learn from libraries, several of which have announced that they’ll be destroying user data in order to avoid surveillance under the Trump administration?
What are education technology proponents doing, if anything, to minimize, rather than exacerbate, risk?
Financial data on the major corporations and investors involved in this and all the trends I cover in this series can be found on funding.hackeducation.com. Icon credits: The Noun Project