
Last Friday, former Star-Ledger education reporter Bob Braun posted a screenshot on his blog of an email by a New Jersey superintendent detailing a “Priority 1 Alert” issued by Pearson and the state department of education, alleging that a student had tweeted about a test question on the PARCC, causing a security breach. The email expressed several concerns, among them the potential for more parental outcry about student data and testing, particularly as the DOE had requested that the student be disciplined. The superintendent had also learned, through the incident, that “Pearson is monitoring all social media during PARCC testing.”

And with that sentence, Braun’s story went viral, temporarily knocking his website offline (prompting, in certain circles, conspiracy theories that it had been DDOSed by a foreign corporation).

There’s already been a lot of ink spilled about this, with criticisms and justifications coming from a number of different angles. (See: Bill Fitzgerald, Alice Mercer, Cynthia Liu, for example.) The AFT has since weighed in, demanding an end to the monitoring; and the New Jersey Department of Education responded today, justifying its “vigilance” in “safeguarding test questions.”

But all of this strikes me as much more complicated than simply an act to protect the security of Common Core assessments. There are a number of important and interconnected questions raised by this story:

What Do We Mean By “Spying” on Students?


Bob Braun’s initial post used the word “spying” in its headline. No doubt, that’s a pretty loaded word that some, like Cynthia Liu, have objected to. Alternatives, the thesaurus tells me: “surveilling” or “monitoring.” All these words carry slightly different weights, slightly different meanings. “To spy” means “to watch.” But it also means “to watch secretly” – and that’s where a lot of the concern comes from, I think.

Do students know they’re being watched? Yes, I think they (mostly) do when it comes to their interactions (offline) at school. They know they’re being watched in class, in the halls, in the cafeteria, on the playground. We've socialized them to conform, to "behave" there. But no, I don’t think students (necessarily) know they're being watched when it comes to their after-school updates on social media. And it’s worth asking: what expectation of privacy from school surveillance should students have while at home? What behaviors are we going to compel from students in their personal lives? Who gets to decide what that looks like?

I have heard a lot of adults sneer that “if you post it on Twitter, you should realize it’s public,” but I’m not sure all of us who use social media – and it’s not simply teens who get the societal finger-wag here – think about our social media updates that way. Nor should we, I’d argue.

“Public” and “private” are not simple binaries. As danah boyd has argued, “there’s a big difference between something being publicly available and being publicized. I worry about how others are going to publicize this publicly available [social media] data and, more importantly, who will get hurt in the cross-fire.” (Her original quote said “Facebook,” but I think the same holds across a number of platforms: Twitter, Facebook, and so on.)

What Do We Mean By “Privacy”?


As boyd and others contend, “private” is not the opposite of “public.” There are things that we do out in public – conversations that we have in the coffee shop or bar or park, for example – that we still expect to be private. We don’t anticipate that our conversations in these settings are recorded or broadcast or data-mined.

You can argue that this is changing – that it’s naive to expect that conversations in a “public space,” when that space is online, won’t be tracked or monitored. Perhaps that’s true. That doesn’t mean we shouldn’t fight corporations' compulsion to track us. And that doesn’t mean we shouldn’t move to defend the least powerful among us from having their lives monitored – and yes, that includes our students. (And a side note: I really do hate the whole “I can’t believe you didn’t know this was already happening” line that accompanies a lot of tech surveillance revelations. This sort of dismissive attitude offers nothing but smugness.)

I do think there are differing levels of “publicness” online – differing based on a number of factors: the popularity of the site, the user, the topic, for starters. As someone who has 28K followers on Twitter, I experience this often: a casual comment or RT echoes in ways that I hadn’t really expected. And even more importantly, it’s wrong to assume that we all get to move about – in either physical or virtual spaces – with the same assurances about our personal privacy, integrity, and safety. For a woman to lose elements of personal privacy has different ramifications than it does for a man; for an African American woman to lose personal privacy, more so. For a teen... etc. "Public" and "private" are descriptions of power and privilege. They are not social absolutes. They are not "given."

Surveillance does not affect all students equally. Privacy is increasingly a premium feature; which students can afford it?

What is privacy? As Helen Nissenbaum has argued,

Attempts to define [privacy] have been notoriously controversial and have been accused of vagueness and internal inconsistency — of being overly inclusive, excessively narrow, or insufficiently distinct from other value concepts. Believing conceptual murkiness to be a key obstacle to resolving problems, many have embarked on the treacherous path of defining privacy. As a prelude to addressing crucial substantive questions, they have sought to establish whether privacy is a claim, a right, an interest, a value, a preference, or merely a state of existence. They have defended accounts of privacy as a descriptive concept, a normative concept, a legal concept, or all three. They have taken positions on whether privacy applies only to information, to actions and decisions (the so-called constitutional rights to privacy), to special seclusion, or to all three. They have declared privacy relevant to all information, or only to a rarefied subset of personal, sensitive, or intimate information, and they have disagreed over whether it is a right to control and limit access or merely a measure of the degree of access others have to us and to information about us. They have posited links between privacy and anonymity, privacy and secrecy, privacy and confidentiality, and privacy and solitude.

We fail to have much nuance when we talk about student data and privacy, and here Nissenbaum’s work is particularly helpful: context matters. Privacy shouldn’t mean “never share.” Or “never share student data without parental consent.” These sorts of assertions are particularly irksome to me because they highlight the ways in which so many of our privacy conversations fail to recognize student agency at all. Students and their data are objects in many of these formulations. Too often conversations about privacy fail to give students a voice, for starters, in what pieces of their personal data are shared (or why they're not). Indeed, students are compelled – by the syllabus and the TOS – to share. They have little choice in opting in or opting out. Policies and parents often fail to recognize that students might have – should have – a voice in determining what’s worth opening up to aggregation and analysis and what’s not really meant for that. (Many so-called privacy advocates in education reinforce this, assuming they always speak for students, assuming that they know better than students. Again, students just end up as objects of a different sort of paternalism.)

Discussions of privacy are rarely framed around personal integrity – around how identity is performed in certain venues and how surveillance and punishment in those venues might be detrimental to experimentation, exploration, or personal growth. Yet these factors – these vulnerabilities even – seem particularly important to consider in education technology circles. What happens to students’ personal growth if we’re going to watch them and collect all their clicks and updates and images and videos during and after school? Who do you get to be, what identities do you get to try out and perform, if you know you're always watched – by your teachers, by brands? That is, what do intellectual freedom and personal identity development look like under total data surveillance? How much do we want to monitor students as they figure out how to express themselves, as they figure out who they are – again, on and offline?

How much of students' behavior do we want to give a side-eye, how much do we want to squint at, how much do we want to scrutinize algorithmically?

These are such important questions when we’re dealing with K–12 students and college students alike. But mostly, instead of talking about identity formation and social media monitoring, it seems we want to wring our hands about “cheating.” We let that drive the conversation...

Why Monitor Social Media?


There’s a longstanding debate over whether or not teachers should “friend” their students on social media. My 2 cents: it depends. It depends on what educators’ relationship with students looks like. It depends on what students want out of that relationship. It depends on what educators want. I know teachers who have been able to provide counseling and support in teens’ most dire moments, thanks to their being attuned to social media. (This is complicated by the tools we use too. Take Twitter: I'd argue its infrastructure is built on “watching,” not “friending.” It's different from Facebook's mechanisms – not that those folks are really your "friends.")

The whole "it depends" thing governs a lot of what "monitoring" looks like, doesn't it? Whether it's done out of caring or done out of concern.

So what is Pearson doing in this particular case? Pearson doesn’t care about individual students’ struggles with queer identity, homework, cyberbullying, college applications and college affordability, after-school jobs, homecoming king drama, the basketball team’s season, band tryouts, drama tryouts, drama, a parent’s death, parents’ divorce, or standardized testing. Wait. No. Pearson “cares” about that last one.

Pearson is involved in social media monitoring, as is almost every major corporation, not because they care about students. It’s because they care about their brand. They care about their intellectual property. Corporations like Pearson monitor social media, in part, so they can provide customer service. Pearson monitors social media so it can glean insights based on social media sentiment about its brand. (You suck, Pearson!) And when it comes to assessment, Pearson monitors social media so it can identify – and, based on its interaction with the NJDOE, punish – those who post status updates about its tests.

In this case, so we’re told, the social media monitoring falls under the umbrella of "test security," which isn’t a new concern by any means. Students have long been told not to bring anything into standardized tests but a number 2 pencil and the pre-approved calculator. Eyes on the exam. No talking. Etc. The same test that's given in New Jersey this week is going to be given in Iowa (or somewhere) next week; so no one can talk about it. Like, ever.

For what it's worth, the technology tools used to monitor testing are only multiplying. Rutgers, for example, uses a tool called ProctorTrack to verify student identity (i.e. to prevent cheating), one that demands students hand over biometric data, including facial and knuckle scans.

As Jessy Irwin has argued, we are grooming students for a lifetime of surveillance.

Social media monitoring, so we’re told, provides a new and powerful way to monitor and punish those students who talk about the test after the test. (That’s different than punishing those who talk during the test.) Students have always done so, let’s be honest. Some of this talk, schools have long decreed, counted as cheating – particularly if there were cheat-sheets that others could work from. But some of the talk about tests, we shrugged off merely as banter.

“That one reading comprehension passage about electricity was so easy, ya know, because we just talked about that on Thursday.” – if that’s the content of a tweet, is that cheating? What if it’s a conversation in the cafeteria? What if it’s the topic of a student’s phone call? Which do we monitor? Why?

What Does Social Media Monitoring Track?


Again, a common response to the Pearson social media tracking here is that students’ tweets are public, so they’re fair game. But it’s not actually clear what Pearson is tracking. The social media firm Tracx had posted a case study about its work with Pearson, but immediately following Braun’s revelations, that link went dead. (Pearson is still listed as a client.) Tracx boasts, among other things, a “capability which automatically stitches together a user’s social profiles.” Many services (such as Full Contact) let you look up an email address and identify all the social media profiles associated with it. So even if a student’s Twitter account doesn’t have his or her name attached, it’s still discoverable with these monitoring tools as long as it was created with a known address.

Tracx also says it can “visualize social posts at the street level.” Again, if students haven’t turned off geolocation on Google Maps, Twitter, Foursquare, Yelp or the like, they’re pretty easy to find. And these are just a few public clues that a social media monitoring company can use to identify and monitor the Twitter accounts of students who’re currently sitting PARCC exams. That is, it’s not just that there was a tweet about the test; it’s that, thanks to data analytics, Pearson can immediately know who tweeted it.

Social media monitoring is an incredibly sophisticated and incredibly lucrative business. Social media monitoring is not, as I’ve seen some suggest, simply a matter of looking at all the #PARCC hashtags. Social media monitoring involves data collection and data analysis at such a level that schools are told they cannot do this in-house.
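To see the gap, it helps to sketch what the naive, in-house approach actually amounts to: a keyword filter. The watch list and posts below are invented for illustration – this is emphatically not Pearson's or Tracx's pipeline, whose details are proprietary – but it marks the ceiling of "just look at the hashtags." Everything the commercial tools layer on top of this (profile-stitching, geolocation, sentiment analysis) is what makes the business both sophisticated and lucrative.

```python
# A deliberately naive sketch of "hashtag watching": flag any post that
# mentions a term on a watch list. The terms and posts are hypothetical.
WATCH_TERMS = {"#parcc", "parcc", "test question"}

def flag_post(text: str) -> bool:
    """Return True if the post mentions any watched term (case-insensitive)."""
    lowered = text.lower()
    return any(term in lowered for term in WATCH_TERMS)

posts = [
    "that reading passage about electricity was so easy #PARCC",
    "band tryouts today, wish me luck",
]
flagged = [p for p in posts if flag_post(p)]  # only the first post matches
```

Note what this simple filter cannot do: it has no idea who wrote a post, where they were sitting, or which school district's exam window is open – precisely the pieces the monitoring firms sell.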

This whole process is based on algorithms that surface certain “insights” that the algorithm designers deem important. What metrics does Pearson care about? Even if it’s sucking in all sorts of data about teenagers – across platforms, across locations – what signals matter? What signals does it ignore? (We do not know because Pearson has not shared details of its monitoring algorithms.)

Is This a Free Speech Issue?


Cynthia Liu has argued that “this is not a student data privacy issue, but a student free speech issue.” I disagree. It’s both. These issues are not either/or. Indeed, surveillance chills free speech.

In light of this week’s revelations, students – particularly savvy ones in New Jersey – will move their conversations elsewhere. They will not stop talking about testing. They will just do so in venues in which they do not think adults are listening. They will whisper rather than tweet. They will Yik Yak or SnapChat; they will text. (Will Pearson try to monitor those too?)

Liu argues that the crackdown on students' social media updates about the Common Core tests is a violation of the First Amendment. It's worth noting that, according to the initial report out of New Jersey, the student’s tweet that prompted this whole brouhaha was made at 3:18 – that is, after school. Do schools have a right to monitor and discipline students’ behavior and speech in school? – yup. Do schools have a right to monitor and discipline students’ online behavior and speech after school? – not so clear.

And again: what happens when, thanks to Internet technologies, schools and their corporate ed-tech providers opt to surveil students 24–7?

Who Benefits? Who Loses?


The response to the news from New Jersey – in certain circles at least – was shock, and the blame was placed on Pearson. But let’s be honest: many schools already engage in social media monitoring. Schools, not just ed-tech providers, hire social media monitoring companies. And many standardized tests – Smarter Balanced, the SAT, AP exams – have similar procedures and policies in place that also involve paying attention to what’s said about the assessments on social media. As such, focusing on Pearson or PARCC misses the point.

Late last year, news broke that the Huntsville, Alabama school district had paid over $150,000 to a security firm to investigate students’ online activity. As a result of this investigation, 14 students were expelled; 12 of those were African American. And while the data involves more than this particular sting, it’s worth noting that in a school district where only 40% of students were Black, almost 80% of expulsions that year involved African American students.

What does the school-to-prison pipeline look like when we bring it online?

What does the school-to-prison pipeline look like if we base it on Twitter updates? 24% of teens, according to Pew, use Twitter. But not equally: teen girls use Twitter more than teen boys. And 39% of Black teens do versus 23% of white teens.

As such, who’s going to be caught up in the Pearson dragnet? If schools and ed-tech companies are going to use social media to track behavior, whose behavior exactly will they track? Who's most likely to get caught up in these social media monitoring dragnets for "inappropriateness"? Who's too loud in class? Who's too loud online? Who's talking out of line? Social media monitoring algorithms are written by people. (Who writes them? Can we see them? Can we review the data these algorithms gather?) Crucially: none of this is neutral.

Policy Says...


"This will go down on your permanent record"... Except none of this is protected by FERPA.

"It's covered by an NDA"... Except I'm not sure students ever signed a non-disclosure agreement, agreeing they'd never speak of the test.

Students should just know to never talk about the test... And the lesson here, for students: you have no rights to speak publicly about your education. It's all covered by some bullshit policy decree - most of it made-up once something goes awry, once someone dares complain. You're just a cog, an object. Fill in the blanks. You don't matter. And we're watching to make sure you know that.

How Should We Rethink Assessment?


In an age of ubiquitous technology and social media, shouldn't we rethink assessment instead of opting to surveil students more severely?

Perhaps if a single tweet – 140 characters – can so easily destroy a test’s security and validity, the whole science of testing thing needs to be reconsidered? Because that’s pretty fragile.

Vulnerability and Trust


Here’s an excerpt from Part 1 of my contribution to the Speaking Openly conversation about education, privacy, and risk (the other videos – from Cory Doctorow, Dan Gillmor, and others – are well worth watching):

Learning requires a certain vulnerability. We have to recognize we don’t know things; we have to be open to not knowing things; we have to listen and experiment and sometimes stumble and fail. We have to be open to learning.

But that vulnerability can play out in lots of different ways, depending on the setting for our learning, for example, and on the role we get to play in deciding what that learning looks like, the way we are treated as learners. Whether we like it or not, we are vulnerable when we’re enrolled in formal educational institutions, for example. That vulnerability is different for a five year old than a fifteen year old than a fifty year old returning to college.

In some ways, school is designed to do something to you – it tells you what you should know, it tells you how you should behave. So we are vulnerable not just intellectually and not just in ways that might open us up to new ideas – a good thing, generally, right? – but in ways that might open us up to less pleasant experiences as well.

Do you trust school? Do you trust your instructors? Your peers? Do you have a choice?

How we answer those questions will vary greatly based on any number of factors.

Trust, vulnerability, choice, control, power – these are all interconnected when it comes to learning. And they’re all connected to issues of privacy as well. What’s key to remember: privacy isn’t really the opposite of publicness. To have privacy isn’t the same as to be hidden – and by extension, privacy is not the opposite of “openness.” We have to recognize that privacy isn’t this universal “thing” society has always respected – or that all members of society have benefited from equally – that is now suddenly under attack by virtue of new technologies. Context matters. Again, power matters. But we do also have to recognize how much new technologies reshape these issues – they reshape practices, contexts, and power – in ways that are both obvious and subtle.

How much privacy do you have to hand over in school? How much have you had to hand over historically? It’s one thing for a teacher to recognize that you’re still struggling with your 8 times table, for example. It’s another thing entirely for a piece of software that the school mandates you use to track massive quantities of other data about your “progress” – not just how well you score on various math exercises or math quizzes, but all the mouse clicks, all the videos you watch, all the times you rewound a video or fast-forwarded. All this data and metadata represents an unprecedented opportunity to learn more about how students learn, we’re often told. But what does this data collection and data-mining mean in terms of power and privacy and vulnerability? What does it mean in terms of how students have already been surveilled and shaped by school? Do students know this data is being collected? What sort of trust relationship is expected between a student, a school (or an informal learning environment too, I should add), and technology when it comes to learning data?

My report card, even when I was learning my math facts 35 years ago, might have said “she’s getting better at the 8 times table, but she tends to talk a lot in art class.” Or “she can do all the math times tables in some arbitrary time we’ve decided you need to know your math facts – good for her, bravo, but she tends to push to the front of the line in library.” To some extent, students have always been watched and observed as they learn. And we have to think about what that looks like in terms of their autonomy and their agency – are they objects or subjects?

We have to recognize too that this surveillance has never been applied equally – some bodies – “marked bodies,” if you will – have been seen as more “undisciplined.” They’ve always been watched more closely.

Will technology change this? Will technology put even more scrutiny on students? On which students? Which students are in a position to resist that scrutiny? Which students will be granted privacy?

These are questions of power, not simply questions of policy or of technology.

Ideally, of course, open education breaks open some of this control and power, because it recognizes that the learner is the driver here, not the instructor, not the institution. I think we need to do more, however, to make sure that open education, when paired with various Internet technologies, isn’t re-inscribing new forms of control and power – it is not just a matter of the control of educational institutions; there is also surveillance and control by the technology sector. Do learners trust technology? Why? Why not? Has that trust been earned? What sorts of privacy should learners demand? How do we reconcile the need for a certain amount of vulnerability in order to learn with the vulnerability of having so much more of ourselves – our data – exposed as we turn to technologies to do that very learning?

Who Tracks Learners Online? Why?


It's not just Pearson. Pearson is a red herring here...

Audrey Watters


Hack Education

The History of the Future of Education Technology
