This article first appeared in the publication Identity, Education and Power in February 2016.
Late Friday night, Buzzfeed published a story indicating that Twitter is poised this week to switch from a reverse-chronological timeline for Tweets to an algorithmically organized one. Users’ responses ranged from outrage to acceptance (after all, it’s hardly the first time that Twitter has made a drastic change to the user experience) – but the predominant response seemed to be outrage, as #RIPTwitter soon began to trend.
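Twitter has not published how its ranking will work, but the shape of the change can be sketched. In the hypothetical comparison below, the tweet fields, signals, and weights are all invented; the point is simply that a ranked timeline replaces one legible rule (newest first) with a scoring function that someone has to design:

```python
from collections import namedtuple

# A stand-in tweet object; the real signals Twitter uses are not public.
Tweet = namedtuple("Tweet", "author text posted_at likes retweets")

def reverse_chronological(timeline):
    """The old model: newest tweets first, nothing hidden or reordered."""
    return sorted(timeline, key=lambda t: t.posted_at, reverse=True)

def algorithmic(timeline, mutual_follows, w_likes=1.0, w_retweets=2.0, w_mutual=5.0):
    """A made-up 'relevance' ranking: engagement counts plus a bonus for
    accounts you interact with stand in for whatever signals the real
    system uses. Someone chooses these weights, and that choice is hidden."""
    def score(t):
        return (w_likes * t.likes
                + w_retweets * t.retweets
                + (w_mutual if t.author in mutual_follows else 0.0))
    return sorted(timeline, key=score, reverse=True)
```

In the first function, every user can reason about why a tweet appears where it does; in the second, that reasoning belongs to whoever sets the weights.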
Many questioned the decision, pointing out the role that Twitter’s real-time feed has played in breaking news. Some noted that events like those that transpired in Ferguson, Missouri in August 2014 were first and foremost spread via Twitter – long before the story of Michael Brown’s death was picked up by the mainstream media. And others pointed out that the story of the shooting and the subsequent protests was glaringly absent from Facebook, whose news feed is displayed algorithmically.
These observations about filtered and circumscribed access to information get to the heart of the concerns about an algorithmic Twitter: it will reinforce the voices of the powerful and silence the voices of the marginalized.
Twitter has already become rather infamous as a platform that facilitates the harassment of women – it’s an issue that the company has never really taken seriously, many contend. An algorithmic timeline hasn’t been put forward as a “solution” for harassment; rather it’s a “solution” for “engagement” and for “growth” – a response to the demands of investors and shareholders, not the demands of active users.
Algorithms are not neutral, although they are frequently invoked as such. They reflect the values and interests of their engineers, although it’s hard to scrutinize what exactly these values and interests entail, as the inputs and calculations that feed algorithms are almost always “black-boxed.”
When Twitter engineering manager Leslie Miley quit last year, his departure left no managers, directors, or VPs of color in engineering or product management at the company. This matters. It matters, as Miley noted in an article he wrote describing Twitter’s ongoing diversity problems, because “27% of African American, 25% of Hispanic Americans and 21% of Women use Twitter according to Pew. Only 3% of Engineering and Product at Twitter are African American/Hispanic and less than 15% are Women.” It matters for the shape of the product; it matters for the shape of the algorithms that will dictate how we Twitter users experience the product.
And this matters in turn, because as law professor Frank Pasquale has argued, “authority is increasingly expressed algorithmically.” Algorithms – their development and implementation – are important expressions of power and influence. This is increasingly how decisions get made. This is, as Twitter and Facebook both underscore, how networks are constructed and how information is delivered.
And this matters greatly for education technologies, not only for the learning networks that have emerged on sites like Twitter but also for the various software and systems used in the classroom and by administrators.
Some of the buzziest of buzzwords in ed-tech currently involve data collection and analysis: “personalization,” “adaptive learning,” “learning analytics,” “predictive analytics.” These all involve algorithms – and almost without exception, algorithms that are proprietary and opaque.
As with Twitter’s promise that an algorithmic timeline will surface Tweets that are more meaningful and relevant to users, we are told that algorithmic ed-tech will offer instruction, assessment, and administrative decision-making that is more efficient and “personalized.” But we must ask why efficiency is a goal – this is, after all, the application of business criteria to education – and how that goal of efficiency shapes an algorithmic education. (How, for example, is “engagement” in ed-tech shaped by Web analytics, where “engagement” becomes a measure of clicks and “time on page”?) And we must ask: “personalized” how, and for whom? Who benefits from algorithmic education technology? How? Whose values and interests are reflected in its algorithms?
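To make the Web-analytics point concrete, here is a toy “engagement” score of the sort analytics dashboards encourage. The event names and weights below are invented; nothing in the calculation measures learning, only clicks and time on page, because those are what is easy to count:

```python
# A toy "engagement" metric: the inputs (clicks, seconds on page) and the
# weights are design choices, not neutral measures of learning.
def engagement_score(events, click_weight=1.0, seconds_weight=0.01):
    clicks = sum(1 for e in events if e["type"] == "click")
    seconds = sum(e["duration"] for e in events if e["type"] == "page_view")
    return click_weight * clicks + seconds_weight * seconds

session = [
    {"type": "page_view", "duration": 300},   # five minutes on a lesson page
    {"type": "click", "target": "next"},
    {"type": "click", "target": "quiz"},
]
print(engagement_score(session))  # 5.0 -- a number, but is it learning?
```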
Although “personalization” implies that algorithms work to deliver each of us an individualized experience, they do so in part by building profiles about us. This isn’t necessarily new or particularly refined: advertisers have long divided us into market segments based on things like our age or gender, and schools have similarly profiled and tracked students based on things like age or “aptitude.” Many of these educational practices have been questioned for perpetuating discrimination, and it’s worth considering how we might challenge institutional racism, for example, if more and more decisions occur algorithmically. Is historical discrimination amplified? Are these decisions now also less transparent?
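One way to see how historical discrimination can be carried forward algorithmically: a rule fit to past placement decisions learns whatever patterns those decisions contain. The sketch below is deliberately crude, with invented data, but the feedback loop it illustrates is the real concern:

```python
# Invented historical records: (test_score, neighborhood, placed_in_advanced_track)
past_placements = [
    (85, "north", True), (82, "north", True),
    (88, "south", False), (91, "south", False),
]

def placement_rule(score, neighborhood):
    """A rule that perfectly fits the historical labels above. Test scores
    turn out not to matter; neighborhood does, because that is the pattern
    the past decisions encode. Automating the rule also obscures it."""
    return neighborhood == "north" and score >= 80

for score, hood, was_advanced in past_placements:
    assert placement_rule(score, hood) == was_advanced  # 100% "accurate"

print(placement_rule(91, "south"))  # False: the old bias, reproduced at scale
```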
An algorithmic education, despite all the promises made by ed-tech entrepreneurs for “revolution” and “disruption,” is likely to re-inscribe the power relations that are already in place in school and in society. Profiling has very different implications for different groups.
In the UK, for example, schools are monitoring students’ Internet activity for words and phrases that software deems might signal Islamic radicalism. A spokesperson for Impero, a company that is piloting this technology throughout the UK and US, told The Guardian that “The system may help teachers confirm identification of vulnerable children, or act as an early warning system to help identify children that may be at risk in future. It also provides evidence for teachers and child protection officers to use in order to intervene and support a child in a timely and appropriate manner.” Almost every word in those two sentences demands clarification: how is “vulnerable” or “at risk” defined? What feeds the algorithms that make the predictions that identify these students? We don’t know. What happens to the data that’s being collected on all students – “at risk” or not – to devise these profiles? We don’t know. What evidence do we have that predictive modeling and “intervention” – whatever that entails – works to decrease radicalism rather than increase mistrust and discrimination?
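Neither the word lists nor the scoring behind tools like this are public. A naive phrase-matching approach, sketched below with an invented watchlist and an arbitrary threshold, shows how much hangs on choices that students, parents, and even teachers never get to inspect:

```python
# A naive sketch of phrase-based flagging. The watchlist, threshold, and logic
# are invented here; Impero's actual lists and scoring are not public.
WATCHLIST = {"sample phrase one", "sample phrase two"}
THRESHOLD = 2

def flag_student(browsing_history):
    """Count watchlist phrases across a student's pages; flag above a cutoff."""
    hits = [phrase for page in browsing_history
            for phrase in WATCHLIST if phrase in page.lower()]
    return len(hits) >= THRESHOLD, hits

flagged, evidence = flag_student([
    "a news article quoting sample phrase one",
    "homework research that mentions sample phrase two",
])
print(flagged)  # True: a matcher like this cannot tell reporting or research
                # apart from "risk," and the student never sees the list
```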
There are competing and contradictory claims made on behalf of algorithms and education technologies. On one hand, algorithmic decision-making promises improved surveillance of “vulnerable children” – those whose behaviors and backgrounds mark them as “at risk,” according to schools’ and societies’ values (and more specifically, the values and interests of software makers). But on the other hand, much of ed-tech also insists that it has stripped out the context and culture of the classroom as well as the context and culture of the student. This imagined ed-tech-using student is what sociologist Tressie McMillan Cottom has called “the roaming autodidact” – an “ideal, self-motivated learner,” “embedded in the future but dis-embedded from place.” Dis-embedded from place, disembodied – an erasure that just as easily serves as a re-inscription of a “universality” of the white, middle-class male. (Again, we don’t know. The algorithms are opaque.)
As I’ve argued previously, some ed-tech companies contend (in response to privacy concerns, to be clear) that their learning algorithms work without knowing who students are. Knewton’s CEO, for example, has said that “We can help students understand their learning history without knowing their identity.” But what does it mean to do so? What does it mean when ed-tech companies talk about “identity-less” learning? What does it mean to build “learning sciences” and “learning technologies” on top of this sort of epistemology? What does “personalization” possibly mean if there’s no “personally identifiable information” involved? What happens to bodies and identities – particularly the bodies and identities of marginalized people – when they’re submitted to a new algorithmic regime that claims to be identity-less, that privileges identity-less-ness? And of course, what are the ideologies underneath these purportedly identity-less algorithms? (We might be able to answer that question, at least partially, even when the algorithms are black-boxed.) These are some of the most important questions we must ask about identity, power, and education technologies.
Twitter users might balk at letting the company control their social and information networks algorithmically; it’s time we bring the same scrutiny to the algorithms we’re compelling students and teachers to use in the classroom.