
New Year’s Predictions

It’s customary this time of year for bloggers and journalists and pundits alike to make their predictions for the new year. I’ve done so in the past (See: 2011, 2012). But I’m skipping out on the tradition in 2013.

In part, it’s because I can’t possibly write another blog post for the foreseeable future with a headline that contains a number — any number — or the words “My” or “Top.” I’ve done far too much of that lately with my look back at ed-tech in 2012.

But mostly, it’s because I recently read Nate Silver’s book The Signal and the Noise: Why Most Predictions Fail but Some Don’t. I’ve been thinking a lot about the questions it raises about predictive modeling, data, dogma, politics, and expertise. (I’m most interested in how this relates to education, technology, and ed-tech journalism, no surprise, although Silver never addresses the sorts of predictions or punditry this sector produces.)

Of course, the flurry of predictions posts that we’ll see churned out in December and January is hardly the same as the predictions that Silver examines in his book. Among other things, The Signal and the Noise covers predictions about weather and climate change, elections, sports, gambling, and financial markets — all “serious business.” That’s not like many New Year’s predictions, which are offered half tongue-in-cheek — often a gleeful or a panicky “What if…?” Most New Year’s predictions blogged aren’t based on algorithms, models, or Bayes’ Theorem. They’re simply wishes about what we hope to see (or hope not to see) in the new year.
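(A quick aside for readers who don’t know the reference: Bayes’ Theorem is just a rule for updating a prior belief as evidence comes in. In its simplest form,

P(H | E) = P(E | H) × P(H) / [ P(E | H) × P(H) + P(E | not-H) × P(not-H) ]

To use a toy illustration with numbers I’ve made up, not anything from Silver’s book: if I start the year 20% confident that a particular ed-tech startup will be acquired, and rumors surface that I’d expect to hear 70% of the time if a deal were coming but only 10% of the time otherwise, then my updated confidence should be (0.7 × 0.2) / (0.7 × 0.2 + 0.1 × 0.8), or roughly 64%. That sort of explicit, revisable arithmetic is exactly what most New Year’s predictions skip.)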

The accuracy of these New Year’s predictions might not be all that important and is probably of interest solely to the author who can, at the end of the year, look back and chuckle at how she managed to get things so right or wrong. As for my 2012 predictions, I was wrong about Google’s plans to scrap Chromebooks. I was wrong about Chegg’s acquisition plans. I was wrong about Arne Duncan’s tenure as Secretary of Education, even more so considering I pegged Karen Cator to replace Duncan, and she’s now leaving the DOE altogether. Of my 10 predictions, I was almost entirely wrong.

Getting The Future Wrong

So why was I wrong? Why were my predictions so terrible?

Why, after such an in-depth look at the “top trends of 2011,” wasn’t I able to better identify trends and developments going forward?

Were my predictions just sloppy and poorly thought-out? Was I looking at the wrong data, measuring the wrong things, monitoring the wrong signals? By picking out things that Google, Amazon, Microsoft, Chegg, and the federal government would do, was I trying too hard, or was I playing it too safe? Were some of these things more a matter of wishful thinking than serious prediction?

Or was it, to use essayist Isaiah Berlin’s formulation — one that Nate Silver draws on for his book — that I was more “hedgehog” than “fox,” more dogma than responsiveness?

(Most common words on Hack Education during 2012)

I’m making a bit too much of a single New Year’s prediction blog post, no doubt. And I’m hardly in the business of predictive modeling here. But I am in the business of helping my readers wade through the news and information surrounding education technology — its markets and politics. I can shrug and say that I'm not that interested in predicting the future of education and technology, but I am in fact very, very interested in what that future might hold.

As such, I do think it’s worth considering Silver’s observations about political pundits and what they get wrong (as compared to Bayesian statisticians, I suppose). What data do we have about education technology trends? What counts as education data? What’s signal and what noise? 

Just as important here might be the issues that Mathbabe’s Cathy O’Neil raises about Silver’s book and about the possibly flawed analyses of “data science celebrities.”

O’Neil argues that Silver assumes that “the only goal of a modeler is to produce an accurate model,” something that might hold true for some topics — topics in which Silver happens to have expertise, like baseball, gambling, and polling — but that doesn’t hold true for other areas he covers in his book, including medical research and financial markets. Can we really ignore politics when we make our predictions? O’Neil writes:

This raises a larger question: how can the public possibly sort through all the noise that celebrity-minded data people like Nate Silver hand to them on a silver platter? Whose job is it to push back against rubbish disguised as authoritative scientific theory?

It’s not a new question, since PR men disguising themselves as scientists have been around for decades. But I’d argue it’s a question that is increasingly urgent considering how much of our lives are becoming modeled. It would be great if substantive data scientists had a way of getting together to defend the subject against sensationalist celebrity-fueled noise.

This seems particularly relevant for education, and not just because its politics and policies seem to be increasingly obsessed with data. What models are we building for education (and why)? Who are the experts we trust in ed-tech and why? What are their interests in making predictions or even — and I am implicated here too — in identifying trends?

Image credits: Jacinta Lluch Valero

Audrey Watters

