
Well kids, it's back to the drawing board, I guess. And by "drawing board," I do mean that old, wooden desk-like contraption, replete with paper and pencil. Not some nifty, new-fangled whiteboard app. Because the results are in: computer and Internet technologies aren't doing much good.

That's according to a story in this weekend's New York Times. Districts are spending millions of dollars on technology, but there's no proof that it's improving teaching and learning. Or rather, there's little indication it's improving test scores.

That's an important distinction to make.

Teaching with Technology / Teaching toward the Test

By all appearances, seventh-grade English teacher Amy Furman uses technology well in her classroom. The opening anecdote of the NYT piece describes her class studying Shakespeare's As You Like It, constructing social media profiles and writing blogs for the play's characters. Furman's students appear engaged -- these are 12-year-olds using the Web (and Kanye lyrics) to help understand a play that's some 400 years old. That's something for which a good teacher is as much responsible as a good Internet connection, I'd wager.

But that's the thing -- Furman's district has spent a lot to equip its schools' classrooms with laptops and Internet access. It's spent millions of dollars on technology -- some $33 million over the last 6 years. And now the district is looking for another $43 million from taxpayers over the next 7 years. That's just 3.5% of the district's annual spending, but it's more than it spends on textbooks, as the NYT points out.

Test Scores and Technology Budgets

The story does highlight the work of innovative teachers like Furman, but it also questions her efficacy along with that of the technology the district has implemented (no questions asked about how well spent that textbook money is, but I digress). Furman says she hopes all the money is worth it, but the story highlights the dearth of evidence to demonstrate that these major tech investments actually do much. "Hope and enthusiasm are soaring here," the story says of Furman's district. "But not test scores."

Ah. Test scores -- the sole measure by which, apparently, we judge teaching and learning. Cathy Davidson, author of a recent book on learning and brain science, has a great take on the deep flaws in this approach:

"It is not the test scores that are stagnant. It is the tests themselves. We need a better, more interactive, more comprehensive, and accurate way of testing how kids think, how they learn, how they create, how the browse the Web and find knowledge, how they synthesize it and apply it to the world they live in. As long as we measure great teaching such as Ms. Furman's by a metric invented for our great grandparents, we give kids not just the limited options of A, B, C, and D in a world where they can Google anything, anytime. Worse, we are telling them that, in the world of the future, the skills they need, they will have to learn on their own. For, after all, they are not on the test."

The NYT does point to Karen Cator, the Director of the Department of Ed's Office of Educational Technology, in essence shrugging off the test scores: "In places where we've had a large implementing of technology and scores are flat, I see that as great," she said. "Test scores are the same, but look at all the other things students are doing: learning to use the Internet to research, learning to organize their work, learning to use professional writing tools, learning to collaborate with others."

But it's fairly clear that Cator's bosses -- the Secretary of Education and the President -- see things differently. Test scores matter -- they matter so much that we should assess teachers' performance, and by extension base their pay, on them.

Inconclusive Results

One of the points the NYT article makes repeatedly is that there's just not enough research. "The data is pretty weak," says investor and blended learning proponent Tom Vander Ark. "There is insufficient evidence to spend this kind of money," says Stanford education professor Larry Cuban.

That's hardly a surprise -- and not just because of the government's penchant for multiple-choice tests as assessment tools. It's incredibly difficult to assess technology in the classroom: every class is different. Teachers are different. Each student is different. Technologies change so rapidly. So many variables. Standardized testing, my ass.

Another (less retweeted) story about technology and testing data broke this week too. It's another story, I'd say, that highlights some of the problems we face when we look at test scores and when we try to make these blanket claims about what works and what doesn't work in a classroom.

The story comes from Envision Academy, which released the results of its pilot project this summer using Khan Academy in one of its math remediation classes. The school offered two algebra classes during summer school -- one with a "blended learning" approach that utilized Khan Academy and one that followed the "traditional" curriculum. According to results posted this week on the BlendMyLearning blog, there was a negligible difference between the two groups: those in the Khan Academy group scored only slightly better than those in the non-blended class.

You can look at this data and use it as justification to dismiss the technology. You can question the value of Khan Academy and/or of blended learning. Because while Khan Academy itself is a free resource, it still requires a tech investment, and schools are spending a lot of money piloting programs and training teachers in order to use it in classrooms.

But I don't want to misconstrue the results of the Envision Academy pilot program. As principal Brian Greenberg makes clear, there are many caveats to his findings, not least of which are the small test groups and the short duration of the summer class. To say Khan Academy "worked" or "didn't work" based on the MDTP Algebra readiness test scores obscures a lot of what actually happens in a classroom. Working at their own pace, the students in the Khan Academy group, for example, spent a lot more time working on their pre-algebra skills (fractions, decimals, basic computation) -- skills that were not tested. And it's impossible to tell how much the students' enthusiasm for Khan Academy, and for getting to do their math lessons on a computer, affected how well they performed.

"More Data!" Shouldn't Mean "More Tests!"

And that's really just the beginning of the problems with these particular cries for "proof" that ed-tech works. Test scores fail to account for so much of what happens in a classroom. They don't give you much insight into the rapport students have with the teacher or with each other. They don't indicate much about deep cognition or retention. And despite test names that purport to measure "readiness," these tests do nothing to gauge our children's readiness for the future.

Yes, I understand it's easy to say "no more technology expenditures 'til I see proof" -- that's what the last line of the NYT story leaves you with, and that's the question taxpayers may be asking -- but we have to think critically about what we're actually measuring when we ask whether or not technology works for teaching and learning.

Audrey Watters

