This post first appeared on Educating Modern Learners
Late last year, Pearson announced that it would make “a series of unique commitments designed to measure and increase the company's impact on learning outcomes around the world.”
Pearson is, of course, the world's largest education company, with holdings across the publishing, school services, and assessments industries. So its decision to commit to measuring and increasing “learning outcomes” is significant and will undoubtedly shape the larger education sector.
Pearson's announcement shouldn’t be dismissed outright as simply a matter of rebranding either. Indeed, education consultant Michael Feldstein has described Pearson’s new focus as "a remarkably detailed blueprint of how they intend to rebuild their machine from the ground up."
Pearson calls this commitment to measuring learning outcomes “efficacy,” a term it admits it has borrowed from the pharmaceutical industry — an attempt to echo the processes and the research that go into medical trials before, during, and after products are brought to market. (It is an interesting way to frame Pearson products, no doubt: textbooks and assessments are prescribed to students by knowledgeable, licensed professionals. The framing certainly casts the process of education as a clinical endeavor.)
How Does Pearson Define “Efficacy”?
Pearson’s definition of “efficacy”: an education product has efficacy if it has “a measurable impact on improving people’s lives through learning.” (More on this in its “Incomplete Guide to Delivering Learning Outcomes.”) To that end, Pearson has created an “efficacy framework tool,” a workbook you can complete online or offline (PDF), to “help you to explore what efficacy means and identify any barriers to delivering desired learner outcomes.”
Pearson’s efficacy framework contains four areas: outcomes, evidence, planning/implementation, and capacity to deliver. It asks those reviewing products and services many questions along the way:
- Have you identified specific outcomes for your target group?
- Do you have a way to measure the intended outcomes?
- Do you have ambitious and measurable targets in place, and deadlines for achieving them?
- Are your intended outcomes clearly documented and understood by the relevant people within and outside your organization?
- Is the product designed in a way that will most effectively help your target group to reach their goals?
- Do you understand the benefits of your product or service to your target group, relative to other options?
- Is the cost of the product/service competitive, considering the benefits it would deliver?
- Do you collect evidence using a range of methods (for example, quantitative, qualitative, internal, and external)?
- Do you have evidence from all users of your product/service?
- Does the evidence you have collected link directly to what you are trying to achieve?
- Is the evidence you have collected unbiased? Applicable to your product/service? Recent? And does it measure success over a period of time?
- Is the evidence you have collected relevant, representative and where possible at an individual level?
- Do you have a plan in place to achieve your outcomes, including milestones, actions, responsibilities and timelines? Is it easy to access and update?
- Does your plan include short- and long-term priorities?
- Have you identified any potential risks and included actions to mitigate these in your plan?
- Do people within and outside your organization understand who is responsible for decision-making regarding your product/service?
- Do you get/have access to real-time feedback from your users?
- Does your organization have the right number of people, and people with the right skills, to enable you to deliver your desired outcomes?
- Does your organization have a culture focused on delivering outcomes, and is it collaborative and innovative?
- Does your organization have enough budget to support this?
- Do leaders within your organization support your work, and are there opportunities to work with others across the organization?
- Have you put measures in place to build users’ skills?
- Is there a culture of partnership and collaboration between your organization and your stakeholders?
There's nothing wrong with any of these questions. But as a framework designed to realign the company's products and services with better learning outcomes, I'm not sure they're sufficient to drive major changes in how things are developed or implemented. It seems unlikely that running through these questions will lead to the sort of transformation that Michael Feldstein argues Pearson is after. Feldstein suggests that, despite using words like “stakeholders,” Pearson is very much focused here on an internal culture change, a way to stay relevant (and profitable) as the traditional textbook industry is reshaped, even threatened, by digital technologies.
Is “Efficacy” The Right Framework for Education?
But it’s fair to ask, of course, if Pearson’s definition of “efficacy” is likely to match those of schools, scholars, parents, communities, and students. Are the priorities the same? Is “efficacy” even the right word? The right goal? Is focusing on “measurable outcomes” the right approach? Feldstein adds,
There are potentially conflicting criteria. The framework itself provides nothing to help resolve this tension. At best, it potentially scaffolds a norming conversation. But a product management methodology that can combine knowledge about efficacy, user desires, and usability requires more tools than that. And that problem is even worse in some ways now that product teams have multiple specialized roles. The editor, author, adopting teacher, instructional designer, cognitive science researcher, psychometrician, data scientist, and UX engineer may all work together to develop a unified vision for a product, but more often than not they are like the blind man and the elephant. Agreeing in principle on what attributes an effective product might have is not at all the same as being able to design a product to be effective, where “effective” is a shared notion between the company and the customers.
Wait. What Do We Mean By “Learning Outcomes”?
Although it remains an education juggernaut, Pearson’s core textbook business is hardly thriving. (No major textbook publisher’s business is, truth be told.) And as Feldstein notes in his analysis of Pearson’s new efficacy push, one of the strategies that many publishers are toying with involves a focus on analytics.
Textbook publishers have noticed that products that have the word “analytics” or “adaptive learning” associated with them sell well. There is still a lot of whiz-bangery to the industry’s thinking, but it is starting to dawn on them that their products might sell better if they can prove that those products actually…you know…work. If they do work. If they provably help people to learn.
Pearson is a major investor in Knewton, a company that takes textbook and course materials and delivers them in an order algorithmically "adapted" to students’ learning profiles. Both adaptive learning and this push for “efficacy” tie neatly into a larger reframing of education — as an efficiently designed pathway through pre-determined learning objectives, enhanced and measured by technologies.
Of course we want our learning to be "effective." But the fixation on measurement and efficiency remains unsettling. Thankfully (for my own thinking at least) Virginia Commonwealth University Vice Provost Gardner Campbell recently wrote a blog post on “Understanding and learning outcomes” that does much to question some of the very premises that the new Pearson framework, along with the broader push for more data collection and analysis, is built upon:
As we seek to perfect the language and institutionalization of a culture of “learning outcomes,” it seems we are necessarily moving toward a strictly behaviorist paradigm of learning, away from what Jerome Bruner refers to as the “cognitive turn” in learning theory and ever more deliberately toward a stimulus-response paradigm of learning. This behaviorist turn can be very sophisticated and refined. The behaviors specified, measured, and tracked can be cognitively demanding “smart human tricks.” There can even be qualitatively measured learning outcomes, though it appears these are less frequent than quantitative metrics, for reasons I think are obvious. Yet these are still behaviors, specified with a set of what I can only describe as jawohl! statements, all rewarding the bon eleves and marching toward compliance and away from more elusive and disruptive concepts like curiosity or wonder.
Teaching and learning are difficult, sometimes bewildering activities, and it’s natural to want to have clarity about it all. It’s also natural, and to some extent a good thing too, when we seek accountability for our professional activities. Asking “what do we want to happen, and how will we know if we get there?” is an entirely fair and just thing to do. It’s when we’re forbidden to use “mushy” words like “understand” and “appreciate” because “they can’t be measured” that the trouble begins.
How different Campbell’s vision is from that of Pearson CEO John Fallon, who writes:
It is increasingly possible to determine what works and what doesn’t in education, just as in healthcare. The elements of learning can be mapped out, the variables isolated and a measurable impact on learning predicted and delivered. This can be done at every level – a single lesson, a single individual, a classroom, an institution or a whole system. It can also be done for a product or service that’s designed to help people learn.
Campbell suggests that “One metric for that generative outcome might be called ‘civilization.’ The best kinds of generative outcomes might be called ‘wisdom.’” Is the latter an outcome Pearson, or any textbook publisher for that matter, can ever provide?
Image credits: William Warby