Accountability and Innovation in Higher Education

Ben Wildavsky
Director of Higher Education Studies, Rockefeller Institute of Government, State University of New York

Much of the Kauffman Foundation's work in higher education focuses on the role universities can play directly in fostering an entrepreneurial society, from offering students the opportunity to learn about entrepreneurship to streamlining the process whereby campus research discoveries are commercialized. But the Foundation also believes that it is important to look more broadly at innovative approaches to higher education. That's especially true for new ideas that have the potential to boost student achievement and thereby enhance the human capital that is so strongly associated with entrepreneurship and economic growth. As a result, we have recently taken a particular interest in exploring a fundamental question: How can we measure how much students actually learn in college?

This may sound like a simple query, but the answer turns out to be surprisingly elusive. Yet it is increasingly important at a time when more attention than ever is being paid to the issue of accountability in American higher education.

Calls for Accountability

For ordinary American parents seeking a high-quality education for their children, sifting through the competing claims made by different institutions can be exhausting—and not always enlightening. The consumer guides that have sprung up in the past two decades or so often provide useful baseline data to students and families. But their judgments about academic quality are controversial. More importantly, they are constrained by the limited availability of high-quality information on student-learning outcomes. The federal databases that compile figures from colleges on graduation and retention rates also have plenty of shortcomings. They aren't very consumer-friendly, and they lack information about large categories of students—those who transfer from one institution to another, for instance—let alone about how much undergraduates are learning.

No wonder policymakers and consumers alike are hungry for information on which colleges and universities are most educationally effective. Every week or two, it seems, news articles and think tank reports describe new initiatives to bring more transparency and accountability to American higher education. U.S. Secretary of Education Margaret Spellings recently sponsored one such effort, the federal Commission on the Future of Higher Education. In a report released in September 2006, the bipartisan panel called for a range of reforms in the areas of access, affordability, quality, and accountability.

In short, nearly a quarter of a century after A Nation at Risk focused renewed attention on the shortcomings of elementary and secondary education in the United States, there is a strong case to be made that a similar moment of reckoning is at hand for higher education. And just as that earlier federal report laid the groundwork for the bipartisan push to improve our K–12 schools that swept through the states in the 1990s and culminated in the No Child Left Behind Act, many believe that the time has come for higher education to enter the age of accountability as well.

But how should that happen? University leaders point out, quite rightly, that our higher education system is successful by many measures—and certainly isn't nearly as troubled as our elementary and secondary schools. Colleges are wary of outside intervention—particularly from federal or state policymakers. Nevertheless, there's rising interest in generating accurate and useful information on what kind of learning is taking place on campus, and a growing number of forward-looking universities are themselves taking part in experiments with new approaches to measuring student learning.

In Search of the Right Measure

Several of those innovative approaches began as an alternative to existing college rankings, which have become a veritable global industry in recent years. There are now not only country-specific rankings in more than a dozen nations, but also cross-national comparisons conducted by Great Britain's Times Higher Education Supplement and China's Shanghai Jiao Tong University. In the United States, the most popular and commercially successful rankings are those published by U.S. News & World Report, which rely on a broad range of data, including surveys of academic reputation, graduation and retention rates, spending on research and faculty salaries, class size, student selectivity, and the alumni giving rate. While the rankings can be defended on a variety of grounds—most if not all of the criteria U.S. News examines are certainly of interest to prospective students—critics often argue that they are unduly based on “input” measures that don't necessarily tell consumers much about the kind of learning experience that takes place on campus. Writing in a recent issue of the Washington Monthly, Kevin Carey, an analyst with the new think tank Education Sector—a Kauffman Foundation grantee—summed up the criticisms leveled at U.S. News as well as other college guides: “What's missing from all the rankings is the equivalent of a bottom line.”

One closely watched attempt to create a new kind of bottom line is the National Survey of Student Engagement (NSSE), administered by Indiana University researchers, which asks college freshmen and seniors a series of questions related to the quality of their undergraduate experience. NSSE (pronounced “Nessie”) zeroes in on factors that are believed to be associated with student learning, from contact with professors outside class to internship opportunities to the number of books read. The survey has now been used by close to 1,000 colleges, each of which receives a report summarizing its own results, as well as how it stacks up against other institutions on the same measures of “student engagement.” But NSSE isn't perfect. Schools with high scores on certain NSSE questions may simply be those that enroll large numbers of highly motivated incoming students. Also, most NSSE schools won't release their results to the public, preferring to use them internally as a tool for self-improvement. Perhaps most crucial, while NSSE may be useful in many ways, it provides only an indirect gauge of student learning: By definition, its survey approach relies on students' subjective assessment of their college experience.

The Vision of the Collegiate Learning Assessment

What if it were possible to measure what undergraduates learn more directly? That's the goal of the Collegiate Learning Assessment (CLA), which was developed by the nonprofit Council for Aid to Education—originally an affiliate of the RAND Corporation—with support from a range of major foundations, including the Kauffman Foundation. The CLA aims to look beyond discipline-specific knowledge to measure the kind of critical thinking, problem-solving, and writing skills that most educators agree undergraduates, regardless of major, should be acquiring during their time on campus. The CLA asks students to answer a series of open-ended questions about imaginary but realistic scenarios (an airline crash, for example), analyzing documents and data to draw persuasive conclusions and make recommendations. It also gives students essay questions that require them to make—and critique—arguments on different topics.

Perhaps the greatest promise of the Collegiate Learning Assessment is its effort to assess “value-added”—the relative success of different institutions at improving the academic skills of their students, whether or not those undergraduates entered college as high achievers. By testing a sample—or all—of a college's freshmen and seniors, then comparing their scores (after entering qualifications are held constant), CLA analysts can determine, first, how much higher seniors score than freshmen, and then answer the all-important question of which colleges do the most to boost scores. The CLA also uses data on incoming students' SAT and ACT scores to measure whether undergraduate gains at a particular school are more or less than would be predicted given those baseline qualifications and relative to the gains made by similarly qualified students at other institutions.
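The value-added logic described above can be made concrete with a simplified numerical sketch. This is only an illustration of the general idea—predicting each school's senior scores from its students' incoming qualifications, then treating the gap between actual and predicted scores as “value-added”—and not the CLA's actual statistical model; the school names and score figures are hypothetical.

```python
# Illustrative sketch of a value-added calculation. All data here are
# hypothetical, and the real CLA methodology is more sophisticated than
# this simple ordinary-least-squares fit.

def fit_line(xs, ys):
    """Ordinary least-squares fit of ys on xs; returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical data: each school's mean incoming SAT score and mean
# senior assessment score.
schools = {
    "School A": (1050, 1130),
    "School B": (1200, 1235),
    "School C": (1350, 1360),
}

sats = [sat for sat, _ in schools.values()]
scores = [score for _, score in schools.values()]
slope, intercept = fit_line(sats, scores)

# Value-added = actual senior score minus the score predicted from
# incoming qualifications alone. A positive residual means students
# gained more than similarly qualified peers elsewhere.
for name, (sat, score) in schools.items():
    predicted = slope * sat + intercept
    print(f"{name}: value-added = {score - predicted:+.1f}")
```

In this toy example, School A's seniors outperform what their modest incoming SAT scores would predict, while School B's fall slightly short despite higher raw scores—the kind of distinction a purely “input”-based ranking would miss.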

The Kauffman Foundation's support for the CLA began with a grant to help launch a pilot project at a range of campuses across Missouri. Our assistance recently grew with a new grant intended to help the CLA expand from 121 participating universities nationwide to a target of 400 to 500 over the next three years.

Of course, like every other assessment system, the CLA has its shortcomings and critics. Some fear that it or similar tests might be imposed on unwilling institutions by policymakers, while other detractors complain that it is a one-size-fits-all instrument that can't possibly measure all the kinds of learning that take place on a college campus. Also, like NSSE, the CLA has so far been used largely for institutional self-examination, and the results have typically not been released to the public (one refreshing exception: the giant University of Texas system). There's clearly a long way to go before ordinary consumers can get access to widely accepted measures of actual student learning at different campuses. Nevertheless, at a moment when the world of higher education is under unprecedented scrutiny, the appeal of directly measuring student learning—in a way that, unlike grades, is comparable across campuses—seems undeniable, particularly when compared with the available alternatives. As Marc Chun, research scientist at the Council for Aid to Education, writes:

To assess students' abilities in ballet . . . one could count the number of toe shoes they've gone through, one could ask an external expert how strong the program is, and one could have the students complete a survey to capture what they think about their skills and how much they've grown: or, alternatively, one could have them actually dance.

As the movement for higher-education accountability continues to gather steam—with important long-term implications for entrepreneurship and the economy—no doubt a variety of other innovative collegiate assessment systems will come along to be tested, critiqued, and refined. What seems clear is that our desire to understand and improve the learning that takes place on American campuses is only likely to grow.

Ben Wildavsky was formerly editor of the U.S. News & World Report college guides and recently served as a consultant to the federal Commission on the Future of Higher Education.