Observers of higher education policy might be forgiven a sense of surprise at recent developments in the funding of state higher education systems. At the turn of the century, after indifferent results and occasional policy debacles, it was easy to find commentary from chastened proponents on the declining commitments to performance-based funding and budgeting systems for public higher education. It seemed high time to add the performance-funding approach to the moldering remains of zero-based budgeting, quality circles, six-sigma, and numerous other “good policy ideas gone bad.”
Yet in recent years, performance funding has risen from the near dead, returning forcefully to the policy and political agendas of many states. What factors have driven this interest in performance-based funding for higher education? In light of the already high—and still rising—importance of state performance-funding approaches to the financing of public higher education in the United States, understanding the factors responsible for the growth of these policy “reforms” seems vital as well.
History of Performance Funding
Before the 1980s, references to accountability in public higher education systems usually referred to the challenging role of statewide authorities in balancing needed public oversight of institutions with the valued traditions of campus autonomy. Should campuses have their own boards, or should boards have authority over several campuses? Which powers of oversight and control should reside at the campus level, and which should be vested in state-level boards of higher education (state coordinating and governing boards) and in other executive-branch agencies? For instance, to what extent should decisions about tuition rates and budgeting be left to campus-level leadership? Who should decide on program approvals and closures? These earlier questions remain important, but the focus of state officials has shifted in recent years to outcomes as opposed to decision-making authority and processes: how have institutions performed on key metrics?
This “new accountability” movement took shape as incentive systems were designed to link campus funding levels to desired institutional performance outcomes in such areas as student retention and graduation rates, undergraduate access, measures of institutional efficiency, student scores on licensure exams, job placement rates, faculty productivity, campus diversity, and, increasingly, student learning.
New accountability efforts have taken three distinct forms. Performance funding links state funding directly and formulaically to the performance of individual public campuses on various indicators. By the beginning of the twenty-first century, roughly half of the states had adopted a performance-funding program. The performance-budgeting approach is less directive, permitting state officials to consider campus performance indicators in determining allocations. Performance reporting simply mandates that institutions and systems provide performance information to policy makers and the public, without formally linking that information to eventual allocations. The appeal of these three approaches has risen dramatically since the 1970s, and virtually every state now has some kind of performance-driven policy in place. Performance-funding policies, however, remain the most controversial and substantively forceful manifestation of the performance-accountability movement.
The first formal performance-funding program arose in Tennessee in 1979–80. Several years after the Tennessee system was established, Connecticut followed suit, adopting a performance-funding system in 1985. After another lull, Missouri (1991) and Kentucky (1992) adopted similar systems. By 2001, twenty-one additional states had adopted performance-funding systems. Moves to adopt such systems have sometimes been followed by retreats, however, and the current number of states with active systems is appreciably lower than the number that adopted such systems at some earlier point. As of February 2013, the National Conference of State Legislatures counted twelve states with active systems, four in the process of implementing new systems, and nineteen discussing implementation of a new system.
The story of the initiation of Tennessee’s pioneering program and its current reformulation is illustrative of the factors driving the initial and now resurging interest in the performance-funding approach. The state’s goal in establishing the first performance-funding system was to address widespread dissatisfaction with enrollment-based funding formulas and a growing public concern over performance assessment. With support from the federal Fund for the Improvement of Postsecondary Education, the Ford Foundation, and the Kellogg Foundation, the policy was implemented at several pilot campus sites, with close involvement of the Tennessee Higher Education Commission. The pilot’s success propelled legislative action. At the time, campus leaders hoped that by demonstrating the higher education community’s commitment to active performance assessment they could forestall the imposition of a more restrictive state accountability system.
From its earliest years, the Tennessee program had several features that made it attractive to other states, as Joseph C. Burke and associates noted in their 2002 study Funding Public Colleges and Universities for Performance: (1) it featured twin goals of external accountability and institutional improvement, (2) it focused on a set of performance indicators that were varied in scope but limited in number, (3) it specified a phased implementation and periodic reviews afterward, (4) it stressed institutional improvement over time, (5) it provided limited but still significant supplementary funding for institutions, and (6) it maintained reasonable stability in its priorities and program requirements. Not surprisingly, the innovation spread.
That spread was primarily regional at first. In 1997, states adopting the approach were clustered mostly in the South and Midwest. By 2000, however, the adopting states had become more evenly spread across the country. Adoption of performance-budgeting schemes followed similar patterns.
Intriguingly, much volatility emerged in the states’ performance-accountability schemes over time: there are numerous instances of states adding and dropping accountability emphases and features. Some of this volatility has no doubt stemmed from the difficulties of translating the theoretical and policy attractiveness of the programs into effective, efficient implementations: in reality, these programs are extremely difficult to design and maintain, both fiscally and politically.
Burke concluded in his essay in Funding Public Colleges and Universities for Performance that the extent of political influence in the design and development of performance-funding approaches played a significant role in the ultimate stability of the programs. Specifically, the least stable programs have been those in which legislators, governors, businesspeople, and community leaders have been most influential, while the most stable ones have exhibited the greatest involvement of state higher education officials. Political, corporate, and community leadership can play an important role in both the adoption and the long-term success of performance regimes, but effective leadership in this arena may be as much about informed deference as about command.
South Carolina is most often cited as an example of a state that has pursued an overreaching and ultimately unsuccessful performance initiative. That state initially attempted to base 100 percent of its higher education appropriations on performance and to use a rather uniform allocation approach that only weakly distinguished among institutions’ missions. Perhaps unsurprisingly, implementation proved extraordinarily controversial and costly in political and economic terms. Such problems, plus sharp drops in tax funds available for higher education and the absence of evidence that performance systems enhance institutional performance in a cost-effective way, prompted retreat from such approaches in South Carolina and many other states.
Interestingly, however, we are witnessing the early signs of a resurgence in state performance approaches, perhaps rooted in wisdom and experience gained from the earlier problems in this arena yet influenced unmistakably by the changed political context for higher education in many states. The Lumina Foundation funded quality-improvement efforts in eleven states, each featuring substantial commitment to what is being termed “Performance Funding 2.0,” a systematic effort to tie state funding explicitly and significantly to quality improvements on various dimensions of campus performance. In parallel, a number of states have made their own commitments to move along similar lines without foundation support.
The newer movement has several distinctive features. First, the funding of degree production for the emerging economy has been much more strongly emphasized than in earlier efforts. Second, the development of workforces specifically prepared for the states’ perceived future needs has become a greater focus. Third, there is increasing recognition that missions, measures, and incentives must be more tightly and efficiently linked. Fourth, these newer efforts have begun incorporating into performance-appraisal systems certain “throughput” indicators of success, as well as output or outcome measures. Such throughput indicators have included, for example, rates of student completion of “gateway” courses (like those in biology, chemistry, mathematics, or psychology), where poor academic performance by students often creates bottlenecks impairing student transition to upper-level curricula and contributes to student dropout.
Finally, and most importantly, the financial and political stakes have become appreciably higher. Again, Tennessee provides an example. In its first three decades, Tennessee’s policy provided that core state funding would be supplemented with additional funds based on a campus’s scores on its individually prescribed performance indicators. Over time, the percentage of an institution’s state appropriations based on performance funding grew but remained limited. In 2010, though, the state dropped its enrollment-based core funding approach in favor of an output-based approach, thus providing an incentive for campuses to build staffing and services for improving graduation rates, including fast-track majors, increased advising, expanded tutoring and remediation efforts, and expanded course offerings.
Other states have begun taking similar approaches. In 2008, Ohio adopted an approach that over time will lead to all state appropriations being based on higher education outputs, principally course and degree completions. Colorado and Arkansas have implemented performance-funding programs that eventually will allocate up to 25 percent of state funding for higher education on the basis of formulas that reward institutional success in degree production. In Texas, lawmakers continue to fine-tune an initiative passed in the 2011 legislative session that redirected up to 10 percent of the state’s enrollment-driven funding for allocation to colleges and universities based on certain performance metrics, such as the six-year graduation rates of an institution’s undergraduate students, the total number of bachelor’s degrees awarded, the number of degrees awarded in certain “critical fields,” and the number of degrees awarded to “at-risk” students. Legislation debated in Texas in 2013 called for the share of performance-based funding to increase further, to 25 percent of total state funding for higher education.
What Is Driving the Movement?
What conditions of the states have influenced adoption of this distinctively new kind of accountability mandate for higher education? Secular trends alone, such as the weakened fiscal capacity of the states or the heightened calls for accountability seen throughout American government and society, seem an inadequate basis for explaining the trend. Not all states have initiated performance-funding systems, which inevitably raises the question of which specific factors drove certain states (and not others) to adopt these programs when they did. Consequently, and as part of a broader research collaboration around the factors driving policy change for higher education across the fifty states, we have conducted a series of quantitative, longitudinal analyses of both the intrastate and the interstate determinants of state adoption of performance-funding initiatives.
Our investigations have built conceptually on several distinct strands of social science theory and research into the determinants of policy adoption and change in the American states, including literature in the fields of comparative state politics and policy, policy innovation and diffusion, and higher education studies. Based on a unique data set containing hundreds of indicators of the conditions of states and of their higher education systems over the past thirty years, our event history analyses of state adoption of performance funding have examined numerous economic, socio-demographic, political, organizational, and policy influences.1 Because economic development patterns of the states have long been associated with certain policy outcomes, our empirical efforts have sought to account for a wide array of these influences, including indicators of state economic activity, unemployment, wealth, and perturbations in the fiscal climates of states. Our longitudinal analyses have also taken into account possible demographic influences—in particular, population changes, the racial and ethnic composition of states, and levels and changes in the demand for postsecondary education in the states.
A third category of possible influences on these new accountability mandates for higher education, political determinants, had rarely been systematically tested in the field of higher education studies when we began our line of investigations almost ten years ago. Drawing heavily on research in the area of comparative state politics, we have analyzed the roles of political institutions and actors in the rise of performance-funding systems for higher education. For instance, our event history analyses have incorporated numerous indicators of state political leadership over time, including partisan control of legislatures, gubernatorial strength, election timing, term limitations, and certain design features of legislatures (such as length of legislative terms, member pay, and staffing), among a host of other important aspects of states’ political climates. Finally, we have examined the influence of a variety of organizational and policy characteristics of states, such as the type of postsecondary governance regime that may exist in a state and whether a state already has adopted other kinds of policies that may correlate highly with the probability that the state will adopt a performance-based system for funding postsecondary education.
The conceptual framework above has guided our most recent exploratory analyses, conducted with higher education researcher Austin Lacy, of the factors associated with state adoption of performance-funding policies. Perhaps unsurprisingly, our work suggests that partisan politics has played a prominent role in performance-funding adoptions, even with statistical controls for a variety of other factors: all else being equal, states with greater Republican representation in the legislature have been significantly more likely to adopt performance funding. It appears that stricter accountability policies and the use of market-like incentives have greater appeal on the political right, making the adoption of a performance-funding policy more likely in “red” states.
We have found evidence of an association of performance-funding policy adoption with more intensive electoral competition—that is, having survived tight races for election, leaders may be especially likely to seek issues that appeal to broad swaths of the electorate, as opposed to issues stirring deep partisan divides. To the extent that performance funding equates in voters’ minds with educational quality assurance, it is hard to view it as a potentially divisive issue.
Structural factors also appear to play a noteworthy role. Our analyses suggest that states with more centralized governing boards, often called consolidated governing boards, have been less likely to adopt performance-funding policies. Our interpretation of these findings lies in the special nature of these governance arrangements. Arising out of decades-old reform efforts to shield colleges and universities from shifting political winds and arbitrary leadership in the states, such boards are generally well staffed with skilled analysts and tend to be led by veteran, politically seasoned figures. Those strengths can buffer associated institutions from seemingly intrusive, autonomy-threatening policies. With both political connections and analytic capacity, centralized boards can often deflect major reforms originating elsewhere. Conversely, less centralized state governance arrangements (“coordinating board” structures) may more frequently become secondary influences as external stakeholders exert more direct, unmediated control over institutional affairs and funding.
In addition, our analyses found that rapid tuition increases at a state’s flagship institution appear to have encouraged policy makers to respond by initiating performance-funding regimes. It may be that such tuition increases raise doubts among political leaders about the efficiency and market sensitivity of institutional leaders.
Intriguingly, we found no effects of demographic, social, or economic conditions in the states, of states’ ongoing ideological tendencies on social issues, of the design of legislatures, or of governors’ constitutional powers. And, in contrast to numerous qualitative studies of state policy adoption, we have found no quantitative evidence of state-to-state diffusion. That is, we have found no indication that the actions of contiguous or closely related states influenced the actions of a given state in the performance-funding policy arena.
Our findings are certainly not definitive. For one thing, we have focused primarily on the adoption of programs, and thus we have ignored the ups and downs of performance-funding programs over the years. In an important recent quantitative analysis, Alexander Gorbunov, whose 2013 dissertation at Vanderbilt University focused on performance funding, has contributed to an understanding of precisely that pattern. In addition to finding support for our findings on the active role of Republican legislatures in adoptions and on the limited role of diffusion processes in adoption, Gorbunov’s work has uncovered evidence suggesting that the existence of nearby successful performance-funding systems seems to reduce the rate of policy abandonment and improve the rate of policy readoption. Work like this, incorporating the full life span of policies into sophisticated quantitative modeling, promises to greatly expand our knowledge about policy adoptions.
In perhaps the most prolific line of work on the life cycles of states’ engagement in performance funding, sociologist Kevin Dougherty and colleagues have found that policy abandonment stems from a combination of fiscal, political, and implementation issues, principally whether legislators and governors impose such systems on higher education from “without” or work with higher education leadership from “within,” thereby gaining the long-term support of stakeholders.
The Future of State Performance Funding
Performance-driven financing of postsecondary education is an innovation that will not disappear soon. It can clarify state and institutional priorities, raise the visibility of campus performance, increase transparency, and possibly even improve productivity, at least under some definitions.
Increasingly, though, some questions are arising, along with some discontent. Do allocations under such policies adequately reflect the major differences in institutional missions and the kinds of students served, or are they exacerbating inequalities in institutional funding? Are available data sufficient for the task of making funding distinctions? Such approaches may tend to highlight certain performance indicators at the expense of others because of their ease of measurement rather than their importance to the public and their value in serving the public good. Also, these approaches may be seen on campuses as undermining campus autonomy and the professional judgments of on-campus leaders, faculty, and staff. Indeed, performance-funding mandates present an interesting paradox: the programs, coming on the heels of an era during which many states sought to empower campuses by decentralizing certain functions formerly overseen by state-level authorities, serve as a mechanism whereby state officials have recently strengthened their hand over the direction of campuses. There is a risk, too, that quality may actually decline under such regimes if indicators value output volume more than output quality—take the simple example of eased graduation standards producing larger graduating classes that, in turn, lead to decreased per-student educational expenditures.
Nonetheless, as recent debates in Texas, Florida, and other states show, political (and perhaps public) support for performance funding appears to be advancing rather than retreating. For campus administrators, faculty, and staff, the key question may be the extent to which they can guide, or at least substantively influence, the emergence of their state’s performance-funding frameworks. It’s one more front in the widening battle to ensure that academic priorities and values, and not overtly political priorities and values alone, continue to play a central role in higher education policy design and implementation.
Michael K. McLendon is the Simmons Centennial Professor of Education Policy and Leadership and associate dean of the Annette Caldwell Simmons School of Education and Human Development at Southern Methodist University. He served as a staff member in both the Florida House of Representatives and the US Senate. James C. Hearn is a professor and interim director in the Institute of Higher Education at the University of Georgia.
Note
1. Event history analysis is a regression-like technique that today is widely used in the policy sciences for studying dynamic change processes. Because the technique focuses on the duration of time that units (in our research, the American states) spend in a given state of being before experiencing a particular event (in our work, state adoption of a given policy), event history analysis has enabled us to study how variation in the values of independent variables influences state policy behavior. The distinct advantage of this technique is that its coefficient estimates can be used to calculate predicted probabilities that a state with certain attributes will adopt a policy in a given year.
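For readers unfamiliar with the method, a minimal sketch of one common specification may help. The discrete-time logit form below is a standard textbook formulation rather than the exact model used in our analyses, and the symbols are illustrative. The hazard, or probability, that state \(i\) adopts a performance-funding policy in year \(t\), given that it has not yet done so, can be written as

\[
h_{it} \;=\; \Pr\bigl(y_{it}=1 \mid y_{i,t-1}=0\bigr)
\;=\; \frac{1}{1+\exp\!\bigl[-\bigl(\alpha_t + \mathbf{x}_{it}'\boldsymbol{\beta}\bigr)\bigr]},
\]

where \(\mathbf{x}_{it}\) collects the economic, demographic, political, organizational, and policy indicators for state \(i\) in year \(t\), \(\boldsymbol{\beta}\) contains the estimated coefficients, and \(\alpha_t\) captures duration or year effects. Once \(\boldsymbol{\beta}\) has been estimated, substituting a state’s attribute values into the right-hand side yields the predicted probability of adoption in a given year, which is the calculation referred to above.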