Beyond overhead: Better capturing nonprofit performance
Can nonprofits play Moneyball? The goal is laudable and long overdue, but the burgeoning movement faces significant obstacles.
For those unfamiliar with the Moneyball phenomenon: Beginning in the 1970s, a cadre of enterprising statisticians, frustrated by the shortcomings of the traditional approach to talent evaluation in professional baseball, aimed to develop a series of better tools to measure player performance. The 2002 Oakland A’s put these tools into practice with considerable success, and complex statistics such as Wins Above Replacement are now standard measures of player value.
In a recent article in The Atlantic, veteran government administrators John Bridgeland and Peter Orszag asked an intriguing and timely question: “Can Government Play Moneyball?” They lamented that “less than $1 out of every $100 of government spending is backed by even the most basic evidence that the money is being spent wisely.”
A similar theme is bubbling up in the nonprofit sector. Not long before Bridgeland and Orszag’s article was published, executives from three of the largest information sources about charitable organizations released an open letter to “the donors of America,” calling for a move beyond reliance on overhead costs as a proxy for effectiveness:
The percent of charity expenses that go to administrative and fundraising costs—commonly referred to as “overhead”—is a poor measure of a charity’s performance… We ask you to pay attention to other factors of nonprofit performance: transparency, governance, leadership, and results.
This is a welcome change in philosophy. For decades, the three organizations behind it—Guidestar, Charity Navigator, and the BBB Wise Giving Alliance—promoted the use of overhead costs to evaluate the performance of charities.
Granted, overhead ratios can be useful tools when examined at the extremes—clearly, an organization spending a mere sliver of its budget on its stated cause is a poor candidate for scarce donation dollars. But focusing too heavily on administrative expenses is worse than ineffective—it can be counterproductive. Many activities considered useful or even necessary fall under what could reasonably be characterized as administrative costs, including:
- Staff training
- Outreach to ensure an organization’s services are reaching target constituencies
- Evaluation to determine and improve the effectiveness of the organization in achieving its mission
- Strategic planning to acknowledge and adapt to changing circumstances
- Conducting financial audits and preparing reports required by institutional and government funders and the IRS
A 2004 report by the Urban Institute’s Nonprofit Overhead Cost Project characterized these costs as “infrastructure,” finding that “nonprofits that spend too little on infrastructure have more limited effectiveness than those that spend more reasonably.”
Furthermore, there is no consensus on what even constitutes administrative costs. A paper published last year by my colleagues Erwin DeLeon, Sarah Pettijohn, and Carol DeVita identified at least four different definitions of administrative expenses required by the federal government for Community Services Block Grant grantees.
In addition to proposing a more nuanced standard for administrative expense levels that varies by organizational size, the report calls for “government agencies that issue guidelines…to clarify the distinction between administrative and program expenses. Expert review of guidelines across issuing authorities is needed to reduce or eliminate ambiguous and conflicting guidelines and improve the quality of reporting.”
Better performance measures more closely aligned to whether an organization is achieving its stated outcomes would be ideal. But getting to measures that can be applied uniformly across the whole sector is a challenging endeavor, at best. When it comes to measurement, the charitable sector is different from baseball in two major ways.
First, there is no obvious and universal outcome for nonprofit organizations—no equivalent of wins and losses. Should a soup kitchen, a symphony orchestra, and a cancer research center be expected to have the same measures of success?
Second, data collection is a simple matter in Major League Baseball. The games are highly publicized and take place at no more than 15 locations at any given time, and all the necessary data have been religiously collected for decades. Most charities have none of those advantages—their tight budgets often leave little room for the types of data collection necessary to effectively manage their performance, and their outcomes are often difficult to measure and/or cannot be isolated from external influences.
It is fantastic that large nonprofit information organizations such as Guidestar and Charity Navigator have committed themselves to collecting the data necessary to generate more effective performance measures, but that process is Herculean. It will likely take many years to make significant progress.
In the meantime, there is no replacement for due diligence on the part of organizations and their donors. Prospective donors should take the time to research the details of organizations they are considering supporting, and ideally speak with organizational stakeholders before making a commitment. Meanwhile, organizations should do everything they can to identify, track, and publicly report progress toward concrete performance goals. Existing resources such as the PerformWell website can help in this regard.