Capital for Communities Scorecard User Guide

    Creating the Scorecard

    With the information from the beta testing period, we began a year-long process to revise and relaunch the Opportunity Zone Community Impact Assessment Tool as the Capital for Communities Scorecard. The objectives for this relaunch were (1) to improve the ability of the tool to support more racially equitable outcomes, (2) to make the tool more user friendly, and (3) to improve its utility across a broader range of projects and places, including those outside of Opportunity Zones.

    For the first objective, to improve the racial equity dimensions of the tool, we consulted eight racial equity experts from the Urban Institute and PolicyLink, who work across a wide range of policy domains, to review the tool, and we revised it based on their feedback. For example, in the jobs section, we revised a question about hiring practices to ask whether a project’s hiring guidelines include antidiscrimination protections beyond those required by federal and state law. In the housing section, we added questions about units dedicated to permanent supportive housing for people who are experiencing homelessness or are unstably housed and about whether the project sponsor commits to renting units to voucher holders if they qualify (i.e., not discriminating based on source of income). And in the community engagement section, we added a question about whether the project sponsor has taken affirmative steps to engage members of the community who (1) have historically experienced discrimination, (2) face the greatest barriers to accessing economic opportunities, (3) have been disproportionately exposed to environmental harms, or (4) have been left out of past community engagement activities.

    For the second objective, we carefully analyzed user feedback from the beta testing phase and worked with a team of technology, data, and design experts at the Urban Institute to improve the user experience and revamp the tool’s interface. In response to user feedback, we added a function that allows users to edit their responses, making it easier to demonstrate changes made to a proposed project in response to public feedback. We also streamlined the login process, provided clearer instructions and definitions, and added navigation features that allow users to track their progress in completing the tool.

    Finally, we reviewed every question in the tool, individual question point totals, and our scoring framework to determine whether adjustments were needed so the tool could be used across a broader range of projects and places. We revised the point totals assigned to each question to ensure questions weren’t over- or underweighted and accurately conveyed their importance within a specific social impact area. We also reviewed questions to ensure project sponsors weren’t being asked about factors they have no control over in the development process.

    Scoring

    The scoring system for the assessment tool has four components:

    1. Individual question scoring. Each question is assigned a score range based on its relative importance within an impact area. The only questions that do not receive individual scores are the narrative response questions, questions in the General Information section, and the priority ranking question at the end of the Community Goals and Priorities section.
    2. Bonus questions. Some impact areas contain bonus questions whose points are added after the individually weighted questions have been tallied.
    3. Impact area scoring. Scores for each impact area and for the community goals and priorities section are calculated by totaling the scores for responses to individual questions and dividing by the maximum possible score (excluding the bonus questions).
    4. Impact area weighting. Each impact area is then weighted by a user-generated ranking based on community priorities to calculate the overall project score.

    INDIVIDUAL QUESTION SCORING

    Within each impact area and in the community goals and priorities section, users are asked a set of questions tailored to the project type; not all project types are asked the same questions or the same number of questions. For each question, we assign a score range, with each possible answer assigned a number of points within that range. Questions of greater relative importance have larger score ranges. For instance, in the “accessible, high-quality jobs” category, the score range for the long-term jobs question is 4, compared with a range of 1 for the short-term jobs question. Score ranges were refined after the beta testing period to ensure comparability across project types.
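
    To make the mechanics concrete, here is a minimal Python sketch of individual question scoring under the rules above. The question IDs, answer options, and point values are hypothetical stand-ins (only the 4-point and 1-point score ranges come from the example in the text), not the tool’s actual content.

```python
# Hypothetical sketch: each question maps its possible answers to points
# within an assigned score range; more important questions get wider ranges.
QUESTIONS = {
    "long_term_jobs": {   # score range of 4 (higher relative importance)
        "none": 0, "1-10": 1, "11-25": 2, "26-50": 3, "more_than_50": 4,
    },
    "short_term_jobs": {  # score range of 1 (lower relative importance)
        "no": 0, "yes": 1,
    },
}

def score_question(question_id: str, answer: str) -> int:
    """Look up the points assigned to a single answer."""
    return QUESTIONS[question_id][answer]

print(score_question("long_term_jobs", "11-25"))  # 2
print(score_question("short_term_jobs", "yes"))   # 1
```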

    BONUS QUESTIONS

    Some questions in the tool ask about project features that may deliver substantial community benefit but are relatively uncommon or not always applicable. We treat these as “bonus” questions, meaning that their points are added after the individually weighted questions have been tallied using the process described previously. For most bonus questions, a simple affirmative response generates bonus points; for questions offering multiple choices or ranges, bonus points are generated for responses above a certain threshold of community benefit. In either case, the bonus points will increase a project’s score in that impact area (up to the maximum score possible without bonus questions), but a project that does not provide the benefit will not see its score decrease. Questions scored as bonuses are listed in table A.2.
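
    The capping behavior described above might be sketched as follows; the function and variable names are assumptions for illustration, not the tool’s implementation.

```python
# Hedged sketch: bonus points can raise an impact-area total only up to the
# maximum achievable without bonuses; skipping a bonus feature costs nothing.
def apply_bonus(base_points: int, bonus_points: int, max_without_bonus: int) -> int:
    """Add bonus points, capping the total at the non-bonus maximum."""
    return min(base_points + bonus_points, max_without_bonus)

print(apply_bonus(base_points=7, bonus_points=2, max_without_bonus=8))  # 8 (capped)
print(apply_bonus(base_points=5, bonus_points=0, max_without_bonus=8))  # 5 (no penalty)
```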

    Table A.2

    IMPACT AREA SCORING

    To calculate scores for each impact area, we total the individual questions' scores (including bonus questions) and divide by the maximum possible score (excluding bonus questions). This process standardizes scores to ensure the total score is not influenced by the number of questions posed for a project type in an impact area. Each impact area’s score (before the weighting described in the next section) is normalized to a scale of 1 to 10, shown in the graphic on the project’s scorecard, and summed for a total unweighted score.
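
    In code, this standardization step might look like the sketch below (function and variable names are illustrative). Because bonus points are capped at the non-bonus maximum, the ratio never exceeds 1 and the standardized score never exceeds 10.

```python
# Illustrative sketch of impact-area standardization: earned points (bonus
# included) divided by the maximum possible without bonuses, scaled to 10.
def impact_area_score(earned_with_bonus: float, max_without_bonus: float) -> float:
    """Standardize an impact-area total to the 10-point scale."""
    return min(earned_with_bonus / max_without_bonus * 10, 10.0)

# Eight non-bonus points possible; the project earns 6 plus 1 bonus point.
print(impact_area_score(earned_with_bonus=7, max_without_bonus=8))  # 8.75
```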

    The housing impact area is treated slightly differently. For projects that do not include a residential component, users are asked substantially fewer questions in this impact area. These projects default to a “do no harm” score, one that falls at the median of the possible standardized score range (i.e., 5 out of 10). A “do no harm” score is intended to indicate that the impact area’s questions do not apply to a specific project type, so the project’s overall score should not be affected by this impact area. In responding to the four questions asked of commercial and industrial projects in this section, users will receive a score lower than the “do no harm” score if their responses suggest the project may cause displacement or exacerbate affordability challenges. These projects, along with Operating Businesses, could also receive a score higher than the “do no harm” score if their responses to two bonus questions are affirmative. For projects without a residential component, we encourage users to use caution when extrapolating from the limited inputs in this impact area.
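
    A small sketch of the “do no harm” default, with hypothetical names and point adjustments (the actual downward and upward movements depend on the tool’s specific questions):

```python
DO_NO_HARM = 5.0  # median of the standardized 0-10 range

def housing_score(has_residential: bool, full_score: float = 0.0,
                  adjustment: float = 0.0) -> float:
    """Residential projects use their full standardized score; other projects
    start at the "do no harm" median, shifted down by displacement or
    affordability flags and up by affirmative bonus responses."""
    if has_residential:
        return full_score
    return max(0.0, min(10.0, DO_NO_HARM + adjustment))

print(housing_score(has_residential=False))                   # 5.0
print(housing_score(has_residential=False, adjustment=-1.5))  # 3.5 (displacement risk)
```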

    Further, because there are only two questions in the health, social services, and cultural amenities impact area, we score that section for all project types as follows: a project that provides a needed service and does not displace existing services receives a 10/10; a project that provides a needed service but displaces an existing one, or that neither provides nor displaces a service, receives a 5/10; and a project that does not provide a service and displaces an existing service receives a 0/10.
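
    This rubric maps directly to code; only the function name is an assumption:

```python
def services_score(provides_needed_service: bool, displaces_existing_service: bool) -> int:
    """Two-question rubric for health, social services, and cultural amenities."""
    if provides_needed_service and not displaces_existing_service:
        return 10
    if provides_needed_service == displaces_existing_service:
        return 5   # provides and displaces, or neither provides nor displaces
    return 0       # displaces an existing service without providing one

print(services_score(True, False))  # 10
print(services_score(True, True))   # 5
print(services_score(False, True))  # 0
```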

    We present the unweighted impact area scores on the project scorecard’s front page to show how a project fares across impact areas. A project sponsor can use the disaggregated impact area scores to prioritize areas for improvement, especially when the community has prioritized those areas.

    IMPACT AREA WEIGHTING

    In the community goals and priorities section, we ask users to rank the six impact areas against one another (1 being of most relative importance, 6 being of least). This ranking assigns each impact area a weight that increases the scores of the most important impact areas and diminishes those of the least important. The weights are applied in descending order of relative importance: 2x, 1.5x, 1x, 0.75x, 0.5x, and 0.25x (the community goals and priorities section always receives a 1x weight and is not included in the ranking). This weighting process, grounded in community priorities, allows the tool to adapt to different community environments; in feedback, it proved particularly important for ensuring the tool was responsive to local conditions and needs. After the weights are applied, the scores across all seven sections (the six impact areas plus community goals and priorities) are summed to produce the overall project score.
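
    The weighting step can be sketched as below. The rank-to-weight multipliers come from the text; the impact area names and scores are hypothetical placeholders.

```python
# Rank-to-weight mapping from the text: 1 (most important) -> 2x ... 6 -> 0.25x.
RANK_WEIGHTS = {1: 2.0, 2: 1.5, 3: 1.0, 4: 0.75, 5: 0.5, 6: 0.25}

def overall_score(area_scores: dict, area_ranks: dict, goals_score: float) -> float:
    """Weight each impact area by its community-assigned rank, then sum all
    seven section scores (community goals and priorities always at 1x)."""
    weighted = sum(score * RANK_WEIGHTS[area_ranks[name]]
                   for name, score in area_scores.items())
    return weighted + goals_score * 1.0

# Hypothetical standardized scores (0-10) and community rankings.
scores = {"area_a": 8.0, "area_b": 5.0, "area_c": 10.0,
          "area_d": 6.0, "area_e": 4.0, "area_f": 7.0}
ranks  = {"area_a": 1, "area_b": 2, "area_c": 3,
          "area_d": 4, "area_e": 5, "area_f": 6}
print(overall_score(scores, ranks, goals_score=9.0))  # 50.75
```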
