Advancing States' Child Indicators Initiatives:

Growing an Outcomes-Based Culture with Communities

David Murphey, Senior Policy Analyst
Vermont Agency of Human Services

May 30-June 1, 2001

This paper by David Murphey is based on a presentation to a meeting of the HHS state grantees, Advancing States’ Indicator Initiatives, May 2001.

This paper addresses the tasks involved in creating capacity at the community level to use indicators of child and family well-being. The "lead partner" in this work may be a part of state government, a statewide non-profit organization (such as Kids Count), or some other entity. On the community side, I will use the term "community coalition" to refer to the group that will take charge of collecting and reporting on the indicators. On the state side, I will refer to the "state partner."

1. Get local, broad-based buy-in on an outcomes-and-indicators framework (conceptual level) (with flexibility).

The first step is to ensure that there is a common understanding of what an outcomes-and-indicators approach is. This includes agreement at a conceptual level about what outcomes and indicators are--i.e., statements about well-being as ends rather than means, and specific measures of those ends. It may also be helpful to agree on a common terminology, since language is often a stumbling block in this work. For example, some communities use "results" rather than "outcomes," or "benchmarks" rather than "indicators." However, the most important distinction to convey in this work is the one between descriptions or measures of process or activity (means), and descriptions or measures of aspects of well-being (ends).

There can be value in a community's adopting an outcomes-and-indicators framework, even if the outcomes and indicators it identifies differ from those of the state partner. Indeed, it is essential that the community "own" the outcomes and indicators--that is, see them as expressing the community's values and vision for itself. A community might adopt some or all of a "core" set of outcomes and indicators proposed by the state partner, but supplement that set with some locally chosen outcomes and/or indicators. What matters is that a community articulate outcomes, and agree on specific ways it will measure progress toward those outcomes.

Several key lessons have emerged from this sort of work as it has unfolded in hundreds of communities. One has to do with the related ideas of simplicity and communication. A common temptation is to build an elaborate, all-encompassing outcomes-and-indicators structure--one that grants a nod to virtually every "good idea," fills many pages, and cannot possibly be a useful tool for practical work. It's clearly preferable to exercise the discipline necessary to craft a few well-chosen outcomes and associated indicators. Not only will this make the coalition's work more effective, but it will greatly increase the likelihood that the wider community will pay attention.

A related idea is to keep language simple and accessible. Both outcomes and indicators should pass what Mark Friedman calls "the public square test": their meaning should be transparent to the average citizen, avoiding technical terms as much as possible. "High school dropout rate" communicates readily; "percent of students meeting criterion on standards-based assessments," much less so.

Another important lesson from this work is to acknowledge the expertise that resides in communities. The relationship between the state partner and the community coalition must be one that recognizes what each has to offer. Specifically, the community is in a privileged position with respect to "the story behind the data"--the risk (and protective) factors that may be of particular significance, and any anomalies of local practice that may affect data quality or interpretation (for example, local differences in case-finding efforts, referral practices, and so on).

2. Encourage outcomes-based collaborations ("set the table"); avoid "turf" issues.

Of central importance is fostering a "stakeholder" group that is as broadly representative of the community as possible. It should include not only service providers and other "professionals," but community members of all ages and walks of life--youth, elders, service system "clients," and members of various "sub-communities" of faith, business, advocacy groups, and so on. It is helpful to include in the group people affiliated with higher education, who can tap academic expertise around data and statistics, community planning, action-research, and evaluation. Health care organizations are another source of expertise; moreover, they often have a vested interest in promoting community wellness, as well as in identifying unmet needs.

This is the coalition that will serve as "keepers of the flame" for the outcomes and indicators, holding the community generally--and perhaps particular partners (who may include state partners) specifically--responsible for progress. This is also the group that will revisit the outcomes and indicators on some regular basis, identifying issues of data quality and availability, monitoring indicator trends and developing theories of change, and using the indicators as catalysts for forming new partnerships to implement the change-strategies it identifies as promising. Because this is not a short-cycle but an explicitly long-term agenda (see point 10), it is important that the coalition be assembled thoughtfully and with attention to its sustainability.

It may be that rather than a single community coalition overseeing the outcomes and indicators, there are multiple groups, each focused on a single outcome or a few outcomes. The essential point is that however such groups are constituted, they should avoid creating the impression that any single program or institution (e.g., schools, hospitals, social service agencies) "owns" an outcome, or even a single indicator. Instead, there should be explicit recognition that progress on any of the outcomes will require contributions from many sectors of the community, including informal citizen efforts.

A useful metaphor (thanks again to Mark Friedman) is that of "setting a table" for a particular outcome or indicator. For example, a group might convene a "teen pregnancy table," to which it would invite all members of the community who felt they had something to contribute to making progress on this indicator. Members of the "table" would meet for as long as the issue held sufficient priority, but no longer than that; subsequently, or simultaneously, they might join other "tables" formed around other priority issues.

It is likely that such community coalitions will need assistance at one or more points in their development. There is a body of learning around coalition-building, group facilitation, organizational growth and development, outcomes-based strategic planning, negotiation, and so on, that is invaluable and which, if shared, will help the coalitions become functional much more quickly than they would otherwise. State partners may be able to provide or "broker" this training and assistance.

3. "Hold up the mirror" of community indicators.

In some cases, it will be the role of the state partner to provide community-level indicator data. In other cases, the community coalition will have this task, or at least will supplement what the state partner supplies with additional, locally-derived indicators. In either case, the initial purpose is to provide a portrait of community well-being and to provoke reactions to it. Like holding up a mirror, the data should objectively reflect community conditions and raise the question, "Do we like what we see?"

It may be that an early task in this work is to convince community stakeholders that the kind of measurement approach implied by indicators is worthwhile. In fact, many people are suspicious, if not fearful, of data and all that the term connotes for them. Others may insist they have no need for the data because they "already know what the problems are."

There are several responses that can be made to build a rationale for using a data-driven approach. One is that the information gathered may certainly confirm existing judgments (in which case it can reinforce existing efforts), but it may also suggest that some existing judgments should be revised. What are seen as less-serious problems may be revealed as more serious, and vice-versa; wholly neglected issues may be brought to attention for the first time. Second, data typically add credibility to a coalition's efforts. Potential partners tend to be favorably impressed when arguments are accompanied by facts. Third, having data will help the coalition prioritize its efforts. Presumably, the coalition is not in a position to address every possible indicator; having data can assist a rational prioritization process (see point 4).

Several generalizations can be offered regarding effective presentation of indicator data. Whenever possible, multiple years of data should be shown. Not only is this important contextual information (are we getting better, worse, or staying the same?), but it mitigates the drawbacks of the small numbers typical of community-level data, by suggesting the range within which annual data occur. Data for a single time-point provide only a "snapshot" of conditions, telling little about whether the rate shown is "good" or "bad." This is one reason why a trend line almost always makes for a more interesting chart--it contains more information. Of course, when a data source is new, there cannot be multiple time-points; instead, there will be a "baseline" year, from which future progress can be measured.
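
To make this concrete, here is a minimal sketch (in Python, using matplotlib) of how a single indicator might be charted as a trend rather than a snapshot. The years and rates are hypothetical, and the dashed line simply marks the baseline year against which later progress can be read.

```python
# A minimal sketch of a multi-year trend chart for a single indicator.
# The years and rates below are hypothetical, purely for illustration.
import matplotlib.pyplot as plt

years = [1995, 1996, 1997, 1998, 1999, 2000]
rate_per_1000 = [14.2, 13.8, 15.1, 13.5, 12.9, 12.4]  # e.g., a teen pregnancy rate

fig, ax = plt.subplots(figsize=(6, 3.5))
ax.plot(years, rate_per_1000, marker="o")
ax.axhline(rate_per_1000[0], linestyle="--", linewidth=1,
           label=f"Baseline ({years[0]})")
ax.set_xlabel("Year")
ax.set_ylabel("Rate per 1,000")
ax.set_title("Hypothetical community indicator, shown as a trend")
ax.legend()
plt.tight_layout()
plt.show()
```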

All analysis is about making comparisons of one kind or another. The promotion of thoughtful (rather than simplistic) comparisons should be one function of indicator presentation. In general, the most valid comparison is the one that shows the community's progress against its own record. However, it may also be useful to show the comparable indicator data for a county or state, or even for the nation. Less advisable, in most cases, is explicitly ranking a community against others. Although intuitively a powerful device, ranking is more likely to promote either complacency or shame than it is to motivate community-building efforts. The differences between one community and another are usually numerous enough to make this sort of comparison inherently unfair. There may be some cases, however, when a community identifies one or more other communities as "peers," from whose experience it believes it has something to learn. In such cases, the state partner can assist in developing criteria for "matching" a community with one or more others.

In any case, any treatment of comparisons should be accompanied by some discussion of the statistical significance of observed differences. Essentially, the point to make is that not all differences in rates between one point in time and another, or between one community and another, or between a community and the state, can be considered reliable markers of "true" disparity; some differences are in fact due to "chance" factors that should not inform community prioritization or planning efforts. An important role for the state partner can be to convey this concept, in simple terms, while acknowledging that statistical significance is not the same as practical significance.
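
As a rough illustration of what such a discussion might involve, the sketch below applies a standard two-proportion z-test to hypothetical counts. In practice, the very small numbers typical of community-level data may call for exact methods or simply for caution, and statistical significance still would not settle practical significance.

```python
# A rough check of whether the difference between two rates is likely due to
# chance. All counts here are hypothetical; a two-proportion z-test is one
# common approach, though small community numbers may warrant exact methods.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(events_a, n_a, events_b, n_b):
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical: 18 low-birthweight births of 240 locally vs. 900 of 15,000 statewide.
# The local rate looks higher, but the gap could easily reflect chance.
p_local, p_state, z, p = two_proportion_z(18, 240, 900, 15000)
print(f"local {p_local:.1%} vs. state {p_state:.1%}, z = {z:.2f}, p = {p:.3f}")
```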

Another consideration for reporting community data is to include positive as well as negative indicators. The movement to identify community assets, as well as developmental assets of individuals, has provided a vigorous counterpoint to the emphasis indicators have traditionally placed on problems or needs. Although communities are certainly used to playing the "needs" game if necessary (to compete for grants, for example), an indicators-framework that includes "assets" often broadens the buy-in from potential stakeholders who may be turned off by an exclusive focus on deficits and risks. This is particularly true when it comes to indicators about youth, who have seen themselves portrayed in overwhelmingly negative terms.

Lastly, whether community indicator data are supplied by the state partner or generated by the community coalition, they should be presented in a format that is clear and communicative. Chart-making software has unfortunately made it easy to create graphics that are high on "glitz" and low on communication power. Effective formatting usually avoids extraneous "clutter," communicating essential information in ways that don't require a non-technically-trained reader to labor to interpret what he or she is seeing.

One of the potential pitfalls of community indicators projects is that they expend unnecessary energy ferreting out and assembling data. Many times, the data already exist, perhaps even in useful formats, but community members simply don't know how to locate them. The state partner can help the community coalition put its hands on already-published data, which typically are accessed through multiple offices in multiple departments throughout government, and occasionally through non-governmental agencies.

One data source that promises to be of key importance to community indicator projects is the American Community Survey (ACS) of the U.S. Census Bureau. The ACS is now in its early stages; by 2010 it will take the place of the decennial census "long form." The kind of information now collected via the long form (e.g., on income, education, and family structure) will be available at a community level on an annual basis (though some multi-year averaging for smaller geographic areas may be used). This will be a huge improvement over having to wait 10+ years for updated information on critical indicators such as child poverty.

Because communities may have interests in some indicators that cannot be met through existing data, they may need to collect new information. Here, the state partner can help the community coalition identify whether surveys, interviews, focus groups, direct observations, or some other method is best to collect data for a particular indicator. Surveys, in particular, are often superficially attractive candidates for community coalitions, and state partners can help by conveying the many real obstacles to effective survey-use (ranging from reading levels and other sources of bias in construction, to sampling issues, to analysis concerns).

In Vermont, communities were ahead of the state in wanting measures of positive youth development. Individually, a few of them contracted with the Search Institute of Minneapolis to use its assets survey. Later, the state human services agency and the department of education were able to obtain funding to offer the survey to communities statewide.

4. Promote a rational local review of the indicators, leading to prioritization (requires a comfort-level with data).

The purpose of identifying and reporting on indicators is to inform decision-making and, more importantly, action on behalf of community well-being. Yet it is often exactly this step, moving from data to action, that proves most difficult for communities to take. The unfortunate result is that an indicator report sits on a shelf, rather than infusing itself into day-to-day practice.

In some cases, coalition members may need a "refresher" course in understanding some basic data concepts, such as percents and other rates; interpreting charts; and the issues attending "small numbers." People may not realize that in most cases a sophisticated understanding of statistics is not required to do this work. Nevertheless, for a number of reasons, a review of the data is not everyone's favorite task.

One way to address this obstacle is to encourage users of the data to see them, not as an end, but as a beginning. Most indicators can only be suggestive of community conditions; further follow-up is needed to either corroborate or challenge what they are saying. Often this means collecting qualitative data--through interviews, focus-groups, and the like. Every community has "key informants," who may be people in key public roles (social worker, school principal, agency director), but also may be long-time neighborhood residents who have seen a good cross-section of community life. Follow-up may also include consulting other sources of quantitative data (perhaps local or regional offices of state departments) to see whether their own data confirm what the coalition already has. It's likely that each data source will add a perspective that is at least somewhat different.

In Vermont, one region's early childhood services coordinator approached local schools with information showing disparities in the percent of children entering kindergarten with all recommended immunizations. She was able to make the case that this was a real "school readiness" issue that schools should be concerned about.

Particularly when the indicators concern identifiable sub-populations (e.g., youth, the elderly, welfare recipients), it's important to get their perspectives on what "the data" show. Numbers don't "speak"; only people can tell the story behind the data.

Just as they were useful in setting a context for the initial reporting of community data, comparisons (if well-chosen) are a useful tool for making decisions about which indicators should command the priority of the community. However, the same cautions cited earlier apply here. Comparisons with county or state or national norms can be helpful, but should not be the sole criterion. Differences that are statistically significant should not be confused with differences that have "practical" significance. Indicators that have data for only one time-point should be treated cautiously, as should those based on non-representative samples.

Indicators alone will not provide the rationale for determining community priorities. However, they should inform the discussion. Thoughtful comparisons should play a role. So should some consideration of the relative numbers of people directly affected by an indicator--which may be different from the indicator(s) on which the community is "high." For example, a community may have "low" rates of smoking compared to state norms, yet the data may suggest that hundreds of residents are smokers. In contrast, a community's "high" rate of low birthweight may be based on only a few such infants per year. Of course, there is no "right" or "wrong" priority here, and certainly no "formula" for making these judgments; rather, this is a matter for deliberation by the coalition.
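
The arithmetic behind this rates-versus-counts distinction is simple. The following sketch, with entirely hypothetical figures, shows how a "low" rate can still translate into many affected residents while a "high" rate may rest on a handful of cases.

```python
# Illustrating why a "low" rate can still affect many people, and a "high"
# rate only a few. All figures are hypothetical.
adult_population = 8000
smoking_rate = 0.18            # "low" relative to a hypothetical state norm of 0.22
smokers = round(adult_population * smoking_rate)

annual_births = 90
low_birthweight_rate = 0.09    # "high" relative to a hypothetical state norm of 0.06
lbw_infants = round(annual_births * low_birthweight_rate)

print(f"Residents who smoke: {smokers}")           # roughly 1,440 people
print(f"Low-birthweight infants: {lbw_infants}")   # roughly 8 infants per year
```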

Another factor that may influence priorities is the existence of local, state, or national goals for particular indicators. For example, Healthy People 2010 is a project of the federal Department of Health and Human Services that has identified specific national targets for a number of public health indicators. Similar efforts exist in many states, not only in the public health arena but around diverse social indicators. Presumably, along with setting targets, these jurisdictions have also designed campaigns to mobilize new action to achieve the goals.

Sometimes, a review of available indicators will suggest priorities by showing similar trends among several related indicators. These might be related, for example, to economic well-being, or to young children, or to teens. Identifying a related "cluster" of indicators could be a powerful argument for selecting a priority area.

Other "tests" to apply to an indicator review stem from a practical perspective. One has to do with the state of knowledge about what is effective to influence positive change on an indicator. In fact, we know more about how to "turn the curve" on some indicators than on others. To take one example, there are relatively few well-established strategies for reducing rates of low birthweight, in spite of its being a strong predictor of many other important childhood outcomes. On the other hand, we know a good deal about how to prevent youth substance abuse.

Finally, a community needs to consider whether it has sufficient resources to address a particular indicator. These include financial and human resources, of course, but they also include "political will": a degree of consensus as to the urgency of the indicator and the appropriateness of a response, given the values of the community, the relationships among the major players, and so on.

Once priorities have been selected, an important function for the community coalition is to develop and sustain "the message." This has to do with the clear and compelling presentation of information to the community--information about the indicators, and about what is being done (or proposed) to address them. In particular, presenting the data in ways that capture attention (without distortion) is important. Here, the assistance of someone skilled in graphic design, writing, and layout can be a valuable resource.

5. Foster strategies to measure program outcomes as well as community outcomes.

If the community coalition is doing its work successfully, the outcomes and indicators will be widely recognized in the community and, more importantly, community members will feel a sense of "ownership" around them. In these circumstances, it can happen that an agency or program feels it has been identified (or it identifies itself) as singly "responsible" for progress on one or more indicators.

This is an opportunity for a critical clarification: as referred to here, indicators refer to community (i.e., population-based) outcomes. Programs can have impact only on the people they serve; thus, in nearly all cases, single programs can make only a partial contribution to community outcomes. A few "programs" come close to serving a universal population (the public schools, some aspects of the public health system, for example), and thus can be considered major contributors to one or more indicators. But even in these cases, it is likely that real and sustained progress will depend on the contribution of efforts from multiple sectors of the community (parents, businesses, civic associations) that are not limited to programs or agencies.

This is not to say that programs shouldn't be accountable for their own contributions, via the people they serve, to the broader community outcomes. All programs are capable of designing and implementing performance measures that assess not only the quantity and quality of services they deliver, but the difference those services make in the well-being of their participants. A logic-model approach, such as that promoted by The United Way of America, has proven to be a helpful tool in this regard. Mark Friedman has also developed an easy-to-use framework for developing these sorts of performance measures.

In fact, it is helpful for a community coalition to develop, for specific indicators, its "theory of change," which should include diverse elements. A theory of change is a "map"--always provisional, since it is subject to the community's new knowledge and experience--that shows the coalition's current best thinking about what it believes will be required to "turn the curve" on an indicator. As such, it may refer to risk and protective factors, both those generally experienced and those that may be specific to this community. The theory of change also takes into account what is known, through best-practice wisdom as well as academic research, about what makes for effective programs and strategies.

A theory of change should identify roles for many different community players--individuals, as well as agencies and institutions; policymakers, citizen advocates, parents, youth, business owners, and others. Effective strategies to improve indicators are not limited to programs, but may include policy changes, voluntary changes in "the way we do business," informal citizen efforts, and so on. The idea here is that accountability for improvement on social indicators is shared--no single program or agency "owns" an indicator; rather, it is expected that contributions from many directions will be required.

6. Identify "turn the curve" strategies with specific who/what/by-when action-steps.

As a theory of change is developed, it becomes a roadmap for directing action. The community coalition should use it to identify points of leverage. Again, these may be particular institutions or agencies, they may be specific individuals, or, more diffusely, at-large citizen efforts. The aim is to specify concrete, do-able steps, consistent with the theory of change, that can be taken within the near-term. This process works best when these are kept to a reasonable number, and when specific assignments are made as to when each step will be accomplished, and who will be responsible for assuring that it is accomplished. There should be agreement on a reporting-back mechanism to determine actual progress.

It's likely that in the course of specifying action-steps, systemic barriers to progress will be identified. Examples of these are the rules (sometimes "unwritten") about how agencies or programs do business--establishing eligibility for services, for instance, or referral practices. Often these procedures work at cross-purposes to the kinds of comprehensive, coordinated approaches that the coalition has identified as important in its theory of change.

Some of these barriers may in fact be immutable, but chances are that at least some of them can be re-examined and adjusted. The Georgia Policy Council for Children and Families, working with communities in that state, asked them to identify policy barriers that prevented them from achieving the results they wanted. Together, the Policy Council and communities outlined a "barriers busting process" that was itself an action-plan, with specific accountability for resolving the impediments--if necessary, by state or federal legislation.

"Low-cost/no-cost" actions can include the systems changes referred to above (although sometimes such changes come at a high cost of political capital). However, this category also encompasses the creative, typically informal (i.e., not organized into a program) efforts of community residents. Individually or in groups, "ordinary" citizens are capable of making significant contributions to community well-being, through volunteer efforts, charitable donations, and old-fashioned neighborliness.

7. Consider negotiating for greater funding flexibility in exchange for improved outcomes.

No one doubts that money, though never sufficient, is a necessary ingredient of most community change efforts. Likewise, it is usually true that unless the "money trail" falls into line with the new ways of doing this work (outcomes-driven, comprehensive approaches), the work can quickly become an exercise in window-dressing. Implied in the devolution "bargain" is that funds will be spent in new, locally-determined ways, in order to achieve results mutually agreed upon by funders and communities.

An example will illustrate what is still a relatively rare implementation of this concept. Managers in one of the human services districts in Vermont (Lamoille) had worked for some time at improving the system of supports and services they provided for families facing the prospect of having a child removed from parental custody, sometimes to expensive out-of-state settings. These family preservation efforts typically involved teams of providers from multiple agencies working collaboratively to address these families' needs holistically rather than in piecemeal fashion.

Providers in Lamoille believed their efforts were paying off, but they needed to consider how to sustain a model that was, in truth, expensive (though not nearly as expensive as out-of-home, and particularly out-of-state, placement). They offered a proposal to the state department responsible for child welfare issues: If we can reduce the number of out-of-home placements in our region, can we lay claim to the "savings" that will accrue to the department, and use these to re-invest in our prevention efforts?

Normally, the child welfare department counted on being able to redirect such "savings" to districts whose track records in this arena were "worse"--an all-too-common system of perverse incentives. Instead, the department negotiated a compromise with Lamoille: it would return half the "savings" to that region to re-invest. This was in fact a very significant--indeed unprecedented--move for this department. It demonstrated that a performance-based (i.e., outcomes-based) approach was insinuating itself into the budgeting process--the lifeblood of most organizations.

8. Engage the local media around the outcomes and indicators.

Indicators, if originally selected in part on the basis of their "communication power" for a broad audience, are natural material for a "feature" story. If local media are familiarized with an outcomes-and-indicators approach for the community, they can take up this theme, ideally on an ongoing basis. For instance, an "indicator of the month" story could highlight not only the data but also local efforts underway to make progress, and could serve as a reminder of the broader focus on multiple outcomes and indicators. Local media have an interest in promoting the community as a place to live, work, and visit; the message that this is a community intent on monitoring and improving well-being ought to be a positive one, even if not all indicators are heading in the "right" direction.

In Vermont's Lamoille County, the regional partnership sponsored a series of full-page ads in the local monthly newspaper. Each display highlighted a different issue (e.g., youth substance abuse, teen pregnancy, youth violence, etc.), and included indicator data (local as well as state-level comparisons), together with suggestions for what community members could do to help make a difference, and contact information for local organizations working on the issue. The format was informal, with cartoon drawings and hand-drawn charts.

The media like to highlight comparisons--generally, who's ranked "best," "worst," etc. Comparisons do make for interesting stories. But it needs to be a role of the community coalition to help steer the media toward comparisons that are meaningful, and away from those that simply make good "copy." In general, as we've already seen, comparisons between one community and another are not appropriate; there are too many subtle and not-so-subtle differences in demographics, in size and location, social history, and so on that make this an apples-and-oranges exercise. It is far more important to highlight the community's own change over time, or perhaps the disparity between what current data show and the goals the community has identified for itself.

9. Keep "holding up the mirror." No "high stakes," but gentle reflection.

The state partner is more likely to have the capacity, at least in the short-term, to keep indicator data flowing to communities on a regular basis. Indeed, this work will only bear fruit if there is a sustained commitment to this kind of information-sharing. The first time community data are reported, and perhaps even for the next couple of annual iterations, there will be those who will ignore it, or those who expect (hope?) it will go away if they can just wait it out. Only after five years or so of annual reporting is it likely that most people will see this as something with staying power.

During this period, and perhaps even afterward, the state partner may need to do a certain amount of interpretation of the indicator data for communities. Some community coalitions may be ready to do this on their own, but many will feel reluctant, not just because they doubt their ability to do so, but because they feel anxious about what they might discover. The state partner can provide helpful modeling by "keeping it simple": highlighting some areas of encouraging results, others of heartening progress, as well as indicators that are frankly troubling. In Vermont, the head of the human services agency sends a letter to each regional partnership coordinator following the release of that year's Community Profile. The letter essentially delivers the "good news/bad news" message, but in a way that's supportive, and that lets communities know that state leaders are paying attention to them.

It is particularly important to focus not only on point-in-time data, but also on indicators that are moving in the right direction. This allows communities that face unusual or long-standing challenges to be recognized for progress. In any case, the communication from the state partner--a message that shares with communities observations of what's going well and what's not--should avoid any suggestion that the indicators are some sort of "high stakes" test of community worthiness, with associated implications of either punishment or reward. Instead, the idea is to encourage ongoing reflection by the community coalition.

Not all indicators (or, more precisely, ways of presenting indicator information) are created equal. One role for the state partner can be to provide for communities examples of indicators that "pack more bang for the buck"--what might be called "value-added" indicators. Sometimes the careful selection of "numerator" and "denominator" can make for a more useful indicator. For example, reporting on voter turnout typically uses actual voters as a proportion of those registered to vote. Reporting, instead, actual voters as a proportion of those of voting age highlights more sharply the unfinished business of democracy.
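
A small sketch of that voter-turnout example, with hypothetical figures, shows how the choice of denominator changes the message the indicator carries.

```python
# Two ways to compute "voter turnout" from the same election. Figures are
# hypothetical; the choice of denominator changes the story the indicator tells.
actual_voters = 3100
registered_voters = 4200
voting_age_population = 6500

turnout_of_registered = actual_voters / registered_voters        # about 74%
turnout_of_voting_age = actual_voters / voting_age_population    # about 48%

print(f"Of those registered:    {turnout_of_registered:.0%}")
print(f"Of those of voting age: {turnout_of_voting_age:.0%}")
```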

Another function of indicators can be to identify sub-populations that are of particular interest, for example because of their high-risk status. Thus, it is probably less meaningful to report average school attendance figures than it is to report (if available) the percentage of students who miss, say, more than 20 days of school per year. Other indicators can focus on "compound" factors: for example, "new families at risk" was a composite indicator that the national Kids Count organization developed a few years ago, defined as the percentage of first births where the mother was an unmarried teenager with fewer than 12 years of education. Kids Count's analysis showed that children in these families were ten times more likely to experience poverty in subsequent years than children born into families with none of these risk factors.
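
As an illustration of how such a composite indicator might be computed from individual records, here is a brief sketch. The records, field names, and figures are hypothetical stand-ins for whatever an actual data system provides; the definition follows the one described above.

```python
# A sketch of computing a composite indicator like "new families at risk"
# from individual birth records (first birth, unmarried teenage mother,
# fewer than 12 years of education). All records below are hypothetical.
first_births = [
    {"mother_age": 17, "married": False, "years_education": 10},
    {"mother_age": 24, "married": True,  "years_education": 14},
    {"mother_age": 19, "married": False, "years_education": 11},
    {"mother_age": 22, "married": False, "years_education": 12},
]

def at_risk(record):
    return (record["mother_age"] < 20
            and not record["married"]
            and record["years_education"] < 12)

n_at_risk = sum(at_risk(r) for r in first_births)
print(f"New families at risk: {n_at_risk / len(first_births):.0%} of first births")
```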

As with indicator construction, the way data are reported over time can convey "value-added" information. For instance, there might be school survey data from multiple grades across multiple years. Instead of (or in addition to) displaying the trends "cross-sectionally," one might show the progress of a "cohort" as it moves through several grades over time. Of course, one would include the appropriate cautions that such groups are unlikely to remain purely intact, since some proportion of students leave the system and others join.
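
The sketch below illustrates the difference between the two views using a small, hypothetical table of survey results: one line reads across survey years for a single grade (cross-sectional), while the other follows the same entering class as it moves up through the grades (cohort).

```python
# A sketch of re-arranging school survey results to follow a cohort over time.
# The table below is hypothetical: keys are survey years, inner keys are grades,
# values are the percent of students reporting some behavior.
survey = {
    1997: {8: 22, 10: 30, 12: 35},
    1999: {8: 20, 10: 28, 12: 33},
    2001: {8: 19, 10: 26, 12: 31},
}

# Cross-sectional view: how did grade 10 change across survey years?
grade_10_trend = {year: grades[10] for year, grades in survey.items()}

# Cohort view: follow the students who were in grade 8 in 1997
# (grade 10 in 1999, grade 12 in 2001) -- with the caveat that the group
# does not stay perfectly intact as students move in and out of the system.
cohort_1997_grade8 = {1997: survey[1997][8],
                      1999: survey[1999][10],
                      2001: survey[2001][12]}

print("Grade 10 across years:", grade_10_trend)
print("1997 eighth-grade cohort:", cohort_1997_grade8)
```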

10. Stay in this for the long haul.

Using indicator data to track and guide community-level change requires a long-term perspective that is often unfamiliar to those who do this work. Whether it's to meet the demands of external funders, or simply to respond to our own impatience with longstanding challenges, we typically want (even expect) to see results quickly. But experience with social indicators tells us they don't turn on a dime. Most reflect long-established patterns of influence, and accordingly it will take time to see them change course.

The very concept of regular, public reporting of indicators is new enough that several years may be required just for community members to become used to seeing their community through this lens. Still more years will be needed to mobilize concerted efforts aimed at influencing particular indicators, to evaluate and adjust programs and strategies to increase effectiveness, and to revise theories of change on the basis of community experience and the state of best-practice knowledge. So it is best to enter into this work with an explicit acknowledgement of the appropriate time-perspective.

Community coalitions may take comfort in some findings from complex-systems theory, which has begun to be applied to community change as well. Complexity theory says that even small changes to a system, if properly timed, can have large impacts. This is akin to the idea of a "tipping point" or "critical mass." A number of change-efforts can accumulate with few or no apparent results--until their accumulation reaches a point where it "tips the balance" from one state to another, for example from dysfunction to wellness. This implies that, rather than expecting a one-to-one relationship between action and impact, communities should maintain a certain amount of trust that the "right" efforts, when they reach a "critical mass," will indeed pay off.

In many ways, communities have been leaders in the movement around outcomes and indicators. Communities grasp the intuitive sense of this approach, and realize its potential for strengthening civic life on many levels. State partners know that communities are the places where change-efforts will succeed or fail. What is required in moving forward with this agenda is a commitment from both communities and state partners to building collaboratively their capacities for sustained effort.

