The Perils of Confusing Performance Measurement with Program Evaluation

A group of researchers recently published a paper critiquing the child outcomes performance indicator for Part C and Part B 619. They also presented some of their thoughts in a recent webinar sponsored by the Association of University Centers on Disabilities (AUCD). The researchers’ critique is based on several faulty assumptions and consequently unfairly discredits the system for measuring child outcomes and the use of the data. Let’s look at our concerns with their critique.

First, the authors have confused performance measurement with program evaluation.

Their primary argument is that the child outcomes measurement requirement produces misleading information because it is based on a flawed evaluation design. The researchers’ critique wrongly assumes that the child outcomes indicator is designed as an evaluation. The child outcomes measurement is not a program evaluation; it is one performance indicator embedded within a larger performance measurement system that is required by the Individuals with Disabilities Education Act (IDEA). States report on a number of performance indicators that address compliance with federal regulations and program results. As such, these indicators yield information that supports program improvement and ongoing monitoring of program performance. Performance measurement systems are common in both the public sector (for example, Maternal and Child Health) and the private sector (for example, the Pew framework for home visiting). The Office of Special Education Programs (OSEP) implemented the child outcomes indicator in response to the Government Performance and Results Act, which requires all federal agencies to report on the results being achieved by their programs. OSEP also uses the child outcomes indicator data to monitor states on the results achieved, consistent with IDEA’s strong emphasis on improving results for children with disabilities.

The Government Accountability Office has produced a succinct summary that highlights some of the differences between performance measurement and program evaluation. Performance measurement refers to ongoing monitoring and reporting of program accomplishments. Performance measures may address program activities, services and products, or results. The OSEP child outcomes indicator is a performance measure that addresses results. Examples of other results performance measures are teen pregnancy rates, the percentage of babies born at low birth weight, 3rd grade reading scores, and high school graduation rates. In contrast, program evaluations are periodic or one-time studies, usually conducted by experts external to the program, that involve a more in-depth look at a program’s performance. Impact evaluations are a particular type of program evaluation that determine the effect of a program by comparing the outcomes of program participation to what would have happened had the program not been provided.

Performance Measurement Compared to Program Evaluation

Feature | Performance Measurement | Program Evaluation
Data collected on a regular basis (e.g., annually) | Yes | No
Usually conducted by experts to answer a specific question at a single point in time | No | Yes
Provides information about a program’s performance relative to targets or goals | Yes | Possibly
Provides ongoing information for program improvement | Yes | No
Can conclude unequivocally that the results observed were caused by the program | No | Yes, if a well-designed impact evaluation
Typically quite costly | No | Yes

A major difference between measuring outcomes in a performance measurement system and in a program evaluation is that a well-designed impact evaluation can conclude unequivocally that the results observed were caused by the program. Performance measures cannot rule out alternative explanations for the results observed. Nevertheless, performance measurement data can be used for a variety of purposes, including accountability, monitoring performance, and program improvement. Data on performance measures such as the Part C and Part B Section 619 child outcomes indicator can be used to track performance against a target or to compare results from one year to the next within programs or states. They can be used to identify state or local programs that could benefit from additional support to achieve better results. Comparing outcomes across states or programs should be done with an awareness that they may serve different populations, which could contribute to different outcomes. The solution is not to conclude that results data are useless or misleading but rather to interpret the results alongside other critical pieces of information, such as the performance of children at entry to the program or the nature of the services received. Two of OSEP’s technical assistance centers, the Center for IDEA Early Childhood Data Systems (DaSy) and the Early Childhood Technical Assistance Center (ECTA), have developed a variety of resources to support states in analyzing child outcomes data, including looking at outcomes for subgroups to further understand what is contributing to the results observed. Just as with tracking 3rd grade reading scores or the percentage of infants born at low birth weight, there is tremendous value in knowing how young children with disabilities are doing across programs and from year to year.

Second, the authors incorrectly maintain that children who did not receive Part C services would show the same results on the child outcomes indicator as children who did.

The researchers’ claim that the results states are reporting to OSEP would have been achieved even if no services had been provided rests on a flawed analysis of data from the ECLS-B, a longitudinal study of children born in 2001. For their analysis, the authors identify a group of 24-month-olds in the data set whom they label as “Part C eligible children who did not receive Part C services.” These children:

  • Received a low score on a shortened version of the Bayley Scales of Infant Development (27 items) administered at 9 months of age by a field data collector; and
  • Were reported by a parent when the child was 24 months old as not having received services to help with the child’s special needs.

Few would argue that the determination of eligibility for Part C could be replicated by a 27-item assessment administered by someone unfamiliar with infants and toddlers with disabilities. Furthermore, data from the National Early Intervention Longitudinal Study show that very few children are identified as eligible for Part C on the basis of developmental delay at 9 months of age. The first problem with the analysis is the assumption that all of these children would have been eligible for Part C. The second problem is that it is impossible in this data set to reliably identify which children did and did not receive Part C services. Parents were asked a series of questions about services in general; they were not asked about Part C services specifically. As we and others who have worked with national data collections have learned, parents are not good reporters of program participation, for a variety of reasons. The only way to confirm participation in Part C services is to verify program participation, which the study did not do. Given that children who received Part C services cannot be identified in the ECLS-B data, no one should be drawing conclusions about Part C participation from this data set.

The authors also argue that a measurement phenomenon called “regression to the mean” explains why Part C and Part B 619 children showed improved performance after program participation. In essence, this argument says that the improvements seen in children’s functioning are not real changes but are actually due to measurement error. One can acknowledge the reality of errors in assessment results, but to maintain that measurement error is the sole, or even a major, explanation for the progress shown by children in Part C and Part B 619 programs is absurd.
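For readers unfamiliar with the phenomenon, the following is a minimal simulation sketch of regression to the mean. It is not drawn from the article or from the critiqued analysis, and the score distribution and error variance are illustrative assumptions. It shows how children selected because of low observed scores at one point in time will, on average, score somewhat higher at a second point even when nothing about their underlying functioning has changed; the question at issue is how much of the progress reported by states such error alone could plausibly explain.

```python
# Illustrative sketch only: regression to the mean under measurement error.
# The latent "true ability" scores and error variance below are hypothetical
# assumptions chosen for demonstration, not values from any Part C data set.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

true_ability = rng.normal(100, 15, n)            # latent ability, unchanged over time
score_entry = true_ability + rng.normal(0, 10, n)  # observed score at entry (with error)
score_exit = true_ability + rng.normal(0, 10, n)   # observed score at exit (no real change)

# Select children who scored in the bottom 10% at entry (analogous to selecting on low scores)
low_at_entry = score_entry < np.percentile(score_entry, 10)

print(f"Mean entry score of low scorers:        {score_entry[low_at_entry].mean():.1f}")
print(f"Mean exit score of the same children:   {score_exit[low_at_entry].mean():.1f}")
# The exit mean drifts back toward 100 purely because of measurement error; no
# true change was simulated. This is the phenomenon the critiqued analysis
# invokes to explain the gains observed in the child outcomes data.
```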

Moving Forward

State Part C and 619 programs are required by IDEA to report on multiple performance indicators, including child outcomes, as part of a larger performance measurement system. The child outcomes indicator was developed with extensive stakeholder input in order to maximize its utility to local programs, state agencies, and the federal government. The process of building the infrastructure needed to collect and use child outcomes data has been complex, which is why states have been working on it for over ten years. State agencies continue to identify and implement strategies for improving data collection and the use of the data. We know that the data collection processes are not perfect and that more work needs to be undertaken to address data quality and other concerns. Building a national system for measuring outcomes for young children with disabilities receiving IDEA services is a long-term undertaking that requires ongoing effort to make the process better. Disparaging the performance indicator and the data reported by states on the basis of incorrect assumptions and flawed analyses is not productive. Instead, the field needs to engage collectively in ongoing dialogue around critical issues of data quality, data analysis, and appropriate use of the data, grounded in an informed understanding of what the child outcomes indicator is and is not. Part C and Part B 619 state agencies and OSEP are at the forefront of collecting and using early childhood outcomes data to improve programs, which is exactly what performance measurement is intended to do.

Source: DaSy: The Center for IDEA Early Childhood Data Systems

Available at: http://dasycenter.org/the-perils-of-confusing-performance-measurement-with-program-evaluation/ 

Webinar: Place and Race Matter: Head Start and CCDBG Access by Race, Ethnicity, and Location

12/14/2016

Time: 1 – 2pm EST

Join the Center for Law and Social Policy (CLASP) and diversitydatakids.org for a webinar discussing racial, ethnic, and nativity disparities in Head Start and child care access at the state and neighborhood levels. Featuring original analyses from CLASP and diversitydatakids.org, the webinar will highlight key data and provide a range of policy recommendations to ensure equitable access to federal early childhood programs. High-quality child care and early education can build a strong foundation for young children’s healthy development; however, many low-income children cannot access early childhood opportunities. While these gaps in access to child care and early education are widely recognized, less is understood about the role of race and ethnicity. This webinar will present CLASP’s analysis of Head Start, Early Head Start (EHS), and Child Care and Development Block Grant (CCDBG) administrative data, as well as a diversitydatakids.org neighborhood-level analysis of Head Start, showing how access differs based on race, ethnicity, and nativity.

Presenters will include:

  • Stephanie Schmit, Senior Policy Analyst, CLASP
  • Dr. Dolores Acevedo-Garcia, Project Director, and Erin Hardy, Research Director, diversitydatakids.org
  • Additional speakers to be announced

Source: CLASP and diversitydatakids.org

Register at: https://attendee.gotowebinar.com/register/534786341756134657 

Early Childhood Data Privacy

6/2015

States, communities, and local providers are using data to serve the needs of children and families participating in early childhood programs (e.g., Head Start, child care, preschool). Data sharing can support efficient, effective services for children. However, the benefits of data sharing and use must be balanced with the need to protect privacy. To support this balance, PTAC has assembled the following resources about privacy and data sharing with early childhood programs in mind. This list is just a start, and additional resources will be added as they are developed.

Source: Privacy Technical Assistance Center (PTAC), U.S. Department of Education

Available at: http://ptac.ed.gov/early-childhood-data-privacy

Being Black is Not a Risk Factor: Read the Reports

5/2015

NBCDI’s State of the Black Child initiative is focused on creating resources that challenge the prevailing discourse about Black children, one that overemphasizes limitations and deficits and does not draw upon the considerable strengths, assets, and resilience demonstrated by our children, families, and communities. We are deeply grateful for support from the W.K. Kellogg Foundation, the Walmart Foundation, and the Alliance for Early Success, as well as our data partners, including CLASP and NCCP, and, of course, our Affiliate network and partners on the ground in the states.

Each of the reports, national and state-based, is designed to address the needs of policymakers, advocates, principals, teachers, parents, and others by weaving together three elements:

  1. Essays from experts that focus on using our children’s, families’ and communities’ strengths to improve outcomes for Black children
  2. “Points of Proof” from organizations that serve not as exceptions, but as examples of places where Black children and families are succeeding
  3. Data points that indicate how our children and families are doing across a range of measures

Source: National Black Child Development Institute

Available at: http://www.nbcdi.org/beingblack

Linking Head Start Data with State Early Care and Education Coordinated Data Systems

3/2015

Head Start programs are a critical component of early care and education in our country. They serve more than one million young children and employ more than 230,000 staff members. When linked to other early care and education data systems, the data Head Start programs collect on their children, program services, and workforce can inform key decisions by state policymakers and guide efforts to improve early childhood program responsiveness and effectiveness. Yet although Head Start data are a vital component of any comprehensive early childhood data system, only a handful of states presently link Head Start data with data from other early care and education programs.

A fully coordinated early childhood data system, inclusive of data from Head Start, state pre-k, child care, early childhood special education, and other publicly-funded early care and education (ECE) programs, provides a comprehensive picture of a state’s early childhood systems. State policymakers gain a full picture of the status of young children and their progress over time, early childhood services, program quality, and the early childhood workforce. Armed with this knowledge, states can reap many benefits, such as enhancing access to high-quality programs for all children, improving program quality, building a more effective ECE workforce, and ultimately, improving child outcomes.

At present, there is no requirement for local Head Start programs to link or share their data with other state data systems. However, several states are making advances toward linking and/or sharing data across their state’s K-12 data system or other services’ data systems. In this process, states have encountered some challenges and have had to tackle issues related to data privacy and security, among others. To better understand some of the challenges, successes, and strategies behind this work, the Early Childhood Data Collaborative (ECDC) contacted and interviewed a sample of Head Start and state early childhood leaders in a dozen states.

Source: The Early Childhood Data Collaborative

Available at: http://www.childtrends.org/wp-content/uploads/2015/03/ecdc-head-start-brief.pdf

Learning for New Leaders: Head Start A to Z: New Sessions Available on the ECLKC!

November 20, 2014

Learning For New Leaders: Head Start A to Z is a collection of sessions and resources designed to address the unique needs of new Head Start and Early Head Start leaders. Use the materials to orient and support new directors and managers. They can also be used for individual professional development, in face-to-face group settings, and for distance learning.

Head Start A to Z Sessions

Each session includes everything needed to get started on your own or to facilitate a training with other leaders. Find a description with background information, key messages, outcomes, ideas for planning ahead, and trainer notes. A PowerPoint presentation, video, and handouts are also provided.

Leader’s Role in School Readiness

Review the policies and standards related to school readiness. This session outlines the four strategic steps and overall process for developing school readiness goals. Leaders will learn the relationship between school readiness and program goals, and how the 10 Head Start management systems support them.

Fiscal Management

Explore the role of leadership in overseeing a program’s financial management system. This includes establishing a clear mission and key results, as well as programming. Leaders also are responsible for budgeting, ensuring financial controls, accounting, financial reporting and review, and auditing.

Leader’s Role in Data

Discover the roles that leaders play in fostering the use of data-driven decision-making in their programs. This session actively engages you in understanding the difference between data and information. It also describes the Head Start Program Planning Cycle and shows how the use of data is integrated into and supports that cycle.

More sessions will be added in the coming months. Remember to check back often!

Additional Resources

Care Package

Find tools that encourage leaders to step back and reflect. The resources serve as a reminder and offer tips to help you take care of yourself.

Insights for New Directors

In this series of short video clips, experienced directors share useful insights based on their experience.

Access this Resource

Head Start A to Z was developed by the National Center on Program Management and Fiscal Operations (NCPMFO). Select the link to read the overview and start exploring the sessions: http://eclkc.ohs.acf.hhs.gov/hslc/tta-system/operations/learning/learning.html

Questions?

For more information, contact NCPMFO at PMFOinfo@edc.org or (toll-free) 1-855-763-6647.

 

Free Modules and Lessons

FPG sponsors or contributes to several free online offerings in the form of Modules and Lessons. These self-paced and easily navigated resources offer important content in engaging formats.

Modules and Lessons include:

In addition, FPG offers Shared Discussion and Learning Areas.

Source: FPG Child Development Institute

Head Start Program Information Report

7/28/2014

The Office of Head Start within the Administration for Children and Families, United States Department of Health and Human Services, is proposing to renew authority to collect information using the Head Start Program Information Report (PIR), monthly enrollments, contacts, locations, and reportable conditions. All information is collected electronically through the Head Start Enterprise System (HSES). The PIR provides information about Head Start and Early Head Start services received by the children and families enrolled in Head Start programs. The information collected in the PIR is used to inform the public about these programs, to make periodic reports to Congress about the status of children in Head Start programs as required by the Head Start Act, and to assist in the administration and training/technical assistance of Head Start programs.

Source: Federal Register, Volume 79 Issue 144

Available at: http://www.gpo.gov/fdsys/pkg/FR-2014-07-28/html/2014-17654.htm

Best Practices in Data Governance and Management for Early Care and Education: Supporting Effective Quality Rating and Improvement Systems

7/14/2014

Quality Rating and Improvement Systems (QRIS) currently serve as a centerpiece of many states’ early care and education (ECE) activities. However, QRIS can only strengthen ECE program quality if they are built on quality data. Intentional and rigorous data management and governance practices are essential for data gathered exclusively for the QRIS, such as program observation scores, as well as for external data accessed by the QRIS, such as workforce registry data. The purpose of this brief is to illustrate the need for and benefits of building strong ECE data governance structures and implementing system-wide data management policies and practices, using the example of QRIS. The brief first describes existing QRIS data systems and the common challenges to data coordination and integrity in these data systems. The brief then provides guidance on best practices related to data governance and the development of integrated data systems that can support QRIS implementation, monitoring, and evaluation. As additional resources, the appendices include the interview protocol used with states, as well as specific state and local case studies and a glossary of terms related to coordinated data systems.

Source: Office of Planning, Research & Evaluation, Administration for Children and Families

Available at: http://www.acf.hhs.gov/programs/opre/resource/best-practices-in-data-governance-and-management-for-early-care-and-education-supporting-effective-quality-rating-and

Best Practices in Ensuring Data Quality in Quality Rating and Improvement Systems (QRIS)

7/14/2014

Collecting and using data are core activities in a well-functioning Quality Rating and Improvement System (QRIS). Yet data used in a QRIS are frequently housed in different systems, using different data management techniques. Ensuring a high level of QRIS data quality involves implementing a number of best practices drawn from established practices used in other fields. The purpose of this brief is to describe the specific strategies QRIS data stakeholders can use to improve the collection, management, and dissemination of QRIS data. The audience for this brief includes QRIS program administrators, technical assistance providers, data managers, and researchers.

Source: Office of Planning, Research & Evaluation, Administration for Children and Families

Available at: http://www.acf.hhs.gov/programs/opre/resource/best-practices-in-ensuring-data-quality-in-quality-rating-and-improvement-systems-qris