The Perils of Confusing Performance Measurement with Program Evaluation

A group of researchers recently published a paper critiquing the child outcomes performance indicator for Part C and Part B 619. They also presented some of their thoughts in a recent webinar sponsored by the Association of University Centers on Disabilities (AUCD). The researchers’ critique is based on several faulty assumptions and consequently unfairly discredits the system for measuring child outcomes and the use of the data. Let’s look at our concerns with their critique.

First, the authors have confused performance measurement with program evaluation.

Their primary argument is that the child outcomes measurement requirement produces misleading information because it is based on a flawed evaluation design. The researchers’ critique wrongly assumes that the child outcomes indicator is designed as an evaluation. The child outcomes measurement is not a program evaluation; it is one performance indicator embedded within a larger performance measurement system required by the Individuals with Disabilities Education Act (IDEA). States report on a number of performance indicators that address compliance with federal regulations and program results. As such, these indicators yield information that supports program improvement and ongoing monitoring of program performance. Performance measurement systems are common in both the public sector (for example, Maternal and Child Health) and the private sector (for example, the Pew framework for home visiting). The Office of Special Education Programs (OSEP) implemented the child outcomes indicator in response to the Government Performance and Results Act, which requires that all federal agencies report on the results being achieved by their programs. OSEP also uses the child outcomes indicator data to monitor states on the results achieved, consistent with IDEA’s strong emphasis on improving results for children with disabilities.

The Government Accountability Office has produced a succinct summary that highlights some of the differences between performance measurement and program evaluation. Performance measurement refers to the ongoing monitoring and reporting of program accomplishments. Performance measures may address program activities, services and products, or results. The OSEP child outcomes indicator is a performance measure that addresses results. Examples of other results-oriented performance measures are teen pregnancy rates, the percentage of babies born at low birth weight, 3rd grade reading scores, and high school graduation rates. In contrast, program evaluations are periodic or one-time studies, usually conducted by experts external to the program, that involve a more in-depth look at a program’s performance. Impact evaluations are a particular type of program evaluation that determine the effect of a program by comparing the outcomes of program participation to what would have happened had the program not been provided.

Performance Measurement Compared to Program Evaluation

Feature | Performance Measurement | Program Evaluation
Data collected on a regular basis (e.g., annually) | Yes | No
Usually conducted by experts to answer a specific question at a single point in time | No | Yes
Provides information about a program’s performance relative to targets or goals | Yes | Possibly
Provides ongoing information for program improvement | Yes | No
Can conclude unequivocally that the results observed were caused by the program | No | Yes, if a well-designed impact evaluation
Typically quite costly | No | Yes

A major difference between measuring outcomes in a performance measurement system versus a program evaluation is that a well-designed impact evaluation can conclude unequivocally that the results observed were caused by the program. Performance measures cannot rule out alternative explanations for the results observed. Nevertheless, performance measurement data can be used for a variety of purposes, including accountability, monitoring performance, and program improvement. Data on performance measures such as the Part C and Part B Section 619 child outcomes indicator can be used to track performance compared to a target or to compare results from one year to the next within programs or states. They can be used to identify state or local programs that could benefit from additional support to achieve better results. Comparing outcomes across states or programs should be done with an awareness that they might serve different populations, which could contribute to different outcomes. The solution is not to conclude that results data are useless or misleading but rather to interpret the results alongside other critical pieces of information, such as the performance of children at entry to the program or the nature of the services received. Two of OSEP’s technical assistance centers, the Center for IDEA Early Childhood Data Systems (DaSy) and the Early Childhood Technical Assistance Center (ECTA), have developed a variety of resources to support states in analyzing child outcomes data, including looking at outcomes for subgroups to further understand what is contributing to the results observed. Just as with tracking 3rd grade reading scores or the percentage of infants born at low birth weight, there is tremendous value in knowing how young children with disabilities are doing across programs and from year to year.

Second, the authors incorrectly maintain that children who did not receive Part C services would show the same results on the child outcomes indicator as children who did.

The researchers’ claim that the results states are reporting to OSEP would be achieved even if no services had been provided rests on a flawed analysis of the ECLS-B (Early Childhood Longitudinal Study, Birth Cohort) data, a longitudinal study of children born in 2001. For their analysis, the authors identify a group of 24-month-olds in the data set whom they label as “Part C eligible children who did not receive Part C services.” These children

  • Received a low score on a shortened version of the Bayley Scales of Infant Development (27 items) administered at 9 months of age by a field data collector; and
  • Were reported by a parent when the child was 24 months old as not having received services to help with the child’s special needs.

Few would argue that the determination of eligibility for Part C could be replicated by a 27-item assessment administered by someone unfamiliar with infants and toddlers with disabilities. Furthermore, data from the National Early Intervention Longitudinal Study show that very few children are identified as eligible for Part C based on developmental delay at 9 months of age. The first problem with the analysis, then, is the assumption that all of these children would have been eligible for Part C. The second problem is that it is impossible in this data set to reliably identify which children did and did not receive Part C services. Parents were asked a series of questions about services in general; they were not asked about Part C services specifically. As we and others who have worked with national data collections have learned, parents are not good reporters of program participation, for a variety of reasons. The only way to confirm participation in Part C services is to verify program participation, which the study did not do. Given that children who received Part C services cannot be identified in the ECLS-B data, no one should be drawing conclusions about Part C participation based on this data set.

The authors also argue that a measurement phenomenon called “regression to the mean” explains why Part C and Part B 619 children showed improved performance after program participation. In essence, this argument says that improvements seen in the functioning of the children are not real changes but are actually due to measurement error. One can acknowledge the reality of errors in assessment results, but to maintain that measurement error is the sole or even a major explanation for the progress shown by children in Part C and Part B 619 programs is absurd.
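To see what the regression-to-the-mean argument amounts to, consider a minimal simulation (a sketch using made-up numbers, not drawn from the article or from any IDEA data): when children are selected because they scored low on a noisy assessment and are later re-assessed, the group’s average score rises even though nothing about the children has truly changed. The question at issue is not whether this effect exists but whether it could plausibly account for the progress states report.

```python
# Minimal sketch of regression to the mean using hypothetical numbers.
# Children are "selected" because of a low score on a noisy screener; on
# re-assessment their group mean rises even though true ability is unchanged.
import random

random.seed(0)

N = 10_000                     # hypothetical number of children
CUTOFF = 85                    # hypothetical "low score" selection threshold
TRUE_MEAN, TRUE_SD = 100, 15   # assumed distribution of true ability
ERROR_SD = 10                  # assumed measurement error of a short screener

true_ability = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]

# Time 1: observed score = true ability + measurement error
time1 = [ability + random.gauss(0, ERROR_SD) for ability in true_ability]

# Select the children who scored below the cutoff at time 1
selected = [i for i, score in enumerate(time1) if score < CUTOFF]

# Time 2: re-assess the selected children; true ability has not changed
time2 = [true_ability[i] + random.gauss(0, ERROR_SD) for i in selected]

mean_t1 = sum(time1[i] for i in selected) / len(selected)
mean_t2 = sum(time2) / len(selected)

print(f"Selected group, time 1 mean: {mean_t1:.1f}")
print(f"Selected group, time 2 mean: {mean_t2:.1f} (apparent 'gain' from measurement error alone)")
```

The effect shown in this sketch is real, which is why the article acknowledges errors in assessment results; the disagreement is over treating that artifact as the sole or major explanation for the progress observed in Part C and Part B 619 programs.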

Moving Forward

State Part C and 619 programs are required by IDEA to report on multiple performance indicators, including child outcomes, as part of a larger performance measurement system. The child outcomes indicator was developed with extensive stakeholder input in order to maximize its utility to local programs, state agencies, and the federal government. The process of building the infrastructure needed to collect and use child outcomes data has been complex, which is why states have been working on it for over ten years. State agencies continue to identify and implement strategies for improving data collection and the use of the data. We know that the data collection processes are not perfect and that more work needs to be undertaken to address data quality and other concerns. Building a national system for measuring the outcomes of young children with disabilities receiving IDEA services is a long-term undertaking that requires ongoing effort to make the process better. Disparaging the performance indicator and the data reported by states based on incorrect assumptions and flawed analyses is not productive. Instead, the field needs to collectively engage in ongoing dialogue around critical issues of data quality, data analysis, and appropriate use of the data based on an informed understanding of what the child outcomes indicator is and is not. Part C and Part B 619 state agencies and OSEP are at the forefront of collecting and using early childhood outcomes data to improve programs – which is exactly what performance measurement is intended to do.

Source: DaSy: The Center for IDEA Early Childhood Data Systems

Available at: http://dasycenter.org/the-perils-of-confusing-performance-measurement-with-program-evaluation/ 

IM 15-03 Policy and Program Guidance for the Early Head Start-Child Care Partnerships (EHS-CCP)

8/6/2015

INFORMATION MEMORANDUM

TO: Early Head Start – Child Care Partnership Grantees and Partners

SUBJECT: Policy and Program Guidance for the Early Head Start-Child Care Partnerships (EHS-CCP)

INFORMATION: This Information Memorandum (IM) reinforces the purpose and vision of the Early Head Start – Child Care Partnerships (EHS-CCP) and provides policy and program guidance for grantees and their partners. This IM specifically addresses various issues and questions raised by grantees during the EHS-CCP orientations and start-up phase of the grants.

The EHS-CCP program will enhance and support early learning settings to provide full-day/full-year, seamless, and comprehensive services that meet the needs of low-income working families and those in school; increase access to high-quality, full-day child care (including family child care); support the development of infants and toddlers through strong relationship-based experiences; and prepare them for the transition into Head Start and preschool. The EHS-CCP is a unique opportunity which brings together the best of Early Head Start and child care through layering of funding to provide comprehensive and continuous services to low-income infants, toddlers, and their families. The EHS-CCP grants will serve as a learning laboratory for the future of high-quality infant/toddler care.

All infants and toddlers attending an EHS-CCP site will benefit from facilities and homes that are licensed and meet safety requirements. All children in classrooms with EHS-CCP-enrolled children will benefit from low teacher-to-child ratios, small class sizes, qualified teachers receiving ongoing supervision and coaching to support implementation of curriculum and responsive caregiving, and broad-scale parent engagement activities. While only enrolled EHS-CCP children will be eligible for direct family-specific benefits such as home visits, health tracking and follow-up, and individualized family support services, EHS-CCP programs must operationalize services to ensure there is no segregation or stigmatization of EHS-CCP children due to the additional requirements or services.

The long-term outcomes of the program are:

  1. Sustained, mutually respectful, and collaborative EHS-CCP
  2. A more highly educated and fully qualified workforce to provide high-quality infant/toddler care and education
  3. Increased community supply of high-quality early learning environments and infant/toddler care and education
  4. Well-aligned early childhood policies, regulations, resources, and quality improvement support at national, state, and local levels
  5. Improved family and child well-being and progress toward school readiness

The EHS-CCP brings together the strengths of child care and Early Head Start programs. Child care centers and family child care providers respond to the needs of working families by offering flexible and convenient full-day and full-year services. In addition, child care providers have experience providing care that is strongly grounded in the cultural, linguistic, and social needs of the families and their local communities. However, many child care centers and family child care providers lack the resources to provide the comprehensive services needed to support better outcomes for the nation’s most vulnerable children. Early Head Start is a research-based program that emphasizes the importance of responsive and caring relationships to support the optimal development of infants and toddlers. Early Head Start provides comprehensive, family-centered services that adhere to the Head Start Program Performance Standards (HSPPS) to support high-quality learning environments. Integrating Early Head Start comprehensive services and resources into the array of traditional child care and family child care settings creates new opportunities to improve outcomes for infants, toddlers, and their families.

Attachment A provides topical policy and program guidance around:

  • Seamless and Comprehensive Full-Day/Full-Year Services
  • Partnership Agreements
  • Layered Funding
  • Child Care Subsidies
  • Citizenship and Immigration Status
  • Child Care Center Ratios and Group Sizes
  • Staffing and Planning Shifts for Staff
  • Staff Qualifications and Credential Requirements
  • Federal Oversight and Monitoring

Please share this IM with your partners and direct any questions to your Administration for Children and Families (ACF) Regional Office.

Thank you for your efforts on behalf of infants and toddlers and their families.

/ Linda K. Smith /
Linda K. Smith
Deputy Assistant Secretary for Early Childhood Development
Administration for Children and Families

/ Blanca Enriquez /
Dr. Blanca Enriquez
Director
Office of Head Start

/ Rachel Schumacher /
Rachel Schumacher
Director
Office of Child Care

Source: Administration for Children and Families, Office of Head Start, and Office of Child Care

Available at: http://eclkc.ohs.acf.hhs.gov/hslc/standards/im/2015/resour_ime_003.html

Innovation in monitoring in early care and education: Options for states

4/2015

Executive Summary

Ensuring children are in safe environments that promote health and development is a top priority of families, state regulators, the federal government, and national organizations that accredit early care and education (ECE) programs. This paper examines monitoring across ECE settings and considers lessons learned from the analogous sectors of child welfare and health. Although professional organizations in partnership with federal agencies developed national guidelines for health and safety, there is wide variation in state and local regulations around the minimum health and safety requirements for children in care. Areas of regulatory variation include: 1) thresholds for the number of children in licensed care at ECE facilities located in family child care homes (FCCs); 2) the comprehensiveness of background checks for ECE provider staff and individuals residing at FCCs; and 3) the frequency of monitoring visits.

ECE providers may receive funding from one or more public sources, including the Child Care and Development Fund (CCDF), Head Start/Early Head Start (HS/EHS), State Pre-Kindergarten (State Pre-K), Child and Adult Care Food Program (CACFP), Early Intervention and Special Education, and the Department of Defense Child Care. Providers funded by more than one public source are subject to multiple accountability systems that are not always aligned. ECE providers seeking national accreditation engage in yet another layer of accountability and quality improvement. Some states that are building or reforming Quality Rating and Improvement Systems (QRIS) are attempting to create unified early learning standards and consistent ECE program ratings across funding streams and provider types.

Many states use differential monitoring to improve the efficiency of monitoring systems and technical assistance (TA) systems. As opposed to “one size fits all” systems of monitoring, differential monitoring determines the frequency and comprehensiveness of provider monitoring based on the provider’s history of compliance with standards and regulations. Providers that maintain strong records of compliance are inspected less frequently, while those with a history of non-compliance may be subject to more announced and unannounced inspections. This paper includes case studies from states involved in various stages of implementing differential monitoring approaches.

Implementation of the Child Care and Development Block Grant Act of 2014 (CCDBG), which was signed into law in November 2014, will likely result in more uniformity in state practice in some of the components of monitoring. Using examples from states reforming their child care licensing systems, this paper outlines the challenges and possibilities of building accountability systems that support positive child and family outcomes while reducing the burden on individual providers within multiple funding streams. This paper provides a general overview of the current monitoring system, and highlights several examples of promising state practices that are already underway. It offers a vision for accountability that addresses compliance with a minimum floor of health and safety standards, and promising strategies for continuous quality improvement. The goal of this paper is to inform upcoming changes in licensing and monitoring systems that will take place in the context of the reauthorized CCDBG implementation.

Source: Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services

Available at: http://aspe.hhs.gov/sites/default/files/pdf/108601/rpt_ece_monitoring.pdf

Oversight, Monitoring, and Support for the Five Year Grant Period – Head Start

5/2013

Birth to Five Leadership Institute Plenary

The move to five-year project periods provides an opportunity to implement changes in Office of Head Start (OHS) funding practices. View this webcast from the OHS 2nd National Birth to Five Leadership Institute to learn more about how the five-year grant period affects oversight of Head Start programs.

Source: Early Childhood Learning and Knowledge Center

Available at: http://eclkc.ohs.acf.hhs.gov/hslc/hs/calendar/li-2013/2nd-nat-0-5-lead-inst/OversightMonito.htm

Five Year Grant Periods – Head Start

8/2013

The Office of Head Start (OHS) is moving from indefinite project periods to five-year project periods for all Head Start grantees. This requires changes in OHS funding practices and oversight of Head Start programs. The main purpose of improved oversight is to demonstrate the quality of program services, the effectiveness of management systems, and the achievement of outcomes for children, families, and communities.

Source: Early Childhood Learning and Knowledge Center

Available at: https://eclkc.ohs.acf.hhs.gov/hslc/hs/grants/5-yr-cycle?utm_medium=email&utm_campaign=New%20Content%20E-blast%20for%20July&utm_content=New%20Content%20E-blast%20for%20July+CID_30ee5ad9937c9730611342af4be147d8&utm_source=CM%20Eblast&utm_term=Five-Year%20Grant%20Periods

Office of Head Start Monitoring for Early Head Start (EHS) Home-Based Program Option: Maternal and Infant Early Childhood Home Visiting (MIECHV) EHS Home Visiting Model – Head Start

6/2013

This webinar provides information on the Office of Head Start’s Monitoring Process for all funded programs. It focuses on Early Head Start (EHS) home-based programs that are expanding their EHS services through Maternal, Infant, and Early Childhood Home Visiting (MIECHV) funding.

Source: Early Childhood Learning and Knowledge Center

Available at: http://eclkc.ohs.acf.hhs.gov/hslc/tta-system/ehsnrc/Early%20Head%20Start/multimedia/webinars/MIECHVWebinar.htm

Office of Head Start Policy

7/1/2013

The Office of Head Start (OHS) is moving from indefinite project periods to five-year project periods for all Head Start grantees. This requires changes in OHS funding practices and oversight of Head Start programs. Changes in oversight will include improved communication between federal staff and grantees, as well as ongoing analysis of data to determine the type of support needed by grantees. The main purpose of improved oversight is to demonstrate the quality of program services, the effectiveness of management systems, and the achievement of outcomes for children, families, and communities.

Source: Office of Head Start

Available at: http://eclkc.ohs.acf.hhs.gov/hslc/standards/IMs/2013/resour_im_002_070113.html

Reports to Congress – Head Start

The following are reports made to Congress by the Office of Head Start pursuant to the Head Start Act.

Report to Congress on Head Start Monitoring, Fiscal Year 2008 [PDF 2.3MB]

Biennial Report to Congress, 2005 [PDF 1.3MB]

Biennial Report to Congress, 2003 [PDF 2.7MB]

Source: Early Childhood Learning and Knowledge Center

Available at: http://eclkc.ohs.acf.hhs.gov/hslc/mr/rc

 

Monitoring Reviews – Head Start

Welcome to FY 2012 Head Start Monitoring. On this page, you will find links to helpful information and tools related to Monitoring, including the FY 2012 Grantee Webcast, the On-Site Review Protocol, information on Monitoring 360, and much more. On the right-hand side of this page, you will find additional tools and information that may be helpful for your program.

Source: Early Childhood Learning and Knowledge Center

Available at: http://eclkc.ohs.acf.hhs.gov/hslc/mr/monitoring

Office of Head Start FY 2012 Review Protocol and Guides – Head Start

Welcome to the Head Start Review page for Fiscal Year 2012. You can download the document by clicking the link below.

Review Protocol (Protocolo de Revisión)

Source: Early Childhood Learning and Knowledge Center

Available at: http://eclkc.ohs.acf.hhs.gov/hslc/Espanol/Monitoreo%20de%20Head%20Start/ProtocolodeRevi.htm