The Perils of Confusing Performance Measurement with Program Evaluation

A group of researchers recently published a paper critiquing the child outcomes performance indicator for Part C and Part B 619. They also presented some of their thoughts in a recent webinar sponsored by the Association of University Centers on Disabilities (AUCD). The researchers’ critique is based on several faulty assumptions and consequently unfairly discredits the system for measuring child outcomes and the use of the data. Let’s look at our concerns with their critique.

First, the authors have confused performance measurement with program evaluation.

Their primary argument is that the child outcomes measurement requirement produces misleading information because it is based on a flawed evaluation design. The researchers’ critique wrongly assumes that the child outcomes indicator is designed as an evaluation. The child outcomes measurement is not a program evaluation; it is one performance indicator embedded within a larger performance measurement system required by the Individuals with Disabilities Education Act (IDEA). States report on a number of performance indicators that address compliance with federal regulations and program results. As such, these indicators yield information that supports program improvement and ongoing monitoring of program performance. Performance measurement systems are common in both the public sector (for example, Maternal and Child Health) and the private sector (for example, the Pew framework for home visiting). The Office of Special Education Programs (OSEP) implemented the child outcomes indicator in response to the Government Performance and Results Act, which requires all federal agencies to report on the results their programs achieve. OSEP also uses the child outcomes indicator data to monitor states on results achieved, consistent with the strong emphasis in IDEA on improving results for children with disabilities.

The Government Accountability Office has produced a succinct summary that highlights some of the differences between performance measurement and program evaluation. Performance measurement refers to the ongoing monitoring and reporting of program accomplishments. Performance measures may address program activities, services and products, or results. The OSEP child outcomes indicator is a performance measure that addresses results. Examples of other results performance measures are teen pregnancy rates, the percentage of babies born at low birth weight, 3rd grade reading scores, and high school graduation rates. In contrast, program evaluations are periodic or one-time studies, usually conducted by experts external to the program, that involve a more in-depth look at a program’s performance. Impact evaluations are a particular type of program evaluation that determine the effect of a program by comparing the outcomes of program participation to what would have happened had the program not been provided.
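To make the counterfactual logic of an impact evaluation concrete, here is a minimal sketch in Python using entirely hypothetical numbers (the outcome scale, the assumed 5-point program effect, and the group sizes are illustrative, not drawn from any real evaluation). Random assignment creates a comparison group that estimates what would have happened without the program; a performance indicator observes only the program group and has no such comparison.

```python
import random

# Minimal sketch (hypothetical numbers): the logic of an impact evaluation.
# Random assignment creates a control group that stands in for the
# counterfactual -- what would have happened without the program.

random.seed(1)

def outcome(received_program: bool) -> float:
    """Simulated child outcome: baseline development plus an assumed program effect."""
    baseline = random.gauss(100, 15)           # development absent any program
    effect = 5.0 if received_program else 0.0  # assumed true program effect
    return baseline + effect

program = [outcome(True) for _ in range(5000)]
control = [outcome(False) for _ in range(5000)]

# The impact estimate is the difference in mean outcomes between groups.
impact = sum(program) / len(program) - sum(control) / len(control)
print(f"Estimated program impact: {impact:.1f} points")

# A performance indicator, by contrast, has no control group, so it
# cannot isolate the program's effect from other influences.
```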

Performance Measurement Compared to Program Evaluation

Feature | Performance Measurement | Program Evaluation
Data collected on a regular basis, e.g., annually | Yes | No
Usually conducted by experts to answer a specific question at a single point in time | No | Yes
Provides information about a program’s performance relative to targets or goals | Yes | Possibly
Provides ongoing information for program improvement | Yes | No
Can conclude unequivocally that the results observed were caused by the program | No | Yes, if a well-designed impact evaluation
Typically quite costly | No | Yes

A major difference between measuring outcomes in a performance measurement system and in a program evaluation is that a well-designed impact evaluation can conclude unequivocally that the results observed were caused by the program. Performance measures cannot rule out alternative explanations for the results observed. Nevertheless, performance measurement data can be used for a variety of purposes, including accountability, monitoring performance, and program improvement. Data on performance measures such as the Part C and Part B Section 619 child outcomes indicator can be used to track performance against a target or to compare results from one year to the next within programs or states. They can be used to identify state or local programs that could benefit from additional support to achieve better results. Comparing outcomes across states or programs should be done with an awareness that they might serve different populations, which could contribute to different outcomes. The solution is not to conclude that results data are useless or misleading but rather to interpret the results alongside other critical pieces of information, such as the performance of children at entry to the program or the nature of the services received. Two of OSEP’s technical assistance centers, the Center for IDEA Early Childhood Data Systems (DaSy) and the Early Childhood Technical Assistance Center (ECTA), have developed a variety of resources to support states in analyzing child outcomes data, including looking at outcomes for subgroups to further understand what is contributing to the results observed. Just like tracking 3rd grade reading scores or the percentage of infants born at low birth weight, there is tremendous value in knowing how young children with disabilities are doing across programs and from year to year.

Second, the authors incorrectly maintain that children who did not receive Part C services would show the same results on the child outcomes indicator as children who did.

The researchers’ claim that the results states report to OSEP would have been achieved even if no services had been provided rests on a flawed analysis of data from the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B), a longitudinal study of children born in 2001. For their analysis, the authors identify a group of 24-month-olds in the data set whom they label “Part C eligible children who did not receive Part C services.” These children

  • Received a low score on a shortened version of the Bayley Scales of Infant Development (27 items) administered at 9 months of age by a field data collector; and
  • Were reported by a parent when the child was 24 months old as not having received services to help with the child’s special needs.

Few would argue that the determination of eligibility for Part C could be replicated by a 27-item assessment administered by someone unfamiliar with infants and toddlers with disabilities. Furthermore, data from the National Early Intervention Longitudinal Study show that very few children are identified as eligible for Part C on the basis of developmental delay at 9 months of age. The first problem with the analysis, then, is the assumption that all of these children would have been Part C eligible. The second problem is that it is impossible in this data set to reliably identify which children did and did not receive Part C services. Parents were asked a series of questions about services in general; they were not asked about Part C services specifically. As we and others who have worked with national data collections have learned, parents are not good reporters of program participation, for a variety of reasons. The only way to confirm participation in Part C services is to verify program participation, which the study did not do. Given that children who received Part C services cannot be identified in the ECLS-B data, no one should be drawing conclusions about Part C participation from this data set.

The authors also argue that a measurement phenomenon called “regression to the mean” explains why children in Part C and Part B 619 showed improved performance after program participation. In essence, this argument says that the improvements seen in children’s functioning are not real changes but are artifacts of measurement error. One can acknowledge the reality of error in assessment results, but to maintain that measurement error is the sole, or even a major, explanation for the progress shown by children in Part C and Part B 619 programs is absurd.
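For readers unfamiliar with the phenomenon, the sketch below (again with hypothetical numbers) shows how regression to the mean arises: when children are selected because of low observed entry scores, measurement error alone pushes the group’s average retest score upward. The simulation illustrates that the phenomenon is real; it says nothing about whether it is a major explanation for the gains actually observed in Part C and Part B 619 data, which is the point in dispute.

```python
import random

# Minimal sketch (hypothetical numbers): regression to the mean.
# Each child has a stable "true" score; every assessment adds independent
# measurement error. Children selected for low *observed* entry scores
# will, on average, score higher at exit even with no real change.

random.seed(0)

N = 100_000
CUTOFF = 85  # hypothetical selection threshold on the entry assessment

true_scores = [random.gauss(100, 15) for _ in range(N)]
entry = [t + random.gauss(0, 8) for t in true_scores]        # entry score with error
exit_scores = [t + random.gauss(0, 8) for t in true_scores]  # exit score, same true score

# Keep only children whose observed entry score fell below the cutoff.
selected = [(e1, e2) for e1, e2 in zip(entry, exit_scores) if e1 < CUTOFF]

mean_entry = sum(e1 for e1, _ in selected) / len(selected)
mean_exit = sum(e2 for _, e2 in selected) / len(selected)

print(f"Selected group, mean entry score: {mean_entry:.1f}")
print(f"Selected group, mean exit score:  {mean_exit:.1f}")
# The exit mean exceeds the entry mean with no true change at all -- which
# is why entry status, services received, and other evidence must be weighed
# before attributing (or denying) program effects.
```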

Moving Forward

State Part C and 619 programs are required by IDEA to report on multiple performance indicators, including child outcomes, as part of a larger performance measurement system. The child outcomes indicator was developed with extensive stakeholder input in order to maximize its utility to local programs, state agencies, and the federal government. The process of building the infrastructure needed to collect and use child outcomes data has been complex, which is why states have been working on it for over ten years. State agencies continue to identify and implement strategies for improving data collection and data use. We know that the data collection processes are not perfect and that more work needs to be done to address data quality and other concerns. Building a national system for measuring outcomes for young children with disabilities receiving IDEA services is a long-term undertaking that requires ongoing effort to make the process better. Disparaging the performance indicator and the data reported by states on the basis of incorrect assumptions and flawed analyses is not productive. Instead, the field needs to engage collectively in ongoing dialogue around critical issues of data quality, data analysis, and appropriate use of the data, based on an informed understanding of what the child outcomes indicator is and is not. Part C and Part B 619 state agencies and OSEP are at the forefront of collecting and using early childhood outcomes data to improve programs, which is exactly what performance measurement is intended to do.

Source: DaSy: The Center for IDEA Early Childhood Data Systems

Available at: http://dasycenter.org/the-perils-of-confusing-performance-measurement-with-program-evaluation/ 

Early Childhood State Advisory Councils Final Report

5/2015

Every day, more Americans recognize the value of high-quality early childhood education and its contribution to the ability of American children to succeed in the classroom, thrive in the workforce, and compete globally. Research studies provide evidence that children who attend high-quality early childhood programs that promote optimal brain development are better prepared for school and success than children who do not attend such programs.

The Improving Head Start for School Readiness Act of 2007, Public Law (P.L.) 110-134, authorized the State Advisory Councils on Early Childhood Education and Care (SACs) grant. The American Recovery and Reinvestment Act of 2009 (ARRA), P.L. 111-5, funded the grant. The SAC grant provided funds to states and territories to develop high-quality early childhood education systems.

Many states had already begun to develop early childhood education systems prior to receiving the grant. They used state, local, private, and federal funds to spur state innovations. Federal grant sources included the Child Care and Development Fund (CCDF), Early Childhood Comprehensive Systems (ECCS), and Project LAUNCH (Linking Actions for Unmet Needs in Children’s Health).

The SAC grant propelled further improvements in the quality of early childhood programs, better coordination among existing early childhood programs, and streamlined service delivery. The grant also provided a strategic focus on early childhood, leveraged previous early childhood systems-building investments, and informed the President’s 2013 early learning plan.

Source: Administration for Children and Families, U.S. Department of Health and Human Services

Available at: https://www.acf.hhs.gov/sites/default/files/ecd/sac_2015_final_report.pdf

QRIS State Contacts & Map 

2/2015

The following information can be found in the QRIS State Contacts spreadsheet:

  • QRIS Status
  • State
  • QRIS
  • QRIS Website
  • Implementing Agency
  • Primary Contact Name/Email/Phone
  • Alternate Contact Name/Email

The QRIS Map provides a complete U.S. map with the following QRIS status color key:

  • Statewide = Blue
  • Counties/Localities/Regions = Red
  • Pilot = Green
  • Planning = Yellow
  • Requires Legislative Action to Implement a QRIS = Grey

 

Source: QRIS National Learning Network

Available at: http://qrisnetwork.org/qris-state-contacts-map

How Head Start Grantees Set and Use School Readiness Goals 

1/20/2015

This report and an accompanying brief present findings from a study describing how local Head Start and Early Head Start grantees set school readiness goals, how they collect and analyze data to track progress toward those goals, and how they use these data in program planning and practice to improve program functioning. The findings are based on data collected during the 2013-2014 school year from 73 Head Start and Early Head Start grantees.

Source: Office of Planning, Research & Evaluation, Administration for Children and Families

Available at: http://www.acf.hhs.gov/programs/opre/resource/how-head-start-grantees-set-and-use-school-readiness-goals

Foundations for Excellence: Planning in Head Start

1/2015

The Office of Head Start National Centers have produced this series of papers to support programs in developing and implementing their planning systems. The papers may be useful to Head Start leaders and management teams, including governing body and Policy Council members. The Head Start planning systems and related activities are an essential part of program operations. Thoughtful planning has always been critical to successful programming; however, it becomes even more important as programs shift from an indefinite grant period to a five-year project period. Federal Oversight of Five Year Head Start Grants (ACF-IM-HS-14-02) and the five-year grant applications require programs to describe and define:

  • Long-term goals they will accomplish during the five-year period
  • Short-term objectives
  • Expected outcomes that are aligned with the goals and objectives
  • Data tools and methods for tracking progress towards their goals, objectives, and expected outcomes

Grantees report on this progress in their yearly continuation applications over the course of the five-year project period. These papers can help programs develop plans for tracking their progress in a meaningful way.

Source: Early Childhood Learning and Knowledge Center

Available at: http://eclkc.ohs.acf.hhs.gov/hslc/tta-system/operations/foundations

The Race to the Top – Early Learning Challenge Year Two Progress Reports

12/4/2015

The U.S. Departments of Education and Health and Human Services have released a report, “Race to the Top – Early Learning Challenge Year Two Progress Report,” highlighting some of the work undertaken by Phase 1 and Phase 2 states during 2013, as reported in their Annual Performance Reports (APRs).

Source: U.S. Department of Education

Available at: https://elc.grads360.org/#program/annual-performance-reports

New Accountability Framework Raises the Bar for State Special Education Programs

6/24/2014

To improve the educational outcomes of America’s 6.5 million children and youth with disabilities, the U.S. Department of Education today announced a major shift in the way it oversees the effectiveness of states’ special education programs.

Until now, the Department’s primary focus was to determine whether states were meeting procedural requirements such as timelines for evaluations, due process hearings, and transitioning children into preschool services. While these compliance indicators remain important to children and families, under the new framework, known as Results-Driven Accountability (RDA), the Department will also include educational results and outcomes for students with disabilities in making each state’s annual determination under the Individuals with Disabilities Education Act (IDEA).

“Every child, regardless of income, race, background, or disability can succeed if provided the opportunity to learn,” U.S. Secretary of Education Arne Duncan said. “We know that when students with disabilities are held to high expectations and have access to the general curriculum in the regular classroom, they excel. We must be honest about student performance, so that we can give all students the supports and services they need to succeed.”

Source: U.S. Department of Education

Available at: http://www.ed.gov/news/press-releases/new-accountability-framework-raises-bar-state-special-education-programs

State Performance Plan (SPP) and Annual Performance Report (APR) Forms

7/2013

As required by law, the Department has issued annual determination letters regarding states’ implementation of the Individuals with Disabilities Education Act (IDEA). Each state was evaluated on key indicators under Part B (ages 3 through 21) and Part C (infants through age 2) and placed into one of four categories: meets requirements, needs assistance, needs intervention, or needs substantial intervention. Most states fell into the top categories: 38 states met requirements for Part B, and 37 states and Puerto Rico met requirements for Part C. No states were in the needs substantial intervention category. The IDEA identifies specific technical assistance or enforcement actions that the agency must undertake for states that do not meet requirements.

Source: U.S. Department of Education

Available at: http://www2.ed.gov/fund/data/report/idea/sppapr.html

An Ocean of Unknowns

7/30/2013, 10:00–11:30 ET

Few things can be more polarizing than basing teachers’ evaluations on their students’ achievement. It sounds logical: teachers should be accountable for what their students have – or have not – learned. But incorporating achievement data into teacher ratings is complicated, especially in the early grades of elementary school, when students typically do not take state standardized tests. With nudges from the federal government through programs like Race to the Top and flexibility waivers from No Child Left Behind, nearly every state is revamping its teacher evaluation system to include student achievement data as a significant factor in a teacher’s rating. Across the country, experiments abound as states and school districts struggle to find sound approaches to measuring young students’ achievement for the purposes of teacher evaluation.

New America’s Education Policy Program recently released a paper by Senior Policy Analyst Laura Bornfreund exploring the approaches being used to include student achievement data in PreK-3rd grade teachers’ evaluations. Please join us for an expert panel that will discuss the opportunities and risks Bornfreund illuminates in her paper.

Source: The New America Foundation

Available at: http://newamerica.net/events/2013/an_ocean_of_unknowns

Head Start and Early Head Start standards raised to increase quality and accountability

7/2/2013

More than 150 agencies will receive grants to provide Head Start or Early Head Start services in their communities for the next five years according to an announcement made today by Office of Head Start (OHS) Director Yvette Sanchez Fuentes. The awardees were selected through a competition that compared existing Head Start grantees to other potential providers to determine which organizations could provide the best early education services to their communities.

“This competition raises the quality of Head Start programs across America,” said Director Sanchez Fuentes.  “We are holding every provider accountable to deliver high-quality comprehensive services to children and families, so we can continue to deliver on the promise Head Start makes to communities.”

As part of the Head Start reforms President Obama announced in 2011, 125 Head Start grantees that failed to meet a new set of rigorous benchmarks were required to compete for continued federal funding with other potential early childhood services providers in their communities.  Grantees that chose to compete for funding were required to demonstrate that they had corrected all deficiencies in a sustainable manner in order to be considered for funding for the next five years.  As part of this sweeping reform to the Head Start program, all grantees will be evaluated under transparent, research-based standards to ensure that programs are providing the highest quality services to children and families.

In this first round of competition, all competitors had to submit proposals detailing how they would achieve Head Start’s goal of delivering high-quality early childhood services to the nation’s most vulnerable infants, toddlers and preschoolers.  These proposals were subjected to an extensive evaluation process, including review by a panel of independent early childhood professionals and assessment by Certified Public Accountants to determine a potential grantee’s ability to implement Head Start’s mission and standards in their community.  In a few cases, the panel determined that an existing grant would be more effective if it was split up amongst multiple agencies, bringing the total number of grants yielded from the first round of competition to 153. A full list of the selected grantees is available at http://eclkc.ohs.acf.hhs.gov/hslc/hs/grants/dr/cohort-1-awards-result….

As a result of the 2011 reforms, all Head Start grants are now awarded in five-year increments.  Grantees will be subject to strengthened grant terms and conditions to ensure every Head Start child across the country receives consistent, high-quality education and services.  Grantees are expected to meet OHS’ high quality benchmarks in order to be renewed for an additional five years, or face their grants being opened for recompetition.

A second group of Head Start grantees was notified in January that the grants for their service areas would also be open to competition.  The competitive process for those service areas will open to the public later this summer.

For more information on Head Start, please visit www.acf.hhs.gov/programs/ohs/.

Source: Administration for Children and Families, U.S. Department of Health and Human Services