The Perils of Confusing Performance Measurement with Program Evaluation

A group of researchers recently published a paper critiquing the child outcomes performance indicator for Part C and Part B 619. They also presented some of their thoughts in a recent webinar sponsored by the Association of University Centers on Disabilities (AUCD). The researchers’ critique is based on several faulty assumptions and consequently unfairly discredits the system for measuring child outcomes and the use of the data. Let’s look at our concerns with their critique.

First, the authors have confused performance measurement with program evaluation.

Their primary argument is that the child outcomes measurement requirement produces misleading information because it is based on a flawed evaluation design. The researchers’ critique wrongly assumes that the child outcomes indicator is designed as an evaluation. The child outcomes measurement is not a program evaluation; it is one performance indicator embedded within a larger performance measurement system required by the Individuals with Disabilities Education Act (IDEA). States report on a number of performance indicators that address compliance with federal regulations and program results. As such, these indicators yield information that supports program improvement and ongoing monitoring of program performance. Performance measurement systems are common in both the public sector (for example, Maternal and Child Health) and the private sector (for example, the Pew framework for home visiting). The Office of Special Education Programs (OSEP) implemented the child outcomes indicator in response to the Government Performance and Results Act, which requires all federal agencies to report on the results being achieved by their programs. OSEP also uses the child outcomes indicator data to monitor states on results achieved, consistent with IDEA’s strong emphasis on improving results for children with disabilities.

The Government Accountability Office has produced a succinct summary that highlights some of the differences between performance measurement and program evaluation. Performance measurement refers to the ongoing monitoring and reporting of program accomplishments. Performance measures may address program activities, services and products, or results. The OSEP child outcomes indicator is a performance measure that addresses results. Examples of other results-oriented performance measures are teen pregnancy rates, the percentage of babies born at low birth weight, 3rd grade reading scores, and high school graduation rates. In contrast, program evaluations are periodic or one-time studies, usually conducted by experts external to the program, that involve a more in-depth look at a program’s performance. Impact evaluations are a particular type of program evaluation that determine the effect of a program by comparing the outcomes of program participation to what would have happened had the program not been provided.

Performance Measurement Compared to Program Evaluation

Feature | Performance Measurement | Program Evaluation
Data collected on a regular basis (e.g., annually) | Yes | No
Usually conducted by experts to answer a specific question at a single point in time | No | Yes
Provides information about a program’s performance relative to targets or goals | Yes | Possibly
Provides ongoing information for program improvement | Yes | No
Can conclude unequivocally that the results observed were caused by the program | No | Yes, if a well-designed impact evaluation
Typically quite costly | No | Yes
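To make the counterfactual logic in the last rows concrete, here is a minimal sketch in Python. All numbers are invented for illustration and do not come from any Part C data: a performance measure summarizes results for the children served, while an impact estimate also requires a comparison group representing what would have happened without the program.

```python
# Toy illustration of the distinction drawn in the table above.
# All numbers are invented; nothing here comes from actual Part C data.

# Outcomes for children who received services (1 = met the outcome target).
served = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]

# A performance measure summarizes results for participants only.
performance = sum(served) / len(served)
print(f"Performance measure: {performance:.0%} of served children met the target")

# An impact evaluation additionally requires a comparison group estimating
# what would have happened WITHOUT the program (the counterfactual).
comparison = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]
impact = performance - sum(comparison) / len(comparison)
print(f"Estimated impact: {impact:+.0%} attributable to the program")
```

The point of the sketch is simply that without the comparison group, a performance measure can describe results but cannot attribute them to the program, which is exactly the distinction the table draws.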

A major difference between measuring outcomes in a performance measurement system versus a program evaluation is that a well-designed impact evaluation can conclude unequivocally that the results observed were caused by the program. Performance measures cannot rule out alternative explanations for the results observed. Nevertheless, performance measurement data can be used for a variety of purposes, including accountability, monitoring performance, and program improvement. Data on performance measures such as the Part C and Part B Section 619 child outcomes indicator can be used to track performance against a target or to compare results from one year to the next within programs or states. They can be used to identify state or local programs that could benefit from additional support to achieve better results. Comparing outcomes across states or programs should be done with an awareness that they might serve different populations, which could contribute to different outcomes. The solution is not to conclude that results data are useless or misleading but rather to interpret the results alongside other critical pieces of information, such as the performance of children at entry to the program or the nature of the services received. Two of OSEP’s technical assistance centers, the Center for IDEA Early Childhood Data Systems (DaSy) and the Early Childhood Technical Assistance Center (ECTA), have developed a variety of resources to support states in analyzing child outcomes data, including looking at outcomes for subgroups to further understand what is contributing to the results observed. Just like tracking 3rd grade reading scores or the percentage of infants who are born at low birth weight, there is tremendous value in knowing how young children with disabilities are doing across programs and year after year.
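As a hypothetical sketch of that kind of contextual analysis (the field names, target, and rates below are invented, not drawn from any state’s data), an analyst might report the overall indicator against a target and then disaggregate by children’s functioning at entry:

```python
# Hypothetical sketch of interpreting an outcomes indicator alongside
# children's status at program entry. All records are invented.
records = [
    {"entry_delay": "mild",   "met_target": True},
    {"entry_delay": "mild",   "met_target": True},
    {"entry_delay": "severe", "met_target": False},
    {"entry_delay": "severe", "met_target": True},
    {"entry_delay": "mild",   "met_target": False},
    {"entry_delay": "severe", "met_target": False},
]

# Overall indicator value, as it might be reported against a state target.
overall = sum(r["met_target"] for r in records) / len(records)
print(f"Overall: {overall:.0%} met the outcome (hypothetical target: 55%)")

# Disaggregating by functioning at entry helps explain differences between
# programs that serve different populations.
for group in ("mild", "severe"):
    rows = [r for r in records if r["entry_delay"] == group]
    rate = sum(r["met_target"] for r in rows) / len(rows)
    print(f"  Entry delay {group}: {rate:.0%} ({len(rows)} children)")
```

Two programs providing identical services can post different overall rates simply because they serve different entry-level populations; disaggregation makes that visible rather than misleading.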

Second, the authors incorrectly maintain that children who did not receive Part C services would show the same results on the child outcomes indicator as children who did.

The researchers’ claim that the results states are reporting to OSEP would have been achieved even if no services had been provided rests on a flawed analysis of the ECLS-B data, a longitudinal study of children born in 2001. For their analysis, the authors identify a group of 24-month-olds in the data set whom they label as “Part C eligible children who did not receive Part C services.” These children

  • Received a low score on a shortened version of the Bayley Scales of Infant Development (27 items) administered at 9 months of age by a field data collector; and
  • Were reported by a parent when the child was 24 months old as not having received services to help with the child’s special needs.

Few would argue that the determination of eligibility for Part C could be replicated by a 27-item assessment administered by someone unfamiliar with infants and toddlers with disabilities. Furthermore, data from the National Early Intervention Longitudinal Study show that very few children are identified as eligible for Part C based on developmental delay at 9 months of age. The first problem with the analysis, then, is assuming that all of these children would have been Part C eligible. The second problem is that it is impossible in this data set to reliably identify which children did and did not receive Part C services. Parents were asked a series of questions about services in general; they were not asked about Part C services specifically. As we and others who have worked with national data collections have learned, parents are not good reporters of program participation, for a variety of reasons. The only way to confirm participation in Part C services is to verify program participation, which the study did not do. Given that children who received Part C services cannot be identified in the ECLS-B data, no one should be drawing conclusions about Part C participation based on this data set.

The authors also argue that a measurement phenomenon called “regression to the mean” explains why Part C and Part B 619 children showed improved performance after program participation. In essence, this argument says that improvements seen in the functioning of the children are not real changes but are actually due to measurement error. One can acknowledge the reality of errors in assessment results, but to maintain that measurement error is the sole or even a major explanation for the progress shown by children in Part C and Part B 619 programs is absurd.
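For readers unfamiliar with the phenomenon, here is a minimal simulation sketch. Everything in it is invented (the sample size, the score scale, the error size, and the 10% cutoff); it is not the authors’ analysis or real assessment data. It shows how selecting children on a single low, noisy score produces an apparent gain at retest even when nothing has truly changed:

```python
# Illustrative simulation of regression to the mean. All parameters below
# (N, TRUE_SD, ERROR_SD, the 10% cutoff) are hypothetical.
import random

random.seed(0)

N = 100_000          # simulated children
TRUE_SD = 15.0       # spread of true developmental scores (mean 100)
ERROR_SD = 7.0       # measurement error of the assessment

# Two noisy measurements of the SAME underlying ability: no real change occurs.
true_scores = [random.gauss(100, TRUE_SD) for _ in range(N)]
time1 = [t + random.gauss(0, ERROR_SD) for t in true_scores]
time2 = [t + random.gauss(0, ERROR_SD) for t in true_scores]

# Select the children who scored lowest at time 1 (bottom 10%), mimicking
# eligibility determined by a single low assessment score.
cutoff = sorted(time1)[int(0.10 * N)]
selected = [i for i in range(N) if time1[i] <= cutoff]

mean_t1 = sum(time1[i] for i in selected) / len(selected)
mean_t2 = sum(time2[i] for i in selected) / len(selected)

print(f"Selected group, time 1 mean: {mean_t1:.1f}")
print(f"Selected group, time 2 mean: {mean_t2:.1f} (higher, with zero true change)")
```

In the sketch, the size of the spurious “gain” is governed entirely by the measurement error (ERROR_SD) relative to true variation: the more reliable the assessment, the smaller the artifact. Regression to the mean is real, but its magnitude is bounded, which is why it is implausible as the sole explanation for the progress described above.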

Moving Forward

State Part C and 619 programs are required by IDEA to report on multiple performance indicators, including child outcomes, as part of a larger performance measurement system. The child outcomes indicator was developed with extensive stakeholder input in order to maximize its utility to local programs, state agencies, and the federal government. The process of building the infrastructure needed to collect and use child outcomes data has been complex, which is why states have been working on it for over ten years. State agencies continue to identify and implement strategies for improving the collection and use of the data. We know that the data collection processes are not perfect, and more work needs to be undertaken to address data quality and other concerns. Building a national system for measuring outcomes for young children with disabilities receiving IDEA services is a long-term undertaking that requires ongoing effort to make the process better. Disparaging the performance indicator and the data reported by states based on incorrect assumptions and flawed analyses is not productive. Instead, the field needs to collectively engage in ongoing dialogue around critical issues of data quality, data analysis, and appropriate use of the data, based on an informed understanding of what the child outcomes indicator is and is not. Part C and Part B 619 state agencies and OSEP are at the forefront of collecting and using early childhood outcomes data to improve programs, which is exactly what performance measurement is intended to do.

Source: DaSy: The Center for IDEA Early Childhood Data Systems

Available at: http://dasycenter.org/the-perils-of-confusing-performance-measurement-with-program-evaluation/ 

Hiring and Retaining Qualified Staff Is Not Mission Impossible

Katherine Falon
Senior State Training and Technical Assistance Specialist
National Center on Early Childhood Development, Teaching, and Learning

Are you having trouble finding and keeping dedicated, enthusiastic, and well-prepared staff members? Do you feel stumped about how to find great teachers, and what you can do to keep them? In this webinar, Katherine Falon will walk you through an overview of some of your biggest staffing challenges and some ideas for overcoming them. This webinar will share unique approaches to recruitment, screening, and retention practices that will help improve your odds of making what seems impossible possible.

You will leave this session empowered with new strategies for:

• Recruiting, screening, and hiring
• Keeping staff members engaged and invested in their jobs
• Discovering valuable staff development resources

All sessions are 1.5 hours long, and include a brief announcement from our sponsor.

UCLA Head Start Management Fellows Program

 

June 18–29, 2018
Los Angeles, CA

Apply Online Now!

Applications are now available for the 2018 UCLA Head Start Management Fellows Program, which runs June 18–29, 2018. This 12-day intensive leadership and management development training session is for Head Start and Early Head Start directors and managers. Designed from a strategic planning perspective, the curriculum focuses on applying current management concepts to Head Start needs and interests. More than 80 hours of classroom instruction include lectures, group discussions, case studies, and workshops.

Graduates of the Fellows Program are awarded a certificate from UCLA. They are also given the option to receive academic-level credits at an additional cost. The program has enhanced the management and leadership capabilities of more than 1,500 Head Start directors and managers nationwide.

Target Audience

This program is open to Head Start and Early Head Start directors and managers who have been in their current position for a minimum of two years, and have experience in a leadership role at a local, state, or regional community organization. Participation in a community organization does not need to be current.

In addition, participants must identify a “co-participant” who will attend the final two and a half days of the program. The co-participant should be the participant’s supervisor or board chair. Two-person teams from the same program are also eligible to apply (a limited number will be selected).

Cost for Participants

The National Center on Program Management and Fiscal Operations (PMFO) will defray the majority of program costs for both the participant and co-participant. This will include tuition, training materials, lodging, and most meals. Participants are responsible for a registration fee of $3,100. Participants and co-participants are also responsible for their travel expenses to and from Los Angeles.

How to Apply

Selected participants will be notified by May 4, 2018.

Questions?

If you have questions, please contact Jeanette Boom at jeanette.boom@anderson.ucla.edu or 310-825-6306.

The UCLA Head Start Management Fellows Program is offered by PMFO, in partnership with UCLA’s Anderson School of Management.

Webinar: Boosting Your Program’s Bottom Line: Ideas for Differentiation and New Revenue Streams

January 24, 2017
2:00 – 3:30pm ET

Let’s face it: Quality early childhood programs are expensive to operate, and the competition for enrollment can be fierce. We’re always looking for new ideas to boost our programs’ revenue and make them stand out from the crowd. Child care marketing genius Kris Murray will join us to share the ideas you need to earn additional revenue and differentiate your program with solutions families will crave.

In this session, you will learn:

  • How to define your program’s “key value differentiators” to attract more families to your program;
  • How to identify additional products and services that will bring in revenue beyond tuition and other funding, without “fundraising”;
  • Strategies for launching your new products and services; and
  • How to locate resources to support your new revenue-boosting campaigns.

All sessions are 1.5 hours long, and include a brief announcement from our sponsor.

Can’t participate in our webinars at the appointed time? Never fear! All of the webinars are recorded. To view the recording, simply register now, and you will receive an email with a link to the recording when it is ready to be viewed. You can still earn the certificate by watching the recording to the end, when the certificate link is announced and displayed on the screen.

Only 1,000 people can attend our webinars at one time, but registration often tops 4,000. Only the first 1,000 people to click the link to attend the webinar will be able to get in. We open the webinar 30 minutes in advance of the start time, so arrive early to make sure you get in.

Please be advised that you will only be eligible for the great door prizes if you participate in the live session.

You can earn 0.2 CEUs for each webinar. The cost is $15, paid online to the University of Oklahoma when you apply. Learn more here: Continuing Education Units (CEUs) from University of Oklahoma

OHS Head Start Program Performance Standards Talk

Wednesday, Dec. 14, 2016

2–3:30 p.m. EST

Register Online Now!

Join the Office of Head Start (OHS) in this conversation for Head Start grantees’ management and staff members, T/TA System staff, and other stakeholders about the newly released Head Start Program Performance Standards (HSPPS).

Join us this month to discuss supporting implementation of the HSPPS, as well as hot topics we are hearing from the field.

Learn more about:

  • Update on background checks
  • Using the Program Management and Fiscal Operations (PMFO) Management Systems Wheel as a guiding tool
  • Developing an HSPPS implementation process utilizing the four stages of the Implementation Science Framework
  • Suggested planning processes
  • The role of the governing body and Policy Council
  • Task Functional teams

Before the webcast, please read HSPPS Sections 1302.70, 1302.72, 1302.101(b), and 1302.103.

Who Should Participate?

The webcast will benefit an array of audience members, including Head Start and Early Head Start executive leadership, program directors, managers, and staff members. Please join with other colleagues in your organization where possible.

How to Register

Select the link to register: https://goto.webcasts.com/starthere.jsp?ei=1125845

This registration is only valid for the webcast on Dec. 14.

Space is limited. Sign up today to attend the session from your office or conference room. You will receive a confirmation email with instructions on how to join. The webcast will be accessible via computer, tablet, and other Internet-connected devices. Phone access is available for those requiring alternative accommodations. Send an email to webcasts@hsicc.org to receive telephone access.

Save the Date!

Register early for next month’s OHS Head Start Program Performance Standards Talk on Wednesday, Jan. 18, 2017: https://goto.webcasts.com/starthere.jsp?ei=1125886.

Questions?

Send your questions to webcasts@hsicc.org.

Uniform Administrative Requirements, Cost Principles, and Audit Requirements 2014 Regulations and Related Resources

10/2015

The new federal fiscal regulations (also referred to as the Uniform Guidance or the “Supercircular”) took effect for awards and award increments received on or after Dec. 26, 2014. The Office of Head Start (OHS) recommends that grantees transition to the new fiscal regulations throughout 2015. This website will maintain information pertaining to the former regulations through the end of 2015. Thereafter, the former regulations will be archived, and this website will reflect only the new federal fiscal regulations.

Read more about transitioning grants to the new regulations

You are strongly encouraged to review the resources below. Use them to become familiar with the expectations and impact of the new Uniform Administrative Requirements, Cost Principles, and Audit Requirements 2014 Regulations on your organization. If you are unsure about how to apply the new fiscal regulations and requirements within your organization, please contact your Regional Office for assistance.

Source: Office of Head Start, Early Childhood Learning and Knowledge Center

Available at: http://eclkc.ohs.acf.hhs.gov/hslc/tta-system/operations/mang-sys/fiscal-mang/ug-resources

Early Head Start-Child Care Partnerships Baseline Assessment Tools

6/30/2015

The purpose of the baseline is to understand grantees’ and partners’ current capacity. Baseline information will be used to identify technical assistance needs or other supports, including additional start-up funding that may be needed to ensure grantees and partners are on track to meet Early Head Start requirements at 18 months. The baseline will gather information in the following areas: environmental health and safety; fiscal management systems; governance; program management systems, including eligibility, recruitment, selection, enrollment, and attendance (ERSEA); and comprehensive services.

Source: Early Childhood Learning and Knowledge Center, Office of Head Start

Available at: http://eclkc.ohs.acf.hhs.gov/hslc/grants/monitoring/additional-resources.html

Early Childhood Data Privacy

6/2015

States, communities, and local providers are using data to serve the needs of children and families participating in early childhood programs (e.g., Head Start, child care, preschool). Data sharing can support efficient, effective services for children. However, the benefits of data sharing and use must be balanced with the need to protect privacy. To support this balance, PTAC has assembled the following resources about privacy and data sharing with early childhood programs in mind. This list is just a start; additional resources will be added as they are developed.

Source: Privacy Technical Assistance Center (PTAC), U.S. Department of Education

Available at: http://ptac.ed.gov/early-childhood-data-privacy

Health Manager’s Orientation Guide

2/2015

Whether you are new to Head Start, are new to the role of health manager, or have been a health manager for a while, this guide was developed to be a resource tool for you. This section provides a brief overview of Head Start health management. It also looks at the important role Head Start plays in fostering a culture of health and wellness for Head Start children, families, and staff.

Source: Early Childhood Learning and Knowledge Center, National Center on Health

Available at: http://eclkc.ohs.acf.hhs.gov/hslc/tta-system/health/health-services-management/health-managers-orientation-guide

Learning for New Leaders: Head Start A to Z

September 8, 2014

The National Center on Program Management and Fiscal Operations (PMFO) is pleased to offer Learning for New Leaders: Head Start A to Z, a collection of sessions and resources designed to address the unique needs of new Head Start and Early Head Start leaders. New directors, managers, and other leaders may use these materials for individual professional development. The materials also can be used in face-to-face group and distance learning settings to orient and support new directors and managers.

Source: National Center on Program Management and Fiscal Operations and the Early Childhood Learning and Knowledge Center

Available at: http://eclkc.ohs.acf.hhs.gov/hslc/tta-system/operations/learning/learning.html