
Reporting Meaningful Outcomes in School Behavioral Health


In April 2019, my student Deanna Al-Hammori and I presented a workshop on single-case design research and program evaluation at the Southeastern School Behavioral Health Conference in Myrtle Beach, SC.

Background

One of the areas where mental health professionals are most likely to have organizational impact in schools is program evaluation. Program evaluations ask the question: How well are services meeting local needs? The primary purpose of evaluation is to inform local decision making by providing feedback to stakeholders. In this way, program evaluation differs from research in that it is much more likely to focus on pragmatic outcomes.

There are two main types of evaluation. The first is formative evaluation, which examines implementation fidelity to an evidence-based practice and provides feedback to the implementers (i.e., "audit feedback"). The second is summative evaluation, which examines the clinical impact of evidence-based practices in the field, typically occurring at the end of a program cycle and often considering cost-benefit tradeoffs.

An example of thoughtful program evaluation is Positive Behavioral Interventions and Supports (PBIS). PBIS uses several tools to collect data on program quality, keeping implementers apprised of program functioning. Many of these tools are migrating to online dashboards; examples can be found at http://pbisapps.org.

Single-Case Analysis and Program Evaluation

We recommended two elements to inform program evaluation. The first is single-case effect sizes at the individual, group, classroom, and school levels. To assist in this process, we directed audience members to our website, www.schoolpsychologytech.org, and our single-case analysis workbook. The effect sizes derived from single-case designs can be aggregated to provide an overview of program effectiveness, consistent with examples provided by Burns (2015).
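
To make the aggregation idea concrete, below is a minimal sketch in Python of one common single-case effect size, the Non-overlap of All Pairs (NAP), computed per student and then averaged into a program-level summary. The student names and data are hypothetical, and this is not our workbook's implementation; it only illustrates how case-level effects can roll up to an overview of program effectiveness.

```python
# Minimal sketch: Non-overlap of All Pairs (NAP), one common
# single-case effect size, computed per case and then averaged
# across cases to summarize a program. All data are hypothetical.

from itertools import product

def nap(baseline, intervention):
    """Proportion of baseline-intervention pairs in which the
    intervention point improves on the baseline point (ties
    count 0.5). Assumes higher scores are better."""
    pairs = list(product(baseline, intervention))
    score = sum(1.0 if b < i else 0.5 if b == i else 0.0
                for b, i in pairs)
    return score / len(pairs)

# Hypothetical cases: (baseline phase, intervention phase)
cases = {
    "student_A": ([2, 3, 2, 4], [5, 6, 5, 7]),
    "student_B": ([1, 2, 2, 1], [2, 3, 4, 4]),
}

effects = {name: nap(a, b) for name, (a, b) in cases.items()}
program_nap = sum(effects.values()) / len(effects)

for name, es in effects.items():
    print(f"{name}: NAP = {es:.2f}")
print(f"Program-level mean NAP = {program_nap:.2f}")
```

The same rollup works at any level: average case-level effects within a classroom, then across classrooms, then school-wide.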

Our second recommendation was to record intervention "ingredients" to provide the data necessary for benefit-cost analyses (Levin, McEwan, Belfield, Bowden, & Shand, 2018). Pressures are mounting on researchers to analyze program costs relative to benefits, and this may be a sign of things to come for schools. In truth, an evidence-based program delivered with low fidelity can be less effective than an idiosyncratic program delivered with high fidelity. Similarly, an idiosyncratic program at low cost can be more valuable than an evidence-based program at high cost. For these reasons, it is critical to record personnel time, training needs, facility use, equipment, and other inputs when documenting interventions.
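
As a rough illustration of the ingredients approach, the sketch below prices each recorded input, totals them, and derives a per-student cost and a simple benefit-cost ratio. The resources, prices, and benefit estimate are hypothetical, not figures from Levin and colleagues; the point is simply that careful ingredient records make this arithmetic possible.

```python
# Minimal sketch of the "ingredients" method for program cost:
# each resource the intervention consumes is priced and summed,
# then compared to an estimated benefit. All figures hypothetical.

ingredients = {
    # resource: (quantity, unit cost in dollars)
    "counselor_hours":  (40, 35.00),   # direct service time
    "teacher_training": (8, 28.00),    # hours of staff training
    "materials":        (1, 150.00),   # curriculum kit
    "facility_hours":   (40, 12.00),   # room use, pro-rated
}

total_cost = sum(qty * unit for qty, unit in ingredients.values())

students_served = 12
cost_per_student = total_cost / students_served

print(f"Total program cost: ${total_cost:,.2f}")
print(f"Cost per student:   ${cost_per_student:,.2f}")

# Given an estimated dollar benefit per student (e.g., avoided
# services), a benefit-cost ratio follows directly:
estimated_benefit_per_student = 400.00  # hypothetical
bc_ratio = estimated_benefit_per_student / cost_per_student
print(f"Benefit-cost ratio: {bc_ratio:.2f}")
```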

References and Further Reading

Burns, C.E. (2015). Does my program really make a difference? Program evaluation utilizing aggregate single-subject data. American Journal of Evaluation, 36, 191-203. doi: 10.1177/1098214014540032

Forman, S.G., & Burke, C.R. (2008). Best practices in selecting and implementing evidence-based school interventions. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology V (pp. 799-811). Bethesda, MD: National Association of School Psychologists.

Horner, R.H., Todd, A.W., Lewis-Palmer, T., Irvin, L.K., Sugai, G., & Boland, J.B. (2004). The school-wide evaluation tool (SET): A research instrument for assessing school-wide positive behavior support. Journal of Positive Behavior Interventions, 6, 3-12.

Kidder, D.P., & Chapel, T.J. (2018). CDC’s program evaluation journey: 1999 to present. Public Health Reports, 133, 356-359. doi: 10.1177/0033354918778034

Levin, H.M., McEwan, P.J., Belfield, C., Bowden, A.B., & Shand, R. (2018). Economic Evaluation in Education: Cost-effectiveness and Benefit-cost Analysis (3rd ed.). Thousand Oaks, CA: Sage.

Morrison, J.Q., & Harms, A.L. (2018). Advancing Evidence-based Practice through Program Evaluation: A Practical Guide for School-based Professionals. New York, NY: Oxford University Press.

Schultz, B.K., Mixon, C., Dawson, A., Spiel, C., & Evans, S.W. (2017). Evaluating school mental health programs. In K.D. Michael & J.P. Jameson (Eds.), Handbook of Rural School Mental Health. Cham, Switzerland: Springer International.

Sugai, G., & Horner, R. (2006). A promising approach for expanding and sustaining school-wide positive behavior support. School Psychology Review, 35, 245-259.

Warren, J.S., Edmonson, H.M., Turnbull, A.P., Sailor, W., Wickham, D., Griggs, S.E., & Beech, S.E. (2006). Schoolwide positive behavior support: Addressing behavior problems that impede student learning. Educational Psychology Review, 18, 187-198.

Yarbrough, D.B., Shulha, L.M., Hopson, R.K., & Caruthers, F. (2010). The Program Evaluation Standards: A Guide for Evaluators & Evaluation Users (3rd ed.). Los Angeles, CA: Sage.
