
Disseminating Evidence-Based Interventions for Students with ADHD via Teleconsultation


In February 2016, my lab presented preliminary results of our school teleconsultation study at the annual convention of the National Association of School Psychologists. A summary of that presentation is below:

Introduction

School-based behavior consultation is a well-established treatment delivery model for students with behavior difficulties in schools (Reddy, Barboza-Whitehead, Files, & Rubel, 2000), yet resource and travel demands can make it difficult for behavior consultants (e.g., school and clinical psychologists) to be readily available to teacher consultees. In the absence of these supports, teachers are left to devise classroom management strategies ‘on the job,’ with only minimal pre-service training on issues relating to childhood behavior disorders (State, Kern, Starosta, & Mukherjee, 2011). Moreover, progress monitoring during behavior interventions is often imprecise. Student behavior concerns and circumstances vary widely and, as a result, progress is difficult to define. Teachers commonly rely on ad hoc measures (e.g., counts of classroom rule violations) or discrete behavior ratings (e.g., daily progress reports), but it is often unclear how to use these data to inform intervention decisions (e.g., Saeki et al., 2011).

Children with attention-deficit/hyperactivity disorder (ADHD) experience behavior difficulties at school that can lead to significant academic impairment. In particular, students with ADHD often exhibit disorganization, materials mismanagement, poor study skills, poor time management, disruptions in the classroom, and strained relationships with their teachers. Such impairments often persist from elementary into secondary schools, resulting in disproportionately high rates of Special Education referrals (Schnoes, Reid, Wagner, & Marder, 2006), disciplinary actions (Atkins et al., 2002), failing grades (Schultz, Evans, & Serpell, 2009), and school dropout (Barkley, Fischer, Smallish, & Fletcher, 2006). Despite the clear risks, few interventions have been designed or tested to support secondary students and their teachers (Wolraich et al., 2005).

With technological advances in online communication, several viable options now exist for professionals to collaborate via online videoconferencing. The use of videoconferencing has been widely studied in psychiatry, leading to a subfield referred to as “telepsychiatry.” Studies examining the feasibility and acceptability of telepsychiatry indicate that patients and providers are highly satisfied with the technology, with satisfaction ratings comparable to those of traditional face-to-face service provision (Garcia-Lizana & Munoz-Mayorga, 2010). Telepsychiatry services have consistently proven effective in assessing and diagnosing clients remotely, with some emerging research suggesting that treatment via videoconferencing is comparable to face-to-face care. Taken together, the findings suggest that telepsychiatry is a safe, effective, and inexpensive alternative to in-person services that can potentially expand service delivery to previously underserved areas (Bloch & Diamond, 2010; Hilty et al., 2013).

Our primary goal in the present study was to assess feasibility and acceptability among teacher consultees and parents, but we also used the opportunity to collect and analyze data regarding treatment fidelity and preliminary efficacy to inform future modifications.

Method

We recruited participants at two local middle schools in eastern North Carolina (see demographic data summarized in the table below). We began by sending flyers to all families at one site, and direct contact (e.g., Open Houses) was used at both sites. In all, 18 participants were recruited over two years: 6 were treated in year one (feasibility study) and the other 12 in the second year (pilot study). Teachers were recruited at the start of each school year to serve as "mentors" (i.e., consultees); 16 separate teachers served as mentors over the two years of the study.

Once mentors were recruited, we started an A-B-A-B single-subject design planned for the entire school year, with each participant receiving the comparison condition (A – traditional face-to-face consultation) as well as the experimental condition (B – videoconferencing). Each phase lasted approximately nine weeks. (Note: Due to the timing of this presentation, the present discussion only includes the first two phases, A-B.) Our design is well-suited to research questions relating to feasibility and acceptability because all teacher consultees experience both conditions and can directly compare those procedures. To assess treatment fidelity, each teacher consultee tracked weekly indicators of the student’s performance during the intervention. We examined the treatment-response trajectories across both experimental conditions using a Bayesian approach to single-subject analysis that quantifies the strength of consistencies or differences in trends over time (de Vries, Hartogs, & Morey, 2015; de Vries & Morey, 2013).

We evaluated three primary outcomes. First, we used consultants' progress notes to examine success rates (i.e., completed sessions) when communicating with teachers via videoconferencing over time. Second, to evaluate teacher perceptions of videoconferencing technologies (i.e., acceptability), we administered the Distance Communication Comfort Scale (DCCS; Schneider, 1999) and the Technology Acceptance Model Instrument (TAMI; Chin, Johnson, & Schwartz, 2008) at the beginning and end of each school year. Third, to assess treatment integrity over time, we compared permanent product data across each phase during the school year. All interventions were derived from the Challenging Horizons Program (CHP; Schultz & Evans, 2015), which provides "training interventions" designed to improve ADHD-related coping skills. Skills developed by the CHP include organization, assignment tracking, note taking, study skills, and social problem solving. Permanent products from these interventions include organization checklists and other tracking procedures collected by teacher consultees. To assess whether there were differences between phases in the permanent product data, we conducted Bayesian single-case analyses. Bayesian single-case analysis adjusts for many of the complicating factors in single-case data (e.g., autocorrelation, missing data) to derive a principled estimate of change across phases. In this instance, we hypothesized that there would be little change from phase A to phase B, and the strength of this similarity can be quantified using Bayes factors. Bayes factors can be interpreted similarly to odds ratios, with values indicating the degree to which the null hypothesis (i.e., no change) is supported. A Bayes factor of 3.0, for example, would suggest that the data support the null hypothesis over the alternative hypothesis at a rate of 3:1.
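To make the Bayes-factor logic concrete, here is a minimal sketch using hypothetical weekly checklist scores. Note that it uses a simple BIC-based approximation to the "one mean vs. two means" Bayes factor, not the JZS-AR model we actually used (which also accounts for autocorrelation), so the numbers are illustrative only; the data are invented for this example.

```python
import numpy as np

def bf01_bic(phase_a, phase_b):
    """Approximate Bayes factor in favor of the null (no phase change)
    via the BIC approximation: BF01 ~= exp((BIC_alt - BIC_null) / 2).
    A simplified stand-in for the JZS-AR analysis (de Vries & Morey,
    2013); it ignores autocorrelation in the time series."""
    y = np.concatenate([phase_a, phase_b])
    n = len(y)
    # Null model: one common mean across both phases
    rss_null = np.sum((y - y.mean()) ** 2)
    # Alternative model: a separate mean for each phase
    rss_alt = (np.sum((phase_a - phase_a.mean()) ** 2)
               + np.sum((phase_b - phase_b.mean()) ** 2))
    bic_null = n * np.log(rss_null / n) + 1 * np.log(n)
    bic_alt = n * np.log(rss_alt / n) + 2 * np.log(n)
    return np.exp((bic_alt - bic_null) / 2)

# Hypothetical weekly organization-checklist scores (percent complete)
phase_a = np.array([62, 70, 68, 74, 71, 69, 75, 73, 72])
phase_b = np.array([70, 72, 69, 74, 73, 71, 76, 72, 74])
bf01 = bf01_bic(phase_a, phase_b)
print(round(bf01, 2))  # BF01 > 1 favors the null (no change across phases)
```

With these invented scores the Bayes factor lands modestly above 1, i.e., weak support for "no change" across phases, mirroring how several of our comparisons came out.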

Results

In the pilot study year (our primary focus), 49 consultation sessions were completed during the initial A phase and 37 were completed during the first B phase. We noted a slight increase in the number of sessions that were rescheduled, but we cannot conclude that increased rescheduling was related to the teleconsultation procedures. There were, however, four "failed" sessions in the B phase (i.e., the session had to be discontinued due to technological problems), and another six sessions were noted as problematic (e.g., video stopped working, audio was inconsistent). The net impact was that the average number of sessions between consultant and consultee dropped from 4.2 to 3.1 from phases A to B. Among completed sessions, no differences were noted in terms of the total minutes spent per session between phase A (M = 14.0, SD = 6.0) and phase B (M = 14.7, SD = 7.3). There were also no differences noted in terms of the proportion of sessions in which consultees reported new data to the consultant (roughly 75% of the time in each phase).
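The failure and problem rates reported in the Discussion can be reproduced from the counts above. One reading of the denominators (an assumption on our part here: failures are taken against all attempted B-phase sessions, problems against completed B-phase sessions) recovers both percentages:

```python
completed_b = 37    # completed sessions in the first B phase
failed_b = 4        # sessions discontinued due to technological problems
problematic_b = 6   # completed sessions with audio/video issues

# Failures as a share of all attempted sessions (completed + failed)
attempted_b = completed_b + failed_b
failure_rate = failed_b / attempted_b
# Problems as a share of completed sessions
problem_rate = problematic_b / completed_b

print(f"{failure_rate:.1%}")  # -> 9.8%
print(f"{problem_rate:.1%}")  # -> 16.2%
```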

In terms of teacher acceptability, we noted meaningfully increased comfort with videoconferencing from pre to post on the DCCS (d = 1.47), as well as the TAMI (d = 1.47), after adjusting for the within-subject nature of these data. These results replicate similar findings in the telehealth literature suggesting that users find videoconferencing highly acceptable after use.
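For readers interested in the within-subject adjustment, the effect size we have in mind here is the standardized mean of the paired pre/post differences (sometimes called Cohen's d_z): the mean change divided by the standard deviation of the changes. The sketch below uses invented ratings, not our actual DCCS/TAMI data:

```python
import numpy as np

def d_within(pre, post):
    """Within-subject effect size (d_z): mean of the paired differences
    divided by the standard deviation of those differences."""
    diffs = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return diffs.mean() / diffs.std(ddof=1)

# Hypothetical pre/post videoconferencing-comfort ratings (1-5 scale)
pre  = [2.1, 2.8, 3.0, 2.4, 2.6, 3.2, 2.2, 2.9]
post = [3.4, 3.0, 4.5, 2.8, 4.4, 3.8, 4.2, 3.0]
print(round(d_within(pre, post), 2))  # a large effect, > 1
```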

When comparing the permanent product data from the interventions across the first two phases (AB Design), Bayes factors ranged from 0.69 (anecdotal evidence for the alternative hypothesis) to 6.4 (moderate support for the null hypothesis). In addition, we estimated effect sizes from phase A to B and calculated a range of likely estimates. A subset of results for those students receiving the organization intervention is provided in the figure below. Note that the range of possible effect sizes (left column) includes 0 (no change), and in most cases trends slightly in the positive direction, suggesting that where potential differences exist, the teleconsultation phase (B) outperformed the face-to-face phase (A).

Discussion

Although preliminary, our pilot data suggest that teacher perceptions of teleconsultation improve after attempting it, consistent with the research on telehealth in general. It also appears that teleconsultation is successful in most attempts, but we noted a 9.8% session failure rate and problems in an additional 16.2% of sessions. The result is that, on average, sessions in a 9-week period drop from 4.2 to 3.1 per consultee after switching to teleconsultation, but session length does not meaningfully change. We interpret these findings to suggest that there may be a loss of communication between consultants and consultees, but some of these issues may be related to the fact that we did not constrain our consultants to the office when conducting teleconsultation. Sessions attempted from home, for example, appeared to account for most of the problematic sessions. In the future, we intend to limit teleconsultation sessions to a "hardwired" Internet connection, but note that our findings may have implications for the flexibility of teleconsultation in the field.

Perhaps most importantly, our findings suggest that the switch to teleconsultation does not appear to coincide with any credible loss of treatment fidelity (adherence) or treatment efficacy. Based on these findings, it seems safe to conclude that consultants can switch from in-person to teleconsultation after nine weeks of intervention without negative impacts on the treatment.

Several limitations of the present study are noted. First, our datasets are small and, in some cases, violate the normality assumption and data requirements of the JZS-AR Bayes Factor (see de Vries & Morey, 2013). Second, the data analyzed so far (AB Design) are confounded with time (e.g., calendar events), so other events may better explain our findings to date. Our design also ignores intervention changes/modifications that consultants and consultees make over time; however, given our research questions, the lack of deterioration in phase B suggests that teleconsultation does not hinder such modifications. Third, we have not yet examined external outcomes (e.g., grades, parent perceptions) to determine whether the interventions have been successful, so our data only speak to consistency across the competing delivery models and not to overall effectiveness. Finally, our phases are limited to nine weeks each, so the likelihood of intervention “drift” over longer periods is unknown.

In the future, we will complete the full A-B-A-B reversal planned in the present study to answer our research questions completely. From there, we plan to examine teleconsultation over longer periods, perhaps using a multiple baseline design. To test the full potential of the technology, we also hope to integrate teleconsultation into a broader tele-behavioral health delivery model, potentially including physicians and other health care providers.

References

Auerbach, C., & Zeitlin, W. (2014). SSD for R: An R Package for Analyzing Single-Subject Data. New York: Oxford.

Atkins, M.C., McKay, M.M., Frazier, S.L., Jakobsons, L.J., Arvanitis, P., Cunningham, T., et al. (2002). Suspensions and detentions in an urban, low-income school: Punishment or reward? Journal of Abnormal Child Psychology, 30, 361-371.

Barkley, R.A., Fischer, M., Smallish, L. & Fletcher, K. (2006). Young adult outcome of hyperactive children: Adaptive functioning in major life activities. Journal of the American Academy of Child and Adolescent Psychiatry, 45, 192-202.

Bloch, R.M., & Diamond, J.M. (2010). Telepsychiatry assessments of child or adolescent behavior disorders: A review of evidence and issues. Telemedicine and e-Health, 16, 712-716, doi:10.1089/tmj.2010.0007.

Chin, W.W., Johnson, N., & Schwartz, A. (2008). A fast form approach to measuring technology acceptance and other constructs. MIS Quarterly, 32, 687-703.

de Vries, R.M., Hartogs, B.M., & Morey, R.D. (2015). A tutorial on computing Bayes factors for single-subject designs. Behavior Therapy, 46, 809-823.

de Vries, R.M., & Morey, R.D. (2013). Bayesian hypothesis testing for single-subject designs. Psychological Methods, 18, 165-185. doi: 10.1037/a0031037

Evans, S.W., Langberg, J.M., Schultz, B.K., Vaughn, A., Altaye, M., Marshall, S.A. & Zoromski, A.K., (2016). Evaluation of a school-based treatment program for young adolescents with ADHD. Journal of Consulting and Clinical Psychology, 84, 15-30.

Garcia-Lizana, F., & Munoz-Mayorga, I. (2010). What about telepsychiatry? A systematic review. The Primary Care Companion to the Journal of Clinical Psychiatry, 12(2), doi: 10.4088/PCC.09m00831whi.

Hilty, D. M., Ferrer, D. C., Parish, M. B., Johnston, B., Callahan, E. J., & Yellowlees, P.M. (2013). The effectiveness of telemental health: A 2013 review. Telemedicine and e-Health, 19(6), 444-454, doi:10.1089/tmj.2013.0075

Reddy, L.A., Barboza-Whitehead, S., Files, T., & Rubel, E. (2000). Clinical focus of consultation outcome research with children and adolescents. Special Services in the Schools, 16, 1-22, DOI: 10.1300/J008v16n01_01

Riley-Tillman, T.C., & Burns, M.K. (2009). Evaluating Educational Interventions: Single-Case Design for Measuring Response to Intervention. New York: Guilford Press.

Saeki, E., Jimerson, S. R., Earhart, J., Hart, S. R., Renshaw, T., Singh, R. D., & Stewart, K. (2011). Response to intervention (RtI) in the social, emotional, and behavior domains: Current challenges and emerging possibilities. Contemporary School Psychology, 15, 43-52.

Schneider, P.L. (1999). Mediators of distance communication technologies in psychotherapy. Paper presented at the annual convention of the American Psychological Association. Retrieved online March 17, 2014 at http://www.studio5d.com/paul/research/DCCS.html

Schnoes, C., Reid, R., Wagner, M., & Marder, C. (2006). ADHD among students receiving special education services: A national survey. Exceptional Children, 72, 483-496.

Schultz, B.K., & Evans, S.W. (2015). A practical guide to implementing school-based interventions for adolescents with ADHD. New York: Springer.

Schultz, B. K., Evans, S. W., & Serpell, Z. N. (2009). Preventing academic failure among middle school students with ADHD: A survival analysis. School Psychology Review, 38, 14-27.

State, T.M., Kern, L., Starosta, K.M., & Mukherjee, A.D. (2011). Elementary pre-service teacher preparation in the area of social, emotional, and behavioral problems. School Mental Health, 3, 13-23.

Wolraich, M.L., Wibbelsman, C.J., Brown, T.E., Evans, S.W., Gotlieb, E.M., Knight, J.R., et al. (2005). Attention deficit hyperactivity disorder in adolescents: A review of the diagnosis, treatment, and clinical implications. Pediatrics, 115, 1734-1746.
