Rapid-cycle evaluations generate quick, usable evidence about the effects of interventions at key stages of development and refinement.
Iterative development and improvement can be significantly strengthened through rigorous rapid-cycle evaluations. Rapid-cycle experiments, in particular, yield reliable information to guide decisions about what to do next to improve your program or strategy, or to improve the way it is implemented by end users.
Rapid-cycle experiments are studies that use random assignment to determine the impact of a program or a program improvement quickly—over days, weeks, or months, rather than years.
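To make the idea concrete, here is a minimal sketch of what such a study can look like in code, assuming a simple two-arm design with simulated data; the sample size, outcome, and effect size are all hypothetical.

```python
# A minimal sketch of a rapid-cycle experiment: randomly assign users to a
# new feature (treatment) or the current version (control), then estimate
# the impact on a short-term outcome. All names and data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

n_users = 400
# Random assignment: each user has an equal chance of treatment or control.
assignment = rng.permutation(np.repeat(["treatment", "control"], n_users // 2))

# Hypothetical short-term outcome collected after two weeks
# (e.g., minutes of engagement with an online learning program).
outcome = rng.normal(loc=30, scale=8, size=n_users)
outcome[assignment == "treatment"] += 2.5  # simulated true effect

treat = outcome[assignment == "treatment"]
ctrl = outcome[assignment == "control"]

# Difference in means plus a two-sample t-test gives a quick impact estimate.
effect = treat.mean() - ctrl.mean()
t_stat, p_value = stats.ttest_ind(treat, ctrl, equal_var=False)
print(f"Estimated impact: {effect:.2f} minutes (p = {p_value:.3f})")
```

Because the outcome is observable within days or weeks, the whole cycle of assign, measure, and analyze can be repeated as the program evolves.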
AIR's approach uses rapid-cycle experiments to inform continuous improvement of programs and strategies. The purpose is to quickly provide developers and stakeholders in the field with actionable information about what works, for whom, and under what conditions.
Our R&D framework, based on the Multiphase Optimization Strategy (MOST), emphasizes the utility and broad applicability of rapid-cycle experiments. In the context of MOST, rapid-cycle experiments can be used to develop and optimize an intervention, with the goal of teasing out which components of the intervention are most effective. Rapid-cycle experiments also can evaluate the optimized intervention, providing quick-turnaround information about its effectiveness.
In the optimization phase of development, rapid-cycle experiments can be used to examine which features or versions of a product or program are most effective before implementing and evaluating the final version at scale. Rapid-cycle experiments also can be used to optimize the implementation of a product or program for different types of users.
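As an illustration of the optimization phase, the sketch below crosses two hypothetical program components in a 2x2 factorial experiment so that each component's main effect, and their interaction, can be estimated from a single study. The component names, simulated outcome, and effect sizes are assumptions for illustration, not results from any actual study.

```python
# A sketch of a 2x2 factorial experiment in the optimization phase: two
# candidate components (hypothetical names) are crossed so their separate
# and combined effects can be estimated from one study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=0)
n = 800

df = pd.DataFrame({
    # Each participant is independently randomized to on/off for each component.
    "coaching": rng.integers(0, 2, n),   # hypothetical component 1
    "reminders": rng.integers(0, 2, n),  # hypothetical component 2
})
# Simulated proximal outcome: reminders help, coaching helps a little,
# and the combination adds a small interaction effect.
df["engagement"] = (
    20 + 1.0 * df["coaching"] + 3.0 * df["reminders"]
    + 1.5 * df["coaching"] * df["reminders"]
    + rng.normal(0, 5, n)
)

# Main effects and the interaction indicate which components to keep.
model = smf.ols("engagement ~ coaching * reminders", data=df).fit()
print(model.summary().tables[1])
```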
Rapid-cycle experiments used in the R&D process are shorter (e.g., weeks or months instead of years); test specific aspects of an intervention with the goal of informing further development; and focus on more proximal outcomes, such as initial uptake or compliance with an antismoking regimen, or student engagement with an online learning program.
In the evaluation phase, we can use rapid-cycle experiments to examine the efficacy of an intervention on near-term (proximal) outcomes, and to evaluate the effectiveness of a fully developed intervention, if the intervention has a short implementation cycle.
A critical challenge for rapid-cycle studies is having appropriate measures of whether the intervention is having its intended effect over short intervals. Our approach identifies the outcome measures best aligned with the goals of your intervention, with an emphasis on novel, unobtrusive measurement opportunities that make use of readily accessible user data.
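The sketch below illustrates one way such a measure might be built from routine usage logs rather than a separate data collection effort; the log schema and the weekly-minutes indicator are hypothetical.

```python
# A sketch of building a proximal outcome from readily available usage logs.
# The log schema here is hypothetical: one row per session with a user ID,
# session date, and session duration in minutes.
import pandas as pd

logs = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2, 3],
    "session_start": pd.to_datetime([
        "2024-03-04", "2024-03-06", "2024-03-04",
        "2024-03-05", "2024-03-11", "2024-03-12",
    ]),
    "minutes": [12, 30, 8, 15, 22, 5],
})

# Aggregate to one engagement measure per user per week; short-interval
# measures like this can be recomputed at every cycle of the experiment.
weekly = (
    logs.set_index("session_start")
        .groupby("user_id")
        .resample("W")["minutes"]
        .sum()
        .rename("weekly_minutes")
        .reset_index()
)
print(weekly)
```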
Sometimes iterative design, testing, and refinement are informed by simple descriptive information. For example, tracking trends in predefined interim indicators of success is a hallmark of some formative evaluation approaches, including those used in some “Plan-Do-Study-Act” cycles.
Although simply tracking trends may be useful in some cases, iterative development and improvement may be significantly strengthened with rigorous rapid-cycle evaluation designs, such as A/B testing, short-duration experiments, factorial designs, and sequential multiple assignment randomized trial (SMART) studies. The more we systematically test ideas and innovations, the stronger our basis of knowledge about what to do next, how to improve, and how to maximize impact.
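For instance, a simple A/B test on a short-term uptake rate might be analyzed with a standard two-proportion z-test, as in the sketch below; the counts are invented for illustration.

```python
# A hedged sketch of a simple A/B test on a short-term uptake rate,
# analyzed with a standard two-proportion z-test. Counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

# Version A (current) vs. version B (candidate improvement).
signups = [118, 145]    # users who took up the program in each arm
exposed = [1000, 1000]  # users randomized to each arm

z_stat, p_value = proportions_ztest(count=signups, nobs=exposed)
print(f"Uptake A: {signups[0]/exposed[0]:.1%}, "
      f"B: {signups[1]/exposed[1]:.1%}, p = {p_value:.3f}")
```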
The main way rapid-cycle experiments differ from traditional trials is their time frame. The actual time frame depends on the context: Some trials can be completed in mere hours, but more typical turnarounds in education or health contexts might be several weeks or months.
In contrast, more traditional experimental trials typically aim to examine the impact of a fully developed program on “distal” (long-term) outcomes. For example, in education research, end-of-year student achievement is a common outcome.
Another important difference is the experience of participants in traditional randomized field trials versus rapid-cycle experiments. For example, in a traditional trial, a treatment group might gain access to a job training program while the control group would not. But in rapid-cycle experiments, this is not necessarily the case: all involved may participate in the program, but with systematic variations in features, components, or program implementation.
Over short periods of time, this sort of variation can yield a wealth of information about the potential value of different features, components, or implementation approaches for achieving the program's goals. Although critically important, traditional randomized trials sometimes do not tell us enough about why an intervention did or did not work, and their long duration (typically a year or more) makes the process of improving and retesting interventions lengthy and costly.
AIR specializes in innovative designs that can drive intervention improvement as part of a rapid-cycle evaluation process, including SMARTs and factorial experiments. These designs can help improve interventions whether or not the evaluation cycle is rapid.
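To show what sets a SMART apart, the sketch below simulates its defining feature: participants are randomized to a first-stage intervention, and those who do not respond are re-randomized to a second-stage option. The program names, response probabilities, and decision rule here are all hypothetical.

```python
# A minimal sketch of SMART logic: randomize at stage 1, check an interim
# response measure, then re-randomize non-responders at stage 2.
# All names and probabilities are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=7)
n = 200

# Stage 1: randomize everyone to one of two first-stage programs.
stage1 = rng.choice(["program_A", "program_B"], size=n)

# Hypothetical interim response measure after a few weeks.
responded = rng.random(n) < np.where(stage1 == "program_A", 0.45, 0.55)

# Stage 2: responders continue; non-responders are re-randomized between
# intensifying the current program and switching to an alternative.
stage2 = np.where(
    responded,
    "continue",
    rng.choice(["intensify", "switch"], size=n),
)

for arm in np.unique(stage1):
    mask = stage1 == arm
    print(arm,
          "response rate:", round(responded[mask].mean(), 2),
          "non-responders re-randomized:", int((~responded[mask]).sum()))
```

Because each decision point is randomized, a design like this supports comparisons not just between programs but between adaptive strategies for sequencing them.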