Improving classroom teaching and learning is the primary goal of research in education, educational psychology, and the learning sciences. However, a common complaint about traditional research using experimental and quasi-experimental design points to the gap between
educational research and educational practice. Introduced in 1992 to address the theoretical and methodological challenges of creating complex interventions in classrooms (Brown, 1992), the design experiment, an initially unorthodox method, was eventually adopted quite widely as a method of choice for studying teaching and learning in the classroom setting. Between 1992 and the early 2000s, there was increasing interest in this research method, as shown by the growing number of citations to Brown's article listed in Thomson's ISI Web of Knowledge (Figure 1). This success likely derives from the fact that the design experiment combined two existing functions of educational psychology: explanation and guidance of practice (Salomon, 1996). Although Ann L. Brown and others credit Allan Collins with coining the term, it is through Brown's work generally, and through her introductory article in the Journal of the Learning Sciences specifically, that learning scientists have come to know about this method.
The term design experiment was modeled on design sciences such as aeronautics and artificial intelligence, in which research and development are combined. It refers to interventions in which educational environments are engineered and experimental studies of those innovations are conducted simultaneously. Design experiments differ from classical laboratory experiments and quasi-experiments in that the intervention itself is changed in response to problems that the ongoing (interpretive, qualitative, ethnographic) research reveals. Design experimenters focus on understanding teaching and learning in complex, designed settings rather than reducing them to their constituent building blocks. The ultimate purpose of a design experiment is to bring about lasting instructional change, which, as experience has shown, it can achieve only when the intervention is adapted to the contingencies of each setting. Thus, design experiments aim at arriving both at (a) the best possible form of the intervention in each setting, that is, of instruction and learning, and (b) theoretical articulations that delineate why an intervention works across settings and thus make it consistently repeatable.
An analysis of the ISI Web of Science focusing on the authors and journals that cite Brown's article on the design experiment shows that the method is of particular interest to learning scientists—the Journal of the Learning Sciences, Cognition and Instruction, and Educational Psychologist account for over 20% of the citations—and to science educators: three science education journals account for another 16% of the citations, and educational technology/instructional science journals for 7%. The common allegiance of these researchers is to the idea of research and development as design rather than to a particular epistemology, though a quick survey of articles citing Brown (1992) shows that the most common commitments are to social constructivist (constructionist), sociocultural, and cultural-historical theories of knowing and learning. This common allegiance may explain the large proportion of designers of (computing) technology-based learning environments—with their cultural-historical practices of designing, testing, and revising alpha, beta, and gamma versions of their artifacts—among those who employ the design experiment as method.
Design experiments are useful for moving instructional design through all the phases of development and implementation, especially the final, gamma phase of designing instructional reform, that is, widespread use with minimal support, which is a measure of reform longevity. Despite the increasing interest in and use of design experiments, there continues to be a lack of clarity concerning their methodological and epistemological features (Bell, 2004).
Structure of Experimental and Quasi-Experimental Designs. Classical laboratory experiments test hypotheses about relations that causally link independent and dependent variables. Causation can be established only when the variance between treatment and control groups (different or no treatment [placebo]) is reliably attributable to the treatment. Random assignment to treatment and control conditions serves to draw participant samples that (a) are representative of the target population and (b) are comparable, both within the limits of sampling error (Cook & Campbell, 1979). Random assignment is possible in psychological laboratory experiments, but their results often fail to translate into real settings; educational research in real classroom settings, though it addresses the context of learning more realistically, generally cannot randomly assign students to treatment and control conditions. A common quasi-experimental design has the following structure

  O1   X   O2
  . . . . . . . . . .
  O1        O2
where O1 refers to observations (e.g., direct observations, written tests, or responses in computer-presented tasks) and X refers to the treatment (e.g., "using computers" or "using peer teaching"). In other words, there are two groups (one above, one below the dotted line), both observed/tested at a first point in time (O1). One group receives the treatment (X); the other does not. After the treatment has ended, both groups are observed again (O2). Everything else being equal, any post-treatment differences can be attributed to the treatment. This structure addresses the comparability of non-equivalent control groups by collecting relevant information (e.g., pretests) that allows researchers to statistically adjust for the possible non-equivalence of experimental and control groups at the outset of the research. Although different quasi-experimental designs differ in their weaknesses and strengths, a creative mixture of designs within the same study may significantly increase confidence in the causes underlying the phenomena under study.
Structure of Design Experiments. Design experiments differ substantially from traditional psychological experiments and quasi-experiments because they systematically vary the intervention, using each iteration as an experiment that assists in evolving and testing theory in a naturalistic setting. Rather than holding a previously specified treatment constant, design experiments change the intervention on the basis of emergent understandings, so that the X (treatment) in the structure of the experiment is no longer the same from the beginning to the end of the intervention. It is therefore no longer possible to establish causal relations linking a particular intervention—for example, Brown's reciprocal teaching—to the observed outcomes. This does not prevent design researchers from conducting experimental (laboratory) studies within their design experiments to test hypotheses about causal relations. Design researchers frequently choose to deepen the study of emergent aspects by means of formal laboratory studies or classroom studies with random assignment of students to conditions (treatments). Such studies, then, allow the establishment of cause and effect, but always relative to the theoretically interesting features that emerge in the course of the larger study.
This feature of design experiments is associated, for many psychologists, with substantial drawbacks (weaknesses) because it substantially alters the conception of what constitutes relevant knowledge and how it is derived. However, the design experiment can be understood through an analogy with the interrupted time series design, because researchers go to considerable efforts to document learning prior to, during, and following the changes in instructional design. Design experiments therefore are characterized by the structure

  O1   X   O2   X′   O3   X″   O4   . . .
where each O represents an observation and each X an intervention. Note that the treatment changes in the course of the experiment (X becomes X′, which becomes X″, and so on) based on the information collected during the observations. However, in contrast to interrupted time series designs, in which treatment episodes follow non-treatment episodes (to verify that the treatment rather than something else makes the difference), design research does not withdraw the treatment but continually seeks to improve teaching (i.e., the treatment) and therefore learning. This structure of the design experiment, however, provides opportunities for a Bayesian approach, in which the already-generated quantitative and qualitative information is combined between the phases of the implementation to generate adjusted estimates of the success of the intervention in the future (Gorard, Roberts, & Taylor, 2004). These adjusted predictions are better estimates of the impact of future interventions because they explicitly take into account previous findings. Design experiments thus serve those interested in optimizing the learning environment by acting upon contingently emerging problems and understandings.
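The Bayesian idea can be illustrated with a minimal sketch. The phase counts below are invented, and the conjugate Beta-Binomial model is just one simple choice; the point is only that the posterior after each phase becomes the prior for the next, so the estimate of the intervention's success rate explicitly accumulates previous findings.

```python
# Minimal Bayesian sketch of combining evidence across design phases.
# Phase data are hypothetical; a Beta-Binomial model keeps the updating simple.

def update(prior_a, prior_b, successes, failures):
    """Conjugate Beta-Binomial update: posterior = prior + observed counts."""
    return prior_a + successes, prior_b + failures

a, b = 1, 1  # uninformative Beta(1, 1) prior before the first phase
phases = [   # (students reaching the learning goal, students not reaching it)
    (12, 8),  # phase 1: initial design X
    (15, 5),  # phase 2: revised design X'
    (18, 2),  # phase 3: revised design X''
]

for i, (succ, fail) in enumerate(phases, start=1):
    a, b = update(a, b, succ, fail)
    estimate = a / (a + b)  # posterior mean success rate
    print(f"after phase {i}: estimated success rate = {estimate:.2f}")
```

Because the intervention itself changes between phases (X, X′, X″), pooling all phases into one posterior assumes the revisions target the same underlying outcome; in practice one might down-weight evidence from earlier versions of the design.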
Criticisms and Responses. Even critics recognize that the strengths of design experiments lie in their ability to generate and test theories in situ and to cross the theory-practice gap by adaptively changing the intervention of interest. But the observations made before and during the intervention generate immense datasets, which frequently leads authors to use narrative forms that do not and cannot provide the kinds of warrants required for establishing the veracity of claims made (Shavelson, Phillips, Towne, & Feuer, 2003). Here Brown's introductory article countered some common objections to design experiments. For example, the very specificity of the learning predicted in design experiments makes them immune to false claims due to the Hawthorne effect (positive effects arising merely from the attention researchers pay to research participants rather than from the treatment in and of itself). Moreover, addressing what Brown called the reality principle (the continued positive effects [shelf life] of an intervention), design experiments tend to achieve longevity and widespread adoption with minimal additional intervention. Interventions that are the products of good design experiments are adaptive and fit the contingencies of different settings. Design experiments, much more than other interventions of the past, have lasting and widespread effects because of the close collaboration of participants and researchers.
By their very nature, design experiments require researchers to become familiar with and understand the setting, encouraging them to become ethnographers interested in how particular cultures make sense. Students tend to take on responsibility for defining relevant expertise, and teachers tend to become researchers. It is therefore not surprising that (a) teachers become design experimenters, (b) researchers become teachers to ascertain that a best-case scenario is studied, or (c) university-based design researchers become interested in the social agendas of students and teachers that their designs support. Two prototypical examples of design research, illustrating the first and third types, are provided below.
Teachers as Design Researchers. In the learning sciences literature, there are numerous examples of design experiments that involve researchers who not only observe but also teach during the intervention; there are also examples in which teachers themselves legitimately conduct design research. The present example is of the second kind, involving two science teachers investigating the implementation of an open-inquiry science curriculum (e.g., Roth & Bowen, 1995) that used the same guiding principles that also motivated the design work of Brown and Collins during the late 1980s and early 1990s—that is, cognitive apprenticeship and the community of learners. Students are provided opportunities to enact cognitive practices that bear a high degree of family resemblance to the practices of professionals. With respect to the sciences, this means that students learn to pose research questions, collect data for answering them, and use mathematical representations for analyzing the data and for representing the data in reports to substantiate research claims. In this model, the two teachers, both with graduate degrees in the natural sciences, were experts who scaffolded students' efforts. They did so on a need-to-know and just-in-time basis, that is, precisely when knowing something would significantly advance students in their work (Pea, 1997).
The two teachers set out to study (a) problem posing and solution finding, (b) mathematization and other representational practices of science, (c) the relationship between culture, practices, and cognitive resources, and (d) differences in mathematical practices arising from open inquiry versus school tasks. They planned a cognitive anthropological study in which mathematical representations and mathematical practices oriented the data collection. They videotaped all lessons in one eighth-grade class and collected all students' field notebooks, laboratory reports, unit tests, end-of-semester tests, and final examinations in both participating classes. A third eighth-grade class, taught differently, served as the control group. The researchers collected student responses to standard instruments such as the Constructivist Learning Environment Scale, and they interviewed, using open-ended and structured protocols, about 25% of the participating students concerning different aspects of the intervention. This substantial database allowed the researchers to evaluate knowing and learning in quantitative and qualitative ways and to correlate achievement with other measures collected as part of the research.
The researchers transcribed all videotapes in an ongoing manner, with less than 48 hours between recording tapes and conducting initial analyses. This allowed them to (a) design particular curricular strategies when problems became evident and (b) frame tentative hypotheses, which subsequently were tested experimentally. For example, the researchers framed the hypothesis that students' choice of mathematical representations was a function of the size of the data set: students were more likely to use graphs to find trends in large data sets, whereas, for small data sets, they were more likely to seek trends by visual inspection of the raw data. The researchers designed three forms of a task and randomly assigned pairs of students to one of the three conditions. Based on this experiment, the researchers rejected the hypothesis. A second part of the experiment, which compared the degree of mathematization achieved by the eighth-grade students to that achieved by teachers in training holding at least a BSc, revealed statistically reliable differences: the eighth-grade participants in open inquiry used more-abstract representations with reliably higher frequency than the future science teachers (Roth, McGinn, & Bowen, 1998). The experiment also proved valuable because the researchers videotaped students during their work on the assigned, textbook-like tasks. As a result, the researchers were able to study the differences in eighth-grade students' data analysis practices when students designed their own problems versus when the teacher-researchers set the problems.
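An embedded experiment of the kind just described can be analyzed with a simple test of independence. The counts below are hypothetical, not the study's data; the sketch shows how randomly assigned conditions and a categorical outcome (graph vs. visual inspection) lead to a chi-square test, implemented here with the standard library only.

```python
# Hypothetical sketch: pairs randomly assigned to three data-set sizes,
# outcome = whether the pair chose a graph or visual inspection.
# Chi-square test of independence, standard library only; counts are invented.

def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# rows: small / medium / large data set; columns: graph vs. visual inspection
table = [
    [6, 6],
    [7, 5],
    [8, 4],
]

stat = chi_square(table)
critical = 5.991  # chi-square critical value for df = (3-1)*(2-1) = 2, alpha = .05
reject = stat > critical
print(f"chi-square = {stat:.2f}, reject H0: {reject}")
```

With these invented counts the statistic falls well below the .05 critical value, a null result analogous to the one that led the teacher-researchers to reject their hypothesis.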
This study shows the adaptive nature of design research with respect to both the intervention and the research, thereby improving the intervention and yielding better data for understanding how students learn in an open-inquiry learning environment.
Critical Design and Social Change. Instead of simply building an artifact to help individuals accomplish a particular task or meet a specific standard, critical design experiments focus on the development of social, technology-enhanced structures that support people, individually and collectively, in critiquing and improving themselves and the societies in which they function. Critical design experimenters agree that this substantially changes the roles of the researchers, teachers, students, and administrators involved. Their point is to change the world (for the better) rather than merely to understand it. It may come as little surprise that some design researchers, such as those in the example featured here, explicitly support or engage in social agendas, using what they come to understand for the purpose of increasing the participants' control over the attendant conditions.
Quest Atlantis is a multi-user virtual environment that allows children to learn academic and social skills and to evolve social agendas as they assist the council of Atlantis in recovering the lost forms of knowledge and wisdom of the culture (Barab, Dodge, Thomas, Jackson, & Tuzun, 2007). The environment was designed to support the development of seven social commitments—personal agency, diversity affirmation, healthy communities, social responsibility, environmental awareness, creative expression, and compassionate wisdom—through (a) Quests targeted toward individual commitments and (b) the technical infrastructure of the software. Changes to the original design were based on the concept of participatory design, a Scandinavian model for bringing together computer scientists and professionals to evolve more appropriate workplaces.
On the instructional side, Quest Atlantis situates itself at the intersection of education, a set of social commitments, and entertainment. The environment consists of virtual worlds, each divided into three thematically related villages associated with up to 25 Quests. The themes include healthy bodies, community power, global issues, and waterways. The design experiment proceeded in five steps: the study (a) initially built rich understandings, (b) focused on developing critical commitments, (c) reified commitments into the design, (d) targeted the expansion of the impact, and (e) generated theoretical claims.
To build a rich understanding, the researchers conducted a 12-month ethnographic effort that included more than 200 site visits and more than 500 pages of data entries in field notebooks. They conducted open-ended interviews with children, individually and collectively, and carried out semistructured interviews. The participating children produced personal documents, including narratives and images (photographs). Finally, the researchers themselves kept diaries designed to record a day in the life of a particular participant. The researchers also conducted laboratory studies with factorial ANOVA designs to test, among other things, the impact of computing tools (3D vs. 2D) and collaboration (singles vs. dyads) on the ability to transfer skills to distal-level standardized items. Such experiments demonstrated that the Quest Atlantis software supports learning; other parts of the four-year study produced theoretical conjectures, including an expanded taxonomy of the motivations involved when children learn through playing games.
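The logic of the 2 x 2 factorial design just mentioned (tool: 3D vs. 2D; collaboration: single vs. dyad) can be sketched through its cell-mean contrasts. The scores below are invented, and the sketch computes only effect estimates, not F ratios (which would additionally require within-cell variances); it shows what main effects and an interaction mean in such a design.

```python
# Sketch of the 2 x 2 factorial logic with hypothetical cell means.
# In a balanced design, main effects and the interaction are simple
# contrasts of the four cell means.

cell_means = {
    ("3D", "single"): 72.0,
    ("3D", "dyad"):   80.0,
    ("2D", "single"): 65.0,
    ("2D", "dyad"):   69.0,
}

grand = sum(cell_means.values()) / 4  # grand mean across all four cells

# Main effect of tool: average of 3D cells minus average of 2D cells
tool_effect = (
    (cell_means[("3D", "single")] + cell_means[("3D", "dyad")]) / 2
    - (cell_means[("2D", "single")] + cell_means[("2D", "dyad")]) / 2
)

# Main effect of collaboration: average dyad cells minus average single cells
group_effect = (
    (cell_means[("3D", "dyad")] + cell_means[("2D", "dyad")]) / 2
    - (cell_means[("3D", "single")] + cell_means[("2D", "single")]) / 2
)

# Interaction: does the dyad advantage differ between the 3D and 2D tools?
interaction = (
    (cell_means[("3D", "dyad")] - cell_means[("3D", "single")])
    - (cell_means[("2D", "dyad")] - cell_means[("2D", "single")])
)

print(f"grand mean {grand:.1f}, tool effect {tool_effect:.1f}, "
      f"collaboration effect {group_effect:.1f}, interaction {interaction:.1f}")
```

A nonzero interaction here would mean that the benefit of working in dyads depends on which tool is used, which is exactly the kind of question a factorial design can answer and a one-factor-at-a-time study cannot.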
As a result of their work, the researchers found that they had been building "petite generalizations," that is, refined understandings of the patterns that researchers have encountered and that others in the field may likewise encounter. Most importantly, the ultimate product expanded its impact as it was redesigned, fitted, and adapted, together with the users, to the contingencies of each local setting.
The design experiment offers many advantages to the psychologist interested in designing and studying complex interventions in their naturalistic settings. The design experiment may be understood as an integrated approach to research and development that includes qualitative and quantitative approaches. This allows design scientists to simultaneously (a) adapt interventions by taking into account local contingencies and (b) test hypotheses in a scientifically rigorous way that allows weeding out chance variations from true cause-and-effect relations. Design experiments thereby provide opportunities to meet the two major goals that educational psychologists and learning scientists have set themselves: understanding knowing and learning scientifically and developing interventions that have a long shelf life because they meet the needs of the participants.
Barab, S., Dodge, T., Thomas, M. K., Jackson, C., & Tuzun, H. (2007). Our designs and the social agendas they carry. Journal of the Learning Sciences, 16, 263–305.
Bell, P. (2004). On the theoretical breadth of design-based research in education. Educational Psychologist, 39, 243–253.
Brown, A. L. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. Journal of the Learning Sciences, 2, 141–178.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston, MA: Houghton Mifflin.
Gorard, S., Roberts, K., & Taylor, C. (2004). What kind of creature is a design experiment? British Educational Research Journal, 30, 577–590.
Pea, R. D. (1997). Learning and teaching with educational technologies. In H. J. Walberg & G. D. Haertel (Eds.), Educational psychology: Effective practices and policies (pp. 274–296). Berkeley, CA: McCutchan.
Roth, W.-M., & Bowen, G. M. (1995). Knowing and interacting: A study of culture, practices, and resources in a grade 8 open-inquiry science classroom guided by a cognitive apprenticeship metaphor. Cognition and Instruction, 13, 73–128.
Roth, W.-M., McGinn, M. K., & Bowen, G. M. (1998). How prepared are pre-service teachers to teach scientific inquiry? Levels of performance in scientific representation practices. Journal of Science Teacher Education, 9, 25–48.
Shavelson, R. J., Phillips, D. C., Towne, L., & Feuer, M. J. (2003). On the science of education design studies. Educational Researcher, 32 (1), 25–28.
Salomon, G. (1996). Unorthodox thoughts on the nature and mission of contemporary educational psychology. Educational Psychology Review, 8, 397–417.