
Authentic Assessment

Updated on Dec 23, 2009

Authentic assessment comprises a variety of assessment techniques that share the following characteristics: (1) direct measurement of skills that relate to long-term educational outcomes such as success in the workplace; (2) tasks that require extensive engagement and complex performance; and (3) an analysis of the processes used to produce the response. Authentic assessment is often defined by what it is not: its antonyms include norm-referenced standardized tests and fixed-choice formats such as multiple-choice, true/false, and fill-in-the-blank tests. Its synonyms include performance assessment, portfolios, and projects. Dynamic assessment (Lidz, 1991) and responsive assessment (Henning-Stout, 1991) are other terms associated with authentic assessment. Authentic assessment has been a popular method for assessing learning among specific populations of students, such as those with severe disabilities (Coutinho & Malouf, 1993), very young children (Grisham-Brown, Hallam, & Brookshire, 2006), and gifted students (Moore, 2005). In addition, specific disciplines such as the arts (Popovich, 2006), science (Oh, Kim, Garcia, & Krilowicz, 2005), and teacher education (Gatlin & Jacob, 2002) have embraced authentic assessment for its emphasis on process over product. Grant Wiggins described authentic assessments as “faithful representations of the contexts encountered in a field of study or in the real-life ‘tests’ of adult life” (1993, p. 206).

THE HISTORY OF AUTHENTIC ASSESSMENT

Authentic assessment was a significant component of the 1990s education reform zeitgeist, and Wiggins was one of its most prolific and convincing proponents (Terwilliger, 1997). Wiggins (1993) asserted that traditional methods of student assessment (i.e., forced-choice tests such as multiple-choice and true/false tests) fail to elicit the complex intellectual performance valued in real-life experiences and result in a narrowing of the curriculum to basic skills, including test-taking skills. At a time when standardized minimum competency tests had been largely rejected for diminishing the curriculum, and content standards emphasizing higher-order thinking skills were being articulated within many disciplines and states, authentic assessment gained considerable traction.

In part, educators may have embraced authentic assessment as a rebellion against the top-down accountability of high-stakes standardized testing (Salvia & Ysseldyke, 2004). Since the No Child Left Behind (NCLB) Act of 2002, there has been an even greater focus on large-scale standardized testing, alongside a persistent disconnect between federal and state policy makers and public school educators. In an ideal educational setting, professional educators in all arenas would guide learners' movement toward the standards, which would be developed in an organic process with student, site, and community input. In current practice, however, standards are developed by government bureaucrats in state or federal offices far removed from the students and from those in daily contact with them (Henning-Stout, 1996). The resulting sense of imposition on school-site educators by state and federal officials compounds the challenges to the ideal development of authentic assessment.

Educators' desire for authenticity in assessment and learning is not free from the polemics of political climates that define the nature of modern education.

AUTHENTIC ASSESSMENT DATA ANALYSIS

Assessment data are used for multiple purposes, including making accountability, eligibility, and instructional decisions, and the purpose of the assessment directs the analysis. For example, authentic assessment data collected to determine whether a school, district, or state is sufficiently educating students must be aggregated at the systems level, as well as disaggregated by various sub-populations of students, in order to support such accountability decisions. Authentic assessment used to determine whether a student meets specific state or national special education criteria must be corroborated by other types of data, given the significant ramifications for the student (Lidz, 1991). Data collected to inform instruction must be analyzed relative to the curriculum and instruction provided to the students in a particular class. Authentic assessment data can be analyzed by qualitative or quantitative methods.

Table 1 (illustration by GGS Information Services, Cengage Learning, Gale).

A qualitative analysis of a student's performance typically describes the skills that were demonstrated and the errors that were made, thereby providing a narrative of what the student knows and is able to do, and what the student needs to learn or improve upon. Narratives also allow the student's performance to be considered within the context of the assessment. For example, Alverno College is nationally recognized for its narrative assessments of eight core abilities in a manner that is contextually relevant for each discipline (Alverno College Faculty, 1994).

A quantitative analysis of authentic assessment data applies a scoring rubric or checklist to judge student responses against criteria within a restricted range of proficiency levels, typically four or more (e.g., advanced proficient, proficient, partially proficient, and failure). Scoring rubrics can be either analytic or holistic. Analytic scoring requires defining and assessing different dimensions of a task; for example, the spelling, sentence structure, vocabulary, accuracy, level of detail, and coherence of an essay may each be judged independently. Holistic scoring assigns a single overall score to a student's performance, much as judges score an Olympic gymnastics competition.
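The analytic/holistic distinction can be made concrete with a small scoring sketch. The dimension names, weights, and proficiency cut-offs below are hypothetical illustrations, not a standard rubric; an actual holistic rubric would assign one overall score directly rather than averaging dimensions.

```python
# Minimal sketch of analytic vs. holistic rubric scoring.
# Dimension names and cut-off scores are hypothetical examples.

# Analytic scoring: each dimension of an essay is judged independently
# on a 1-4 scale (1 = lowest ... 4 = highest).
analytic_scores = {
    "spelling": 3,
    "sentence_structure": 4,
    "vocabulary": 3,
    "accuracy": 4,
    "detail": 2,
    "coherence": 3,
}

def proficiency_label(score: float) -> str:
    """Map a 1-4 score onto the four proficiency levels named in the text."""
    if score >= 3.5:
        return "advanced proficient"
    if score >= 2.5:
        return "proficient"
    if score >= 1.5:
        return "partially proficient"
    return "failure"

# Analytic result: a profile with one judgment per dimension.
analytic_report = {dim: proficiency_label(s) for dim, s in analytic_scores.items()}

# Holistic result: a single overall judgment (here derived from the mean
# purely for illustration).
overall = sum(analytic_scores.values()) / len(analytic_scores)
holistic_report = proficiency_label(overall)

print(analytic_report)
print(round(overall, 2), holistic_report)
```

The analytic profile shows where the student is strong or weak (e.g., "detail" lands at "partially proficient"), which is what makes analytic scoring useful for instruction; the holistic label summarizes the performance in one judgment.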

VARIATIONS OF AUTHENTIC ASSESSMENTS

Three variations of authentic assessment are most frequently discussed: dynamic (Hilliard, 1995; Lidz, 1991), performance, and portfolio assessment (Salvia & Ysseldyke, 2004). Proponents of authentic assessment (Hilliard, 1995; Lidz, 1991; Meyer, 1992) have observed that many people think they are conducting it when in fact they are not. The multiple purposes for assessment and the general nature of many of the terms associated with authentic assessment have resulted in variation among researchers and practitioners in what is considered authentic or dynamic assessment (Cumming & Maxwell, 1999; Newton, 2007).

Dynamic assessment is conducted within a test-intervene-retest format or process. For example, an educator first administers a test to a student; then the adult intervenes by asking questions about the child's incorrect or unexpected answers to improve the student's cognitive processes. Finally, the adult administers the same or a similar test to the child to see if the child has developed a new strategy for solving the problem. Thus, dynamic assessment attempts to measure the student's level of modifiability.

Compared to dynamic assessment, performance and portfolio assessment are more commonly used in classroom settings (Salvia & Ysseldyke, 2004). Performance assessments require students to complete or demonstrate the behavior that educators want to measure (Meyer, 1992). For a performance task to be authentic, it must be completed within a real-world context, which includes shifting the locus of control to the student in that the student chooses the topic, the time needed for completion, and the general conditions under which the writing sample is generated (Meyer, 1992). Portfolio assessments are an accumulation of artifacts that demonstrate progress toward valued real-world outcomes, are often produced in collaboration, require student reflection, and are evaluated on multiple dimensions (Salvia & Ysseldyke, 2004).

METHODOLOGICAL STRENGTHS AND LIMITATIONS OF AUTHENTIC ASSESSMENT

A major strength of authentic assessment is its connection to real-life skills (Meyer, 1992). Proponents of authentic assessment are quick to point out that life is not a series of isolated multiple-choice questions but is full of complex, embedded problems to be solved (Wiggins, 1993). Accordingly, authentic assessments require students to solve complex problems or produce multi-step projects, often in collaboration with others. In this way, higher-order skills such as synthesis, analysis, collaboration, and problem solving are assessed. In fact, the purpose of authentic assessment is to measure students' ability to apply their knowledge and thinking skills to tasks that simulate real-world events or activities (see Table 1 for examples; Wiggins, 1993).

Authentic assessments attempt to seamlessly combine teaching, learning, and assessment to promote student motivation, engagement, and higher-order learning skills (Eder, 2004). Because assessment is part of instruction, teachers and students share an understanding of the criteria for performance; in some cases, students even contribute to defining the expectations for the task. The assumption is that students perform better when they know how they will be judged. Often students are asked to reflect on and evaluate their own performance in order to promote deeper understanding of the learning objectives as well as to foster higher-order skills such as self-reflection and self-evaluation.

Authentic assessments are often described as developmental because of their focus on students' burgeoning abilities to learn how to learn in a subject (Wiggins, 1993). For example, shortcomings in students' knowledge, and in how they apply that knowledge, can be identified through careful analysis of their log books or by asking probing questions, in order to determine what needs to be taught or re-taught. Thus, the process by which students arrived at their final response or product is itself assessed (Mehrens, 1992).

Authentic assessments also have limitations. These include subjectivity in scoring, the costliness of administration and scoring, and the narrow range of skills that are typically assessed (Mehrens, 1992). Because authentic assessment emphasizes complexity and relevance rather than structure and standardization, inter-rater reliability can be difficult to achieve. Inter-rater agreement is increased with clearly defined criteria, including exemplars and non-exemplars, and with initial and ongoing training of the evaluators. Unfortunately, educators rarely have adequate guidelines to help them analyze and score student products (Salvia & Ysseldyke, 2004). The logistics and training demands of authentic assessment have made its widespread adoption in general education prohibitive. Selecting artifacts to include in a portfolio can also be a challenge: to keep the portfolio from becoming a meaningless accumulation of student work, there needs to be a selection process that distinguishes critical works from mementos (Hass & Osborn, 2002). Lastly, the emphasis on assessing knowledge in depth or in application often limits the amount of content knowledge that is assessed. For example, an authentic assessment that requires students in a biology class to design the ideal zoo would not test what students know about photosynthesis. Terwilliger (1997) proposed that the specificity of authentic assessment evaluation criteria to a particular task may limit its value as a measure of general learning outcomes.

HOW AUTHENTIC ASSESSMENT INFORMS INSTRUCTION AND INTERVENTIONS

Henning-Stout (1996) stated, “Academic assessment is authentic when it reflects performance on tasks that are meaningful to the learner” (p. 234). One strength of authentic assessment is the strong connection to the development of lessons and interventions that have real-life applications. If the learners being assessed are aware of their ability to self-regulate (Dembo, 2004) and make the appropriate changes during the learning process, they will achieve the transfer of knowledge that is necessary for learning to occur (Lidz, 1991). More importantly, they should be able to solve real-world tasks and be able to process new information within the construct of that task.

When given clear standards (Henning-Stout, 1996) and reliable and valid methods (Salvia & Ysseldyke, 2004) for conducting authentic assessment, teachers can inform students of the expected level of performance and provide direct feedback about students' progress toward meeting those standards. With dynamic assessment, students receive immediate feedback about their process and their own problem-solving skills. Portfolio assessment provides individual students with an opportunity to physically and cognitively organize and monitor their learning process.

For educators concerned with social justice in the development of curriculum, pedagogy, and assessment, authentic assessment provides ways for students outside the norm of the standard assessment to express their understanding of material (Henning-Stout, 1996; Hilliard, 1995; Louise, 2007; Newfield, Andrew, Stein, & Maungedzo, 2003). For example, the government of South Africa has moved away from high-stakes standardized assessments for categorizing, labeling, and tracking students toward portfolio assessments that are developed in conjunction with local communities (Newfield et al., 2003).

Authentic assessment has also been used to train professionals. Portfolio assessments have been used to evaluate school administrators and teachers (Gatlin & Jacob, 2002; Meadows & Dyal, 1999), as well as school psychology graduate students (Hass & Osborn, 2002; Prus, Matton, Thomas, & Robinson-Zañartu, 1996).

BIBLIOGRAPHY

Alverno College Faculty. (1994). Student assessment-as-learning at Alverno College. Milwaukee, WI: Alverno Productions.

Coutinho, M., & Malouf, D. (1993). Performance assessment and children with disabilities: Issues and possibilities. Teaching Exceptional Children, 25(4), 63–67.

Cumming, J. J., & Maxwell, G. S. (1999). Contextualizing authentic assessment. Assessment in Education, 6(2), 177–194.

Dembo, M. H. (2004, April). Don't lose sight of the students. Principal Leadership, 37–42.

Eder, D. J. (2004). General education assessment within the disciplines. Journal of General Education, 53(2), 135–157.

Gatlin, L., & Jacob, S. (2002). Standards-based digital portfolios: A component of authentic assessment for preservice teachers. Action in Teacher Education, 23(4), 28–34.

Grisham-Brown, J., Hallam, R., & Brookshire, R. (2006). Using authentic assessment to evidence children's progress toward early learning standards. Early Childhood Education Journal, 34(1), 45–51.

Hass, M., & Osborn, J. (2002). Using formative portfolios to enhance graduate school psychology programs. California School Psychologist, 7, 75–84.

Hilliard, A. G. (1995). Testing African American students (2nd ed.). Chicago: Third World Press.

Lidz, C. (1991). Practitioner's guide to dynamic assessment. New York: Guilford Press.

Meadows, R. B., & Dyal, A. B. (1999). Implementing portfolio assessment in the development of school administrators: Improving preparation for educational leadership. Education, 120(2), 304–314.

Mehrens, W. A. (1992, Spring). Using performance assessment for accountability purposes. Educational Measurement: Issues and Practice, 11(1), 3–20.

Meyer, C. (1992). What's the difference between authentic and performance assessment? Educational Leadership, 49(8), 39–40.

Moore, M. (2005). Meeting the educational needs of young gifted readers in the regular classroom. Gifted Child Today, 28(4), 40–47, 65.

Newfield, D., Andrew, D., Stein, P., & Maungedzo, R. (2003). ‘No number can describe how good it was’: Assessment issues in the multimodal classroom. Assessment in Education, 10(1), 61–81.

Oh, D. M., Kim, J. M., Garcia, R. E., & Krilowicz, B. L. (2005). Valid and reliable authentic assessment of culminating student performance in the biomedical sciences. Advances in Physiology Education, 29(2), 83–93.

Popovich, K. (2006). Designing and implementing ‘exemplary content, curriculum, and assessment in art education.’ Art Education, 59(6), 33–39.

Prus, J., Matton, L., Thomas, A., & Robinson-Zañartu, C. (1996). Using portfolios to assess the performance of school psychology graduate students. Paper presented at the meeting of the National Association of School Psychologists, Atlanta, Georgia.

Salvia, J., & Ysseldyke, J. E. (2004). Assessment in special and inclusive education (9th ed.). New York: Houghton Mifflin.

Terwilliger, J. (1997). Semantics, psychometrics and assessment reform: A close look at ‘authentic’ assessments. Educational Researcher, 26(8), 24–27.

Wiggins, G. (1993). Assessment: Authenticity, context and validity. Phi Delta Kappan, 75(3), 200–214.
