Types of Standardized Tests

By Pearson Allyn Bacon Prentice Hall
Updated on Jul 20, 2010

Numerous tests are used in schools, and they can be classified into different types or categories. One way to classify tests is by the construct the test purports to measure. Using this method, tests can be organized into achievement tests, aptitude or intelligence tests, personality inventories, projective techniques, interest inventories, attitude measures, and so forth. Each of these types of tests may be further divided into subcategories. For example, a test of any type may be offered as an individual test, designed for administration in a one-on-one situation, or as a group test, designed for administration to many students at once.

Individual tests are typically used for clinical purposes, such as making a diagnosis of a disability or disorder or determining strengths and weaknesses in a specific area of functioning (e.g., intelligence, achievement). These tests are administered to one student at a time. The test user should have considerable training in test administration, scoring, and interpretation. Group tests, on the other hand, are designed primarily as instruments for mass testing (Anastasi & Urbina, 1997). They are largely pencil-and-paper measures suitable for administration to large or small groups of students at the same time. The majority of tests used in schools are group tests. The recent large-scale tests used by states are also group tests. Group tests are fairly easy to administer and score, and their use does not require much special training on the part of the examiners. Some group tests may also be computer administered and scored.

Tests may also be grouped into speed tests and power tests. A speed test is designed to measure the speed or rate of performance rather than the acquisition or mastery of knowledge. Typically, speed tests include very easy items that virtually every test taker knows how to answer, so the test measures only speed of response. A power test, on the other hand, is designed to measure the knowledge of the test taker regardless of his or her speed of performance. Power tests contain items with varying degrees of difficulty and allow enough time for test takers to attempt all items. Performance is based on how well a student can answer the items, not on how fast he or she can work. Most tests used in the schools, however, measure both knowledge and speed as factors in test performance: they are designed to measure students' knowledge in a content domain, but a time limit is set for completing the test. In other words, a student's score is influenced by both the accuracy and the speed of his or her answers. On such a test, a student may have the knowledge needed to answer the items correctly yet still not receive a high score if he or she works at a slow pace and cannot complete many items within the time limit.
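
To make the interplay of accuracy and speed concrete, here is a small illustrative sketch in Python. All of the numbers are hypothetical and do not come from the article; the point is simply that two students with the same knowledge can earn different scores on a timed test because one works faster than the other.

    # Hypothetical numbers: two students with equal knowledge but different working speeds.
    total_items = 50
    time_limit_minutes = 30

    # Assume both students answer every item they attempt correctly.
    items_per_minute_fast = 2.0   # faster worker
    items_per_minute_slow = 1.0   # slower worker

    attempted_fast = min(total_items, int(items_per_minute_fast * time_limit_minutes))
    attempted_slow = min(total_items, int(items_per_minute_slow * time_limit_minutes))

    # Score = number of items answered correctly; unattempted items earn no credit.
    print(f"Faster student: {attempted_fast} of {total_items} correct")   # 50 of 50
    print(f"Slower student: {attempted_slow} of {total_items} correct")   # 30 of 50

Under these assumptions the slower student's score understates his or her knowledge, which is exactly the concern raised above about timed tests of content knowledge.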

Another way to classify tests is by the measurement theory that underlies a test. On this basis, tests may be categorized into norm-referenced tests (NRT) and criterion-referenced tests (CRT). The major difference between NRT and CRT lies in the standard used to interpret test performance. In NRT, a student's performance is compared to that of other students in a group. Specifically, scores are interpreted by comparing the student's performance to the average performance of other similar students on the test, or to the norm of a standardization sample. The result indicates the student's standing in a group, that is, how well the student has performed with respect to the rest of the group. NRT is by far the most common approach to test interpretation. However, it should be noted that because NRT compares a student's performance to the normative group, differences in language, culture, and socioeconomic status between the student and the normative group could affect the student's score (Kubiszyn & Borich, 2003).

CRT, also known as domain-referenced testing, employs a different frame of reference for test interpretation than NRT does. In this approach, a performance standard, called the criterion, is established prior to testing to indicate mastery of the specific content domain covered by the test. A student's performance is compared to the preestablished criterion rather than to the performance of other students. Interpretation of CRT results therefore yields specific information regarding the student's proficiency in, or mastery of, the measured skills. In recent years, states have begun to develop minimum competency tests to assess students' performance; these tests are designed and used based on criterion-referenced measurement theory.
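
The contrast between the two frames of reference can also be shown with a short Python sketch. The norm-group scores and the mastery cutoff below are hypothetical values invented for illustration; the sketch simply interprets the same raw score once against a norm group and once against a preestablished criterion.

    # Hypothetical numbers: one raw score interpreted two ways.
    norm_group_scores = [42, 45, 48, 53, 55, 59, 61, 62, 66, 70]  # scores of a norm group
    criterion_cutoff = 60      # preestablished mastery criterion for the content domain
    student_score = 58

    # Norm-referenced interpretation: where does the student stand relative to the group?
    below = sum(1 for s in norm_group_scores if s < student_score)
    percentile_rank = 100 * below / len(norm_group_scores)
    print(f"NRT: the student scored higher than {percentile_rank:.0f}% of the norm group")

    # Criterion-referenced interpretation: did the student reach the mastery criterion?
    status = "met" if student_score >= criterion_cutoff else "not met"
    print(f"CRT: mastery criterion of {criterion_cutoff} {status}")

Notice that the same score can look respectable in norm-referenced terms (above half of the norm group) while still falling short of the criterion, which is why the two interpretations answer different questions about a student.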
