Fall 1996 - Volume 4 Number 1
Learning Outcomes: A Performance Assessment Perspective, Part Two
Part One of this article began my examination of the strong relationship between assessing learning outcomes and assessing performance, drawing conceptual insights from the underlying framework of program standards. Part Two, which follows below, continues this discussion and examines the characteristics of "learning outcomes" statements from the performance assessment perspective.

Relating Learning Outcomes to Performance Assessment
The conceptual framework underlying the above-noted recent initiatives to achieve the targeted goals of quality and consistency of college programs rests on three elements presented earlier: the requirement that all graduates of a program demonstrate the achievement of established learning outcomes, the definition of learning outcomes, and the set of indicators of an exit postsecondary level of achievement. The concept of learning outcomes is thus central to the implementation of system-wide program standards.
To date, the Ontario CSAC documentation has provided guidance for the review of program curricula, course outlines, instructional strategies and evaluation methods in order to implement the new initiatives. When this documentation is examined for what it says about defining program standards and preferred learning outcomes in assessing students' achievement, five statements seem most significant in deciding the most suitable assessment methodology for verifying learning. They are:

(a) learning outcomes are performance-based;
(b) learning outcomes are demonstrated by the application of knowledge to complex role performances as found in work and personal lives;
(c) the achievement of learning outcomes represents a demonstration of integrated learning;
(d) learning outcomes can and should be demonstrated in a variety of contexts; and
(e) demonstrations of learning outcomes must meet the minimum acceptable level of achievement required for graduation.

These five conceptual statements are discussed below. Understanding the conceptual framework underlying the establishment of program standards shows that aligning performance assessment with defined learning outcomes is required to ensure the quality and consistency of what students learn in community college programs.
(a) Learning outcomes are performance-based. As the definition implies, learning outcomes are performance-based, meaning that they must be demonstrated by students' performance of complex activities or tasks as evidence that they have achieved the expected learning. What is performance? The Oxford English Dictionary definition of 'perform' is "carry out, achieve, accomplish, execute." Therefore, to assess students' achievement of learning outcomes, teachers must ask them to perform some work "using a repertoire of knowledge and skills and being responsive to the particular tasks and contexts at hand" (Wiggins, 1993). By observing performance, teachers obtain a direct measure of what students have learned, and what they are able to do with their knowledge.
On the other hand, most paper-and-pencil tests and examinations used to assess students produce only an indirect measure of learning (Frederiksen and Collins, 1989). While such tests evaluate how well students can reproduce content from the textbook, or how well they can assemble information from various sources, they do not provide any evidence of the students' abilities to use their knowledge to solve problems, conduct inquiries into sensitive issues or make difficult decisions. In other words, paper-and-pencil testing does not provide a reliable indication of students' abilities to do something with their knowledge. It follows that this kind of testing is not an appropriate means of assessing achievement of performance-based learning outcomes demonstrative of cumulative learning and accomplishment.
The example in Figure 1 above illustrates a performance task which allows teachers to obtain a direct measure of student achievement for multiple learning outcomes, and which requires students to use their knowledge of theory and procedure in a real-life context. No paper-and-pencil test would have been able to provide such comprehensive information about the students' abilities.
This example also raises an important issue related to the use of performance assessment: do all performance tasks need to be as complex as the example in Figure 1 to assess the achievement of learning outcomes? In the past, performance assessment tasks assigned to students have not necessarily been as complex as the one presented above, and sometimes their context has seemed too obviously artificial or contrived, rather than a fair simulation of recognizable real-life experience. Great complexity and ingenuity in the design of such tasks may be needed to adequately challenge students and assess their achievement of learning outcomes.
(b) Learning outcomes are demonstrated by the application of knowledge to complex role performances as found in work and personal lives. The concept of "role performance" in the community college system has recently appeared in the description of indicators reflecting a quality of performance that is appropriate to the college postsecondary context (CSAC 1995). The first indicator of performance quality reads: "A community college postsecondary graduate is someone who can integrate knowledge, skills and dispositions in the application of theoretical principles to complex roles performed in the workplace and in his or her personal life." There are two important elements in this indicator: role performances and the context of a performance.
The first element requires that students demonstrate performances in roles similar to those found in daily living or those assumed by practitioners in the field. These practitioners regularly perform tasks in which problems are ill-structured, part of the information is missing, collaboration with co-workers is necessary and results are presented to other individuals. In these tasks, practitioners take on many generic roles, such as problem solvers, effective producers, team workers, communicators, and also adopt roles particular to an occupation, e.g., care planner, counselor, designer, and so on.
The notion of students executing complex role performances has rarely been a feature of the educational process. Spady (1977) introduced the idea as part of his definition of the concept of "competency" in the early 1970s, but it is only in the 1990s that it received much attention. Many educational institutions (the Aurora Public Schools being one example; see Redding, 1992; Marzano, 1994), have identified broad exit outcomes in terms of role performances, e.g., self-directed learner, collaborative worker, complex thinker, quality producer, and community contributor. Wiggins (1993) provides even more concrete examples of role performances that can be used in high school programs, such as the "museum curator" - design museum exhibits on a given topic, compete for grant money with other designers; or the "product designer" - conduct research, design an ad campaign, run focus groups, and present a proposal.
Current evaluation practices rarely verify that students can successfully take on roles as they may be found in the "real" world. All too often, tests, examinations, and assignments are designed as simplistic tasks which cue students for the desired solution, for the correct answer, or for an imitation of a given technique or skill. In a course on drafting techniques, for example, the tests and examinations include exercises that are self-contained, with no interference equivalent to the "noise" found in the workplace environment. In a physics course, the problems to be solved in a test require the use of only the concepts and formulas taken from the chapter(s) covered by the test; students are presented with obvious cues.
Similarly, when Health Sciences students have to simulate a specific treatment technique in a laboratory environment, only this specific technique is observed, without the demonstration integrating the ability to communicate with the patient and deal with interference typically found in the usual clinical setting. Individuals rarely perform their real-life roles and responsibilities as adult citizens and workplace practitioners with the well-defined knowledge requirements and isolated task sequences which are characteristic of classroom examinations.
The second element of this indicator of performance quality, the context of the performance, plays a significant role in the demonstration of complex role performances. Performance tasks replicating the complexity of the real life situations students will face soon after their graduation must be grounded in real world and workplace contexts. Wiggins (1993) calls for robust and authentic tasks to be used to ensure that students are prepared to face the real, "messy" uses of knowledge, and can master the various roles that competent professionals encounter in their work. He argues that such tasks are essential for assessing students' intellectual abilities, their judgment capacity and their habits of mind (with special attention to openness to ideas, persistence and willingness to admit ignorance).
The example in Figure 1 illustrates how two Photography teachers revised their traditional performance-based assessment (take and develop 10 high-quality photographs) into a much more authentic task in which multiple learning outcomes were embedded. The requirement that graduates achieve a community college level of performance, such that they can apply their knowledge and skills in complex roles as they are performed in the workplace and in their personal lives, raises the question of how to assess these abilities. Authentic performance assessment provides the necessary methodology, but carries with it the challenge of revising traditional approaches to evaluating learning. However, an important reward of such authenticity in creating performance assessment tasks is that students engaged in meaningful activities are better motivated to complete the tasks assigned with high standards of quality.
(c) The achievement of learning outcomes represents a demonstration of integrated learning. In articulating the level of achievement expected in the demonstration of exit learning outcomes, CSAC (1995) refers to the community college graduate as someone who can demonstrate the integration of knowledge, skills and dispositions. Integrated learning is clearly the expected result of community college education. Unfortunately, much of traditional educational practice provides little assistance, if any, for students to achieve integrated learning, and offers very few opportunities for teachers to assess for integration.
These traditional educational programs' curricular and instructional strategies and evaluation procedures typically break down the complex tasks which students are asked to master into the kind of discrete, simplified and independent components which prevent the making of connections among what is learned. Beane (1991) presents a helpful, if imperfect, metaphor illustrative of the fragmentation of traditional education: he suggests that no one would bother completing a jigsaw puzzle when given a pile of puzzle pieces without a representation of the image they are to make; it is the picture that gives meaning to the task of solving the puzzle.
What is integrated learning? At one level, integrated learning can be interpreted as learning that makes connections between elements of knowledge, connections within and across subject matters in a program, and connections between knowledge and everyday life situations (Drake, 1993). At another level, integrated learning can be interpreted as the connections that students make between their systems of personal meanings, and the new information they encounter in the environment. In both cases, integration of learning implies "wholeness and unity rather than separation and fragmentation" (Beane, 1991).
The Random House dictionary defines "integrate" as combining or completing to produce a whole or a larger unit, as parts do. A simple machine is much more than the sum of its parts; none of the parts randomly assembled can perform the functions of the machine as a whole. In a similar manner, a performance is composed of much more than a random assembly of its components; it also includes the interactions between the components. Instruction and assessment which focus only on the components will be unlikely to achieve integrated learning (Resnick and Resnick, 1992). The current initiatives undertaken by CSAC to establish program standards are aimed at enhancing the students' ability to achieve integrated learning as a characteristic of college-level learning. Teachers can recognize integrated learning by asking students to demonstrate an "understanding" rather than a "knowing" of what they have learned.
Generally, it is acknowledged that students "know" something when they have information in storage and they can retrieve it on call (Perkins, 1991). On the other hand, when students "understand" something, they can do a variety of thought-demanding things with it. They can explain it, make an analogy with it, make up their own example of it, and make generalizations (Perkins, 1991), and they "can employ it wisely, fluently, flexibly, and aptly in particular and diverse contexts" (Wiggins, 1993).
For example, when students know how to develop a survey, they can answer questions about the development process, and they can recognize a poor survey from a good one. But when they understand how to develop a survey, they can work with a community group to develop and administer a survey meeting the group's needs and can explain the limits of the survey results to the group. In other words, students who understand theoretical concepts or technical procedures can demonstrate that they have integrated this learning with other elements of a complex performance, requiring judgment and reasoning.
The example of an exercise in Figure 2 illustrates how a current Correctional Worker Program standard, that is the requirement to "Demonstrate an understanding of the fundamental aspects of the history, philosophy and diverse forms of rehabilitation, punishment and detention in Canada," could be assessed as a demonstration of integrated learning by means of a simulated situation. No paper-and-pencil test could come close to assessing such integrated learning. Therefore, when teachers must appraise the verification of learning outcomes representing integrated learning, the most suitable and relevant methodology is performance assessment.
(d) Learning outcomes can and should be demonstrated in a variety of contexts. One important characteristic of learning outcomes is their potential to be demonstrated in a variety of contexts. Both documents, Guidelines for the Development of Standards of Achievement Through Learning Outcomes (January 1994) and Generic Skills Learning Outcomes for Two and Three Year Programs in Ontario's Colleges of Applied Arts and Technology (May 1995) emphasize that learning outcomes must be transferable and that students must reliably demonstrate the abilities described by the learning outcomes. The recently published Generic Skills Outcome statements begin: "The graduate has reliably demonstrated the ability to ... ."
Transferability implies that students must demonstrate the achievement of integrated learning more than once, and in more than one context, which is consistent with research findings about performance assessment. Shavelson and Baxter (1992) conducted a significant study assessing hands-on science investigation skills, finding that performance is highly task-dependent and that a substantial number of investigative tasks were necessary before making any generalization about students' competencies. The limited generalizability of performance from task to task is consistent with research on learning and cognition which emphasizes that learning is situational and context-specific (Brown, Collins and Duguid, 1989).
Therefore, teachers cannot rely on a single demonstration of a learning outcome, in any one given context, to certify student learning and achievement. This condition can put an unreasonable burden on assessment procedures if the teaching and learning process is not well integrated with the assessment process. "You cannot assess performance unless you teach performance," said a physics teacher with three years' involvement in a performance assessment program (cited in O'Neill, 1992). In other words, if students are evaluated by means of performance tasks, they must learn through performance tasks by, as Wiggins (1993) puts it, "continually practicing strategies in performance contexts, by using (their) judgment as to what works (and when and why), and by continually being tested (through real or simulated performances)."
(e) Demonstrations of learning outcomes must meet the minimum acceptable level of achievement required for graduation. In order to receive their college credential, graduates must reach a minimum level of academic achievement. Usually, a letter or percentage grade is used to rank students' relative achievement, and a cut score is chosen to distinguish minimal success from failure on a test or assignment, or in a course. This cut score is often an arbitrary value which varies from course to course and program to program, and because cut scores correspond to a global, composite picture, they are very poor indicators of the minimum level of students' performance.
Because students can fail on some outcomes yet succeed on others in such a way that they still obtain a passing grade, cut scores are clearly an inadequate means of describing student performance if the intent is to ensure that students meet a minimum level of achievement for each program standard, and their use cannot ensure the consistency of credentials throughout the college system.
The challenge for Ontario community colleges is to establish among assessors a commonly-shared picture of a minimum acceptable level of achievement for the demonstration of learning outcomes. As they stand, the present CSAC statements of learning outcomes are insufficient to provide this picture, and do not sufficiently support the expected quality and consistency of successful student performance across the college system; they need to be accompanied by performance standards which fully describe what a successful performance looks like.
Because performance assessment relies so heavily on the assessors' observations and professional judgment, it is important to establish among these assessors a common basis for making those judgments. Fortunately, the performance assessment methodology contains the means for teachers to reach a common understanding of what constitutes a minimum acceptable level of performance. These means are performance criteria and performance standards.
One important component of the assessment process is the articulation of criteria representative of the significant features of an acceptable performance. Performance criteria provide a focus for the assessors' observations. For example, the organization of content, the suitable construction of sentences, and the use of correct grammar may be suitable reference criteria when assessing writing; these measures are easy to observe and score, and are widely used. However, Wiggins (1993) advocates the use of criteria such as the "impact" of the performance, that is, "criteria which flow from a careful analysis of the purpose of the task," such as measuring achievement of a writing or presentation task by assessing clarity and persuasiveness (Collins and Gentner).
Once performance criteria are articulated, the minimum acceptable level of achievement can then be defined, in effect by answering the question "How good is good enough?" in relation to the criteria. Answers to this question can be provided by models and exemplars (of student work) representing the minimum expected level of quality, and by rubrics describing the minimum expected level of quality and (possibly) the full range of quality performances. The use of models, exemplars and rubrics is essential to establish that assessors and students share a common picture of the minimum acceptable level of achievement. These tools reduce the variation in teachers' expectations so easily noticed by students (Willms and Sirotnik, 1994) and improve the consistency of performance standards applied throughout a college system.

Conclusions
The program standards conceptual framework discussed above requires that we maintain a strong alignment between learning outcomes and the assessment methodology used to verify student achievement. Performance assessment is the most suitable methodology available for assessing students' abilities to integrate and apply knowledge, reasoning skills and good judgment in authentic performance tasks.
The challenge raised by the recent Ontario College Standards and Accreditation Council initiatives is to implement high-quality performance assessment systems which can provide sound data to certify that graduates have met the standards established for each program. Much is at stake in these assessments. At the Ontario college system level, a review process will audit the assessment procedures used in each college's programs in order to ensure that learning outcomes are assessed fairly, thoroughly and dependably. At the individual program level, students will demand credible and equitable assessment procedures, the use of validated performance standards, and the publication of those standards.
The high stakes attached to the assessment of learning outcomes will generate pressures on all those who must become increasingly assessment-literate through the use of assessment data. Although teachers and administrators have seen many years of research about assessment, Stiggins (1991) concludes that "we allocate virtually no resources to train practitioners or interested others in sound methods of assessing the outcomes ... ." Relying on traditional practice and personal experience, community college teachers still remain most familiar with paper-and-pencil testing and its concomitant marking and grading schemes when evaluating content-based learning. The performance assessment perspective necessarily challenges many of these practices and requires the development of new assessment systems and scoring strategies.
In the future, data collected through performance assessment should be directly linked to each learning outcome, so that community college graduates can demonstrate that they have met an established academic standard common to all similar programs. Failing to obtain such assessment data may compromise the very quality and consistency of college credentials sought by the current initiatives.
Janine Huot is a consultant whose advisory and training services focus on the implementation of change in education.
• The views expressed are those of the authors and do not necessarily reflect those of The College Quarterly or of Seneca College.
Copyright © 1996 - The College Quarterly, Seneca College of Applied Arts and Technology