You may have noticed that the state of science education has been very much in the news of late, including reports from the National Academies (1) and editorials and articles in Science, the New York Times and the Wall Street Journal (2 – 4). Responses to the perceived problems in science, technology, engineering and mathematics education include calls for revised MCAT and College Advanced Placement exams, better science and mathematics standards (frameworks), and the appointment of prominent scientists, focused on education, to positions high in the government. While much of this activity has been centered on K-12 education, its impact can also be felt in higher education, where there is now greater emphasis on active engagement versus passive lecturing (5).
You might well ask yourself what drew so much attention to this subject – what is the evidence that our educational system is doing a bad job, that it needs reform? Early hints came from the work of Treagust and Hestenes and colleagues, together with an awareness that grades and conceptual understanding are not always correlated (6). One also can do one’s own experiments – ask students or colleagues to describe the evidence that respiration and photosynthesis share a common evolutionary origin, explain why oil and water do not mix, describe the mechanisms by which mutations lead to novel phenotypes or consider whether DNA is inherently more or less stable than protein. The answers, or more often the hemming and hawing, might surprise you.
The recent emphasis on the science education system is based in large part on the perceived need to broaden the appeal of science and deepen appreciation for the scientific approach’s value when thinking about a wide range of phenomena. While the current system is demonstrably adequate for those who succeed in it, it actively discourages the majority of students. All too often, the function of a science or math course is perceived by students (and, sadly, by some faculty) as a sorting mechanism rather than an opportunity to learn (and teach). This perception can lead to the loss of important contributions and talent, as well as to misunderstanding of and hostility toward science within the broader community.
Recently, there have been a number of encouraging developments. For example, there is an increased emphasis on learning goals for science courses and curricula, although how far this has moved into the consciousness of most science educators is unclear. While learning goals are critical for effective instruction, they are essentially meaningless without a close link to informative assessment. Even accreditation bodies, which you might think would be interested in the assessment of learning, only rarely require such data. Goals and assessments form complementary parts of a dialectic. The assessments needed are quite different from typical course exams (and assessments that merely correlate with exams are more or less superfluous). What is needed are assessments designed to reveal whether particular goals are realistic, whether they are being met, and if not, what is going wrong – they need to map out how students are thinking about a particular idea.
In this light, it is critical that when a learning goal is formulated it is also illustrated: What exactly does it mean to achieve that goal? What kinds of questions should students be able to answer, and what should their answers contain? Such assessments dig deeper than the typical exam for a number of reasons (6, 7) and provide feedback on the learning goals themselves as well as on the pedagogical strategies used to attain them. Often authentic assessments (like Socratic dialogues) are uncomfortable for both the student and the instructor, since they are designed to reveal the limits of understanding rather than to identify who is paying attention. A simple strategy, applied to a multiple-choice question, is to ask students to explain why the incorrect choices are wrong. This forces students to make explicit (and instructors to hear) their understanding of both the question and the proffered responses. When carried out rigorously, this dialectic between goals and assessments often reveals that apparently simple goals are quite complex and that students may not be prepared, either by curricular prerequisites or by their current instructional experiences, to address them. It also can reveal serious holes in students’ understanding and, by implication, holes in course and curricular design.