My View

Better feedback methods will help schools improve the way they teach students.

Two sets of facts unsettle me when I think about undergraduate education at MIT and elsewhere. First, according to decades of studies at schools around the nation, graduates with high grade-point averages fare no better than those with lower grade-point averages in terms of career success, civic engagement, or any other measure of attainment besides graduate school grades. Two possible explanations for this odd fact occur to me: either grades don’t accurately measure what students actually learn, or important elements are missing from the college curriculum.

The second fact that bothers me is that many graduating seniors, even from institutions like MIT, seem to have a poor understanding of elementary concepts in science and engineering. In the 1997 Annenberg/CPB video series Minds of Our Own, seniors at MIT and Harvard University, on their graduation days, were asked questions about science in everyday life: how heavy trees grow from lightweight seeds, how to light a light bulb with a battery and a piece of wire, and so on. Their answers may dismay you. Unhappily, plenty of research has shown how misconceptions about science can survive years of “good” education and good grades.

If you see a vehicle moving erratically, you might suspect problems in the control system. That seems to be the problem here: inadequate and inaccurate feedback systems in higher education. Fortunately, in recent visits to the Institute, I’ve seen examples of progress in collecting better feedback. John Belcher’s approach to teaching physics in the Technology Enabled Active Learning classroom is just one example. But there’s still a long way to go. I’ve worked for years on the challenges of improving feedback (formative evaluation) in higher education. Here are three suggestions that could help MIT improve the way it teaches its students. Each is partially implemented already. However, all three need to be adopted on a larger scale.

Feedback within courses: At most schools I visit in the course of my work, only a small fraction of the faculty have training in how to get a good reading on students’ understanding of crucial concepts and difficult materials. Many typically ask questions that can be answered by students who have memorized certain procedures. If students are to learn more-complex analysis and design skills, they need to spend plenty of time on realistic projects. And faculty need to learn more about how to assess students’ progress on such projects. As assessment improves in these ways, both faculty and students can better control what students are learning. I am proud that my old department, aeronautics and astronautics, hired a staff member to help its faculty with assessment and teaching and that most faculty took advantage of the service.

Feedback as students move from course to course: Faculty within a department must define the skills their students will most need after graduation. (Skills are different from the titles of required courses.) Students could keep electronic portfolios in which they periodically post work that demonstrates their development of those skills. Faculty and outside experts would annually assess these portfolios and give students (and faculty) feedback on the patterns of learning.

Feedback from the world of work: IBM has begun a massive research program to gauge how its products are used in the field (see “Research in Development,” TR, May 2005). MIT needs to do the same. Anecdotes from graduates are no substitute for formal research on how the most successful people in a field think and act. MIT and its alumni ought to fund such research and use the findings to improve curricula. I suspect the research would reveal that the skills that contribute to success are typically learned across a range of courses and extracurricular experiences, not just from a single course. Also, many of the skills that lead to excellence in a discipline will not be part of the content of that discipline.

These recommendations will take time and money to implement. But “flying blind” wastes money and the time of teachers and students alike. MIT attracts some of the best scholars and students in the world. With better feedback guiding the use of its resources, MIT can produce the best education in the world.

Stephen C. Ehrmann ’71, PhD ’78, received bachelor’s degrees in aeronautics and astronautics and in urban studies and planning, and a PhD in management and higher education. He is vice president of the nonprofit TLT Group, which helps educational programs use technology to improve learning.
