When Bradford Parkinson, SM ’61, first started working on a global positioning system (GPS), popular wisdom said it was impossible. Even the U.S. Air Force, which needed it most, was skeptical that a three-dimensional positioning system could ever be accurate enough to be useful. When Parkinson began, the Air Force had the technology for a two-dimensional system, usable only in certain regions, with an accuracy of a quarter-mile. Even then, he knew GPS could work, and with an accuracy of a few meters – not just for military uses but also in civilian car-navigation systems.
“When I got approval to go ahead with it, most people heard the story and thought it was so incredible, they thought it was a pet rock,” Parkinson says. “They thought it was going to die and that it would never attain the performance we said it would attain.”
Parkinson came by his confidence through hard work; he earned his master’s degree at MIT in aeronautics and astronautics, with additional work in electrical engineering and radio frequencies. Although he later earned a PhD at Stanford, he considered MIT his intellectual base. “The MIT experience was very much a foundation for the system we architected,” he says. “It wasn’t until years later that I used it, but those skills were essential.”
He had the first GPS satellite operational in 1978 and soon proved its effectiveness in a military setting.
These days, Parkinson is a professor emeritus at Stanford, though he drops by MIT’s aero-astro department frequently to visit old friends. He lives in San Luis Obispo, CA, near his six children and five grandchildren. His recent projects include honing GPS uses for farm equipment and guiding the first fully blind aircraft landings. Since 1978, he has received numerous awards, both military and civilian, including a place in NASA’s Hall of Fame and the National Academy of Engineering’s Draper Prize, the equivalent of the Nobel in his field. While he deserves the recognition, Parkinson is careful to explain that his achievements were rooted in teamwork. “I was honored, I was flattered,” he says. “But there were a lot of people who should have been standing with me on the stage.”