So Laning devised his own executive program, which assigned tasks different priorities and allowed high-priority tasks to cut in on low-priority ones. The idea may sound simple, but the execution was difficult, since it required the computer to allocate memory among different tasks, keep track of where it had broken off each of them, and determine which to resume once it had completed the task of highest priority. “He basically made it up out of whole cloth,” Eyles says. “But it was brilliant.”
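The scheme is easier to see in miniature. The toy Python sketch below (task names and priorities are invented for illustration; the real executive was hand-built in AGC assembly) models the three jobs the passage describes: picking the highest-priority task, remembering where a suspended task broke off, and resuming it once higher-priority work is done.

```python
import heapq

class Executive:
    """Toy priority scheduler in the spirit of Laning's executive:
    the highest-priority ready task always runs next, and a suspended
    task resumes exactly where it left off. Not the AGC's actual code."""

    def __init__(self):
        self._queue = []   # min-heap of (-priority, seq, generator)
        self._seq = 0      # tie-breaker so the heap never compares generators

    def schedule(self, priority, job):
        heapq.heappush(self._queue, (-priority, self._seq, job))
        self._seq += 1

    def run(self):
        trace = []
        while self._queue:
            neg_prio, seq, job = heapq.heappop(self._queue)
            try:
                trace.append(next(job))   # run one "step" of the task
                # Re-queue the suspended task; a higher-priority arrival
                # would now be popped first -- it "cuts in".
                heapq.heappush(self._queue, (neg_prio, seq, job))
            except StopIteration:
                pass                      # task finished; drop it
        return trace

def task(name, steps):
    # A generator stands in for a resumable task: each yield is a point
    # where the executive can break off and later pick up again.
    for i in range(steps):
        yield f"{name}:{i}"

exec_ = Executive()
exec_.schedule(1, task("housekeeping", 2))   # low priority
exec_.schedule(5, task("guidance", 2))       # high priority
print(exec_.run())   # guidance runs to completion before housekeeping
```

Python generators do here what the executive had to do by hand: save a task's state at each break point so it can be resumed later, which is the part Eyles singles out as hardest.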
As the lab’s work on Apollo expanded, Laning’s involvement in it waned. “Hal loved to do things like [the executive program], especially if it was a major contribution he could do by himself,” Battin recalls. “But when we got the Apollo job, he told me, ‘Dick, I’d like to help out, but I do not want to be a manager. The endless meetings and trying to explain things to people who don’t understand them–I can’t do that.’” Now 89, Laning says he can’t even remember where he was during the lunar landing–whether or not he joined his Instrumentation Lab colleagues around the “squawk box” in their Cambridge office to listen to the radio transmissions between NASA and the astronauts. For him, he suggests, work on spacecraft navigation had lost some of its charm with the introduction of human operators.
Ironically, however, it was during the last few minutes before the Apollo 11 lander touched down–one of the few points during the mission when the astronaut was supposed to take manual control of the vessel–that Laning’s executive program would face its stiffest test.
The Eagle Lands
No one could be sure in advance what the moon’s terrain would look like, so during the last 500 feet of the lunar descent, the astronaut piloting the lander had to be able to redirect it if the landing site initially chosen looked inhospitable. But even then, says Eyles, the astronaut’s control system was only “semi-manual”: “The software was still controlling the throttle,” he says, “and of course the autopilot was in control of maneuvering the vehicle.” Fred Martin argues that the astronauts’ training for the Apollo missions on mockups of the lander–jokingly called “flying bedsteads”–demonstrated that manually controlling the lander’s descent was beyond human capacity. Two of the flying bedsteads, which had no autopilot, crashed during tests before Apollo 11, and the astronauts–Neil Armstrong was one of them–had to bail out.
So the approach to the lunar surface would be a very bad time for the onboard guidance system to fail. And about five minutes into the lander’s descent, the computer began displaying a series of alarms, indicating that its processor was overloaded.
Eyles was listening to the squawk box at the Instrumentation Lab. If at that moment the decision had been his, he says, he would have aborted the landing. “However,” he says, “the flight controllers, who were used to looking at the system from the outside, had actually run simulations that had similar alarms and had discovered that in fact it would keep flying. From that perspective, it was safe to say go.”
Ultimately, the culprit turned out to be the radar system that was supposed to gauge the distance to the command module when the lander was on its way back from the moon. Because of a mismatch between the power supplies of the radar and the guidance system, the computer was interpreting random electrical noise as important radar signals. This, added to all the other information that the computer had to process during the extremely tricky descent, was more than the processor could handle.
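As a back-of-the-envelope illustration of that failure mode (all numbers here are invented for the sketch, not actual AGC figures), the problem can be modeled as spurious interrupt servicing eating into a fixed per-cycle budget of processor time:

```python
# Toy duty-cycle model of the overload (illustrative numbers only).
CYCLE_BUDGET = 1.0       # fraction of processor time available per cycle
DESCENT_LOAD = 0.85      # hypothetical load of the descent-guidance tasks
COST_PER_PULSE = 0.0001  # hypothetical cost of servicing one spurious pulse

def overloaded(pulses_per_cycle):
    """Alarm condition: scheduled descent work plus servicing of
    noise-induced radar pulses exceeds the cycle budget."""
    return DESCENT_LOAD + pulses_per_cycle * COST_PER_PULSE > CYCLE_BUDGET

print(overloaded(500))    # quiet radar: 0.85 + 0.05 fits the budget
print(overloaded(2000))   # noisy radar: 0.85 + 0.20 exceeds it -> alarm
```

The point of the sketch is that the guidance work alone fit comfortably; it was only the extra, meaningless radar traffic, stacked on top of an already heavy descent load, that pushed the processor past its limit.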