United Airlines flight 173 departed Denver’s Stapleton Airport on the afternoon of December 28, 1978, bound for Portland, Oregon. Sitting in the left seat was a journeyman captain who had accumulated nearly 28,000 hours of flight time over a career spanning three decades. The flight plan called for 31,900 lbs of fuel to be consumed en route. The aircraft departed the gate with an additional 14,800 lbs to meet IFR reserve requirements plus a 20-minute buffer (46,700 lbs total). Despite those reserves, the Douglas DC-8 would crash in a wooded area short of Portland’s Runway 28L three hours later due to fuel exhaustion. Of the 189 occupants, 10 perished and another 23 suffered serious injuries. Ironically, the empty fuel tanks saved lives by eliminating the potential for a post-crash fire.
During the approach into Portland, the first officer (who was flying) requested flaps 15 and gear down. Post-accident interviews indicated an abnormal thump followed the attempt to lower the gear. The annunciator lights on the main gear panel failed to illuminate, indicating an unsafe condition. The captain contacted approach control and requested a holding pattern so the crew could troubleshoot the issue.
The DC-8 had a device that extended from each wing when the respective gear was down (a backup to the indicating lights on the flight deck). The abnormal checklist stated that a normal approach and landing could be performed if these wing-mounted devices were observed. The flight engineer verified that they were present. The captain initiated a conversation with United Airlines maintenance personnel. He reported 7,000 lbs of fuel remaining and his intention to continue holding for “another 15 to 20 minutes” (he wanted to give the flight attendants time to comprehensively prepare passengers for an emergency landing). In the configuration that the DC-8 maintained throughout the episode (gear down, flaps at 15 degrees), the NTSB calculated a consumption rate of 13,209 lbs per hour. Seven thousand pounds of fuel, as such, represented 32 minutes of endurance. Had the captain stuck to his original timeline, the aircraft would have landed with 15 minutes of fuel remaining. Six minutes after this conversation, the first officer asked the flight engineer, “How much fuel we got?” The flight engineer responded, “Five thousand.” Two minutes later, the first officer repeated the question, receiving the same answer. Seconds later, the captain noted that “the [fuel] feed pump [lights] are starting to blink.” Eight minutes later, the first officer again asked about the fuel. The flight engineer answered (somewhat befuddlingly): “four thousand – in each – pounds.” Though the DC-8 had multiple fuel tanks, at that point it held only 4,000 lbs total.
The captain instructed the flight engineer to calculate landing weight based on “another fifteen minutes” of flight. The captain suggested that they would have “three or four thousand pounds” of fuel remaining at touchdown. The flight engineer countered: “Not enough. Fifteen minutes is gonna really run us low on fuel here.” The aircraft continued to hold for several more minutes. Upon inquiry from Portland Approach, the captain reported: “…about four thousand, well, make it three thousand pounds of fuel.” The DC-8 was burning 220 lbs per minute. If a calculation had been made, it would have indicated that a paltry 13 minutes of endurance remained.
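The endurance arithmetic the crew never performed is simple enough to sketch in a few lines of Python. The burn rate is the NTSB’s figure for the gear-down, flaps-15 configuration; the helper name is my own, purely for illustration:

```python
def endurance_minutes(fuel_lbs: float, burn_lbs_per_hour: float) -> float:
    """Convert fuel on board to minutes of endurance at a given burn rate."""
    return fuel_lbs / burn_lbs_per_hour * 60

# NTSB-calculated consumption: 13,209 lbs/hr (about 220 lbs per minute)
BURN_RATE = 13209

print(f"{endurance_minutes(7000, BURN_RATE):.1f}")  # ~31.8 min at the first report
print(f"{endurance_minutes(3000, BURN_RATE):.1f}")  # ~13.6 min at the final report
```

Against those numbers, the captain’s plan to hold for “another fifteen minutes” and still land with “three or four thousand pounds” simply does not close.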
The next several minutes were eaten up with conversation about the readiness of the cabin and questions regarding whether spoilers or anti-skid would be functional on the ground. At 1806, the first officer announced that the number four engine was failing. The captain – who was engaged in a conversation with the lead flight attendant – apparently missed the comment. Ten seconds later, the first officer stated, “We’re going to lose an engine.” The captain asked, “Why?” A jumbled conversation followed in which both the first officer and the flight engineer quickly identified that the engine was fuel starved. Within 30 seconds of the initial failure, a second engine on the same wing flamed out. Both engines fed from the same fuel tank. The crew began cross-feeding and were able to relight the two engines via the opposite wing fuel supply, but within a few minutes that tank was empty as well. All four engines flamed out in quick succession.
The captain appeared to be genuinely puzzled when the first engine failed. It is clear that he did not immediately correlate the failure to a dangerously low quantity of fuel onboard. The flight engineer also expressed surprise but was quicker at recognizing the root cause. The first officer immediately suggested that the engine was fuel-starved, clearly indicating that he had been aware of the dwindling fuel supply prior to the flameout. Yet instead of directly challenging the captain on the decision to delay a landing attempt, he resorted to questioning the flight engineer about the fuel status. Most likely he was hoping that the captain and flight engineer would catch the hint and develop a game plan. The NTSB faulted the captain for failing to maintain situational awareness, and the first officer and flight engineer for failing to directly challenge the captain’s decision-making process.
A New Understanding of Error
Threats are encountered on every flight; they mostly represent conditions that cannot be changed. Errors are the result of inappropriate actions taken by pilots. The longstanding attempt to eliminate pilot error in aviation has proven an elusive goal. A better strategy is to focus on minimizing the impact of errors when they occur. Trapping an error is nearly as good as avoiding one. Under this model, the difference between a threat and an error is largely one of origin: threats arise outside the crew’s control, while errors originate with the crew. A landing gear malfunction during final approach, for example, represents a threat. In the case of United 173, another threat was the hierarchy that existed among the flight crew. The captain possessed an extraordinary amount of experience, and it is understandable why the first officer and flight engineer would have been reluctant to directly challenge him. The captain’s failure to actively include the first officer and flight engineer in his decision-making process, however, represented an error.
Communication with ATC was likewise inadequate, another error. Plenty of resources were available to the captain, yet he failed to leverage them effectively. His loss of situational awareness regarding fuel endurance was the final nail in the coffin. It is worth noting that many of his errors resulted from his singular focus on a necessary task: ensuring that the passengers were briefed should an evacuation become necessary.
The problem was not that errors occurred but that the crew failed to trap them. The abnormal gear procedure on the DC-8 directed the crew to execute a normal approach in the event that the gear indicators on the wing were extended, which they were. The first officer never directly challenged the captain regarding diminishing fuel reserves even though the subject was clearly on his mind. The flight engineer noted that insufficient fuel remained for another turn in the hold, yet the captain failed to consider his concern. The crew did not explicitly declare an emergency until all four engines had flamed out. For the initial 45 minutes of the event, the only thing that ATC had to go on was: “We’ve got a gear problem. We’ll let you know.”
The first indication that the landing would be abnormal was when the captain requested crash and fire rescue “in the event [it] should become necessary.” Controllers initiated emergency procedures following the exchange. This occurred 12 minutes prior to the crash. Even after the initial two engines had flamed out, the crew failed to inform ATC of their dire status. Instead, they coyly requested “clearance for an approach into two eight left, now.” The abnormal gear checklist that began the ordeal had taken only 15 minutes to complete. The remaining 40 minutes of follow-on activity generated sufficient error, omission, fixation, and poor communication to produce a fatal accident.
Threat and Error Management
The NTSB recommended aircrew “assertiveness” training as a result. This rapidly morphed into what we now call Crew Resource Management (CRM). It initially focused on enhancing the “challenge-response” dynamic among flight crews. It did not take long before the umbrella of CRM expanded to include resources outside the flight deck as well: ATC, passengers, flight attendants, maintenance workers, and a slew of other supporting personnel who contribute to safe flight. Single-Pilot Resource Management (SRM) naturally arose from the realization that every pilot has a multitude of external resources available to assist in decision-making.
It is important to distinguish between “single-pilot” and “single-occupant.” Having a second individual onboard – even one with little knowledge of aviation – can represent a profound resource. Briefing a passenger not only helps to put them at ease but also forces us to utilize a part of our brain that naturally critiques our own impulses. Even a highly experienced pilot sometimes overlooks relevant information. Communication is an effective way to trap those errors.
The development of CRM/SRM began the long arc toward the multidisciplinary field of Human Factors, which utilizes everything from psychology to statistics. Before this, aviation safety largely focused on two goals: eliminate pilot error and develop more resilient aircraft systems. While the second goal has largely been achieved, pilot error as a root cause of accidents has remained persistent. The core tenet of Human Factors is that we all make errors. I have participated in hundreds of training events (both as an applicant and as an evaluator), and I have yet to witness a perfect check ride. A high-performance aircraft is complex by nature, with many different systems and operating parameters to manage. Toss in an external environment that includes adverse weather, ATC congestion, and mechanical issues, and you have a recipe for distraction. Every flight requires a particular series of steps to be taken in order to achieve a successful outcome. Errors can occur at any point but are much more likely to develop when a pilot is startled by an unexpected scenario. The compression of time that occurs during flight greatly increases the odds that those errors will lead to an undesirable aircraft state.
The Swiss cheese model is one of the better-known analogies used to describe the process of capturing errors. The holes in the cheese represent the threats and errors that we experience on every flight. The cheese slices represent the barriers that we utilize to resist those errors. A checklist is a prime example of a barrier. Used properly, it ensures that an error is recognized prior to a critical phase of flight. If a runway change necessitates a new aircraft configuration after the takeoff checklist has already been performed, the slice of cheese has a potential hole (which can be solved by running the checklist again). Forgetting to actually run the checklist is another hole. It is worth noting that the “error” holes become larger or smaller depending upon workload. When task saturated, the “holes in the cheese” can be nearly the size of the slice itself. The goal of Threat and Error Management (TEM) is to produce enough “slices of cheese” so that all errors will be trapped prior to a complete loss of situational awareness.
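The logic of stacked barriers can be captured in a toy calculation. Assuming the barriers act independently (a simplification, and the probabilities below are my own invented numbers), an error reaches the aircraft only if it slips through every slice, so the escape probability is the product of each barrier’s miss rate:

```python
from math import prod

def p_error_escapes(miss_probs):
    """Probability an error passes every barrier, assuming independence."""
    return prod(miss_probs)

# Three hypothetical barriers (checklist, crew cross-check, ATC query),
# each missing 10% of errors under normal workload:
normal = p_error_escapes([0.1, 0.1, 0.1])   # ~0.001, one in a thousand escapes

# Task saturation enlarges the holes; the same barriers now miss half of errors:
saturated = p_error_escapes([0.5, 0.5, 0.5])  # 0.125, one in eight escapes
```

The point of the sketch is not the specific numbers but the shape of the result: a modest widening of each hole multiplies into a dramatically higher chance that an error goes untrapped.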
While flight training programs largely focus on the pilot’s role in managing risks and mitigating errors, the final solution involves other working groups as well. Have you ever wondered why ATC requires fuel remaining to be reported in hours and minutes following the declaration of an emergency? Knowing the total amount of fuel onboard is obviously useful for firefighters in understanding the potential for a post-accident blaze, but this is something more efficiently accomplished by reporting in pounds or gallons. During the event with United Airlines flight 173, the captain reported fuel remaining in pounds. Had ATC instead required him to report fuel remaining in minutes, he would have rapidly figured out that “three thousand pounds of fuel” actually meant 13 minutes of endurance. Confronted by this, he would have undoubtedly requested an immediate approach. The requirement to report fuel as a time-sensitive resource orients a pilot towards landing before a greater crisis develops, and it provides ATC with a comprehensive understanding in order to back up a potentially task-saturated individual.
Over the past two decades, the NTSB has found pilots at fault in approximately 85 percent of aviation accidents. The FAA has understandably placed emphasis on teaching TEM during regular aviation training cycles. While traditional training focuses on the physical and technical understanding of an aircraft, TEM focuses mainly on psychological processes. It represents a systematic means of introducing logic into the sometimes reflexive nature of decision-making. One vital element of TEM is to acknowledge threats explicitly before takeoff and landing. Even if you are flying alone, it is a good idea to perform these briefings verbally. Talking to yourself feels silly, but it is calming and provokes critical thinking.
Talking is also quite useful in ensuring the completion of checklists. Nearly all pilots develop a “beat and tempo” when verbalizing a checklist. If you inadvertently miss an item, the disrupted tempo brings it to your attention. If you are the sole occupant, discussing the flight with a weather briefer introduces a dispassionate third-party opinion (a conversation increasingly omitted in the era of internet weather products). If another pilot is aboard, discussing weather, terrain, unusual approaches, or any other threat is a vital part of the briefing process. Non-pilot passengers almost always appreciate a pre-flight briefing as well. And though you may not want to dwell on risks per se, you can still work them in (“it’ll be a beautiful departure surrounded by the mountains in this box canyon”).
Flying involves a baseline of risk. An issue discovered in the air occasionally calls for an innovative response, but it is almost always better to focus on established solutions (generally in the form of an approved checklist). It is important to remember that many threats produce follow-on threats. Fixation produces errors. You cannot think yourself out of an emergency. Run the appropriate abnormal procedure, communicate unambiguously with ATC, and get the aircraft on the ground. An unremarkable decision made decisively is better than a bout of brilliance made after much delay. An average decision clearly communicated is superior to genius in silence. When faced with a threat, the best mitigation strategy is to follow established procedures and move on.
Most of the major issues that a pilot confronts while in command have already been codified by regulations and aircraft-specific procedures. It is not always necessary to consult the books when “correcting the obvious,” but these documents remain the most trustworthy resource available when troubleshooting more complex issues. Remember, in aviation, you cannot eliminate every threat or error. You can only manage them.