Saturday, January 25, 2014
Not To Decide Is To Decide
In the most recent edition of Air Transport World magazine, Robert W. Moorman writes an excellent article reviewing the advances in technology that have improved flight safety. The article ends with quotations from Flight Safety Foundation (FSF) CEO Kevin Hiatt. Hiatt gives a statistic that most airline safety managers are aware of, but have not been able to change. He said, “What we’ve discovered is that 96% of the approaches in the system are flown correctly. But in the 4% that are not, we’re finding that the pilot is continuing to fly the approach, rather than initiate a go-around.” In fact, some airlines have been able to increase the percentage of stable approaches, but not the percentage of unstable approaches that result in a go-around.
This conundrum has no hardware solution. The statistics are solely dependent on decision-making. No device can prevent a pilot from making a poor or ineffective decision. Only training and experience can improve decision-making effectiveness. By experience, I mean the collective experience of all pilots. As the FSF has been doing for decades, data must be collected and shared among professional airmen for the purpose of collective knowledge. What is not being widely done, however, is using this body of data to help pilots learn decision-making skills.
Decision-making (DM) is not the same as standard operating procedure (SOP). In fact, it is just the opposite. Decision-making is, by definition, a choice. SOP does not rely on choice, but rather on strict obedience. Imagine a spectrum with DM at one extreme and SOP at the other. That spectrum defines the environment that pilots live in. Contemporary training and proficiency standards for commercial pilots are biased very heavily toward the SOP end of the spectrum. Pilots are taught how to comply with SOP, but much less training is focused on how to remain within SOP or, more importantly, how to recover once a deviation from SOP has occurred. It’s easy to label this area as intentional non-compliance, but that would be far too simplistic.
Pilot performance outside SOP is exactly the territory the FSF data describes. When an unstable approach occurs, SOP is no longer controlling the outcome. If it were, the approach would not be unstable, or a go-around would always be accomplished. When outside SOP, decision-making will be the determining factor. However, the data shows that pilots can be very ineffective when making these decisions. The major approach and landing accidents from 2013 at SFO, LGA and BHM, as well as two recent occurrences of landing at the wrong airport, further support this position.
Decision-making is not simply a plot on a risk matrix. It is a proactive and deliberative process that evaluates and matches choices to the existing or expected conditions. The choice will most likely be dependent on the goal of the decision maker. If safety is perceived as the primary goal, landing will become subordinate. Conversely, if landing is the goal, it will drive the choice selection. In other words, “I am going to stay safe and land if it works out.” or “I am going to land and I think I can stay safe while I do that.”
Why have the hardware “safety enhancers” of airplanes been more successful than the pilots who fly them? I believe it is because the developers of hardware devices accept failure as a possibility, whereas writers of SOP do not accept the reality of non-compliance, whether intentional or not. No component is ever installed on an aircraft without a tested and trained process in the event of its failure. What is the process for failure of SOP? Are we to expect that all pilots will follow all SOP all of the time? If not, then what is the process for human failure?
Airline pilots spend hours and hours in the classroom and simulators learning procedures for both normal and non-normal situations. They are carefully evaluated on their knowledge of what to do in the event of system failures. They practice and debrief realistic scenarios over and over to be prepared for extremely rare events. How much time is spent learning how to manage human failures? I bet it’s pretty close to 4%.