
Saturday, January 25, 2014

Not To Decide Is To Decide

In the most recent edition of Air Transport World magazine, Robert W. Moorman writes an excellent article reviewing the advances in technology that have improved flight safety. The article ends with quotations from the Flight Safety Foundation's (FSF) CEO, Kevin Hiatt. Hiatt cites a statistic that most airline safety managers are aware of but have not been able to change: "What we've discovered is that 96% of the approaches in the system are flown correctly. But in the 4% that are not, we're finding that the pilot is continuing to fly the approach, rather than initiate a go-around." In fact, some airlines have been able to increase the percentage of stable approaches, but not the percentage of unstable approaches that result in a go-around.

This conundrum has no hardware solution. The statistics are solely dependent on decision-making. No device can prevent a pilot from making a poor or ineffective decision. Only training and experience can improve decision-making, and by experience, I mean the collective experience of all pilots. As the FSF has done for decades, data must be collected and shared among professional airmen for the sake of collective knowledge. What is not being widely done, however, is using this body of data to help pilots learn decision-making skills.

Decision-making (DM) is not the same as standard operating procedure (SOP). In fact, it is just the opposite. Decision-making is, by definition, a choice. SOP does not rely on choice, but rather on strict obedience. Imagine a spectrum with DM at one extreme and SOP at the other. That spectrum defines the environment that pilots live in. Contemporary training and proficiency standards for commercial pilots are biased heavily toward the SOP end of the spectrum. Pilots are taught how to comply with SOP, but far less training is focused on how to remain within SOP or, more importantly, how to recover once a deviation from SOP has occurred. It's easy to label this area as intentional non-compliance, but that would be far too simplistic.

Pilot performance outside SOP is exactly the territory the FSF data describes. When an unstable approach occurs, SOP is no longer controlling the outcome. If it were, the approach would not be unstable, or a go-around would always be accomplished. When outside SOP, decision-making will be the determining factor. However, the data shows that pilots can be very ineffective when making these decisions. The major approach and landing accidents of 2013 at SFO, LGA and BHM, as well as two recent occurrences of landing at the wrong airport, further support this position.

Decision-making is not simply a plot on a risk matrix. It is a proactive and deliberative process that evaluates and matches choices to existing or expected conditions. The choice will most likely depend on the goal of the decision maker. If safety is perceived as the primary goal, landing becomes subordinate. Conversely, if landing is the goal, it will drive the choice selection. In other words: "I am going to stay safe and land if it works out," versus "I am going to land, and I think I can stay safe while I do that."

Why have the hardware "safety enhancers" of airplanes been more successful than the pilots who fly them? I believe it is because the developers of hardware devices accept failure as a possibility, whereas writers of SOP do not accept the reality of non-compliance, intentional or not. No component is ever installed on an aircraft without a tested and trained process in the event of its failure. What is the process for failure of SOP? Are we to expect that all pilots will follow all SOP all of the time? If not, then what is the process for human failure?

Airline pilots spend hours and hours in the classroom and simulators learning procedures for both normal and non-normal situations.  They are carefully evaluated on their knowledge of what to do in the event of system failures.  They practice and debrief realistic scenarios over and over to be prepared for extremely rare events.  How much time is spent learning how to manage human failures?  I bet it’s pretty close to 4%.


  1. There has been an ongoing controversy in recent years about the need for pilots to regain the "stick and rudder skills" that have atrophied due to over-reliance on automation technology. It's my opinion that the industry has lost its way on what technology was intended to enable a pilot to do and the human interface that is required to manage it. No doubt the introduction of automation technology has improved aviation safety, but along the way, pilots have been absorbed by that same technology. As an unintended result, automation has turned pilots into "operators" rather than "aviators". I would argue more than just stick and rudder skills have atrophied.

    Poor decision-making, among other behaviors, has contributed to the proliferation of SOPs mentioned above. Is it realistic to expect one to commit 3200 pages of policy and procedure to memory? Has the pendulum swung too far the other way from something as vague as simply admonishing a pilot, "just don't be stupid"? Where's the balance?

    Yet the industry has been slow to embrace targeted training of soft skills like decision-making when it comes to mitigating unstable approaches. I'm in no way minimizing the importance of SOP, but the industry needs to place a greater emphasis on training the human element while simplifying the technological interface between man and machine. A back-to-basics approach: a technically proficient pilot who effectively manages his/her environment using appropriate decision-making skills, SOP, and automation, without requiring "heroic" displays of flying skill.

  2. If many pilots in the industry are "deciding" to continue an approach that is knowingly unstable, does this reflect poorly on the pilot or on the criteria used for measurement? Do the pilots really believe in the need to meet restrictive measurements so early in the approach sequence? Let's remember: if you use 500' as the minimum altitude, you are still roughly a minute and a half away from actually landing. Most pilots are trained to make near split-second decisions in many aspects of flying: aborted takeoffs due to a sudden aircraft malfunction, aborted landings after a significant bounce, TCAS and GPWS warnings, CAT 1, 2, or 3 landings. Making a decision a minute and a half before a problem really exists may seem a little early to some, especially if things are progressing in the right direction.
    Now you could argue that it is poor decision-making when a pilot chooses not to comply with an SOP, like the stabilized approach criteria. He is possibly making a choice between getting the job done and breaking a rule that could jeopardize his job. But people have a tendency to break rules they don't believe in.
    John - I have often wondered how you can practice and test decision-making in the proper environment. When asked questions in a classroom, most can come up with the proper answer. The simulator may be the best way to test decision-making in a stressful environment, but two things stand in the way: AQP, and the notion that failure is negative learning.

  3. Tye - You point out what I believe is the crux of the conversation. The stable approach SOP was developed as an aid to the pilot after FOQA and LOSA data determined something more was needed. You're right in saying we've been given specialized training to manage various threats when we fly, such as low visibility on a CAT 3 approach. Would you agree that decision-making in this case is binary? Meaning, if one is able to discern the runway environment after regulatory requirements are met, we can continue to a landing. If not, a missed approach is executed. Pilots understand the threat the weather conditions pose and the potential consequences of continuing to a landing with respect to crew/aircraft capabilities. Visual approaches, on the other hand, are unique in that the runway is continually in sight. Unlike the CAT 3 example, where there is comprehensive guidance on when and how to configure the airplane, we're given wide latitude in configuring the airplane for a visual approach. That doesn't include the threat of ATC further complicating things with altitude, speed and heading changes. These distractions reduce a crew's SA. Since the crew can see the runway and doesn't consciously associate a consequence with the threat of an unstable approach, the default decision is to land. It's not necessarily a case of non-compliance, because there wasn't intent. Rather, it's a crew's inability to identify and/or manage the threats that underlie the visual approach.

    You are again correct in the observation that, in the classroom, correct answers can be easily given. However, facilitated discussion shares diverse viewpoints: when participants "say it," they "own it," and learning occurs on a higher level. A corporate culture needs to be cultivated that acknowledges failure is a by-product of our human condition, and that's okay. Once the organization is comfortable with that realization, only then can it develop truly integrated facilitated learning within AQP: facilitated learning that emphasizes CRM/TEM skills. Line-pilot, peer-to-peer facilitation.

  4. All good points, John. I would agree that low-vis approaches culminate in a binary decision, but doesn't the continuation of an unstable approach as well? We would likely both agree that unstable approaches occur more frequently on visual approaches than on low-vis approaches. Many of the unstable approaches may be visual approaches, but only as a technicality: most are tracking the same course and glideslope found in the low-vis scenario (the largest share of unstable approaches occurs at the hubs). I'm not sure I would call these approaches unique. What I would consider unique/different are the methods ATC uses to get you there and the airspeed demands on approach.
    I also agree that a pilot who can see the runway may not consciously see the threat. But isn't that why there is a multitude of parameters that must be met before proceeding to land? One could argue that it might actually be a case of non-compliance, because there was an intent: to continue an approach when the SOP says you will go around.

    Finally, I think that is what I mean by AQP as a possible problem. AQP forces everything to be graded in the sim, and failure is graded as a "failure." No one wants to see a failure, so most of the randomness and volatility is removed from a sim session. That lack of exposure to unpredictable stressors or irregular events is what is causing the problem, not the degradation of hand-flying skills per se. As a group, we may have become weaker because our day-in, day-out lives see little randomness - until, of course, we execute a visual approach. Then our fragility begins to show. Talking TEM is good, but it must be backed up with the ability to get in a sim, find yourself in an unstable approach, and then get yourself out. The concept of mithridatism: willful exposure to a low-dose toxin in an attempt to develop an immunity.