← Quora archive · Dec 09, 2011, 03:01 PM PST

Question

Why do some planes average the inputs of the two pilots' controls?

Answer

I suspect nobody on the flight control system design team thought of this scenario: two pilots communicating so badly that they had their sticks at opposite extremes. Failing to anticipate plausible scenarios in control design has happened multiple times in aerospace engineering history, and has led to many accidents. It is basically the Black Swan problem. The consequences are of course somewhat more tragic than Fail Whale scenarios in web technology. Flight control is also undergoing a major transition at the moment: it used to be entirely a control engineering subfield, but now large amounts of AI are being injected into it.

I suspect Airbus and Boeing will both try to build in mechanisms to prevent this in the future. So the question is really about the design of such a future fault-tolerance mechanism, and whether you want to go towards more human control (train 'em better, make the autopilot do much less) or more automation. I believe in the latter direction. Overall, automation increases safety. So let's explore that.

I'd say there are two parts to this future design question.
First, there is "normal" control regime operation, where the two signals are close enough that you can infer the same intention, but with some execution error. In this case, averaging is sensible: it helps smooth out noise in the control input. This is the principle behind the Autoland system, for instance: an automatic landing system, used on most large airliners, that was trained on human pilot data from thousands of landings. It now lands better than most human pilots. So averaging captures more information and smooths out noise. Think of it as wisdom of the crowds.

Second, in the abnormal regime (say |u1 - u2| / u_max > 0.3), you might infer either an intention disagreement or a breakdown of some sort in the fly-by-wire chain of components (maybe a circuit shorted out somewhere, etc.).
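
To make the two regimes concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption on my part: stick inputs normalized to a ±1 range, and the 0.3 threshold is just the number from above, not anything from a real fly-by-wire system.

```python
# Toy two-regime arbitration for dual sidestick inputs.
# All names, ranges, and thresholds are hypothetical.

def classify(u1, u2, u_max=1.0, threshold=0.3):
    """Classify a pair of stick inputs and produce a blended command."""
    divergence = abs(u1 - u2) / u_max
    if divergence <= threshold:
        # Normal regime: same inferred intention, noisy execution.
        # Averaging smooths out the noise.
        return "normal", (u1 + u2) / 2.0
    # Abnormal regime: intention disagreement or a fly-by-wire fault.
    # Don't blindly average; hand off to diagnosis and alerting.
    return "abnormal", None

print(classify(0.20, 0.30))   # ('normal', 0.25)
print(classify(0.90, -0.80))  # ('abnormal', None): sticks at opposite extremes
```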

I'll ignore the model-checking problem that must be solved to figure out which is the case (you have to traverse a fault tree, and figure out via tests and model-based reasoning whether there is a breakdown; a very hard problem, though you could check for simple things like a broken electrical contact. This is basically a Dr. House diagnostics problem and can get arbitrarily complicated; the Remote Agent experiment tried this in space on the Deep Space 1 mission, with mixed success).
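
For flavor, here is a toy version of the very simplest triage step. The signals, tolerances, and fault categories are all invented for illustration; a real system would be traversing a full fault tree, not three if-statements.

```python
# Crude model-based triage: is the divergence coming from the hardware
# or from the pilots? All signals and tolerances are hypothetical.

def diagnose(cmd_deflection, measured_deflection, sensors_ok, tol=0.05):
    """Distinguish a fly-by-wire fault from a genuine intention mismatch."""
    if not sensors_ok:
        return "sensor_fault"  # e.g., iced-over pitot tube, broken contact
    residual = abs(cmd_deflection - measured_deflection)
    if residual > tol:
        # The actuator isn't tracking the command: breakdown somewhere in
        # the fly-by-wire chain between stick and control surface.
        return "fbw_fault"
    # Hardware checks out, so the divergence is coming from the pilots.
    return "intention_mismatch"
```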

If the control software+AI decides it is an intention-mismatch problem (pilot A is trying to climb, pilot B is trying to level off or dive), there are three solutions.

Solution 1 is to signal the dissonance and let the pilots sort it out. Think loud beeping, a flashing red "Mismatched Intentions" sign, and indicators showing which control inputs are drifting far apart in intention space.

If the dissonance persists, the control software has an interesting problem: assuming it can't kill one of the pilots, it has to decide whether to let the dissonance ride, switch modes to favor one pilot or the other, or override both pilots and pick a third path.
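
A sketch of that escalation logic, continuing the toy code above: alert first, and only if the mismatch persists past a timeout, escalate to one of the arbitration solutions below. The five-second figure is invented.

```python
import time

ALERT_TIMEOUT_S = 5.0  # hypothetical: how long to let the pilots sort it out

class DissonanceMonitor:
    """Alert on an intention mismatch; escalate only if it persists."""

    def __init__(self):
        self.mismatch_since = None

    def update(self, regime):
        now = time.monotonic()
        if regime != "abnormal":
            self.mismatch_since = None  # pilots converged; stand down
            return "NORMAL"
        if self.mismatch_since is None:
            self.mismatch_since = now   # start the beeping and flashing
            return "ALERT"
        if now - self.mismatch_since < ALERT_TIMEOUT_S:
            return "ALERT"
        return "ESCALATE"  # hand off to Solution 2 or 3
```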

Solution 2 would be to support one pilot over the other.

If the instrumentation allows the autopilot to actually know what the hell is going on (in this case the pitot tube kept kicking in and out, making that hard), it should basically choose the theoretically stabilizing solution and cut out the pilot who is violating it (in this case, the stabilizing action was pushing the stick forward to gain airspeed and get out of the stall before hitting the water). If the feedback control signals are available, and conditions are benign, the problem is not too hard. You'll find the right models in any aircraft design 101 book. Unless there are extremely weird atmospheric conditions that make the data ambiguous, this will work. The autopilot will basically need to compute a confidence measure in its own decisions (kinda like Watson on Jeopardy!) and decide whether it feels confident enough to take action.
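
A hedged sketch of that decision rule: compute the stabilizing command from a (cartoon) flight model, score each pilot's input against it, and only intervene when the system's confidence in its own state estimate clears a bar. None of this is real flight dynamics; the stall model and thresholds are stand-ins.

```python
def stabilizing_command(airspeed, stall_speed):
    """Cartoon stall-recovery model: near stall, pitch down (negative
    stick) to regain airspeed; otherwise hold level flight."""
    return -0.5 if airspeed < 1.1 * stall_speed else 0.0

def pick_pilot(u1, u2, airspeed, stall_speed, state_confidence,
               min_confidence=0.8):
    """Return the input to pass through, or None to defer to the pilots."""
    if state_confidence < min_confidence:
        return None  # ambiguous data (e.g., flaky pitot): don't intervene
    target = stabilizing_command(airspeed, stall_speed)
    # Support whichever pilot is closer to the stabilizing solution.
    return u1 if abs(u1 - target) <= abs(u2 - target) else u2

# Near stall: back the pilot pushing the stick forward, not the one pulling up.
print(pick_pilot(-0.5, 0.9, airspeed=140, stall_speed=135,
                 state_confidence=0.9))  # -0.5
```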

Something like this seems to have been built into the Airbus system (it's buried in the details of the transcript), but it wasn't sophisticated enough to pick a pilot to support. It did know enough to turn itself off when it thought the pilots would know better, though.

Solution 3 is the most futuristic. The autopilot could decide that neither pilot knows what he/she is doing, override both, and attempt to get back into safe flight. This is not inconceivable (what if a bomb goes off and kills both? Or a terrorist is forcing both pilots to fly into the ground at gunpoint?). In this case, you have to go beyond intention-mismatch detection to intention inference. You have to figure out what each pilot is trying to do, and whether those intentions make sense. Remember, in some departments, the flight control system knows a hell of a lot more about what's going on. It only displays a subset of what it knows in the cockpit UI.

This is relatively easy in trim flight (the pilots are trying to get to, and stay in, level flight at 15,000 feet at so many knots). It is harder in maneuver flight, which involves moves that might look temporarily weird. So you'd need a maneuver library and the ability to recognize when a pilot is using a maneuver to get from one trim state to another. Something like this has also been done in laboratory settings, using a control architecture involving what are called maneuver automata: basically learning a maneuver playbook from human pilots, like Autoland, though this is more like learning a language.

At an abstract level, this is about as hard as a language learning problem.
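
As a cartoon of what intention inference against a maneuver library might look like: match a short window of observed stick inputs against stored templates. Real maneuver-automata work learns richer models from pilot data; the three templates below are made up for illustration.

```python
# Toy maneuver recognition: nearest-template match over a fixed window
# of stick inputs. The library entries are invented.

MANEUVER_LIBRARY = {
    "hold_trim": [0.0, 0.0, 0.0, 0.0, 0.0],
    "climb_out": [0.2, 0.4, 0.4, 0.3, 0.2],
    "descend":   [-0.2, -0.4, -0.4, -0.3, -0.2],
}

def infer_intention(observed):
    """Return the library maneuver closest to the observed inputs."""
    def distance(name):
        template = MANEUVER_LIBRARY[name]
        return sum((o - t) ** 2 for o, t in zip(observed, template))
    return min(MANEUVER_LIBRARY, key=distance)

print(infer_intention([0.1, 0.35, 0.45, 0.3, 0.15]))  # 'climb_out'
```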

If you want to abstract away from the control engineering and AI here, this is really not a two-pilot problem. It is a three-pilot problem, one of whom is artificial.

Philosophically, you either have to trust it and give it human levels of decision-making autonomy, or force it to live under human-override authority. If you choose the latter, you lose a lot of performance potential (there's some speculative theory around it. See: http://www.ribbonfarm.com/2009/0... and http://en.wikipedia.org/wiki/Jev...)

Autonomous flight control is now sophisticated enough that it can basically be considered as good as human. The humans have eyes, ears, and natural language communication, but indirect, impoverished control. The AI has a lot of sensor data, communicates via the UI, and has direct control that can respond far faster, over more actuators, and with greater accuracy (fly-by-wire can make unflyable aircraft flyable; in a sense, in high-performance military aircraft, the pilot plays a video game and the flight control system does the actual flying).

The human tragedy aside, this is a fascinating control engineering problem (my background is in aerospace control theory, in case that wasn't obvious). I'd love to work on this sort of thing, but unfortunately innovation in the space is mostly stifled by regulation at the moment. The regulatory agencies have a sort of humanistic bias and assume that anything with more automation is less safe. It is a sort of irrational, self-defeating conservatism. Some of the conservatism is very well motivated, but a lot of it is not. Takeoff checklists etc. are a great idea. Assuming that human pilots are somehow magically superior to artificial ones is not.

It's also a problem of a blame-driven culture. Human-driven systems with low autonomy for artificial systems allow you to blame somebody when things go to hell. When artificial systems have a lot of autonomy, it is harder to play the blame game. And regulatory regimes are basically about refereeing blame games.