The Humanity of AI


I recently enjoyed the movie Sully. Spoiler Alert: He lands the plane in the Hudson River.

After the crash, or “forced water landing,” the movie focuses on the question of whether the landing was heroic or a failure. The actual emergency landing was impressive, and there were no fatalities. However, initial simulations indicated that a runway landing was possible at two different airports.

At a key point, Sully challenges the NTSB investigators to get serious. He says, “the simulations are not taking into account the human factor.” That’s a strange defense. If the human factor causes a plane to land in the Hudson and non-humans can land at an airport, then take the humans out of the cockpit!

Sully’s point was later clarified to be about reaction time. His assertion was that a human pilot would first diagnose the problem and attempt to restore lift before taking abortive action. In the simulations (both computer and human), the pilots turned back to the airport immediately after the bird strike.

On the actual flight, Sully and his first officer made several attempts to restore lift. By the time they confirmed the engines were indeed lost, the runway option was gone. The simulations reproduced this result once 30 seconds were added for assessment and attempted recovery.

The movie concludes that Sully is a hero, and I’m not refuting that at all. He made split-second decisions that resulted in no fatalities. But I do think he got it wrong with his humanity defense.

Many of you reading this post will someday fly in a completely pilotless plane. Self-driving cars are upon us now; self-flying planes are coming. They are conceptually easier than self-driving cars since there are fewer obstacles (bikes, pedestrians, traffic, etc.) in the sky.

At least in the movie, it appears Sully was confusing humanity with problem determination. He points out that no human pilot had trained for this situation, and that no commercial plane had lost both engines at such a low altitude over one of the most densely populated places in the world. It was his experience, not rote knowledge, that saved everyone on that plane.

This is why we don’t rely more on autopilot today. Autopilot is great at maintaining altitude and course, but it can’t properly react to the unexpected. Autopilot is old tech; newer systems rely on recent breakthroughs in AI that can adapt better to rapid change.

AI is still young and rapidly evolving, and it is not unreasonable to assume that a pilot-bot will outperform experienced human pilots in the not-too-distant future. The movie demonstrated that Sully and his crew performed their duties serially, not in parallel. Future bots will likely be able to simultaneously communicate the situation and their actions, diagnose and attempt to restore the engines, and execute emergency maneuvers. These pilot-bots are inevitable, necessary, and will become safer than humans.

There is already a shortage of pilots that’s expected to get worse. However, this will change practically overnight when the FAA approves single pilot flights accompanied by a co-pilot-bot. Then will come full automation, and the cockpit itself will disappear soon after that.

The implications are significant. Planes will be able to take off, land, and fly closer together, which increases the capacity of existing airports. Pilotless planes also effectively eliminate the threat of hijackers, at least from onboard the plane.

Sully wasn’t looking for humanity. He was looking for a reasonable response to an extraordinary situation. The initial simulations proved a runway landing was possible, but only because the simulations were unrealistic. As AI improves, we will transfer more ‘experience’ into both the bots and the simulations. The results will uncover totally new, unfathomable capabilities.

Dave Michels