decoingvibes.com
Why You Actually Want Machines Writing the Code for Your Next Flight

8 min read
by Suranjan Das
aviation · software · vibecoding · code · philosophy

Would you fly on a plane whose software systems were not written by a human being? If your knee-jerk answer to that question was “hell no”, you would not be alone. You would be misguided, but not alone.

With the explosion of AI assistants and AI coding agents, vibe-coded apps are getting hard to avoid, and the fatigue is real. Power users now jump straight to blaming AI code for any problem with an app or service.

And the broader consumer sentiment is that shipping AI-generated code is lazy and bad practice.

At the same time, the media keeps harping on how AI coding is taking away developer roles, and CEOs keep boasting that more and more of their codebase is AI-generated.

So, by all measures, for a normal person who isn’t in the weeds of aviation software, hearing that their plane’s software was not written by a human being could generate a sense of dread.

But in reality, anyone who makes a hard rule of only flying on planes that run human-written code should give up flying altogether.

Why Human-Generated Code for Aviation is a Nightmare

At the dawn of aviation, software was entirely nonexistent. Flight control was a strictly physical endeavor, relying on direct mechanical linkages consisting of networks of steel cables, pushrods, pulleys, and eventually, basic hydromechanical systems.

The “logic” of the aircraft was hardwired into its physical architecture, operating purely on analog principles, in which a pilot’s manual inputs translated directly into aerodynamic outputs.

As technology advanced, aviation underwent a radical transformation.

The introduction of “fly-by-wire” technology severed the direct mechanical link between the cockpit and the control surfaces, replacing it with electronic interfaces and digital computers.

Today, modern airliners are essentially flying data centers. To put this exponential growth into perspective, while the Apollo 11 mission relied on just 145,000 lines of code to land humans on the moon, a modern Boeing 787 Dreamliner requires approximately 14 million lines of code to operate.

Furthermore, across all interconnected systems, from critical flight controls to in-flight entertainment, an Airbus A380 utilizes well over 100 million lines of code.

At this staggering scale of complexity, purely human-generated code becomes a significant liability rather than an asset.

The sheer volume of logic required makes manual programming highly susceptible to human error, which is entirely unacceptable in an industry governed by zero-tolerance safety standards like DO-178C, the certification standard for airborne software.

Consequently, modern aerospace engineering has shifted toward model-based design and automated code generation. Instead of typing out millions of lines by hand, engineers design and test the control logic visually, relying on qualified code generators to produce the flawless, machine-level code required to keep these modern marvels in the sky.

“Vibe-Coding”, AI Assist, and Model-Based Code Generation

To understand why the “non-human” code in your cockpit is different from the “non-human” code in a buggy fitness app, we have to distinguish between automated code generation and the modern AI surge.

Model-based code generation, implemented with tools like MATLAB/Simulink or SCADE, is the bedrock of aerospace. It follows a philosophy of Model-Based Design: engineers create a high-level visual blueprint of the logic, a rigorous mathematical model, and then use a “qualified” generator to translate that model into C or C++ code.

This isn’t a computer “guessing” what the pilot needs; it is a mathematically deterministic translation that eliminates the “typo” factor of a tired human programmer while ensuring every single line can be traced back to a specific safety requirement.
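To make that concrete, here is a hand-written sketch of the style of C that a qualified generator (such as SCADE KCG or Simulink Embedded Coder) typically emits: fixed structure, no dynamic memory, and every block traceable back to a requirement. The function name, constant, and requirement tag are invented for illustration, not output from a real toolchain.

```c
/* Requirement trace (hypothetical): HLR-042 — "Commanded surface
 * deflection shall not change by more than RATE_LIMIT degrees per
 * computation step." */
#define RATE_LIMIT 0.5

/* One fixed-step execution of a rate-limiter block. Deterministic:
 * the same inputs always produce the same output, so the behavior
 * can be exhaustively verified against the model. */
double rate_limit_step(double cmd, double prev)
{
    double delta = cmd - prev;
    if (delta > RATE_LIMIT)  { delta = RATE_LIMIT;  }
    if (delta < -RATE_LIMIT) { delta = -RATE_LIMIT; }
    return prev + delta;
}
```

Note what is absent as much as what is present: no heap allocation, no unbounded loops, no hidden state — exactly the constraints that make the generated code amenable to formal verification and DO-178C traceability.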

In contrast, AI assistants, the GitHub Copilots of the world, act more like a high-octane version of predictive text.

These tools have been in the developer’s toolkit for years, helping to boilerplate code or suggest functions based on patterns learned from billions of lines of open-source data.

Here, the human remains the primary architect, using the AI as a digital research assistant to speed up the “grunt work.”

While helpful, these systems are probabilistic rather than deterministic; they offer the most likely solution, not necessarily the most correct one, requiring a human “babysitter” to verify that the generated snippet won’t accidentally crash the server.

The true source of current public fatigue is the rise of “vibe-coding.”

This is the practice of using Large Language Models to generate entire features or even whole applications via natural language prompts, without the user ever seeing the underlying logic.

It is high-speed, low-friction, and often dangerously low-context.

When a CEO brags about a codebase being “AI-generated,” they are often leaning into this “vibe,” where the goal is rapid deployment at the expense of architectural integrity. This creates “brittle” software, apps that look great on the surface but crumble the moment they encounter an edge case the LLM didn’t predict.

The confusion for the average traveler lies in the semantic collapse of the term “automated.”

When the media reports that a plane is “flying on code not written by humans,” the brain immediately lumps the rigorous, formally verified Simulink models of a Boeing jet in with the shaky, hallucination-prone output of an AI prompt.

To the passenger, they both sound like a machine is in charge, but in the world of avionics, there is a world of difference between a machine that follows a strict logical proof and one that is just “vibing.”

The Anti-AI Movement is Putting Programmers on a Pedestal for the Wrong Reasons

Let’s get one thing straight first: the awe surrounding historically hand-crafted code is completely justified.

Margaret Hamilton and her team architecting the Apollo Guidance Computer’s software, which was then literally woven into core rope memory by hand by skilled Raytheon factory workers, is a legendary feat of human intellect that deserves every bit of its reverence. But in its current backlash against AI, modern tech discourse has accidentally spun a narrative that everyday programmers are akin to Renaissance sculptors, meticulously chiseling each line of Java or Python out of a block of pure, unadulterated logic.

We need to collectively let go of the idea that “hand-crafted” code is the undisputed gold standard simply by virtue of human authorship.

The romanticized vision of a lone genius writing a bespoke application entirely from scratch is largely a myth.

If we are being completely honest, software engineers have been aggressively seeking out shortcuts since the invention of the compiler.

The average “hand-crafted” enterprise application today is less of a bespoke masterpiece and more of a digital Frankenstein’s monster, stitched together from Stack Overflow answers, forgotten GitHub repositories, and massive open-source libraries that no single developer fully comprehends.

Developers have been abstracting away the actual “writing” of code for decades. Whether it’s relying on package managers to download thousands of pre-written dependencies or using templating engines to skip the boilerplate, the overarching goal of programming has always been to write less code, not more.

This brings us to the core misunderstanding of the anti-AI movement: the actual generation of syntax, the raw typing out of logic, is not the most valuable part of software development. The true heavy lifting happens before a single line is written.

Systems architecture, structural design, and the deeply nuanced process of translating messy, contradictory human desires into logical workflows are where the real engineering happens.

This is exactly why human developers are fundamentally irreplaceable. An AI agent might be able to rapidly “vibe-code” a flashy frontend, and a deterministic engine like Simulink can flawlessly generate the mathematically verified C++ required for a Boeing rudder actuator.

But neither of these machines can sit in a room, listen to a client’s terrible idea, figure out what problem they are actually trying to solve, and architect a scalable system to fix it. Humans are infinitely better at the grand design. We always have been. The machines, whether they are probabilistic AI assistants or rigorous aerospace compilers, are just here to do the typing.

What Happens When Aviation Gets Code Wrong Though?

If the code is “flawless” and mathematically verified by machines, you might wonder why we’ve still seen catastrophic failures in modern aviation.

The elephant in the room is the Boeing 737 MAX crisis and its Maneuvering Characteristics Augmentation System (MCAS). To be clear: the MCAS tragedy wasn’t caused by a “typo” in the software. There was no missing semicolon or logic gate that flipped the wrong way. The code, in fact, performed exactly as it was designed to do.

The failure was systemic, not syntactical.

The tragedy occurred because the system architecture was fundamentally flawed, relying on a single sensor without adequate redundancy, and the corporate culture prioritized speed-to-market and cost-cutting over transparent safety protocols.

Engineers (and the management above them) made high-level design decisions to hide the system’s existence from pilots, thereby avoiding the need for expensive simulator training. The “automated” part of the plane wasn’t the villain; the villain was the human-driven design philosophy that allowed a single point of failure to override pilot input.
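The architectural gap is easy to see in code. The sketch below is illustrative only, not actual MCAS logic: the production 737 MAX carried two angle-of-attack vanes but MCAS read only one, while mid-value selection across three redundant sensors is a classic fly-by-wire pattern in which a single faulty reading can never win. The function names are invented for this example.

```c
/* Single-source architecture: one stuck or damaged vane drives
 * the entire control response. This is the flaw, not a code bug —
 * the function does exactly what it was designed to do. */
double single_sensor_aoa(double s1)
{
    return s1;
}

/* Triplex mid-value select: return the median of three redundant
 * sensors. An outlier from one failed sensor is never chosen. */
double triplex_aoa(double s1, double s2, double s3)
{
    if ((s1 >= s2 && s1 <= s3) || (s1 <= s2 && s1 >= s3)) { return s1; }
    if ((s2 >= s1 && s2 <= s3) || (s2 <= s1 && s2 >= s3)) { return s2; }
    return s3;
}
```

Both functions are “correct” in the syntactic sense; only the second is sound as an architecture. That design decision — how many sensors, how to vote, what overrides the pilot — is made by humans long before any code, generated or otherwise, exists.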

This is precisely why the dream of AI replacing the human element in complex engineering is a fever dream.

While an AI agent might eventually become proficient at generating “vibe-free” code, it possesses zero capacity for responsibility or nuance. It cannot understand the ethical weight of a design trade-off, nor can it push back against a board of directors demanding a shorter development cycle.

A machine has no “skin in the game.” In fact, it has no skin at all.

It operates within a limited context window, unable to grasp the cultural ripples of a safety oversight or the human cost of a systemic failure. We use automated compilers to do the heavy lifting of syntax because humans are bad at typing millions of lines of flawless logic.

But the true job of a software engineer isn’t typing; it’s architecture, risk mitigation, and systemic design.

You can automate code generation, but you cannot automate accountability. And until an AI learns how to take the blame for a 150-ton jet falling out of the sky, or is legally allowed to, the humans are staying in the loop.