A catastrophic automation failure may have caused the crash of Air India Flight AI 171, killing 259 people. Experts now point to a software logic flaw involving the aircraft’s FADEC and WOW systems, potentially triggering a legal and regulatory crisis for Boeing, Air India, and global aviation oversight.
On June 12, 2025, Air India Flight AI 171, a Boeing 787-8 Dreamliner registered as VT-ANB, took off from Ahmedabad bound for London. On board were 241 passengers and crew. Within a minute of takeoff, both engines suffered a catastrophic loss of thrust. The aircraft nosedived into a densely populated college campus, killing all but one person on board and 19 people on the ground. The crash has sparked international outrage and a flurry of investigations. At the heart of the storm is a new theory that suggests the tragedy may have stemmed from a logic error within the aircraft’s automation systems, specifically a miscommunication between the Weight-on-Wheels (WOW) sensor and the Full Authority Digital Engine Control (FADEC).
Though still a theory, early signs point toward a system malfunction in which the WOW sensor may have falsely indicated the aircraft was on the ground. This incorrect input could have blocked FADEC from transitioning to flight mode, causing engine thrust to drop or fuel to cut off entirely during takeoff—a critical moment for any aircraft. If proven true, this would mark one of the most alarming failures in the history of civil aviation automation.
Understanding WOW and FADEC: Software Meets Gravity
In the modern aviation landscape, where software and sensor logic are as critical as aerodynamics and fuel, systems like WOW and FADEC are central to safe operation. The WOW system, comprising pressure-sensitive switches on the landing gear, communicates whether the aircraft is airborne or grounded. This simple signal informs a cascade of vital processes, from engine mode settings and braking logic to spoiler deployment and stall warnings.
FADEC, the engine’s microprocessor-controlled brain, uses WOW inputs, among others, to manage thrust, fuel flow, and ignition automatically. Under normal conditions, it improves engine performance and reduces pilot workload. But FADEC’s full-authority nature, meaning its commands can override the pilot’s inputs, raises legal and regulatory questions, especially when it acts on incorrect data.
If FADEC received a faulty WOW input suggesting the aircraft was still on the ground, it may have overridden takeoff thrust or initiated a fuel cutoff. Such a breakdown in communication between subsystems might have left the pilots powerless as the engines failed at the worst possible time.
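To make the hypothesis concrete, the sketch below shows, in deliberately simplified pseudocode-style Python, how a controller that trusts a single air/ground input could hold an engine in ground logic after liftoff. All names and the decision structure are hypothetical illustrations of the theory described above, not Boeing’s or GE’s actual control law.

```python
from dataclasses import dataclass
from enum import Enum

class EngineMode(Enum):
    GROUND_IDLE = "ground idle"        # reduced-thrust logic for taxi/landing
    TAKEOFF_THRUST = "takeoff thrust"  # full-thrust logic for flight

@dataclass
class SensorInputs:
    wow_on_ground: bool        # Weight-on-Wheels switch: True = "on ground"
    throttle_at_takeoff: bool  # pilot has advanced the throttles for takeoff

def select_engine_mode(inputs: SensorInputs) -> EngineMode:
    """Hypothetical single-sensor logic: the controller trusts one WOW
    input to decide whether the aircraft is airborne. Note that the
    pilot's throttle position plays no role once ground logic is chosen,
    which is what 'full authority' means in practice."""
    if inputs.wow_on_ground:
        # A stuck or falsely asserted WOW signal keeps the controller in
        # ground logic even after rotation, capping or cutting thrust.
        return EngineMode.GROUND_IDLE
    return EngineMode.TAKEOFF_THRUST

# A false "on ground" reading just after liftoff yields ground idle despite
# takeoff throttle: the failure mode the theory attributes to AI 171.
print(select_engine_mode(SensorInputs(wow_on_ground=True, throttle_at_takeoff=True)))
```

The point of the sketch is not the specific code but the structure of the fault: a single erroneous boolean, trusted without cross-checking, propagates directly into a thrust decision that the crew cannot countermand.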
A Legal Fault Line: Civil Aviation and the Montreal Convention
This misfire of logic introduces a host of legal and ethical challenges. The Montreal Convention of 1999, a foundational treaty in international air law, governs liability for an “accident,” understood in the case law as an unusual or unexpected event external to the passenger. If the FADEC logic shut down thrust mid-air based on faulty WOW data, the incident would fall squarely within that definition.
Product liability law in jurisdictions like the United States also recognizes failure to provide sufficient instruction or warnings as a defect. If Boeing or engine maker General Electric failed to offer redundancy procedures or manual override options in the event of a WOW sensor error, they could face product liability claims under domestic law. Air India, as the carrier, would meanwhile face Article 21 of the Montreal Convention, which imposes strict liability on air carriers unless they can prove the damage was not due to their own negligence or was attributable solely to a third party.
FADEC and WOW systems are not theoretical constructs; they are built through engineering decisions and are subject to oversight. In 2018 and again in 2022, the FAA issued warnings about WOW-related software issues on Boeing aircraft. That history introduces the concept of foreseeability. Under U.S. tort law and aviation regulations, foreseeability of harm is a key trigger for liability. If either Boeing or Air India continued using a system vulnerable to such miscommunication, the burden may shift to them to prove compliance with airworthiness directives.
When Code Becomes Culpable
Modern aircraft make thousands of decisions per second based on sensor inputs and logical flows embedded in software. In the case of AI 171, if these automated decisions were based on flawed data, they may have become the actual cause of the crash. The Montreal Convention’s Article 21 exposes the airline to unlimited damages unless it can prove the harm was not due to its own negligence. But what happens when exercising due care includes trusting a faulty line of code?
In Sikkelee v. Precision Airmotive Corp., the U.S. Court of Appeals for the Third Circuit held that FAA type certification of an aircraft component does not shield manufacturers from product liability. This suggests that even if the FADEC logic was certified, Boeing and GE might still be held accountable under national laws.
Another concern is the absence of operator warnings. If Boeing or GE were aware of prior incidents involving similar logic faults but failed to issue service bulletins or update training manuals, the failure to inform may constitute negligence. In the digital era, where algorithms guide the aircraft’s behavior as much as human pilots, the duty to inform becomes more than a regulatory checkbox; it is a moral and legal imperative.
State Responsibility Under the Chicago Convention
The international dimension deepens with state-level obligations. The Chicago Convention of 1944 binds states to maintain airworthiness and safety standards. Article 33 obliges states to recognize other states’ airworthiness certificates only where the standards under which they were issued meet or exceed ICAO minimums. If the United States, as the State of Design, failed to enforce modifications to the FADEC-WOW interface, and India, as the State of Registry, continued operating the aircraft despite known risks, both could face accusations of breaching international obligations.
Annex 8 of the Convention outlines airworthiness standards. Paragraph 3.1.2 mandates that an aircraft must remain operable in foreseeable flight conditions. If moisture intrusion or gear compression caused a WOW malfunction, and that triggered a fuel cutoff via FADEC, then the system’s design did not meet this international benchmark.
The ICAO Council, empowered under Article 54(i) to review such matters, may be called upon to investigate whether global aviation safety mechanisms need a reboot. This goes beyond one crash; it touches the very heart of aviation regulation in an age of algorithmic control.
Algorithmic Determinism and Jurisprudence
If AI 171’s crash stemmed from a logic flaw rather than mechanical failure, a new frontier in aviation law emerges, one that demands accountability from the designers of code, not just of bolts and rivets. Software that makes decisions with life-or-death consequences must be subjected to the same scrutiny as physical components.
National and international courts may need to redefine the concepts of negligence, defect, and causation. The doctrine of res ipsa loquitur, “the thing speaks for itself,” falters when “the thing” is an invisible, silent algorithm operating in nanoseconds. Here, expert testimony, source code review, and analysis of system architecture may become central to litigation.
More importantly, legal frameworks like the Montreal Convention must evolve to address automation-induced accidents. It is not enough to presume that certified systems are safe. Regulators must require explainability, redundancy, and override pathways in AI-driven systems.
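What “redundancy and override pathways” might mean in software can be illustrated with a second hypothetical sketch: multiple air/ground channels are voted, and an explicit crew override prevents any single sensor fault from holding the engines in ground logic. Again, the function and its inputs are invented for illustration and do not describe the 787’s actual architecture.

```python
def air_ground_state(wow_sensors: list[bool], pilot_override_airborne: bool) -> str:
    """Hypothetical redundant air/ground determination: majority-vote
    several WOW channels and give the crew a path to force 'airborne'."""
    if pilot_override_airborne:
        # Explicit override pathway: a sensor fault alone can never
        # keep the engines constrained to ground logic.
        return "airborne"
    on_ground_votes = sum(wow_sensors)  # True counts as 1
    # Apply ground logic only if a majority of channels agree.
    return "on ground" if on_ground_votes > len(wow_sensors) / 2 else "airborne"

# One failed channel reporting "on ground" is outvoted by the other two.
print(air_ground_state([True, False, False], pilot_override_airborne=False))  # airborne
```

Designs of this kind are exactly what the explainability and redundancy requirements discussed above would demand regulators to verify, rather than presume, at certification.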
Human Trust in Machine Logic
Passengers boarding AI 171 placed their trust in the aircraft’s systems, assuming they were built, tested, and maintained to safeguard their lives. If this trust was betrayed by invisible software logic gone awry, then the global aviation industry must answer for it, not just with technical fixes but with courtroom accountability.
This incident, if ultimately traced to a WOW-FADEC logic failure, could lead to multi-party lawsuits involving Air India, Boeing, GE, and regulatory bodies. Under the Montreal Convention, claims can be aggregated, and third-party actions under domestic law are not restricted. The road ahead may include class actions, regulatory reforms, and international hearings.
AI 171’s crash could be a turning point in aviation history. It demands a fusion of engineering transparency, legal reform, and international cooperation. The skies are no longer ruled solely by pilots; they are increasingly governed by logic trees and sensor networks. The law must evolve, or it risks irrelevance in the face of digital determinism.
The crash of AI 171 was a tragedy. But it may also be the catalyst that forces aviation law to speak a new language—one fluent in systems theory, machine learning, and software accountability. The future of flight safety may depend not just on better machines, but on smarter laws.
