
Taking Lessons From What Went Wrong


By William J. Broad

  • July 19, 2010

Disasters teach more than successes.

While that idea may sound paradoxical, it is widely accepted among engineers. They say grim lessons arise because the reasons for triumph in matters of technology are often arbitrary and invisible, whereas the cause of a particular failure can frequently be uncovered, documented and reworked to make improvements.

Disaster, in short, can become a spur to innovation.

There is no question that the trial-and-error process of building machines and industries has, over the centuries, resulted in the loss of much blood and many thousands of lives. It is not that failure is desirable, or that anyone hopes for or aims for a disaster. But failures, sometimes appalling, are inevitable, and given this fact, engineers say it pays to make good use of them to prevent future mistakes.

The result is that the technological feats that define the modern world are sometimes the result of events that some might wish to forget.

“It’s a great source of knowledge — and humbling, too — sometimes that’s necessary,” said Henry Petroski, a historian of engineering at Duke University and author of “Success Through Failure,” a 2006 book. “Nobody wants failures. But you also don’t want to let a good crisis go to waste.”

Now, experts say, that kind of analysis will probably improve the complex gear and procedures that companies use to drill for oil in increasingly deep waters. They say the catastrophic failure involving the Deepwater Horizon oil rig in the Gulf of Mexico on April 20 — which took 11 lives and started the worst offshore oil spill in United States history — will drive technological progress.

“The industry knows it can’t have that happen again,” said David W. Fowler, a professor at the University of Texas, Austin, who teaches a course on forensic engineering. “It’s going to make sure history doesn’t repeat itself.”

One possible lesson of the disaster is the importance of improving blowout preventers — the devices atop wells that cut off gushing oil in emergencies. The preventer on the runaway well failed. Even before the disaster, the operators of many gulf rigs had switched to more advanced preventers, strengthening this last line of defense.

Of course, an alternative to improving a particular form of technology might be to discard it altogether as too risky or too damaging.

Abandoning offshore drilling is certainly one result that some environmentalists would push for — and not only because of potential disasters like the one in the gulf. They would rather see technologies that pump carbon into the atmosphere, threatening to speed global climate change, go extinct than evolve.

In London on June 22 at the World National Oil Companies Congress, protesters from Greenpeace interrupted an official from BP, the company that drilled the runaway well. Planetary responsibility, a protester shouted before being taken away, “means stopping the push for dangerous drilling in deep waters.”

The history of technology suggests that such an end is unlikely. Devices fall out of favor, but seldom if ever get abolished by design. The explosion of the Hindenburg showed the dangers of hydrogen as a lifting gas and resulted in new emphasis on helium, which is not flammable, rather than ending the reign of rigid airships. And engineering, by definition, is a problem-solving profession. Technology analysts say that constructive impulse will probably prevail in deep ocean drilling: innovation through failure analysis will make the wells safer, whatever the merits of reducing human reliance on oil. They hold that the BP disaster, like countless others, will ultimately inspire technological advance.

The sinking of the Titanic, the meltdown of the Chernobyl reactor in 1986, the collapse of the World Trade Center — all forced engineers to address what came to be seen as deadly flaws.

“Any engineering failure has a lot of lessons,” said Gary Halada, a professor at the State University of New York at Stony Brook who teaches a course called “Learning from Disaster.”

Design engineers say that, too frequently, the nature of their profession is to fly blind.

Eric H. Brown, a British engineer who developed aircraft during World War II and afterward taught at Imperial College London, candidly described the predicament. In a 1967 book, he called structural engineering “the art of molding materials we do not really understand into shapes we cannot really analyze, so as to withstand forces we cannot really assess, in such a way that the public does not really suspect.”

Among other things, Dr. Brown taught failure analysis.

Dr. Petroski, at Duke, writing in “Success Through Failure,” noted the innovative corollary. Failures, he said, “always teach us more than the successes about the design of things. And thus the failures often lead to redesigns — to new, improved things.”

One of his favorite examples is the 1940 collapse of the Tacoma Narrows Bridge. The span, at the time the world’s third-longest suspension bridge, crossed a strait of Puget Sound near Tacoma, Wash. A few months after its opening, high winds caused the bridge to fail in a roar of twisted metal and shattered concrete. No one died. The only fatality was a black cocker spaniel named Tubby.

Dr. Petroski said the basic problem lay in false confidence. Over the decades, engineers had built increasingly long suspension bridges, with each new design more ambitious.

The longest span of the Brooklyn Bridge, which opened to traffic in 1883, was 1,595 feet. The George Washington Bridge (1931) more than doubled that distance to 3,500 feet. And the Golden Gate Bridge (1937) went even farther, stretching its middle span to 4,200 feet.

“This is where success leads to failure,” Dr. Petroski said in an interview. “You’ve got all these things working. We want to make them longer and more slender.”

The Tacoma bridge not only possessed a very long central span — 2,800 feet — but its concrete roadway consisted of just two lanes and its deck was quite shallow. The wind that day caused the insubstantial thoroughfare to undulate wildly up and down and then disintegrate. (A 16-millimeter movie camera captured the violent collapse.)

Teams of investigators studied the collapse carefully, and designers of suspension bridges took away several lessons. The main one was to make sure the road’s weight and girth were sufficient to avoid risky perturbations from high winds.

Dr. Petroski said the collapse had a direct impact on the design of the Verrazano-Narrows Bridge, which opened in 1964 to link Brooklyn and Staten Island. Its longest span was 4,260 feet — making it, at the time, the world’s longest suspension bridge and potentially a disaster-in-waiting.

To defuse the threat of high winds, the designers from the start made the roadway quite stiff and added a second deck, even though the volume of traffic was insufficient at first to warrant the lower one. The lower deck remained closed to traffic for five years, opening in 1969.

“Tacoma Narrows changed the way that suspension bridges were built,” Dr. Petroski said. “Before it happened, bridge designers didn’t take the wind seriously.”

Another example of learning from disaster centers on an oil drilling rig called Ocean Ranger. In 1982, the rig, the world’s largest, capsized and sank off Newfoundland in a fierce winter storm, killing all 84 crew members. The calamity is detailed in a 2001 book, “Inviting Disaster: Lessons from the Edge of Technology,” by James R. Chiles.

The floating rig, longer than a football field and 15 stories high, had eight hollow legs. At the bottom were giant pontoons that crewmen could fill with seawater or pump dry, raising the rig above the largest storm waves — in theory, at least.

The night the rig capsized, the sea smashed in a glass porthole in the pontoon control room, soaking its electrical panel. Investigators found that the resulting short circuits began a cascade of failures and miscalculations that resulted in the rig’s sinking.

The lessons of the tragedy included remembering to shut watertight storm hatches over glass windows, buying all crew members insulated survival suits (about $450 each at the time) and rethinking aspects of rig architecture.

“It was a terrible design,” said Dr. Halada of the State University of New York. “But they learned from it.”

Increasingly, such tragedies get studied, and not just at Stony Brook. The Stanford University Center for Professional Development offers a graduate certificate in advanced structures and failure analysis. Drexel University offers a master’s degree in forensic science with a focus on engineering.

So, too, professional engineering has produced a subspecialty that investigates disasters. One of the biggest names in the business is Exponent, a consulting company based in Menlo Park, Calif. It has a staff of 900 specialists around the globe with training in 90 engineering and scientific fields.

Exponent says its analysts deal with everything from cars and roller coasters to oil rigs and hip replacements. “We analyze failures and accidents,” the company says, “to determine their causes and to understand how to prevent them.”

Forensic engineers say it is too soon to know what happened with Deepwater Horizon, whose demise flooded the gulf with crude oil. They note that numerous federal agencies are involved in a series of detailed investigations, and that President Obama has appointed a blue-ribbon commission to make recommendations on how to strengthen federal oversight of oil rigs.

But the engineers hold, seemingly with one voice, that the investigatory findings will eventually improve the art of drilling for oil in deep waters — at least until the next unexpected tragedy, and the next lesson in making the technology safer.

One lesson might be to build blowout preventers with more than one blind shear ram. In an emergency, the massive blades of these devices slice through the drill pipe to cut off the flow of gushing oil. The Deepwater Horizon had just one, while a third of the rigs in the gulf now have two.

Perhaps regulators will decide that rig operators, whatever the cost, should install more blind shear rams on all blowout preventers.

“It’s like our personal lives,” said Dr. Fowler of the University of Texas. “Failure can force us to make hard decisions.”


Engineering Disasters: 25 of the Worst Engineering Failures on Record!

  • October 2, 2020
  • EngineeringClicks
  • There have been many engineering failures throughout recent history and further back in time
  • Engineering mistakes have caused the deaths of hundreds of thousands of people over the last 400 years
  • Lessons have been learned from these mistakes over the years; however, failures still occur

Engineering disasters. Why do they happen? Let’s look at the most famous engineering disasters and the circumstances preceding them. Engineers must study these events in order to prevent such catastrophes from recurring.

Engineering processes have vastly improved in recent times, especially in the last 100 years; however, the journey to get to where we are now was not always a smooth one. Engineering and engineers will unfortunately never reach a level where everything is perfect, and examples of disasters caused by engineering errors over the years are plentiful.

We have previously written about the 5 Engineering Bodges that Ended in Disaster; now let’s take a look at the top 25 engineering disasters on record. We included the most recent engineering disasters as well as the oldest on record, which happened almost 400 years ago! Dams, bridges and walkways collapsing; molasses pouring down the streets; nuclear stations exploding; massive ships (other than Titanic!) sinking; tons of poisonous gas leaking – all these disasters had something to do with engineering failures, unfortunately.

Engineering Disasters:

  • Banqiao Dam Failure (China, 1975)
  • Bhopal Disaster (India, 1984)
  • Chernobyl Nuclear Disaster (USSR, 1986)
  • Pennsylvanian Johnstown Flood (USA, 1889)
  • SS Sultana Disaster (USA, 1865)
  • Titanic (UK, 1912)
  • St. Francis Dam Disaster (USA, 1928)
  • 1970s DC-10 Disasters (USA, 1979)
  • Gretna Rail Disaster (UK, 1915)
  • Cleveland Gas Explosion (USA, 1944)
  • Hyatt Regency Hotel Disaster (USA, 1981)
  • Air France Concorde Disaster (France, 2000)
  • Quebec Bridge Disaster (Canada, 1907 and 1916)
  • Hindenburg Disaster (USA, 1937)
  • Vasa Disaster (Sweden, 1628)
  • Boston Molasses Disaster (USA, 1919)
  • Deepwater Horizon (USA, 2010)
  • Space Shuttle Challenger Disaster (USA, 1986)
  • Space Shuttle Columbia (USA, 2003)
  • Charles de Gaulle Airport Disaster (France, 2004)
  • Apollo 1 Disaster (USA, 1967)
  • Tacoma Bridge Disaster (USA, 1940)
  • Apollo 13 Disaster (USA, 1970)
  • Skylab Disaster (USA, 1979)
  • Fukushima Daiichi Nuclear Disaster (Japan, 2011)

1. Banqiao Dam Failure

The Banqiao Dam failure of August 1975 was the collapse of 62 dams in Henan, China, caused by Typhoon Nina. It produced the third deadliest flood in history, with estimates of lives lost ranging from 85,600 to 240,000. The flood also destroyed 6.8 million houses, leaving millions homeless in its wake. Read more about this catastrophe in our article: Engineering Disasters: Banqiao Dam Failure.

2. Bhopal Disaster

The Bhopal Gas Tragedy, also known as the Bhopal Disaster, occurred on December 2nd–3rd, 1984 at a pesticide plant in Bhopal, Madhya Pradesh, India. A highly toxic gas leak exposed over 500,000 people to MIC (methyl isocyanate) gas. The exact death toll is not known, but it has been estimated that as many as 16,000 people died, with hundreds of thousands more sustaining lasting injuries because of the leak.

Bhopal People's Health and Documentation Clinic: a picture on the wall depicting the results of the disaster.

3. Chernobyl Nuclear Disaster

The well-known Chernobyl nuclear disaster occurred on April 26th, 1986. During a late-night safety test, the power of the number four reactor at the Chernobyl Nuclear Power Plant dropped to almost zero; the attempt to restore it triggered an uncontrolled power surge in the reactor. This resulted in a powerful steam explosion, followed by a fire in the reactor core that released radioactive contamination into the air.

Exact figures for the death toll are hard to find, but it is estimated that between 9,000 and 16,000 lives have been lost across Europe due to the nuclear disaster. Read about the Chernobyl disaster in detail here: Engineering Disasters: Chernobyl in Detail.

Pripyat, the town located about 3 kilometres from the Chernobyl nuclear power station, was abandoned following the disaster. Founded in 1970 and still newly built at the time of the tragedy, the town had a population of about 47,000.

4. Pennsylvanian Johnstown Flood

The Great Flood of 1889, also known as the Johnstown Flood, occurred following the failure of the South Fork Dam, upstream of Johnstown, Pennsylvania. On May 31st, 1889, 14.55 million cubic meters of water descended on the town, killing more than 2,200 people. The trigger of this engineering failure was extremely heavy rainfall in the days prior to the flood.

5. SS Sultana Disaster

The SS Sultana was a side-wheel steamboat that sailed on the Mississippi River. It is known for the worst maritime disaster in America’s history: in 1865, three of the boat’s four boilers exploded and the vessel burned to the waterline. Although the boat’s legal capacity was 376 people, it was severely overcrowded that night, carrying 1,960 paroled prisoners, 22 guards, 70 paying passengers and 85 crew members. The exact death toll is unknown; it is estimated at 1,238.

6. Titanic

The Titanic (full name RMS Titanic) was a large British passenger liner that went down in the Atlantic Ocean on April 15th, 1912 on its maiden voyage. It is estimated that over 1,500 people lost their lives when the ship sank, solidifying its place as one of history’s deadliest marine disasters. On April 14th, it hit an iceberg about 375 miles south of Newfoundland; the hull buckled inwards, resulting in flooding that the ship could not withstand. Read about this engineering disaster in great detail in our article: Engineering Disasters: Titanic.

7. St. Francis Dam Disaster

The St. Francis Dam, located in Los Angeles County, California, was a curved concrete gravity dam. It created a large storage reservoir for Los Angeles and was a key part of the city’s water infrastructure. On March 12th, 1928, the St. Francis Dam burst and released its water, killing at least 431 people. It is known as one of the worst civil engineering catastrophes in America’s history.

8. 1970s DC-10 Disasters

American Airlines Flight 191 was just one of several crashes involving the McDonnell Douglas DC-10 aircraft during the 1970s. On May 25th, 1979, the flight crashed moments after takeoff. Everyone on board was killed, including the 258 passengers and 13 crew members; two people on the ground also died. The crash occurred because engine number one separated from the wing, severing hydraulic lines; the resulting asymmetry and erratic aerodynamics caused the plane to crash into an open field at the end of the runway. Also check out our article about the Concorde disaster.

9. Gretna Rail Disaster

On May 22nd, 1915, there was a multi-train crash near Gretna Green in Dumfriesshire, Scotland. Known as the Quintinshill rail disaster, it resulted in the deaths of over 200 people and is still the worst railway catastrophe in British history. The cause of the crash was the negligence of two signalmen, who did not account for a northbound train occupying the southbound line, leading to the collision.

10. Cleveland Gas Explosion

The East Ohio Gas Company built a gas plant in Cleveland, Ohio in 1940, the first plant of its kind in the world. It had four tanks and worked properly for several years before exploding in 1944. Vapor began to escape from tank number 4 and ignited, at first without seeming to cause any major damage. Then a second tank exploded, levelling the tank farm. The explosion travelled through the sewers and up people’s drains, killing about 130 people. Read our article about this disaster here: Engineering Disasters: East Ohio Gas Company.

11. Hyatt Regency Hotel Disaster

engineers looking at the aftermath of the Hyatt Regency walkway collapse disaster

Two walkways collapsed at the Hyatt Regency Hotel in Kansas City, Missouri on July 17th, 1981. The walkways were directly above one another and crashed down on a tea dance being held in the hotel, killing 114 people and injuring a further 216. It is known as the deadliest non-deliberate structural failure in America’s history and was caused by changes to the design of the walkways’ steel hanger rods. Read more in our article: Engineering Disasters: Hyatt Regency Hotel.

12. Air France Concorde Disaster

Air France Flight 4590 was an international flight departing from Charles de Gaulle Airport in Paris on the 25th of July, 2000, to be flown by an Aérospatiale-BAC Concorde. During take-off the aircraft ran over debris on the runway, which blew a tire and sent debris flying into the landing gear bay. The resulting fire affected the engines, and the plane crashed into a nearby hotel, killing all 109 people on board and four people in the hotel. Read more in our article: Engineering Disasters: Air France Flight 4590.

13. Quebec Bridge Disaster

The Quebec Bridge crosses the lower Saint Lawrence River between Sainte-Foy and Lévis in Quebec, Canada. The rail, pedestrian and road bridge collapsed twice, causing the deaths of 88 people, yet it remains the longest cantilever bridge in the world and took over 30 years to finish. The cause of the first collapse (1907) was that the bridge was not strong enough to hold its own weight; nine years later (1916) it collapsed again, when the bridge’s centre span fell into the river as it was being lifted into place.

14. Hindenburg Disaster

On May 6th, 1937, the German airship LZ 129 Hindenburg, full of passengers, caught fire and crashed while docking in Manchester Township, New Jersey, United States. Of the 97 people on the airship, 35 died (22 crewmen and 13 passengers), and one additional person was killed on the ground. The cause of the fire was never determined. The disaster greatly undermined the use of airships to transport passengers and led to the end of the airship era. Read more here: Engineering Disasters: LZ 129 Hindenburg.

15. Vasa Disaster

Vasa was a Swedish warship built between 1626 and 1628, famous for its largely bronze armament and for being one of the most powerful vessels of its day. The ship sank within a few minutes of setting sail because the vast weight in the upper levels of the hull made it extremely unstable. The problems with the ship were known before its departure, but impatience combined with negligence resulted in its doomed voyage, killing around 30 of those aboard.

16. Boston Molasses Disaster

The Great Molasses Flood, also referred to as the Boston Molasses Disaster, took place on January 15th, 1919. A huge storage tank filled with 2.3 million gallons of molasses burst, sending a 35 mph wave of molasses rushing through the streets. It killed 21 people and injured a further 150. The cause of the flood was attributed to thermal expansion of molasses that had just been delivered to a warmer-than-usual Boston.

17. Deepwater Horizon

Deepwater Horizon was an “ultra-deepwater” semi-submersible offshore drilling rig. It was built in 2001 and drilled the deepest oil well in history, with a vertical depth of 35,050 feet. On April 20th, 2010, an explosion occurred on the rig while it was drilling, killing 11 workers. The fire could not be extinguished, and the Horizon sank two days later, causing the largest marine oil spill in history. The fire was caused by a blowout, an uncontrolled release of crude oil or natural gas from a well.

Deepwater Horizon in flames after the explosion

18. Space Shuttle Challenger Disaster

On January 28th, 1986, the Space Shuttle Challenger broke apart 73 seconds into its flight off the coast of Cape Canaveral, Florida, killing all seven crew members. Some of the crew are thought to have survived the initial break-up, but the force with which the crew compartment hit the ocean was not survivable. The failure was caused by O-ring seals in a solid rocket booster joint that had stiffened in the cold and failed during launch. Read our article dedicated to this engineering disaster: Space Shuttle Challenger.

19. Space Shuttle Columbia

The Space Shuttle Columbia broke apart during re-entry on February 1st, 2003, killing all seven crew members. During launch, a piece of insulating foam broke off the external fuel tank and struck the leading edge of the orbiter’s left wing, damaging its thermal protection system. On re-entry, superheated gases penetrated the damaged wing and the orbiter disintegrated.

20. Charles De Gaulle Airport Disaster

The Paris Charles de Gaulle Airport is the largest international airport in France, known as the world’s tenth busiest and Europe’s second busiest. On May 23rd, 2004, a section of Terminal 2E collapsed near Gate E50, taking four lives. This happened not long after the terminal had been inaugurated, and the collapse has not been attributed to one single fault. Among the problems listed, the concrete roof was said not to have been substantial enough once openings had been cut into it.

21. Apollo 1 Disaster

Apollo 1 (also known as AS-204) was to be the very first crewed mission of the Apollo space program and its first orbital test, due to launch on February 21st, 1967. However, the spacecraft never launched: a fire broke out in the cabin during a launch rehearsal test on January 27th, 1967, killing the three crew members. The source of the fire was determined to be electrical, and it spread rapidly due to the pure oxygen atmosphere in the cabin.

22. Tacoma Bridge Disaster

On July 1st, 1940, the Tacoma Narrows Bridge in Pierce County, Washington opened to traffic; just four months later its main span failed and the bridge collapsed into the water. Aeroelastic flutter was determined to be the cause of the failure, made worse by the bridge’s solid sides, which did not allow wind to pass through. Fortunately, no human lives were lost in this disaster; the only fatality was a dog left in its owner’s car. Read more in our article dedicated exclusively to this disaster: Engineering Disasters: Tacoma Narrows Bridge.

23. Apollo 13 Disaster

The Apollo 13 mission (1970) was supposed to be the third mission to land on the Moon and was the seventh crewed mission in the Apollo space program. The Moon landing was aborted when an oxygen tank exploded two days into the mission. The spacecraft looped around the Moon, with the crew using the lunar module as a lifeboat, and returned safely to Earth, splashing down in the Pacific Ocean. The catastrophe prompted the use of non-combustible materials in spacecraft to increase crew safety.

24. Skylab Disaster

The first United States space station, Skylab, was launched by NASA and occupied by three successive astronaut crews between 1973 and 1974. NASA had hoped to boost the station’s orbit using the space shuttle, but the shuttle would not be ready until 1981; Skylab’s orbit decayed and the station re-entered the atmosphere in 1979. Mathematical errors led to pieces falling over Western Australia, but fortunately no one was injured.

25. Fukushima Daiichi Nuclear Disaster

The Fukushima Daiichi Nuclear Disaster occurred in 2011 at the Fukushima Daiichi Nuclear Power Plant. It was caused by the Tōhoku earthquake and tsunami and was the most devastating nuclear accident since the aforementioned Chernobyl disaster of 1986. Three hydrogen explosions, three nuclear meltdowns and the release of radioactive contaminants into the air were the result of the loss of reactor core cooling. At the time, no deaths or injuries were directly attributed to radiation from the disaster.

There you have it: the top 25 engineering disasters of the last 400 years. All of these mistakes have improved engineering by giving new engineers case studies and by raising safety standards across all engineering disciplines. What do you think of these engineering disasters? Would you add anything else to this list?


Engineering: Engineering Disaster Analysis Research Paper


Introduction


The Schoharie Creek Bridge was built in 1955 to carry motorists across Schoharie Creek. A design error was present from the beginning: cracks appeared in the pier plinths shortly after construction was completed in 1955. The collapse of the bridge in 1987 was the result of errors in the engineering design; a major cause was the excessive impact of scour on the plinths of the bridge, which led to vertical cracking.

The findings of this research reveal that design errors produced shallow footings whose depth was insufficient to resist scour. The paper recommends further research into how bridges can resist scour over long periods in order to prevent engineering disasters, and argues that engineering practice should change in order to enhance the credibility of the engineering profession.

Major engineering disasters, ranging from building collapses to plane crashes, have been argued to stem from errors in engineering design. The design process has produced classic cases of engineering failure: a design error can arise at any stage and can lead to disaster once the prototype is tested (Petroski, 1991). The study of engineering disasters therefore plays a large role in the engineering profession, even if laypeople may not appreciate its significance (Research paper, n.d.). Many engineering disasters are caused by errors in fundamental design principles that lead to technical faults, which inevitably lead to disaster (Hales & Gooch, 2004).

The purpose of this paper is to provide an informative account of the collapse of the Schoharie Creek Bridge.

The next section provides general background on the collapse of the Schoharie Creek Bridge and explains how the incident relates to engineering disaster.

The Schoharie Creek Bridge, located near Amsterdam, New York, had served motorists for more than three decades when catastrophe struck on April 5, 1987. The bridge collapsed, sending five vehicles into the creek and killing the ten people in them. The bridge was structurally designed to resist hydraulic forces as well as to carry traffic loads; however, the way it was designed and constructed altered the flow of the creek, leading to deposition and erosion around the bridge (Storey & Delatte, 2003).

The National Transportation Safety Board concluded that the bridge footings were relatively shallow, leaving them vulnerable to scour. The collapse of the Schoharie Creek Bridge was a typical example of engineering disaster (Mueser Rutledge Consulting Engineers, n.d.).

The Schoharie Creek Bridge was among several bridges constructed by the New York State Thruway Authority (NYSTA) in the 1950s to carry its superhighway across the State of New York. The bridge was 165 meters (540 feet) long and designed to carry motorists over Schoharie Creek. Its five spans had nominal lengths of 30.5, 33.5, 36.6, 33.5, and 30.5 meters (100, 110, 120, 110 and 100 feet), supported by concrete pier frames and by abutments at each end (Storey & Delatte, 2003).

The skeleton of the bridge consisted of 200-mm (eight-inch) steel stringers, while the superstructure comprised longitudinal main girders carrying transverse floor beams (Storey & Delatte, 2003).

The reasons for the failure of the Schoharie Creek Bridge on April 5, 1987 are discussed in the next section.

The history of the collapse begins in 1955, shortly after construction, when the bridge opened to motorists. Not long afterward, the pier plinths began to show vertical cracks, ranging in width from 3 to 5 mm (1/8 to 3/16 inch).

The cracks resulted from high tensile stresses in the concrete plinths: each plinth could not resist the bending stresses between the two columns it connected, which contributed to its brittleness. Other technical problems emerged after the bridge was completed. The road drainage was poor, the expansion bearings were found to be out of plumb, and the dry stone pavement on the west embankment lacked sufficient supporting material. All of these problems, accumulating since 1955, contributed to the collapse of the Schoharie Creek Bridge in 1987 (Storey & Delatte, 2003).

According to Swenson and Ingraffea (1990), “The primary cause of failure was scour beneath a plain concrete pier footing. However, a necessary secondary cause was unstable propagation of a single crack in the pier. Conditions for initiation of the curvilinear crack are first evaluated. It is concluded that about 28 feet of scour had to occur to initiate stable process zone formation at the point of initiation, but that at least 44 feet was required to cause unstable cracking” (p. 1).

In other words, the cracks occurred because of tensile stresses, leaving the plinth unable to resist the bending stresses between the two columns of the bridge.

The spring flood of 1987 also contributed to the collapse because the piers had already weakened; the rainfall that triggered the flood was around 150 mm (6 inches). The collapse occurred when a pier toppled, bringing down spans 3 and 4 (Storey & Delatte, 2003).

Excessive scour at pier 3 was the major contributor to the collapse. Scour is the removal of sediment from around a foundation by the erosive action of flowing water (Storey & Delatte, 2003).

The overall findings on the collapse of the Schoharie Creek Bridge are discussed in the next section.

The investigation into the collapse of the Schoharie Creek Bridge reveals a typical example of engineering disaster. The collapse was the result of lapses in the engineering design of the bridge: design errors were evident shortly after the bridge entered service. Part of the design error was the shallow footing of the bridge, whose depth was not enough to enable the structure to resist scour. The decisive factor in the collapse was this vulnerability to scour (Storey & Delatte, 2003).

Pier 3 was not strong enough, and it was founded on soil prone to erosion. At the time of design in the 1950s, the design flood was estimated at a velocity of 4.6 meters per second (15 ft/s) and a flow of 2.17 million liters per second (76,500 cfs); the Schoharie Creek Bridge was designed to resist the velocity and flow of that flood.

[Figure: Concrete pier frame of the Schoharie Creek Bridge]

However, the investigation revealed that after the bridge’s construction in 1955, “the flood was greater than that anticipated by the designers, and followed the 1955 flood and others that had disturbed the riprap. A curve in the river upstream of the bridge directed a higher-velocity flow toward pier 3. Drift material caught against the piers directed water downward at the base of pier 3. Berms built in 1963 directed floodwaters under the bridge. An embankment west of the creek channel increased flood velocities” (ASCE Forensics Congress, 2003).

Scour removed material from around the bridge foundations through three mechanisms.

First, there was long-term degradation of the channel bottom: repeated cycles of erosion and deposition gradually reworked the riverbed and opened cavities in the soil around the foundations, degrading the bridge’s support. As natural erosion narrowed the channel around the piers, the flow speeded up, lowering the bridge’s resistance to scour. The continuity equation $Q = Av$ (where $Q$ is the flow rate, $A$ the cross-sectional area, and $v$ the velocity) illustrates the effect: for a given flow rate, a smaller cross-sectional area forces a higher velocity (ASCE Forensics Congress, 2003).
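To make the relationship concrete, here is a purely illustrative calculation using the design-flood figures quoted above (a flow of 2.17 million liters per second, i.e. 2,170 m³/s, at 4.6 m/s); the constricted area of 350 m² is a hypothetical value chosen for illustration, not a figure from the investigation:

$$
A = \frac{Q}{v} = \frac{2170\ \mathrm{m^3/s}}{4.6\ \mathrm{m/s}} \approx 470\ \mathrm{m^2},
\qquad
v' = \frac{Q}{A'} = \frac{2170\ \mathrm{m^3/s}}{350\ \mathrm{m^2}} \approx 6.2\ \mathrm{m/s}.
$$

That is, if erosion constricted the flow area from about 470 m² to 350 m² at the same flow rate, the velocity past the piers would rise by roughly a third, and its erosive power with it.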

According to the National Transportation Safety Board, the water velocity was high enough to move large rocks. The turbulent flow around piers 2 and 3 was strong enough to dislodge 300-pound riprap; the turbulence and velocity of the water exceeded what the protection at piers 2 and 3 could resist (National Transportation Safety Board, 1988).

From these findings, the paper offers the following recommendations.

The evidence reveals that scour was the major cause of the collapse of the Schoharie Creek Bridge, and there were several ways the collapse could have been prevented. The onset of scour could have been resisted by founding the piers on piles, and it could also have been limited by riprap protection. It should be noted that hydraulic forces peak during powerful floods, and the velocity of a powerful flood increases the damage done by scour.

It should also be noted that, at the time of construction, the quantity of piles the design engineer specified for pier protection was not sufficient (ASCE Forensics Congress, 2003).

Essentially, bridges should be designed for hydraulic, geotechnical, and structural effects acting together, with adequate strength against each, and engineering practice should change accordingly. It should be noted “that 494 bridges failed during the years 1951 and 1988 as a result of hydraulic conditions, primarily due to scouring” (ASCE Forensics Congress, 2003).

Indeed, current engineering practice does not provide tools that adequately predict scour (ASCE Forensics Congress, 2003).

There is a need for further research on the failure of entire bridge systems. Such research should extend to the microscopic level, examining the failure of the materials used in bridge construction (Lee & Sternberg, 2008).

Nonetheless, in the wake of the collapse of the Schoharie Creek Bridge, several advances have been made to improve engineering practice.

The collapse of the Schoharie Creek Bridge revealed that engineering practice is not yet fully mature. Essentially, a competent professional engineer should be able to specify the date after which a bridge should no longer be relied upon to serve the purpose for which it was designed. The collapse also revealed the need for further research on many aspects of construction engineering.

Such research should build greater understanding of many aspects of bridge design. Further work is needed on the hydraulic aspects of bridge construction, on corrosion, and on the impact of scour on bridge structures. Design engineers need to learn from past mistakes, and expertise in bridge design is in high demand; designers must ensure that the mistakes that lead to bridge collapses are not repeated (Lee & Sternberg, 2008).

Evidence reveals that the causes of most catastrophic bridge failures can be traced to factors outside the structure of the bridge itself, such as surging river currents, earthquake accelerations, and trucks carrying more weight than the bridge can resist over the long term. The implication for engineering practice is that the whole system should be changed or amended. For example, design engineers should be aware that a bridge is designed for the long term, not for a few years. The system of licensing engineers should likewise be amended, with strict regulations to be satisfied before an individual is licensed to practice engineering (Lee & Sternberg, 2008).

This paper has investigated the collapse of the Schoharie Creek Bridge in 1987. It finds that the causes of the collapse trace back to design errors: noticeable errors in the engineering design were evident shortly after construction finished in 1955, and piers 2 and 3 were not strong enough to resist the erosion that weakened the bridge. The paper also shows that the major factor in the collapse was scour, produced by constant erosion from the flow and velocity of the water.

The paper therefore recommends further research into improving engineering practice in order to avert future bridge collapses.

This paper is intended to deepen the understanding of civil engineers, construction engineers, government, private organizations, and academic communities.

Reference List

ASCE Forensics Congress. (2003). The collapse of the Schoharie Creek Bridge. Summarized from Storey, C., & Delatte, N., Lessons from the collapse of the Schoharie Creek Bridge. Web.

Hales, C., & Gooch, S. (2004). Managing engineering design. London: Springer.

Lee, G. C., & Sternberg, E. (2008). A new system for preventing bridge collapses. Issues Online in Science and Technology. Web.

Mueser Rutledge Consulting Engineers. (n.d.). Schoharie Creek Bridge collapse. Web.

National Transportation Safety Board. (1988). In reply refer to: H-88-16 through -20. National Transportation Safety Board Recommendation. Washington, DC.

Petroski, H. (1991). Paconius and the pedestal for Apollo: A case study of error in conceptual design. Research in Engineering Design, 3(2), 123–128.

Research paper. (n.d.). Content and format. Retrieved October 21, 2009, from Engineering_disasters_research_papers_lecture.pdf.

Storey, C., & Delatte, N. (2003). Lessons from the collapse of the Schoharie Creek Bridge. Web.

Swenson, D. V., & Ingraffea, A. R. (1990). The collapse of the Schoharie Creek Bridge: A case study in concrete fracture mechanics. International Journal of Fracture, 15(1), 73–92.



The Boeing 737 MAX: Lessons for Engineering Ethics

  • Original Research/Scholarship
  • Published: 10 July 2020
  • Volume 26, pages 2957–2974 (2020)


  • Joseph Herkert
  • Jason Borenstein
  • Keith Miller


Abstract

The crash of two 737 MAX passenger aircraft in late 2018 and early 2019, and subsequent grounding of the entire fleet of 737 MAX jets, turned a global spotlight on Boeing’s practices and culture. Explanations for the crashes include: design flaws within the MAX’s new flight control software system designed to prevent stalls; internal pressure to keep pace with Boeing’s chief competitor, Airbus; Boeing’s lack of transparency about the new software; and the lack of adequate monitoring of Boeing by the FAA, especially during the certification of the MAX and following the first crash. While these and other factors have been the subject of numerous government reports and investigative journalism articles, little to date has been written on the ethical significance of the accidents, in particular the ethical responsibilities of the engineers at Boeing and the FAA involved in designing and certifying the MAX. Lessons learned from this case include the need to strengthen the voice of engineers within large organizations. There is also the need for greater involvement of professional engineering societies in ethics-related activities and for broader focus on moral courage in engineering ethics education.


Introduction

In October 2018 and March 2019, Boeing 737 MAX passenger jets crashed minutes after takeoff; these two accidents claimed nearly 350 lives. After the second incident, all 737 MAX planes were grounded worldwide. The 737 MAX was an updated version of the 737 workhorse that first began flying in the 1960s. The crashes were precipitated by a failure of an Angle of Attack (AOA) sensor and the subsequent activation of new flight control software, the Maneuvering Characteristics Augmentation System (MCAS). The MCAS software was intended to compensate for changes in the size and placement of the engines on the MAX as compared to prior versions of the 737. The existence of the software, designed to prevent a stall due to the reconfiguration of the engines, was not disclosed to pilots until after the first crash. Even after that tragic incident, pilots were not required to undergo simulation training on the 737 MAX.

In this paper, we examine several aspects of the case, including technical and other factors that led up to the crashes, especially Boeing’s design choices and organizational tensions internal to the company, and between Boeing and the U.S. Federal Aviation Administration (FAA). While the case is ongoing and, at this writing, the 737 MAX has yet to be recertified for flight, our analysis is based on numerous government reports and detailed news accounts currently available. We conclude with a discussion of specific lessons for engineers and engineering educators regarding engineering ethics.

Overview of 737 MAX History and Crashes

In December 2010, Boeing’s primary competitor Airbus announced the A320neo family of jetliners, an update of their successful A320 narrow-body aircraft. The A320neo featured larger, more fuel-efficient engines. Boeing had been planning to introduce a totally new aircraft to replace its successful, but dated, 737 line of jets; yet to remain competitive with Airbus, Boeing instead announced in August 2011 the 737 MAX family, an update of the 737NG with similar engine upgrades to the A320neo and other improvements (Gelles et al. 2019 ). The 737 MAX, which entered service in May 2017, became Boeing’s fastest-selling airliner of all time with 5000 orders from over 100 airlines worldwide (Boeing n.d. a) (See Fig.  1 for timeline of 737 MAX key events).

Fig. 1: 737 MAX timeline showing key events from 2010 to 2019.

The 737 MAX had been in operation for over a year when on October 29, 2018, Lion Air flight JT610 crashed into the Java Sea 13 minutes after takeoff from Jakarta, Indonesia; all 189 passengers and crew on board died. Monitoring from the flight data recorder recovered from the wreckage indicated that MCAS, the software specifically designed for the MAX, forced the nose of the aircraft down 26 times in 10 minutes (Gates 2018 ). In October 2019, the Final Report of Indonesia’s Lion Air Accident Investigation was issued. The Report placed some of the blame on the pilots and maintenance crews but concluded that Boeing and the FAA were primarily responsible for the crash (Republic of Indonesia 2019 ).

MCAS was not identified in the original documentation/training for 737 MAX pilots (Glanz et al. 2019 ). But after the Lion Air crash, Boeing ( 2018 ) issued a Flight Crew Operations Manual Bulletin on November 6, 2018 containing procedures for responding to flight control problems due to possible erroneous AOA inputs. The next day the FAA ( 2018a ) issued an Emergency Airworthiness Directive on the same subject; however, the FAA did not ground the 737 MAX at that time. According to published reports, these notices were the first time that airline pilots learned of the existence of MCAS (e.g., Bushey 2019 ).

On March 10, 2019, about four months after the Lion Air crash, Ethiopian Airlines Flight ET302 crashed 6 minutes after takeoff in a field 39 miles from Addis Ababa Airport. The accident caused the deaths of all 157 passengers and crew. The Preliminary Report of the Ethiopian Airlines Accident Investigation (Federal Democratic Republic of Ethiopia 2019), issued in April 2019, indicated that the pilots followed the checklist from the Boeing Flight Crew Operations Manual Bulletin posted after the Lion Air crash but could not control the plane (Ahmed et al. 2019). This was followed by an Interim Report (Federal Democratic Republic of Ethiopia 2020) issued in March 2020 that exonerated the pilots and airline, and placed blame for the accident on design flaws in the MAX (Marks and Dahir 2020). Following the second crash, the 737 MAX was grounded worldwide, with the U.S., through the FAA, being the last country to act on March 13, 2019 (Kaplan et al. 2019).

Design Choices that Led to the Crashes

As noted above, with its belief that it must keep up with its main competitor, Airbus, Boeing elected to modify the latest generation of the 737 family, the 737NG, rather than design an entirely new aircraft. Yet this raised a significant engineering challenge for Boeing. Mounting larger, more fuel-efficient engines, similar to those employed on the A320neo, on the existing 737 airframe posed a serious design problem, because the 737 family was built closer to the ground than the Airbus A320. In order to provide appropriate ground clearance, the larger engines had to be mounted higher and farther forward on the wings than previous models of the 737 (see Fig.  2 ). This significantly changed the aerodynamics of the aircraft and created the possibility of a nose-up stall under certain flight conditions (Travis 2019 ; Glanz et al. 2019 ).

Fig. 2: Boeing 737 MAX (left) compared to Boeing 737NG (right), showing the larger 737 MAX engines mounted higher and farther forward on the wing. (Image source: https://www.norebbo.com)

Boeing’s attempt to solve this problem involved incorporating MCAS as a software fix for the potential stall condition. The 737 was designed with two AOA sensors, one on each side of the aircraft. Yet Boeing decided that the 737 MAX would only use input from one of the plane’s two AOA sensors. If the single AOA sensor was triggered, MCAS would detect a dangerous nose-up condition and send a signal to the horizontal stabilizer located in the tail. Movement of the stabilizer would then force the plane’s tail up and the nose down (Travis 2019 ). In both the Lion Air and Ethiopian Air crashes, the AOA sensor malfunctioned, repeatedly activating MCAS (Gates 2018 ; Ahmed et al. 2019 ). Since the two crashes, Boeing has made adjustments to the MCAS, including that the system will rely on input from the two AOA sensors instead of just one. But still more problems with MCAS have been uncovered. For example, an indicator light that would alert pilots if the jet’s two AOA sensors disagreed, thought by Boeing to be standard on all MAX aircraft, would only operate as part of an optional equipment package that neither airline involved in the crashes purchased (Gelles and Kitroeff 2019a ).
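To make the single-sensor failure mode concrete, the logic can be caricatured in a few lines of code. This is a deliberately simplified sketch, not Boeing's flight-control software: the function name, threshold, and trim increment are all invented for illustration.

```python
# Simplified illustration only: invented names, thresholds and units,
# not Boeing's actual flight-control code.
AOA_STALL_THRESHOLD_DEG = 15.0   # hypothetical angle-of-attack trigger
TRIM_INCREMENT_DEG = 2.5         # hypothetical nose-down stabilizer step

def mcas_like_step(aoa_deg: float) -> float:
    """Return a nose-down stabilizer command based on ONE sensor.

    Because only one input is consulted, a stuck or failed sensor is
    indistinguishable from a genuine stall condition.
    """
    if aoa_deg > AOA_STALL_THRESHOLD_DEG:
        return -TRIM_INCREMENT_DEG  # command the nose down
    return 0.0

# A failed sensor stuck at 22.5 degrees re-triggers the command on every
# control cycle, mirroring the repeated activations reported in the
# Lion Air investigation.
faulty_readings = [22.5] * 5
print([mcas_like_step(r) for r in faulty_readings])
# [-2.5, -2.5, -2.5, -2.5, -2.5]
```

The point of the sketch is the absence of any cross-check: a single faulty input drives the stabilizer again and again.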

Similar to its responses to previous accidents, Boeing has been reluctant to admit to a design flaw in its aircraft, instead blaming pilot error (Hall and Goelz 2019 ). In the 737 MAX case, the company pointed to the pilots’ alleged inability to control the planes under stall conditions (Economy 2019 ). Following the Ethiopian Airlines crash, Boeing acknowledged for the first time that MCAS played a primary role in the crashes, while continuing to highlight that other factors, such as pilot error, were also involved (Hall and Goelz 2019 ). For example, on April 29, 2019, more than a month after the second crash, then Boeing CEO Dennis Muilenburg defended MCAS by stating:

We've confirmed that [the MCAS system] was designed per our standards, certified per our standards, and we're confident in that process. So, it operated according to those design and certification standards. So, we haven't seen a technical slip or gap in terms of the fundamental design and certification of the approach. (Economy 2019 )

The view that MCAS was not primarily at fault was supported within an article written by noted journalist and pilot William Langewiesche ( 2019 ). While not denying Boeing made serious mistakes, he placed ultimate blame on the use of inexperienced pilots by the two airlines involved in the crashes. Langewiesche suggested that the accidents resulted from the cost-cutting practices of the airlines and the lax regulatory environments in which they operated. He argued that more experienced pilots, despite their lack of information on MCAS, should have been able to take corrective action to control the planes using customary stall prevention procedures. Langewiesche ( 2019 ) concludes in his article that:

What we had in the two downed airplanes was a textbook failure of airmanship. In broad daylight, these pilots couldn’t decipher a variant of a simple runaway trim, and they ended up flying too fast at low altitude, neglecting to throttle back and leading their passengers over an aerodynamic edge into oblivion. They were the deciding factor here — not the MCAS, not the Max.

Others have taken a more critical view of MCAS, Boeing, and the FAA. These critics prominently include Captain Chesley “Sully” Sullenberger, who famously crash-landed an A320 in the Hudson River after bird strikes had knocked out both of the plane’s engines. Sullenberger responded directly to Langewiesche in a letter to the Editor:

… Langewiesche draws the conclusion that the pilots are primarily to blame for the fatal crashes of Lion Air 610 and Ethiopian 302. In resurrecting this age-old aviation canard, Langewiesche minimizes the fatal design flaws and certification failures that precipitated those tragedies, and still pose a threat to the flying public. I have long stated, as he does note, that pilots must be capable of absolute mastery of the aircraft and the situation at all times, a concept pilots call airmanship. Inadequate pilot training and insufficient pilot experience are problems worldwide, but they do not excuse the fatally flawed design of the Maneuvering Characteristics Augmentation System (MCAS) that was a death trap.... (Sullenberger 2019 )

Noting that he is one of the few pilots to have encountered both accident sequences in a 737 MAX simulator, Sullenberger continued:

These emergencies did not present as a classic runaway stabilizer problem, but initially as ambiguous unreliable airspeed and altitude situations, masking MCAS. The MCAS design should never have been approved, not by Boeing, and not by the Federal Aviation Administration (FAA)…. (Sullenberger 2019)

In June 2019, Sullenberger noted in congressional testimony that “These crashes are demonstrable evidence that our current system of aircraft design and certification has failed us. These accidents should never have happened” (Benning and DiFurio 2019).

Others have agreed with Sullenberger’s assessment. Software developer and pilot Gregory Travis (2019) argues that Boeing’s design for the 737 MAX violated industry norms and that the company unwisely used software to compensate for inadequacies in the hardware design. Travis also contends that the existence of MCAS was not disclosed to pilots in order to preserve the fiction that the 737 MAX was just an update of earlier 737 models, a way to circumvent the more stringent FAA certification requirements for a new airplane. Reports from government agencies support this assessment, emphasizing the chaotic cockpit conditions created by MCAS and poor certification practices. The U.S. National Transportation Safety Board (NTSB) (2019), in its September 2019 safety recommendations to the FAA, indicated that Boeing had underestimated the effect an MCAS malfunction would have on the cockpit environment (Kitroeff 2019). The FAA Joint Authorities Technical Review (2019), which included international participation, issued its final report in October 2019; the report faulted both Boeing and the FAA in the certification of MCAS (Koenig 2019).

Despite Boeing’s attempts to downplay the role of MCAS, the company began to work on a fix for the system shortly after the Lion Air crash (Gates 2019). MCAS operation will now be based on inputs from both AOA sensors, instead of just one, with a cockpit indicator light illuminating when the sensors disagree. In addition, MCAS will activate only once per AOA warning rather than multiple times, so the system will make a single attempt to prevent a given stall rather than repeatedly commanding nose-down trim. MCAS’s authority will also be limited in terms of how far it can move the stabilizer, and manual override by the pilot will always be possible (Bellamy 2019; Boeing n.d. b; Gates 2019). For over a year after the Lion Air crash, Boeing held that pilot simulator training would not be required for the redesigned MCAS system. In January 2020, Boeing relented and recommended that pilot simulator training be required when the 737 MAX returns to service (Pasztor et al. 2020).
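The shape of the revised behavior described above can be sketched in the same illustrative terms (again with hypothetical names, thresholds, and limits, not Boeing’s code): both sensors are compared, a large disagreement inhibits the system, the nose-down command is issued at most once per event and is bounded, and pilot override always takes precedence:

```python
# Illustrative sketch of the *kind* of changes described above --
# all names and numbers are hypothetical assumptions.

AOA_THRESHOLD_DEG = 15.0
DISAGREE_LIMIT_DEG = 5.5   # assumed sensor-disagreement threshold
MAX_TRIM_AUTHORITY = 2.5   # assumed cap on total MCAS trim, degrees

def mcas_revised(aoa_left, aoa_right, stabilizer_deg,
                 already_activated, pilot_override):
    """Revised logic: dual-sensor cross-check, one bounded activation
    per event, and pilot override always wins."""
    if pilot_override:
        # Manual trim input takes precedence over the automation.
        return stabilizer_deg, already_activated
    if abs(aoa_left - aoa_right) > DISAGREE_LIMIT_DEG:
        # Sensors disagree: inhibit MCAS (and light the disagree
        # indicator) instead of trusting either reading.
        return stabilizer_deg, already_activated
    if min(aoa_left, aoa_right) > AOA_THRESHOLD_DEG and not already_activated:
        stabilizer_deg -= MAX_TRIM_AUTHORITY  # single, bounded command
        already_activated = True
    return stabilizer_deg, already_activated
```

In this sketch, a single faulty sensor can no longer trigger the system at all, and even a genuine activation is limited to one bounded trim command per event.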

Boeing and the FAA

There is mounting evidence that Boeing, and the FAA as well, had warnings about the inadequacy of MCAS’s design and about the lack of communication to pilots regarding its existence and functioning. In 2015, for example, an unnamed Boeing engineer raised in an email the issue of relying on a single AOA sensor (Bellamy 2019). In 2016, Mark Forkner, Boeing’s Chief Technical Pilot, flagged the erratic behavior of MCAS in a flight simulator in an email to a colleague, noting: “It’s running rampant” (Gelles and Kitroeff 2019c). Forkner subsequently came under federal investigation over whether he misled the FAA regarding MCAS (Kitroeff and Schmidt 2020).

In December 2018, following the Lion Air crash, the FAA (2018b) conducted a risk assessment estimating that fifteen more 737 MAX crashes would occur over the expected fleet life of 45 years if the flight control issues were not addressed; this risk assessment was not publicly disclosed until Congressional hearings a year later, in December 2019 (Arnold 2019). After the two crashes, a senior Boeing engineer, Curtis Ewbank, filed an internal ethics complaint in 2019 about management’s squelching of a proposed system that might have uncovered errors in the AOA sensors. Ewbank has since publicly stated that “I was willing to stand up for safety and quality… Boeing management was more concerned with cost and schedule than safety or quality” (Kitroeff et al. 2019b).

One factor in Boeing’s apparent reluctance to heed such warnings may be the transformation of the company’s engineering and safety culture into a finance-oriented one, a shift that began with Boeing’s merger with McDonnell–Douglas in 1997 (Tkacik 2019; Useem 2019). Critical changes after the merger included replacing many of Boeing’s top managers, historically engineers, with business executives from McDonnell–Douglas, and moving the corporate headquarters to Chicago while leaving the engineering staff in Seattle (Useem 2019). According to Tkacik (2019), the new management even went so far as “maligning and marginalizing engineers as a class”.

Financial drivers thus began to place an inordinate amount of strain on Boeing employees, including engineers. During the development of the 737 MAX, production pressure to keep pace with the Airbus A320neo was ever-present. For example, Boeing management allegedly rejected any design changes that would prolong certification or require additional pilot training for the MAX (Gelles et al. 2019). As Adam Dickson, a former Boeing engineer, explained in a television documentary (BBC Panorama 2019): “There was a lot of interest and pressure on the certification and analysis engineers in particular, to look at any changes to the Max as minor changes”.

Production pressures were exacerbated by the “cozy relationship” between Boeing and the FAA (Kitroeff et al. 2019a; see also Gelles and Kaplan 2019; Hall and Goelz 2019). Beginning in 2005, the FAA increased its reliance on manufacturers to certify their own planes, and self-certification became standard practice throughout the U.S. aviation industry. By 2018, Boeing was certifying 96% of its own work (Kitroeff et al. 2019a).

The serious drawbacks of self-certification became acutely apparent in this case. Of particular concern, the safety analysis for MCAS that the FAA delegated to Boeing was flawed in at least three respects: (1) the analysis underestimated the power of MCAS to move the plane’s horizontal tail, and thus how difficult it would be for pilots to maintain control of the aircraft; (2) it did not account for the system deploying multiple times; and (3) it underestimated the risk level of an MCAS failure, thus permitting a design feature, the single AOA sensor input to MCAS, that lacked built-in redundancy (Gates 2019). Related to these concerns, the ability of MCAS to move the horizontal tail was increased during development without properly updating the safety analysis or notifying the FAA of the change (Gates 2019). In addition, the FAA did not require pilot training for MCAS or simulator training for the 737 MAX (Gelles and Kaplan 2019). Since the MAX grounding, the FAA has become more independent in its assessments and certifications; for example, it will not use Boeing personnel when certifying individual new 737 MAX planes (Josephs 2019).
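The third flaw deserves unpacking, because the assessed risk level is what determines how much redundancy a design must have. Transport-category guidance such as FAA AC 25.1309 ties required failure probabilities to the severity of the failure condition; in particular, a failure condition assessed as catastrophic must be extremely improbable and must not result from any single failure. The toy Python sketch below, whose severity labels, numbers, and mapping are a simplified teaching illustration rather than the actual certification criteria, shows how an understated severity classification can make a non-redundant, single-sensor design appear acceptable:

```python
# Simplified illustration of severity-driven design requirements,
# loosely modeled on FAA AC 25.1309-style guidance. This mapping is
# a teaching sketch, not the actual certification rule set.

REQUIREMENTS = {
    # max allowed probability per flight hour; whether a single
    # point of failure is tolerable at that severity level
    "catastrophic": {"max_prob_per_fh": 1e-9, "single_failure_ok": False},
    "hazardous":    {"max_prob_per_fh": 1e-7, "single_failure_ok": True},
    "major":        {"max_prob_per_fh": 1e-5, "single_failure_ok": True},
}

def design_permitted(assessed_severity, failure_prob_per_fh,
                     has_single_point_of_failure):
    """Return True if the design meets the requirements implied by
    the *assessed* severity (which may itself be wrong)."""
    req = REQUIREMENTS[assessed_severity]
    if failure_prob_per_fh > req["max_prob_per_fh"]:
        return False
    if has_single_point_of_failure and not req["single_failure_ok"]:
        return False
    return True

# Classified as "major", a single-AOA-sensor design passes;
# classified as "catastrophic", the same design is rejected.
print(design_permitted("major", 1e-6, True))         # True
print(design_permitted("catastrophic", 1e-6, True))  # False
```

The point of the sketch is that the severity classification does real work: if the hazard had been assessed as catastrophic, the single-sensor architecture could not have passed even a delegated analysis.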

The role of the FAA has also been subject to political scrutiny. The report of a study of the FAA certification process commissioned by Secretary of Transportation Elaine Chao (DOT 2020), released January 16, 2020, concluded that the certification process was “appropriate and effective,” and that certifying the MAX as an entirely new airplane would not have made a difference in the plane’s safety. At the same time, the report recommended a number of measures to strengthen the process and augment the FAA’s staff (Pasztor and Cameron 2020). In contrast, a report of preliminary investigative findings by the Democratic staff of the House Committee on Transportation and Infrastructure (House TI 2020), issued in March 2020, characterized the FAA’s certification of the MAX as “grossly insufficient” and criticized Boeing’s design flaws and lack of transparency with the FAA, airlines, and pilots (Duncan and Laris 2020).

Boeing has incurred significant economic losses from the crashes and the subsequent grounding of the MAX. In December 2019, Boeing CEO Dennis Muilenburg was fired, and the corporation announced that 737 MAX production would be suspended in January 2020 (Rich 2019). Boeing is facing numerous lawsuits and possible criminal investigations, and the company estimates that its economic losses for the 737 MAX will exceed $18 billion (Gelles 2020). In addition to the need to fix MCAS, other issues have arisen during recertification of the aircraft, including wiring for controls of the tail stabilizer, possible weaknesses in the engine rotors, and vulnerabilities in lightning protection for the engines (Kitroeff and Gelles 2020). The FAA had planned to flight test the 737 MAX early in 2020, with a return to service expected in summer 2020 (Gelles and Kitroeff 2020). Given the global impact of the COVID-19 pandemic and other factors, it is difficult to predict when MAX flights might resume. Uncertainty in passenger demand has also led some airlines to delay or cancel orders for the MAX (Bogaisky 2020). Even after the plane obtains flight approval, public resistance to flying in the 737 MAX will probably be considerable (Gelles 2019).

Lessons for Engineering Ethics

The 737 MAX case is still unfolding and will continue to do so for some time. Yet important lessons can already be learned (or relearned) from the case. Some of those lessons are straightforward, and others are more subtle. A key and clear lesson is that engineers may need reminders about prioritizing the public good, and more specifically, the public’s safety. A more subtle lesson pertains to the ways in which the problem of many hands may or may not apply here. Other lessons involve the need for corporations, engineering societies, and engineering educators to rise to the challenge of nurturing and supporting ethical behavior on the part of engineers, especially in light of the difficulties revealed in this case.

All contemporary codes of ethics promulgated by major engineering societies state that an engineer’s paramount responsibility is to protect the “safety, health, and welfare” of the public. The American Institute of Aeronautics and Astronautics Code of Ethics indicates that engineers must “[H]old paramount the safety, health, and welfare of the public in the performance of their duties” (AIAA 2013). The Institute of Electrical and Electronics Engineers (IEEE) Code of Ethics goes further, pledging its members “…to hold paramount the safety, health, and welfare of the public, to strive to comply with ethical design and sustainable development practices, and to disclose promptly factors that might endanger the public or the environment” (IEEE 2017). The IEEE Computer Society (CS) cooperated with the Association for Computing Machinery (ACM) in developing a Software Engineering Code of Ethics (1997), which holds that software engineers shall “Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment….” According to Gotterbarn and Miller (2009), the latter code is a useful guide when examining cases involving software design, and it underscores the fact that during design, as in all engineering practice, the well-being of the public should be the overriding concern. While engineering codes of ethics are plentiful, they differ in their sources of moral authority (i.e., organizational codes vs. professional codes), are often unenforceable through the law, and formally apply to different groups of engineers (e.g., based on discipline or organizational membership). The codes are nonetheless generally recognized as a statement of the values inherent to engineering and its ethical commitments (Davis 2015).

An engineer’s ethical responsibility does not preclude consideration of factors such as cost and schedule (Pinkus et al. 1997). Engineers always have to grapple with constraints, including time and resource limitations, and the engineers working at Boeing had legitimate concerns about their company losing contracts to its competitor Airbus. But being an engineer means that public safety and welfare must be the highest priority (Davis 1991). The aforementioned software and other design errors in the development of the 737 MAX, which resulted in hundreds of deaths, would thus seem to be clear violations of engineering codes of ethics. In addition to pointing to engineering codes, Peterson (2019) argues that Boeing engineers and managers violated widely accepted ethical norms such as informed consent and the precautionary principle.

From an engineering perspective, the central ethical issue in the MAX case arguably centers on the decision to use software (i.e., MCAS) to “mask” a questionable hardware design: the repositioning of the engines that disrupted the aerodynamics of the airframe (Travis 2019). As Johnston and Harris (2019) argue: “To meet the design goals and avoid an expensive hardware change, Boeing created the MCAS as a software Band-Aid.” Though reliance on software fixes of this kind is common, it places on the software a safety burden it may not be able to bear, as the case of the Therac-25 radiation therapy machine illustrates. In the Therac-25 case, hardware safety interlocks employed in earlier models of the machine were replaced by software safety controls. In addition, information about how the software might malfunction was missing from the machine’s user manual, so when certain types of errors appeared on its interface, the operators did not know how to respond. Software flaws, among other factors, contributed to six patients receiving massive radiation overdoses, resulting in deaths and serious injuries (Leveson and Turner 1993). A more recent example involves problems with the embedded software guiding the electronic throttle in Toyota vehicles: in 2013, “…a jury found Toyota responsible for two unintended acceleration deaths, with expert witnesses citing bugs in the software and throttle fail safe defects” (Cummings and Britton 2020).

Boeing’s use of MCAS to mask the significant change in the MAX’s hardware configuration was compounded by the failure to provide redundancy for components prone to failure (i.e., the AOA sensors) (Campbell 2019) and by the failure to notify pilots about the new software. In such cases, it is especially crucial that pilots receive clear documentation and relevant training so that they know how to manage the hand-off with an automated system properly (Johnston and Harris 2019). Part of the necessity for such training relates to trust calibration (Borenstein et al. 2020; Borenstein et al. 2018), a factor that has contributed to previous airplane accidents (e.g., Carr 2014). If pilots do not place enough trust in an automated system, they may add risk by intervening in its operation. Conversely, if pilots trust an automated system too much, they may lack sufficient time to act once they identify a problem. This was further complicated in the MAX case because pilots were not fully aware, if at all, of MCAS’s existence and how the system functioned.

In addition to engineering decision-making that failed to prioritize public safety, questionable management decisions were made at both Boeing and the FAA. As noted earlier, Boeing’s managerial leadership ignored numerous warning signs that the 737 MAX was not safe. The FAA’s shift to greater reliance on self-regulation by Boeing was likewise ill-advised, a lesson that appears to have been learned at the expense of hundreds of lives (Duncan and Aratani 2019).

The Problem of Many Hands Revisited

Actions, or inaction, by large, complex organizations, in this case corporate and government entities, suggest that the “problem of many hands” may be relevant to the 737 MAX case. At a high level of abstraction, the problem of many hands involves the idea that accountability is difficult to assign in the face of collective action, especially in a computerized society (Thompson 1980; Nissenbaum 1994). According to Nissenbaum (1996, 29), “Where a mishap is the work of ‘many hands,’ it may not be obvious who is to blame because frequently its most salient and immediate causal antecedents do not converge with its locus of decision-making. The conditions for blame, therefore, are not satisfied in a way normally satisfied when a single individual is held blameworthy for a harm”.

However, there is an alternative understanding of the problem of many hands. In this version of the problem, the lack of accountability is not merely because multiple people and multiple decisions figure into a final outcome. Instead, in order to “qualify” as the problem of many hands, the component decisions should be benign, or at least far less harmful, if examined in isolation; only when the individual decisions are collectively combined do we see the most harmful result. In this understanding, the individual decision-makers should not have the same moral culpability as they would if they made all the decisions by themselves (Noorman 2020).

Both of these understandings of the problem of many hands could shed light on the 737 MAX case. Yet we focus on the first version of the problem. We admit the possibility that some of the isolated decisions about the 737 MAX may have been made in part because of ignorance of a broader picture. While we do not stake a claim on whether this is what actually happened in the MAX case, we acknowledge that it may be true in some circumstances. However, we think the more important point is that some of the 737 MAX decisions were so clearly misguided that a competent engineer should have seen the implications, even if the engineer was not aware of all of the broader context. The problem then is to identify responsibility for the questionable decisions in a way that discourages bad judgments in the future, a task made more challenging by the complexities of the decision-making. Legal proceedings about this case are likely to explore those complexities in detail and are outside the scope of this article. But such complexities must be examined carefully so as not to act as an insulator to accountability.

When many individuals are involved in the design of a computing device, for example, and a serious failure occurs, each person might try to absolve themselves of responsibility by indicating that “too many people” and “too many decisions” were involved for any individual person to know that the problem was going to happen. This is a common, and often dubious, excuse in the attempt to abdicate responsibility for a harm. While it can have different levels of magnitude and severity, the problem of many hands often arises in large-scale ethical failures in engineering, such as the Deepwater Horizon oil spill (Thompson 2014).

Possible examples in the 737 MAX case of the difficulty of assigning moral responsibility due to the problem of many hands include:

1. The decision to reposition the engines;

2. The decision to mask the jet’s subsequent dynamic instability with MCAS;

3. The decision to rely on only one AOA sensor in designing MCAS; and

4. The decision not to inform or properly train pilots about the MCAS system.

While overall responsibility for each of these decisions may be difficult to allocate precisely, at least points 1–3 above arguably reflect fundamental errors in engineering judgment (Travis 2019). Boeing engineers and FAA engineers either participated in or were aware of these decisions (Kitroeff and Gelles 2019) and may have had opportunities to reconsider or redirect them. As Davis (2012) has noted, responsible engineering professionals make it their business to address problems even when they did not cause the problem, or, we would argue, did not solely cause it. As noted earlier, reports indicate that at least one Boeing engineer expressed reservations about the design of MCAS (Bellamy 2019). Since the two crashes, one Boeing engineer, Curtis Ewbank, has filed an internal ethics complaint (Kitroeff et al. 2019b), and several current and former Boeing engineers and other employees have gone public with various concerns about the 737 MAX (Pasztor 2019). And yet, as is often the case, the flawed design went forward with tragic results.

Enabling Ethical Engineers

The MAX case is eerily reminiscent of other well-known engineering ethics case studies, such as the Ford Pinto (Birsch and Fielder 1994), the Space Shuttle Challenger (Werhane 1991), and the GM ignition switch (Jennings and Trautman 2016). In the Pinto case, Ford engineers were aware of the unsafe placement of the fuel tank well before the car was released to the public and signed off on the design even though crash tests showed the tank was vulnerable to rupture during low-speed rear-end collisions (Baura 2006). In the case of the GM ignition switch, engineers knew for at least four years about the faulty design, a flaw that resulted in at least a dozen fatal accidents (Stephan 2016). In the case of the well-documented Challenger accident, engineer Roger Boisjoly warned his supervisors at Morton Thiokol of potentially catastrophic flaws in the shuttle’s solid rocket boosters a full six months before the accident. He, along with other engineers, unsuccessfully argued on the eve of the launch for a delay due to the effect that freezing temperatures could have on the boosters’ O-ring seals. Boisjoly was also one of a handful of engineers to describe these warnings to the Presidential commission investigating the accident (Boisjoly et al. 1989).

Returning to the 737 MAX case, could Ewbank or others with concerns about the safety of the airplane have done more than file ethics complaints or offer public testimony only after the Lion Air and Ethiopian Airlines crashes? One might argue that requiring professional registration of all engineers in the U.S. would result in more ethical conduct (for example, by giving state licensing boards greater oversight authority). Yet the well-entrenched “industry exemption” from registration for most engineers working in large corporations has undermined such calls (Kline 2001).

Boeing and other corporations could empower engineers with safety concerns by strengthening internal ethics processes, including sincere and meaningful responsiveness to anonymous complaint channels. Schwartz (2013) outlines three core components of an ethical corporate culture: strong core ethical values, a formal ethics program (including an ethics hotline), and capable ethical leadership. Schwartz points to Siemens’ creation of an ethics and compliance department following a bribery scandal as an example of a good solution. Boeing has had a compliance department for quite some time (Schnebel and Bienert 2004) and has made efforts in the past to evaluate its effectiveness (Boeing 2003). Yet it is clear that more robust responses to ethics concerns and complaints are needed. Since the MAX crashes, Boeing’s board has implemented a number of changes, including establishing a corporate safety group and revising internal reporting procedures so that lead engineers report primarily to the chief engineer rather than to business managers (Gelles and Kitroeff 2019b; Boeing n.d. c). Whether these measures will be enough to restore Boeing’s former engineering-centered focus remains to be seen.

Professional engineering societies could play a stronger role in communicating and enforcing codes of ethics, in supporting the ethical behavior of engineers, and in providing more educational opportunities for learning about ethics and the ethical responsibilities of engineers. Some societies, including the ACM and IEEE, have become increasingly engaged in ethics-related activities. Initially, ethics engagement by the societies consisted primarily of a focus on macroethical issues such as sustainable development (Herkert 2004). Recently, however, the societies have also turned to a greater focus on microethical issues (the behavior of individuals). The 2017 revision to the IEEE Code of Ethics, for example, highlights the importance of “ethical design” (Adamson and Herkert 2020). This parallels IEEE activities in the area of the design of autonomous and intelligent systems (e.g., IEEE 2018). A promising outcome of this emphasis is a move toward implementing “ethical design” frameworks (Peters et al. 2020).

In terms of engineering education, educators need to place a greater emphasis on fostering moral courage, that is, the courage to act on one’s moral convictions, including adherence to codes of ethics. This is of particular significance in large organizations such as Boeing and the FAA, where the agency of engineers may be limited by factors such as organizational culture (Watts and Buckley 2017). In a study of twenty-six ethics interventions in engineering programs, Hess and Fore (2018) found that only twenty-seven percent had a learning goal of developing “ethical courage, confidence or commitment”. This goal could be operationalized in a number of ways, for example through a focus on virtue ethics (Harris 2008) or professional identity (Hashemian and Loui 2010). The need should be addressed not only within the engineering curriculum but also during lifelong learning initiatives and other professional development opportunities (Miller 2019).

The circumstances surrounding the 737 MAX could certainly serve as an informative case study for ethics or technical courses. The case can shed light on important lessons for engineers, including the complex interactions, and sometimes tensions, between engineering and managerial considerations. It also tangibly displays how seemingly small-scale, and likely well-intended, decisions by individual engineers can combine to produce a large-scale tragedy: no individual wanted to do harm, but harm happened nonetheless. Thus, the case can serve as a reminder to current and future generations of engineers that public safety must be the first and foremost priority. A particularly useful pedagogical method for considering this case is to assign students the roles of engineers, managers, and regulators, as well as the flying public, airline personnel, and representatives of engineering societies (Herkert 1997). In addition to illuminating the perspectives and responsibilities of each stakeholder group, role-playing can shed light on the “macroethical” issues raised by the case (Martin et al. 2019), such as airline safety standards and the proper role of engineers and engineering societies in the regulation of the industry.

Conclusions and Recommendations

The case of the Boeing 737 MAX provides valuable lessons for engineers and engineering educators concerning the ethical responsibilities of the profession. Safety is not cheap, but careless engineering design in the name of minimizing costs and adhering to a delivery schedule is a symptom of ethical blight. Judged by almost any standard ethical analysis or framework, Boeing’s actions regarding the safety of the 737 MAX, particularly its decisions regarding MCAS, fall short.

Boeing failed in its obligations to protect the public. At a minimum, the company had an obligation to inform airlines and pilots of significant design changes, especially the role of MCAS in compensating for repositioning of engines in the MAX from prior versions of the 737. Clearly, it was a “significant” change because it had a direct, and unfortunately tragic, impact on the public’s safety. The Boeing and FAA interaction underscores the fact that conflicts of interest are a serious concern in regulatory actions within the airline industry.

Internal and external organizational factors may have interfered with Boeing and FAA engineers’ fulfillment of their professional ethical responsibilities; this is an all too common problem that merits serious attention from industry leaders, regulators, professional societies, and educators. The lessons to be learned in this case are not new. After large scale tragedies involving engineering decision-making, calls for change often emerge. But such lessons apparently must be retaught and relearned by each generation of engineers.

ACM/IEEE-CS Joint Task Force. (1997). Software Engineering Code of Ethics and Professional Practice, https://ethics.acm.org/code-of-ethics/software-engineering-code/ .

Adamson, G., & Herkert, J. (2020). Addressing intelligent systems and ethical design in the IEEE Code of Ethics. In Codes of ethics and ethical guidelines: Emerging technologies, changing fields . New York: Springer ( in press ).

Ahmed, H., Glanz, J., & Beech, H. (2019). Ethiopian airlines pilots followed Boeing’s safety procedures before crash, Report Shows. The New York Times, April 4, https://www.nytimes.com/2019/04/04/world/asia/ethiopia-crash-boeing.html .

AIAA. (2013). Code of Ethics, https://www.aiaa.org/about/Governance/Code-of-Ethics .

Arnold, K. (2019). FAA report predicted there could be 15 more 737 MAX crashes. The Dallas Morning News, December 11, https://www.dallasnews.com/business/airlines/2019/12/11/faa-chief-says-boeings-737-max-wont-be-approved-in-2019/

Baura, G. (2006). Engineering ethics: an industrial perspective . Amsterdam: Elsevier.


BBC News. (2019). Work on production line of Boeing 737 MAX ‘Not Adequately Funded’. July 29, https://www.bbc.com/news/business-49142761 .

Bellamy, W. (2019). Boeing CEO outlines 737 MAX MCAS software fix in congressional hearings. Aviation Today, November 2, https://www.aviationtoday.com/2019/11/02/boeing-ceo-outlines-mcas-updates-congressional-hearings/ .

Benning, T., & DiFurio, D. (2019). American Airlines Pilots Union boss prods lawmakers to solve 'Crisis of Trust' over Boeing 737 MAX. The Dallas Morning News, June 19, https://www.dallasnews.com/business/airlines/2019/06/19/american-airlines-pilots-union-boss-prods-lawmakers-to-solve-crisis-of-trust-over-boeing-737-max/ .

Birsch, D., & Fielder, J. (Eds.). (1994). The ford pinto case: A study in applied ethics, business, and technology . New York: The State University of New York Press.

Boeing. (2003). Boeing Releases Independent Reviews of Company Ethics Program. December 18, https://boeing.mediaroom.com/2003-12-18-Boeing-Releases-Independent-Reviews-of-Company-Ethics-Program .

Boeing. (2018). Flight crew operations manual bulletin for the Boeing company. November 6, https://www.avioesemusicas.com/wp-content/uploads/2018/10/TBC-19-Uncommanded-Nose-Down-Stab-Trim-Due-to-AOA.pdf .

Boeing. (n.d. a). About the Boeing 737 MAX. https://www.boeing.com/commercial/737max/ .

Boeing. (n.d. b). 737 MAX Updates. https://www.boeing.com/737-max-updates/ .

Boeing. (n.d. c). Initial actions: sharpening our focus on safety. https://www.boeing.com/737-max-updates/resources/ .

Bogaisky, J. (2020). Boeing stock plunges as coronavirus imperils quick ramp up in 737 MAX deliveries. Forbes, March 11, https://www.forbes.com/sites/jeremybogaisky/2020/03/11/boeing-coronavirus-737-max/#1b9eb8955b5a .

Boisjoly, R. P., Curtis, E. F., & Mellican, E. (1989). Roger Boisjoly and the challenger disaster: The ethical dimensions. J Bus Ethics, 8 (4), 217–230.


Borenstein, J., Mahajan, H. P., Wagner, A. R., & Howard, A. (2020). Trust and pediatric exoskeletons: A comparative study of clinician and parental perspectives. IEEE Transactions on Technology and Society , 1 (2), 83–88.

Borenstein, J., Wagner, A. R., & Howard, A. (2018). Overtrust of pediatric health-care robots: A preliminary survey of parent perspectives. IEEE Robot Autom Mag, 25 (1), 46–54.

Bushey, C. (2019). The Tough Crowd Boeing Needs to Convince. Crain’s Chicago Business, October 25, https://www.chicagobusiness.com/manufacturing/tough-crowd-boeing-needs-convince .

Campbell, D. (2019). The many human errors that brought down the Boeing 737 MAX. The Verge, May 2, https://www.theverge.com/2019/5/2/18518176/boeing-737-max-crash-problems-human-error-mcas-faa .

Carr, N. (2014). The glass cage: Automation and us . Norton.

Cummings, M. L., & Britton, D. (2020). Regulating safety-critical autonomous systems: past, present, and future perspectives. In Living with robots (pp. 119–140). Academic Press, New York.

Davis, M. (1991). Thinking like an engineer: The place of a code of ethics in the practice of a profession. Philos Publ Affairs, 20 (2), 150–167.

Davis, M. (2012). “Ain’t no one here but us social forces”: Constructing the professional responsibility of engineers. Sci Eng Ethics, 18 (1), 13–34.

Davis, M. (2015). Engineering as profession: Some methodological problems in its study. In Engineering identities, epistemologies and values (pp. 65–79). Springer, New York.

Department of Transportation (DOT). (2020). Official report of the special committee to review the Federal Aviation Administration’s Aircraft Certification Process, January 16. https://www.transportation.gov/sites/dot.gov/files/2020-01/scc-final-report.pdf .

Duncan, I., & Aratani, L. (2019). FAA flexes its authority in final stages of Boeing 737 MAX safety review. The Washington Post, November 27, https://www.washingtonpost.com/transportation/2019/11/27/faa-flexes-its-authority-final-stages-boeing-max-safety-review/ .

Duncan, I., & Laris, M. (2020). House report on 737 Max crashes faults Boeing’s ‘culture of concealment’ and labels FAA ‘grossly insufficient’. The Washington Post, March 6, https://www.washingtonpost.com/local/trafficandcommuting/house-report-on-737-max-crashes-faults-boeings-culture-of-concealment-and-labels-faa-grossly-insufficient/2020/03/06/9e336b9e-5fce-11ea-b014-4fafa866bb81_story.html .

Economy, P. (2019). Boeing CEO Puts Partial Blame on Pilots of Crashed 737 MAX Aircraft for Not 'Completely' Following Procedures. Inc., April 30, https://www.inc.com/peter-economy/boeing-ceo-puts-partial-blame-on-pilots-of-crashed-737-max-aircraft-for-not-completely-following-procedures.html .

Federal Aviation Administration (FAA). (2018a). Airworthiness directives; the Boeing company airplanes. FR Doc No: R1-2018-26365. https://rgl.faa.gov/Regulatory_and_Guidance_Library/rgad.nsf/0/fe8237743be9b8968625835b004fc051/$FILE/2018-23-51_Correction.pdf .

Federal Aviation Administration (FAA). (2018b). Quantitative Risk Assessment. https://www.documentcloud.org/documents/6573544-Risk-Assessment-for-Release-1.html#document/p1 .

Federal Aviation Administration (FAA). (2019). Joint authorities technical review: observations, findings, and recommendations. October 11, https://www.faa.gov/news/media/attachments/Final_JATR_Submittal_to_FAA_Oct_2019.pdf .

Federal Democratic Republic of Ethiopia. (2019). Aircraft accident investigation preliminary report. Report No. AI-01/19, April 4, https://leehamnews.com/wp-content/uploads/2019/04/Preliminary-Report-B737-800MAX-ET-AVJ.pdf .

Federal Democratic Republic of Ethiopia. (2020). Aircraft Accident Investigation Interim Report. Report No. AI-01/19, March 20, https://www.aib.gov.et/wp-content/uploads/2020/documents/accident/ET-302%2520%2520Interim%2520Investigation%2520%2520Report%2520March%25209%25202020.pdf .

Gates, D. (2018). Pilots struggled against Boeing's 737 MAX control system on doomed Lion Air flight. The Seattle Times, November 27, https://www.seattletimes.com/business/boeing-aerospace/black-box-data-reveals-lion-air-pilots-struggle-against-boeings-737-max-flight-control-system/ .

Gates, D. (2019). Flawed analysis, failed oversight: how Boeing, FAA Certified the Suspect 737 MAX Flight Control System. The Seattle Times, March 17, https://www.seattletimes.com/business/boeing-aerospace/failed-certification-faa-missed-safety-issues-in-the-737-max-system-implicated-in-the-lion-air-crash/ .

Gelles, D. (2019). Boeing can’t fly its 737 MAX, but it’s ready to sell its safety. The New York Times, December 24 (updated February 10, 2020), https://www.nytimes.com/2019/12/24/business/boeing-737-max-survey.html .

Gelles, D. (2020). Boeing expects 737 MAX costs will surpass $18 Billion. The New York Times, January 29, https://www.nytimes.com/2020/01/29/business/boeing-737-max-costs.html .

Gelles, D., & Kaplan, T. (2019). F.A.A. Approval of Boeing jet involved in two crashes comes under scrutiny. The New York Times, March 19, https://www.nytimes.com/2019/03/19/business/boeing-elaine-chao.html .

Gelles, D., & Kitroeff, N. (2019a). Boeing believed a 737 MAX warning light was standard. It wasn’t. The New York Times, May 5, https://www.nytimes.com/2019/05/05/business/boeing-737-max-warning-light.html .

Gelles, D., & Kitroeff, N. (2019b). Boeing board to call for safety changes after 737 MAX Crashes. The New York Times, September 15, (updated October 2), https://www.nytimes.com/2019/09/15/business/boeing-safety-737-max.html .

Gelles, D., & Kitroeff, N. (2019c). Boeing pilot complained of ‘Egregious’ issue with 737 MAX in 2016. The New York Times, October 18, https://www.nytimes.com/2019/10/18/business/boeing-flight-simulator-text-message.html .

Gelles, D., & Kitroeff, N. (2020). What needs to happen to get Boeing’s 737 MAX flying again?. The New York Times, February 10, https://www.nytimes.com/2020/02/10/business/boeing-737-max-fly-again.html .

Gelles, D., Kitroeff, N., Nicas, J., & Ruiz, R. R. (2019). Boeing was ‘Go, Go, Go’ to beat airbus with the 737 MAX. The New York Times, March 23, https://www.nytimes.com/2019/03/23/business/boeing-737-max-crash.html .

Glanz, J., Creswell, J., Kaplan, T., & Wichter, Z. (2019). After a Lion Air 737 MAX Crashed in October, Questions About the Plane Arose. The New York Times, February 3, https://www.nytimes.com/2019/02/03/world/asia/lion-air-plane-crash-pilots.html .

Gotterbarn, D., & Miller, K. W. (2009). The public is the priority: Making decisions using the software engineering code of ethics. Computer, 42 (6), 66–73.

Hall, J., & Goelz, P. (2019). The Boeing 737 MAX Crisis Is a Leadership Failure, The New York Times, July 17, https://www.nytimes.com/2019/07/17/opinion/boeing-737-max.html .

Harris, C. E. (2008). The good engineer: Giving virtue its due in engineering ethics. Science and Engineering Ethics, 14 (2), 153–164.

Hashemian, G., & Loui, M. C. (2010). Can instruction in engineering ethics change students’ feelings about professional responsibility? Science and Engineering Ethics, 16 (1), 201–215.

Herkert, J. R. (1997). Collaborative learning in engineering ethics. Science and Engineering Ethics, 3 (4), 447–462.

Herkert, J. R. (2004). Microethics, macroethics, and professional engineering societies. In Emerging technologies and ethical issues in engineering: papers from a workshop (pp. 107–114). National Academies Press, New York.

Hess, J. L., & Fore, G. (2018). A systematic literature review of US engineering ethics interventions. Science and Engineering Ethics, 24 (2), 551–583.

House Committee on Transportation and Infrastructure (House TI). (2020). The Boeing 737 MAX Aircraft: Costs, Consequences, and Lessons from its Design, Development, and Certification-Preliminary Investigative Findings, March. https://transportation.house.gov/imo/media/doc/TI%2520Preliminary%2520Investigative%2520Findings%2520Boeing%2520737%2520MAX%2520March%25202020.pdf .

IEEE. (2017). IEEE Code of Ethics. https://www.ieee.org/about/corporate/governance/p7-8.html .

IEEE. (2018). Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (version 2). https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf .

Jennings, M., & Trautman, L. J. (2016). Ethical culture and legal liability: The GM switch crisis and lessons in governance. Boston University Journal of Science and Technology Law, 22 , 187.

Johnston, P., & Harris, R. (2019). The Boeing 737 MAX Saga: Lessons for software organizations. Software Quality Professional, 21 (3), 4–12.

Josephs, L. (2019). FAA tightens grip on Boeing with plan to individually review each new 737 MAX Jetliner. CNBC, November 27, https://www.cnbc.com/2019/11/27/faa-tightens-grip-on-boeing-with-plan-to-individually-inspect-max-jets.html .

Kaplan, T., Austen, I., & Gebrekidan, S. (2019). The New York Times, March 13. https://www.nytimes.com/2019/03/13/business/canada-737-max.html .

Kitroeff, N. (2019). Boeing underestimated cockpit chaos on 737 MAX, N.T.S.B. Says. The New York Times, September 26, https://www.nytimes.com/2019/09/26/business/boeing-737-max-ntsb-mcas.html .

Kitroeff, N., & Gelles, D. (2019). Legislators call on F.A.A. to say why it overruled its experts on 737 MAX. The New York Times, November 7 (updated December 11), https://www.nytimes.com/2019/11/07/business/boeing-737-max-faa.html .

Kitroeff, N., & Gelles, D. (2020). It’s not just software: New safety risks under scrutiny on Boeing’s 737 MAX. The New York Times, January 5, https://www.nytimes.com/2020/01/05/business/boeing-737-max.html .

Kitroeff, N., & Schmidt, M. S. (2020). Federal prosecutors investigating whether Boeing pilot lied to F.A.A. The New York Times, February 21, https://www.nytimes.com/2020/02/21/business/boeing-737-max-investigation.html .

Kitroeff, N., Gelles, D., & Nicas, J. (2019a). The roots of Boeing’s 737 MAX Crisis: A regulator relaxes its oversight. The New York Times, July 27, https://www.nytimes.com/2019/07/27/business/boeing-737-max-faa.html .

Kitroeff, N., Gelles, D., & Nicas, J. (2019b). Boeing 737 MAX safety system was vetoed, Engineer Says. The New York Times, October 2, https://www.nytimes.com/2019/10/02/business/boeing-737-max-crashes.html .

Kline, R. R. (2001). Using history and sociology to teach engineering ethics. IEEE Technology and Society Magazine, 20 (4), 13–20.

Koenig, D. (2019). Boeing, FAA both faulted in certification of the 737 MAX. AP, October 11, https://apnews.com/470abf326cdb4229bdc18c8ad8caa78a .

Langewiesche, W. (2019). What really brought down the Boeing 737 MAX? The New York Times, September 18, https://www.nytimes.com/2019/09/18/magazine/boeing-737-max-crashes.html .

Leveson, N. G., & Turner, C. S. (1993). An investigation of the Therac-25 accidents. Computer, 26 (7), 18–41.

Marks, S., & Dahir, A. L. (2020). Ethiopian report on 737 MAX crash blames Boeing. The New York Times, March 9, https://www.nytimes.com/2020/03/09/world/africa/ethiopia-crash-boeing.html .

Martin, D. A., Conlon, E., & Bowe, B. (2019). The role of role-play in student awareness of the social dimension of the engineering profession. European Journal of Engineering Education, 44 (6), 882–905.

Miller, G. (2019). Toward lifelong excellence: navigating the engineering-business space. In The Engineering-Business Nexus (pp. 81–101). Springer, Cham.

National Transportation Safety Board (NTSB). (2019). Safety Recommendations Report, September 19, https://www.ntsb.gov/investigations/AccidentReports/Reports/ASR1901.pdf .

Nissenbaum, H. (1994). Computing and accountability. Communications of the ACM , January, https://dl.acm.org/doi/10.1145/175222.175228 .

Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2 (1), 25–42.

Noorman, M. (2020). Computing and moral responsibility. In Zalta, E. N. (Ed.). The Stanford Encyclopedia of Philosophy (Spring), https://plato.stanford.edu/archives/spr2020/entries/computing-responsibility .

Pasztor, A. (2019). More Whistleblower complaints emerge in Boeing 737 MAX Safety Inquiries. The Wall Street Journal, April 27, https://www.wsj.com/articles/more-whistleblower-complaints-emerge-in-boeing-737-max-safety-inquiries-11556418721 .

Pasztor, A., & Cameron, D. (2020). U.S. News: Panel Backs How FAA gave safety approval for 737 MAX. The Wall Street Journal, January 17, https://www.wsj.com/articles/panel-clears-737-maxs-safety-approval-process-at-faa-11579188086 .

Pasztor, A., Cameron, D., & Sider, A. (2020). Boeing backs MAX simulator training in reversal of stance. The Wall Street Journal, January 7, https://www.wsj.com/articles/boeing-recommends-fresh-max-simulator-training-11578423221 .

Peters, D., Vold, K., Robinson, D., & Calvo, R. A. (2020). Responsible AI—two frameworks for ethical design practice. IEEE Transactions on Technology and Society, 1 (1), 34–47.

Peterson, M. (2019). The ethical failures behind the Boeing disasters. Blog of the APA, April 8, https://blog.apaonline.org/2019/04/08/the-ethical-failures-behind-the-boeing-disasters/ .

Pinkus, R. L. B., Shuman, L. J., Hummon, N. P., & Wolfe, H. (1997). Engineering ethics: Balancing cost, schedule, and risk-lessons learned from the space shuttle. Cambridge: Cambridge University Press.

Republic of Indonesia. (2019). Final Aircraft Accident Investigation Report. KNKT.18.10.35.04, https://knkt.dephub.go.id/knkt/ntsc_aviation/baru/2018%2520-%2520035%2520-%2520PK-LQP%2520Final%2520Report.pdf .

Rich, G. (2019). Boeing 737 MAX should return in 2020 but the crisis won't be over. Investor's Business Daily, December 31, https://www.investors.com/news/boeing-737-max-service-return-2020-crisis-not-over/ .

Schnebel, E., & Bienert, M. A. (2004). Implementing ethics in business organizations. Journal of Business Ethics, 53 (1–2), 203–211.

Schwartz, M. S. (2013). Developing and sustaining an ethical corporate culture: The core elements. Business Horizons, 56 (1), 39–50.

Stephan, K. (2016). GM Ignition Switch Recall: Too Little Too Late? [Ethical Dilemmas]. IEEE Technology and Society Magazine, 35 (2), 34–35.

Sullenberger, S. (2019). My letter to the editor of New York Times Magazine, https://www.sullysullenberger.com/my-letter-to-the-editor-of-new-york-times-magazine/ .

Thompson, D. F. (1980). Moral responsibility of public officials: The problem of many hands. American Political Science Review, 74 (4), 905–916.

Thompson, D. F. (2014). Responsibility for failures of government: The problem of many hands. The American Review of Public Administration, 44 (3), 259–273.

Tkacik, M. (2019). Crash course: how Boeing’s managerial revolution created the 737 MAX Disaster. The New Republic, September 18, https://newrepublic.com/article/154944/boeing-737-max-investigation-indonesia-lion-air-ethiopian-airlines-managerial-revolution .

Travis, G. (2019). How the Boeing 737 MAX disaster looks to a software developer. IEEE Spectrum , April 18, https://spectrum.ieee.org/aerospace/aviation/how-the-boeing-737-max-disaster-looks-to-a-software-developer .

Useem, J. (2019). The long-forgotten flight that sent Boeing off course. The Atlantic, November 20, https://www.theatlantic.com/ideas/archive/2019/11/how-boeing-lost-its-bearings/602188/ .

Watts, L. L., & Buckley, M. R. (2017). A dual-processing model of moral whistleblowing in organizations. Journal of Business Ethics, 146 (3), 669–683.

Werhane, P. H. (1991). Engineers and management: The challenge of the Challenger incident. Journal of Business Ethics, 10 (8), 605–616.


Acknowledgement

The authors would like to thank the anonymous reviewers for their helpful comments.

Author information

Authors and Affiliations

North Carolina State University, Raleigh, NC, USA

Joseph Herkert

Georgia Institute of Technology, Atlanta, GA, USA

Jason Borenstein

University of Missouri – St. Louis, St. Louis, MO, USA

Keith Miller


Corresponding author

Correspondence to Joseph Herkert .



About this article

Herkert, J., Borenstein, J. & Miller, K. The Boeing 737 MAX: Lessons for Engineering Ethics. Sci Eng Ethics 26 , 2957–2974 (2020). https://doi.org/10.1007/s11948-020-00252-y


Received : 26 March 2020

Accepted : 25 June 2020

Published : 10 July 2020

Issue Date : December 2020


Keywords: engineering ethics, airline safety, engineering design, corporate culture, software engineering


Case Study: St. Francis Dam (California, 1928)

Quick Facts

Location : California, USA

Year Constructed : 1926

Type : Masonry/Concrete

Height : 205 ft.

Primary Purpose : Water Supply

Date of Incident : March 12-13, 1928

Evacuation : No

Fatalities : 432+

Property Damage : $7 Million

Description & Background

Located approximately forty miles northwest of Los Angeles, California, St. Francis Dam was a curved concrete gravity dam constructed between 1924 and 1926 to provide a storage reservoir for the Los Angeles Aqueduct system. It was only the second concrete dam among the nine dams built by the Los Angeles Bureau of Waterworks & Supply starting in 1921. While the dam’s upstream face exhibited a nearly vertical profile, the downstream side had a stair-stepped design that resulted in base and crest thicknesses of 175 and 16 feet, respectively. The main structure reached a height of 205 feet and spanned 700 feet along its curvilinear crest. The design and construction of St. Francis Dam were executed solely by the Los Angeles Bureau of Waterworks & Supply under the supervision of the organization’s chief engineer, William Mulholland. The 1928 failure of the dam, which resulted in the deaths of over 400 people, was attributed to a series of human errors and poor engineering judgment. Due to the tremendous loss of life and property damage estimated at $7 million, some consider the failure of St. Francis Dam to be the “worst American civil engineering disaster of the 20th century.”


Aerial view of St. Francis dam site after failure.

William Mulholland was a “self-taught” engineer who had achieved national recognition and admiration between 1906 and 1913, when he orchestrated the design and construction of the Los Angeles-Owens River Aqueduct, the longest water conveyance system of its time. During his time as a supervising engineer, Mulholland had also overseen the completion of numerous embankment dams. His experience in concrete dam design, however, was lacking. Prior to the design and construction of St. Francis Dam, he had participated in the design of only one other concrete gravity dam: Mulholland Dam, named in his honor, a curved concrete gravity dam of similar height constructed between 1923 and 1925. Although his experience resided primarily in embankment dams, Mulholland proposed that a concrete gravity dam would be the proper structure for the canyon terrain across which St. Francis would be built.

Multiple instances of poor judgment by Mulholland and several of his subordinates contributed significantly to the failure of St. Francis Dam. Plans for the dam were based upon those previously prepared by Mulholland for Mulholland Dam, with little regard for site-specific investigations. After these plans were finalized and construction had begun, the height of the dam was raised by ten feet on two separate occasions in order to provide the additional reservoir storage needed to sustain the growing community surrounding the dam. Although these modifications increased the dam’s height by twenty feet, no changes were made to its base width. As a result, the intended safety margin for structural stability decreased significantly. Mulholland’s team recognized this effect; however, the engineering analysis, additional materials, and extended construction time needed to properly mitigate the height increase were considered too costly to the project and to the stakeholders financially invested in the dam’s completion and operation.
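The consequence of raising the crest without widening the base can be seen in a deliberately simplified stability calculation (an idealized rectangular gravity section of unit length; our illustration, not an analysis of the actual dam). The hydrostatic overturning moment grows with the cube of the water height, while the resisting moment from self-weight does not grow at all if the base width is fixed:

```latex
% Idealized rectangular gravity section, unit length of dam.
% \gamma_w = unit weight of water, \gamma_c = unit weight of concrete,
% H = water height (roughly the dam height), B = base width.
M_o = \tfrac{1}{2}\gamma_w H^{2}\cdot\tfrac{H}{3} = \frac{\gamma_w H^{3}}{6}
  \qquad \text{(overturning moment about the toe)}
M_r = \gamma_c B H \cdot \frac{B}{2} = \frac{\gamma_c B^{2} H}{2}
  \qquad \text{(resisting moment from self-weight)}
FS_{\text{overturning}} = \frac{M_r}{M_o} = \frac{3\,\gamma_c B^{2}}{\gamma_w H^{2}}
```

With B held constant, raising the height from 185 to 205 feet multiplies this idealized factor of safety by (185/205)^2, roughly 0.81, erasing nearly a fifth of the margin before uplift or foundation weakness is even considered.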

St. Francis Dam failed at midnight on March 12–13, 1928, only twelve hours after its last inspection by Mulholland. For a considerable period leading up to that inspection, leaking cracks had been observed in the main dam and at its abutments, but these were dismissed as conditions typical of the dam type.

Investigation of the failure made clear that the proposed St. Francis Dam design had not been reviewed by any independent party, and that the dam had been designed to resist only small foundation stresses, without accommodating full uplift. It is estimated that the design exhibited a safety factor of less than one, while Mulholland claimed it had been designed with a safety factor of four. Although opinions vary, more recent and more thorough investigations attribute the ultimate failure to weakening of the left abutment foundation rock under the saturated conditions created by the reservoir, which essentially re-activated a large landslide; combined with a destabilizing uplift force on the main dam, this caused failure to initiate at the dam’s left end. In quick succession, as catastrophic failure occurred at the left end, the maximum-height section tilted and rotated, destabilizing the right end of the main dam and causing catastrophic failure there as well.
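The uplift point can be stated in the same simplified terms (again an illustrative idealization, not a reconstruction of the 1928 analyses). If pore water pressure acts across the full base, varying linearly from reservoir head at the heel to zero at the toe, the effective weight resisting sliding is sharply reduced:

```latex
% Friction-only sliding check with full uplift, unit length of dam.
% f = friction coefficient, W = self-weight, U = uplift resultant,
% P = horizontal water thrust.
U = \tfrac{1}{2}\,\gamma_w H B, \qquad
P = \tfrac{1}{2}\,\gamma_w H^{2}, \qquad
FS_{\text{sliding}} = \frac{f\,(W - U)}{P}
```

An analysis that sets U = 0, as the St. Francis design effectively did, can report a comfortable factor of safety for a structure whose true margin, with full uplift and a weakened abutment, is below one.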

In the aftermath of the failure, Mulholland took full responsibility for the accident during a hearing stating, “Don’t blame anyone else, you know you can just fasten it on me. If there was human error, I was the human” and he “only envied those who were killed.” He ended his career by stepping down as head of the City of Los Angeles Bureau of Waterworks & Supply shortly after the failure.

References:

(1) Alvi, I. A. (2013). Human Factors in Dam Failures. ASDSO Annual Conference. Providence: Association of State Dam Safety Officials.

(2) Rogers, J. D. (2006). Lessons Learned from the St. Francis Dam Failure. Geo-Strata, 6(2), 14–17.

(3) Rogers, J. D., & McMahon, D. J. (1993). Reassessment of the St. Francis Dam Failure. ASDSO Annual Conference. Kansas City: Association of State Dam Safety Officials.

(4) Rogers, J. D., & Hasselmann, K. F. (2013). The St. Francis Dam Failure: Worst American Engineering Disaster of the 20th Century. AEG Shlemon Specialty Conference: Dam Failures and Incidents. Denver: Association of Environmental and Engineering Geologists.

(5) VandenBerge, D. R., Duncan, J. M., & Brandon, T. (2011). Lessons Learned from Dam Failures. Virginia Polytechnic Institute and State University.

Lessons Learned

  • Concrete gravity dams should be evaluated to accommodate full uplift.
  • Dam failure sites offer an important opportunity for education and memorialization.
  • Dam incidents and failures can fundamentally be attributed to human factors.
  • Intervention can stop or minimize consequences of a dam failure. Warning signs should not be ignored.
  • Regular operation, maintenance, and inspection of dams is important to the early detection and prevention of dam failure.
  • Stability of the dam foundation and other geologic features must be considered during dam design.
  • The first filling of a reservoir should be planned, controlled, and monitored.
  • Timely warning and rapid public response are critical to saving lives during a dam emergency.

Additional lessons learned (not yet developed):

  • Safety should not be sacrificed for cost.

Other Resources

  • Human Factors in Dam Failures. I. A. Alvi. Technical paper published by the Association of State Dam Safety Officials.
  • Impacts of the 1928 St. Francis Dam Failure on Geology, Civil Engineering, and America. J. Rogers. Presentation at a Missouri University of Science & Technology meeting.
  • Lessons Learned from the St. Francis Dam Failure. Geo-Strata Magazine.
  • Mapping the St. Francis Dam Outburst Flood with Geographic Information Systems. Presentation on the St. Francis Dam failure.
  • Reassessment of the St. Francis Dam Failure.
  • The St. Francis Dam Failure: Worst American Engineering Disaster of the 20th Century. Presentation at the AEG Shlemon Specialty Conference.
  • The 1928 St. Francis Dam Failure and its Impacts on American Civil Engineering. Technical paper published by the American Society of Civil Engineers.
  • The Limits of Professional Autonomy: William Mulholland and the St. Francis Dam. M. Dyrud. Technical paper published by the American Society for Engineering Education.

Additional Resources not Available for Download

  • Nunis, Doyce B., Jr. (1995). The St. Francis Dam Disaster Revisited. Historical Society of Southern California and Ventura County Museum of History and Art.
  • Nuss, L. K., & Hansen, K. D. (2013). Lessons Learned from Concrete Dam Failures Since St. Francis Dam. USSD Annual Conference. Phoenix: United States Society on Dams.
  • Outland, Charles F. (2002). Man-Made Disaster: The Story of St. Francis Dam. Ventura, California: Ventura County Museum of History and Art.
