February 3, 2018

Are Autonomous Cars Really Safer Than Human Drivers?

Most comparisons between human drivers and automated vehicles have been at best uneven—and at worst unfair

By Peter Hancock & The Conversation US



The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

Much of the push toward self-driving cars has been underwritten by the hope that they will save lives by getting involved in fewer crashes with fewer injuries and deaths than human-driven cars. But so far, most comparisons between human drivers and automated vehicles have been at best uneven, and at worst, unfair.


The statistics measuring how many crashes occur are hard to argue with: More than 90 percent of car crashes in the U.S. are thought to involve some form of driver error. Eliminating this error would, in two years, save as many people as the country lost in all of the Vietnam War.
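The arithmetic behind that claim can be checked with round figures: roughly 37,000 U.S. road deaths per year (a number cited elsewhere on this page), about 90 percent of crashes involving driver error, and about 58,220 U.S. military deaths in the Vietnam War. A back-of-the-envelope sketch, with every figure approximate:

```python
# Rough check of the "Vietnam War in two years" comparison.
# All figures are approximations, not official statistics.
annual_road_deaths = 37_000     # approx. US traffic deaths per year
driver_error_share = 0.90       # share of crashes involving driver error
vietnam_war_deaths = 58_220     # approx. US military deaths in the war

deaths_avoidable_over_two_years = annual_road_deaths * driver_error_share * 2
print(f"Avoidable over two years: {deaths_avoidable_over_two_years:,.0f}")
print(f"US Vietnam War deaths:    {vietnam_war_deaths:,}")
```

Even with generous rounding, two years of fully eliminated driver error lands on the same order as the war's total toll, which is what gives the comparison its rhetorical force.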

But to me, as a human factors researcher, that’s not enough information to properly evaluate whether automation may actually be better than humans at not crashing. Their respective crash rates can only be determined by also knowing how many non-collisions happen. For human drivers, is it one collision per billion chances to crash, or one in a trillion?

Assessing the rate at which things do not happen is extremely difficult. For example, estimating how many times you didn’t bump into someone in the hall today relates to how many people there were in the hallway and how long you were walking there. Also, people forget non-events very quickly, if we even notice them happening. To determine whether automated vehicles are safer than humans, researchers will need to establish a non-collision rate for both humans and these emerging driverless vehicles.
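One common way to put both kinds of driver on a comparable footing is to normalize crashes by exposure, i.e., miles driven. The sketch below uses entirely illustrative numbers (the crash counts and mileage totals are assumptions, not measurements) to show why a small automated-vehicle fleet yields a statistically fragile rate:

```python
# Exposure-normalized crash rates: crashes per vehicle-mile.
# Every number below is assumed for illustration, not measured data.
human_crashes = 6_000_000            # assumed annual US reported crashes
human_miles = 3_000_000_000_000      # assumed annual US vehicle-miles

av_crashes = 16                      # assumed automated-fleet crash count
av_miles = 2_000_000                 # assumed automated-fleet miles logged

human_rate = human_crashes / human_miles
av_rate = av_crashes / av_miles

print(f"Human crash rate: {human_rate:.2e} per mile")
print(f"AV crash rate:    {av_rate:.2e} per mile")
```

With only a couple of million logged miles, a handful of additional crashes would move the automated estimate substantially, whereas the human-driver rate rests on trillions of miles of exposure.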

Comparing appropriate statistics

Crash statistics for human-driven cars are compiled from all sorts of driving situations, and on all types of roads. This includes people driving through pouring rain, on dirt roads and climbing steep slopes in the snow. However, much of the data on self-driving cars’ safety comes from Western states of the U.S., often in good weather. Large amounts of the data have been recorded on unidirectional, multi-lane highways, where the most important tasks are staying in the car’s own lane and not getting too close to the vehicle ahead.

Automated cars are rather good at those kinds of tasks – but then again, so are humans. The data on fully automated systems will naturally expand to cover more roads as states allow automated vehicles to operate more widely. But it will take some time before self-driving cars can cover as many miles in a year, and in as many circumstances, as human drivers presently do.

It is true that self-driving cars don’t get tired, angry, frustrated or drunk. But neither can they yet react to uncertain and ambiguous situations with the same skill or anticipation as an attentive human driver, which suggests that perhaps the two still need to work together. Nor do purely automated vehicles possess the foresight to avoid potential peril: They largely drive from moment to moment, rather than thinking ahead to possible events literally down the road.

To an automated vision system, a bus shelter full of people might appear quite similar to an uninhabited corn field. Indeed, deciding what action to take in an emergency is difficult for humans, but drivers have sacrificed themselves for the greater good of others. An automated system’s limited understanding of the world means it will almost never evaluate a situation the same way a human would. And machines can’t be specifically programmed in advance to handle every imaginable set of events.

New tech brings new concerns

Some people may argue that the promise of simply reducing the number of injuries and deaths is enough to justify expanding the use of driverless cars. I do agree that it would be a great thing if tomorrow were the dawn of a new day when a completely driverless roadway killed or injured no one; although such an arrangement might suck more of the enjoyment from our everyday lives, especially for those who love driving.

But experience from aviation shows that as new automated systems are introduced, there is often an increase in the rate of adverse events. Though temporary, this potential uptick in the crash rate may cause concern for the general public and then politicians, lawmakers and even manufacturers – who might be discouraged from sticking with the new technology.

As a result, comparisons between humans and automated vehicles have to be performed carefully. This is particularly true because human-controlled vehicles are likely to remain on the roads for many years and even decades to come. How will people and driverless cars mix together, and who will be at fault for any collisions between them?

To fairly evaluate driverless cars on how well they fulfill their promise of improved safety, it’s important to ensure the data being presented actually provide a true comparison. Choosing to replace humans with automation has more effects than simply a one-for-one swap. It’s important to make those decisions mindfully.

This article was originally published on The Conversation. Read the original article.

Argumentative Essay On Self-Driving Cars


Self-driving cars are still in the early stages of development, but they have the potential to revolutionize transportation. They could reduce accidents, relieve congestion, and provide new mobility options for seniors and people with disabilities. But there are also concerns about safety and privacy.

Expertise Final Project: Over the past ten years, anyone old enough to be aware of their surroundings knows how drastically technology has changed. New and greatly improved ways of communicating, entertainment, and transportation have been introduced, and they’ve been introduced at increasing and astonishing rates. Transportation is used daily by the majority of people worldwide. In urban settings, commuters find it difficult to drive and get where they need to go when roadways are congested and their commutes are long.

Frustration, fatigue and anger derive from this difficulty. While travelers experience this on the road, their ability to drive safely is compromised for themselves and others, and these are often the causes of most accidents. Simple mistakes caused by this are often inevitable and could be prevented if daily travelers didn’t have to worry about being the ones controlling the vehicle. The world today relies on technology to do most things for itself, and a car that could drive itself would significantly assist people universally.

Recent testing done by Google in the development of autonomous cars has drawn global attention to the dramatic change that an advancement such as self-driving mediums would bring into society if introduced within the coming years. Self-driving cars are a safe, modern, and updated way of transportation that will benefit people worldwide in the near future. Bus and taxi services will become simplified and obtainable for pedestrians who need quick transportation.

Should Self-Driving Cars Be Legal


There are also concerns about legal liability. If a self-driving car gets into an accident, who is responsible? The driver? The car manufacturer? The software company? This is still a relatively new technology, and the laws have not caught up yet.

Self-driving cars also raise ethical questions. For example, what should the car do if it gets into an accident? Should it try to save the lives of the passengers, even if that means sacrificing the lives of pedestrians? These are tough questions that need to be considered before self-driving cars become more widespread.

Overall, there are many factors to consider when it comes to self-driving cars. Safety, legal liability, and ethics are just a few of the issues that need to be addressed. Self-driving cars have the potential to revolutionize transportation, but there are still many hurdles to overcome before they can be fully accepted by society.

Self-driving cars are becoming increasingly prevalent on roads across the globe. But, should they be legal? Some experts say yes, as they can help to reduce accidents and improve traffic flow. Others believe that self-driving cars are too dangerous and unpredictable to be allowed on public roads.

This travel method would be quick, safe, and reliable. Self-driving cars will be useful for society in the commute of passengers, although they should have limited usage on the roads today. Annually, an estimated more than 37,000 people are killed in the US due to traffic-related accidents. 93-95% of these accidents are due to simple human error (Peterson, Peters). Whether it was a mistake that could’ve been prevented or one that was unavoidable, humans are unfortunately flawed in numerous ways while behind the wheel.

Most commonly today, the biggest preventable causes of fatal accidents are drunk driving and distraction with technology. Although both are illegal, our country is still faced yearly with frequent deaths from correctable causes. It’s nearly impossible to ensure that humans behind the wheel remain free of distractions, but the progress of Google’s self-driving software makes it possible to program robots to do this. It’s clear that cars are one of the best and worst things ever invented.

Counter Argument For Self-Driving Cars

Self-driving cars are becoming increasingly popular, but there are still many who are skeptical of them. Some people argue that self-driving cars are not safe, and that we should not be trusting them with our lives. Here is a counter argument to that claim.

Self-driving cars have been tested extensively and have proven to be much safer than human-driven cars. In fact, studies have shown that self-driving cars are far less likely to get into accidents than human-driven cars. Self-driving cars also have the potential to reduce traffic congestion and save lives.

Critics of self-driving cars often argue that we should not be trusting them with our lives. However, it is important to remember that human drivers are responsible for the majority of accidents on our roads. In fact, studies have shown that human error is responsible for 94% of all car accidents. Self-driving cars have the potential to drastically reduce the number of accidents on our roads, and save lives in the process.

There are still many skeptics of self-driving cars, but it is important to remember that they have the potential to make our roads much safer. Self-driving cars have been tested extensively and have proven to be much safer than human-driven cars. In addition, self-driving cars have the potential to reduce traffic congestion and save lives. We should not be afraid to trust self-driving cars with our lives, as they have the potential to make our roads much safer.

Cars, more so than most inventions, have been best for a great number of reasons, although they have caused an unreal number of fatalities and accidents that have seriously injured people who may or may not have been at fault. The number of people who die in car-related accidents is equal to 737 jet planes crashing weekly (Peterson, Peters). The general population is aware that human drivers aren’t always suitable for operating vehicles, but self-driving technology would make it a safer option with the ability to transport commuters on a daily basis.

The driverless car is one of the most promising new technologies of our time. The potential for these vehicles to transform the way we live and work is staggering. But as with any new technology, there are also concerns about safety and security. In this essay, we will explore the pros and cons of driverless cars .

On the plus side, driverless cars have the potential to make our roads much safer. By removing human error from the equation, driverless cars could dramatically reduce the number of accidents on our roads. They could also help to ease congestion, as they can communicate with each other to optimize routes and avoid traffic jams.

On the downside, driverless cars could pose a threat to people’s privacy. If data from driverless cars is collected and shared, it could be used to track people’s movements and even spy on them. There are also concerns that driverless cars could be hacked, and used for malicious purposes.

Overall, driverless cars hold great promise. But as with any new technology, there are also some risks that need to be considered.

Since autonomous cars were initially introduced into testing in 2009, there have been only 16 very minor accidents. In each of these accidents, the other human drivers were at fault (Richtel, Dougherty). Therefore, the only unsafe factor around autonomous cars is humans themselves. With statistics and testing results in mind, self-driving cars are being developed to be an exceptionally safe traveling method. As people age into their senior years of life, they lose the ability to attentively operate a vehicle. The awareness of the details of their surroundings that is necessary to drive is lost.

Most elders never wish to stop driving. The same concept goes for those with disabilities they’re born with, blindness, and, even more tragically, for those who have suddenly lost the ability to drive at a younger age. The freedom and capability to access transportation easily on our own should never have to end. Drivers will simply have the ability to type in or speak their destination into their cars and let the car do the work (Dallegro). Introducing self-driving cars into society today will benefit everybody, especially the impaired.

Google is developing self-driving vehicles to operate without the help of a human through the exact accuracy of mapping software and sensors surrounding the car (Sage). The prototype uses Light Detection and Ranging (Lidar) for 3D mapping of the car’s surroundings, and four radars around it to detect the speeds of others. It includes high-powered cameras that allow it to see precisely around the car within a range of 30 meters, sonar for sound-related detection, specific positioning, and other state-of-the-art software (Clark).
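The sensor suite described in this paragraph can be summarized in a small data structure. The field names and layout below are purely illustrative (nothing here reflects Google's actual software); the counts and ranges follow the essay's own description:

```python
# Illustrative summary of the sensor suite described in the text.
# Structure and names are hypothetical, for explanation only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sensor:
    name: str
    purpose: str
    count: int = 1
    range_m: Optional[float] = None  # None where the text gives no range

suite = [
    Sensor("lidar", "Lidar for 3D mapping of the car's surroundings"),
    Sensor("radar", "detecting the speeds of surrounding vehicles", count=4),
    Sensor("camera", "high-powered vision around the car", range_m=30.0),
    Sensor("sonar", "sound-based detection at close range"),
]

for s in suite:
    extra = f", range {s.range_m:.0f} m" if s.range_m is not None else ""
    print(f"{s.count}x {s.name}: {s.purpose}{extra}")
```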

As of 2014, over 2,000 of the four million miles of roads in the world had already been mapped out for the self-driving cars (Madrigal). A year later, the Google self-driving cars had logged 70,000 miles during their test driving (Clark). Boris Sofman, quoted in The Atlantic, says, “We are able to turn the physical world into a virtual world”. Rather than having the software be simple mapping for the self-driving cars, the programming the scientists are creating is precise enough to know how high a traffic light is off the ground or how many inches high a curb on the side of the road is (Madrigal).

As a result, the autonomous vehicle is able to accurately detect its surroundings and perform accordingly. China, globally known as one of the most thriving countries in production and growth, plans on having self-driving technology in transportation methods on its roads within the next two years (Walker). The Chinese will likely have fully functioning features of the self-driving software on the road before the US, but we’re expected to follow closely. Companies such as Baidu and Yutong, located in China, have done numerous public-transportation demonstrations of the notion.

The culture and government there are currently more open to the idea than they are in America. It’s most probable that China will see the features first used in public transit, taking the place of bus and taxi services (Walker). In large cities of the region, the majority of individuals who own a motor vehicle only use it for the commute to work. With plans for self-driving technology on roadways, it’s anticipated that a large portion of the population will not feel the need to own a car, making fuel and vehicle costs for individuals decrease significantly.

Furthermore, economic conditions will be improved drastically by allowing the population to travel in a conveniently practical, simplified, and fuel-efficient way. Following this, environmental conditions will also remarkably improve. Opposing opinions on self-driving vehicles are argued for appropriate and understandable rationale. Driving a vehicle as a human gives us a sense of freedom in having the ability to drive at our own speed and rate, and to go the routes we choose to take.

Entertainment with driving and operating different vehicles has been popular since cars were first invented. In fact, the biggest distinguished reason why people are against autonomous cars is that they are seen as too safe and as restricting travelers’ freedom (Richtel, Dougherty). Many consider race-car driving to be a sport, and the change in demand for self-operated vehicles in the near future could affect the possibility of continuing careers and hobbies with cars.

There would be no reason to make cars different when they each perform the same functions. Another reason why allowing self-driving programs to fully control cars is controversial is the issue of who would be considered at fault if there were an accident between two self-driven automobiles. Insurance companies haven’t jumped on board with this technology yet for this reason. How would you be able to know who’s liable when the human didn’t have any contact with, or ability to correct, the self-driving car’s actions?

Humans have the capability of using their own judgment while driving to step outside of legal boundaries in cases of emergency. A self-driving vehicle lacks this, since it’s programmed to obey any and all laws that are presented on the roadways (Peterson, Peters). Lacking these human senses and abilities while operating a vehicle could lead to difficulty in the performance of the self-driving car. The movement toward self-driving software has been spreading rapidly across the globe, with features already present in major car companies such as BMW, Mercedes, and Tesla (Greenough).

Though the features to brake and park on a vehicle’s own have impressed users, the advanced software of a fully functional self-automated car will be a major step in the economic, travel, and environmental changes that are valuable in the physical world today. Society has become reliant on technology that does things for it, and allowing mechanical methods of commuting to be done by themselves would buy the population the time and energy that most need.

The United States started its ways of independent travel with the horse and buggy a century ago, and that independence will be brought back into current society with autonomous software (Hirisch). With an estimated over a billion dollars to be spent over the course of the next decade, the future for autonomous vehicles is reachable (Sage). The self-driving car is a major step forward in today’s technological abilities that’s expected to arrive in our society sooner than the world may envision.



Essay: Self-driving cars

Published: 27 July 2024

Self-driving cars. Ten years ago, something like this would seem completely unbelievable. Now, however, it’s becoming our reality. A lot of questions come to mind with this subject: how will this affect our lives? How will this affect our future? Is this good for us? Is this bad? These questions must be strongly considered and answered while looking at this subject. I think that this is a promising new field, but one that makes a person ask many serious questions, and even involves the entire concept of comparing artificial intelligence to human intelligence. I think that while this type of vehicle has promise, it’s very hard to choose artificial intelligence over human intelligence. A lot of things must be figured out before this type of car can seriously challenge people’s preference for cars that they drive themselves.

What are self-driving cars? Well, as is clear from the name itself, these are cars that don’t need a driver. These cars use an artificial intelligence system to decide everything that is typically decided by the driver. A person can presumably input the destination, and the car will do the rest on its own. In other words, it’s a form of taxi cab owned by the passenger themselves. There is, however, much more to this than just that: these cars also make decisions normally made by human drivers, such as choosing the best routes and even calculating how to cause the fewest casualties in an accident. This is a very controversial subject, as these cars may prefer that the owner/passenger die instead of others if it causes the fewest overall casualties. It’s not yet clear what all these cars will be able to do, but the general basics are clear: you sit back and let the car do the rest. An interesting question that isn’t often asked is whether a person will need a driver’s license to be in this car. If the car does all the work, then why does a person need to know how to drive? Will there be an option for people to drive the car themselves if they choose? Unfortunately, these are all currently unanswered questions; what we do know for a fact is that these are cars that drive the person themselves.

The idea that cars should drive themselves is as old as cars themselves. Putting this idea into motion, however, was only possible in modern times. The earliest prototype of such a car was the 1925 car made by Arden Motors, called the “Chandler”. This idea was also promoted by General Motors in the 1930s and shown off at the World’s Fair of 1939. It was even predicted that these cars would be common in the US by the 1960s. In 1953, RCA Labs built a miniature prototype of such a car, again promoting this as a serious future option for consumers. The common issue with all these designs, however, was that none of them were practical vehicles that people could buy or trust to work properly. These were all ideas and predictions, but not practical working concepts. General Motors went a step further and created a series of cars called “Firebirds” that were supposed to be self-driven cars on the market by 1975. This became a popular topic in the media and led to many interested journalists and reporters being allowed to test drive these cars. The excitement was there, but the cars still could not be put on the market.

The 1960s saw Ohio State University and the Bureau of Public Roads continue the pursuit of putting this type of car on the market. The attempts, however, were again hard to get off the ground, and simple prototypes were the only thing that could be completed. Great Britain’s Transport and Road Research Laboratory was next to try, and fail at, this idea. In this version of the idea, magnetic cables were embedded in the roads, and a system fitted to a Citroen DS interacted with them to move the car along the road. In the 1970s, the Bendix corporation worked with Stanford University on a concept involving cables buried in the ground that helped move cars on the road. I think it is obvious why this didn’t work out in the end either. I think it is important to mention that funding was a major problem for many of these ideas. As can be easily assumed, none of these features could possibly be done at affordable rates; they required large amounts of labor and large changes to the roads to accommodate them.

The Germans got into this field in the 1980s. Mercedes-Benz launched their own version of such a car, but it could not move faster than 39 miles per hour, a speed clearly far below that of an average car. Multiple American universities were next: the Universities of Maryland and Michigan created prototypes that were able to travel on hard terrain at different speeds, but again not very fast ones. The ability to make these cars fast was, it seemed, yet another problem faced by the developers. In 1991, the United States Congress passed the ISTEA Transportation Authorization bill, which pushed for the creation of an automatic transport system by 1997. By the late 1990s, the University of Parma in Italy and Daimler-Benz were able to create vehicles that could reach a speed of 81 mph. The issues of funding and efficient mass production, however, continued to plague these new advancements. The 2000s saw even more progress, as Germany created the “Spirit of Berlin” taxicab and the Netherlands introduced the ParkShuttle. Neither of these options was able to fully replace human-driven transportation services, but they managed to be effective means of transportation regardless. By the end of the decade, most of the major car companies were working on making self-driven cars; Mercedes-Benz, Audi, Tesla, and Toyota are some of the more notable companies that were working on prototypes. Uber and Lyft began developing self-driven taxicabs in recent years to save money on drivers and make their businesses run more smoothly and efficiently. In 2018, a woman was killed by an automated Uber vehicle, and Audi officially announced the release of a mass-produced line of self-driven cars.

What can be predicted about the future of this technology? Logically, we can assume that with the current state of technology, better cars will be released, and practical self-driven cars will be readily available to the public. Will this idea take off with the public? That is the harder question. There is really nothing that can be seriously predicted about how the public will react to this. Personally, I think that it will be decades before people are ready to replace cars that they drive themselves with self-driven cars. Why do I think so? I think that many people love driving and would not want to let “somebody else” do it for them. It’s also reasonable to assume that taxicabs may be less expensive than buying a self-driven car. There is also the issue of cost. How much will these cars cost? Will the average driver be able to afford such a car? Will it be popular among the general population? There are too many unanswered and hard-to-answer questions about the topic. I do have one concern that comes to mind: the Industrial Revolution threatened entire industries, as many people lost jobs to what were basically machines. How many taxi drivers would be needed with self-driven cars in the equation? How many bus drivers would be required?

What kind of impact will such cars have on the general population? How will this affect hardware? How will this affect future software? How will it affect data? I think that this concept becoming more popular will lead to increased funding for the development of new hardware and software pertaining to self-driven cars. It will also likely lead to new ideas for other areas. What about self-working computers? What about self-working irons and laundry machines/robots? There are a lot of concepts that can be imagined by thinking of self-operating hardware and software, and I think this will lead to their major development. It will also lead to major advancements in software in general, as well as artificial intelligence. If companies can successfully build artificial intelligence systems that will drive cars by themselves, a lot of other things can be made self-controlled as well. One thing that I think could be done successfully is computers that can do things for you, for example your taxes or other accounting-related tasks. I can even imagine self-driven planes and boats. Basically, there are a lot of advancements that can be made through self-working technology. It’s possible that driving a car will become less of a priority for people, and getting a driver’s license might become more of a novelty than a necessity. I also think that NASCAR and the popularity of racing could be affected by the popularity of self-driven cars, as could the whole culture of driving. The main question for me is the cost of these cars. The affordability, or lack of it, will be a major reason why this business concept will or will not work. I have my doubts, as I think that many people enjoy driving and would not want to give it up.

There is also a wide variety of taxicab services that are cheaper alternatives to owning a self-driving car. I’m also unclear on whether sports cars could be self-driven as well. The latter is important because of the popularity of such cars.

How will this technology change the way business is conducted? The main thing that comes to mind is that a driver's license may no longer be needed when purchasing a car; might there even be no age limit to buy one? Businesses would also come up with new marketing strategies to sell these cars, since driving would no longer be an important part of the sales pitch. The technology would pose a challenge for Uber and Lyft, as well as other taxicab and car-service companies, and it might even affect limo companies, as wealthy people might prefer very expensive self-driven cars. The big shift is that driving would no longer be an important component of owning a car, and it is common sense that any change in business of that size leads companies to adjust their strategies and marketing campaigns and to promote different features. It could also affect other businesses entirely as they focus more on self-working concepts and products. As I mentioned earlier, a laundromat could use some type of laundry machine or robot that does the laundry for you. Phone companies could come up with phones that work automatically in some way and offer phone plans for self-driving cars. Why? Well, it would no longer be illegal to talk on the phone in your car; why would it be, when it cannot distract you from driving? What about television screens in cars? The owner, now a passenger, has free time; doesn't that look like a new business opportunity for companies like Netflix? Every new invention that changes the way people normally do things is bound to change the marketplace and affect the way companies handle their business.

How would self-driving cars affect competition between companies? There would be no reason for companies to make driving a major part of their sales pitches: commercials would no longer advertise the handling and driving of cars, as the person would not actually be driving. Instead, companies would compete over whose technology requires the least of the user, trying to gain an advantage by adding features that let the product do as much as possible by itself. I can imagine cars that incorporate other technology: cars that do your accounting, cars that can call companies and hold conversations for you, cars that make decisions for you or act as your secretary while driving you. The possibilities for a company aiming to stand out are almost limitless, and this type of technology can be applied elsewhere as well, to cell phones that call for you or make decisions for you. Companies would take this technology to the extreme to compete. The spirit of competition has driven many industries to unprecedented highs, and this industry will likely be no exception. The question is ultimately which companies would stay ahead of the curve, and which would not.

How do self-driving cars affect society globally? If this concept takes off, countries will try to keep up with one another by improving on the technology and attempting to avoid falling "behind" others. It will be a major driving factor in the competition between major companies and will create new forms of advancement in other technologies. The global impact of such a technology is enormous and would change a lot of things as we know them. It is certainly not going to be an isolated idea affecting one country and one field; it will affect the whole world and multiple industries, including those that have nothing to do with the automobile industry.

Is there an ethical side to self-driving cars? A major question that comes to mind is whether it is a good idea to place so much trust in artificial intelligence. What would happen if someone who does not know how to drive is faced with a malfunctioning vehicle? What happens if these vehicles cause a multitude of accidents? Is it a good idea for our society to become "lazier," and should we really try to have something else do as much of our work as possible? These issues can be debated ad nauseam without a generally agreed-upon answer or solution. Personally, I think that giving so much authority to machines is dangerous: how long before we start putting machines in leadership positions and become completely incompetent without them? We already rely on the internet, cell phones, cars, and social media daily; how would many of us survive if all these options were taken away? Why do we need a car to drive itself? Why can't a person do it themselves? Why is this improvement even needed? There seems to be an endless supply of questions on this subject, and my position has been made clear: I do not think that self-driven cars are the necessity they are made out to be, and the current state of transportation is a better and more efficient way of doing things.

What are the legal repercussions of self-driving cars? What happens if the car owner gets into an accident: is the person responsible, or is the car? If it is the car, what happens next? Obviously, nobody will arrest the car, so does this mean that no one is at fault if their car runs someone over? How do we define right and wrong when it comes to artificial intelligence? Will any of these cars be controllable by both people and artificial intelligence? In that case, could someone run another person over with a car and then blame it on the artificial intelligence, and how would law enforcement prove what happened? Would the company itself be responsible? Once again, we enter a new reality filled with many different possibilities and in need of new rules to administer them. It seems clear to me that self-driving cars will require a whole new set of laws to handle the accidents that will almost certainly happen regardless of whether the driver is human or not.

As self-sustaining technology advances, so does the general concern that I stated earlier. Driverless cars can either be a technology that benefits the population or one that is detrimental to society. From the information that I found and my own opinion of these cars, my view is that they would be detrimental, specifically because of the life-and-death calculations that the artificial intelligence can make. For example, to avoid multiple casualties, the AI may calculate that putting your life on the line is the correct course. I would argue that this is something the AI should never be able to decide, chiefly because it cannot use emotional intuition to make choices that involve life or death. All things considered, we have come a long way with our technology, and so has the concept of cars that drive themselves. Our society is bound to be affected by a step of this magnitude, but many factors must be taken into consideration to make a true judgment on the matter. Self-driving cars will either change driving as we know it, or become a failed attempt to fix something that did not need fixing.


Source: Essay Sauce, "Self-driving cars". Available from: <https://www.essaysauce.com/engineering-essays/self-driving-cars/> [Accessed 14-08-24].


Seven Arguments Against the Autonomous-Vehicle Utopia

All the ways the self-driving future won’t come to pass


Self-driving cars are coming. Tech giants such as Uber and Alphabet have bet on it, as have old-school car manufacturers such as Ford and General Motors. But even as Google's sister company Waymo prepares to launch its self-driving-car service and automakers prototype vehicles with various levels of artificial intelligence, there are some who believe that the autonomous future has been oversold—that even if driverless cars are coming, it won't be as fast, or as smooth, as we've been led to think. The skeptics come from different disciplines inside and outside the technology and automotive industries, and each has a different bear case against self-driving cars. Add them up and you have a guide to all the ways our autonomous future might not materialize.

Bear Case 1: They Won’t Work Until Cars Are as Smart as Humans

Computers have nowhere near human intelligence. On individual tasks, such as playing Go or identifying some objects in a picture, they can outperform humans, but that skill does not generalize. Proponents of autonomous cars tend to see driving as more like Go: a task that can be accomplished with a far-lower-than-human understanding of the world. But in a duo of essays in 2017, Rodney Brooks, a legendary roboticist and artificial-intelligence researcher who directed the MIT Computer Science and Artificial Intelligence Laboratory for a decade, argued against the short-term viability of self-driving cars based on the sheer number of “edge cases,” i.e., unusual circumstances, they’d have to handle.


"Even with an appropriate set of guiding principles, there are going to be a lot of perceptual challenges … that are way beyond those that current developers have solved with deep learning networks, and perhaps a lot more automated reasoning than any AI systems have so far been expected to demonstrate," he wrote. "I suspect that to get this right we will end up wanting our cars to be as intelligent as a human, in order to handle all the edge cases appropriately."

He still believes that self-driving cars will one day come to supplant human drivers. “Human driving will probably disappear in the lifetimes of many people reading this,” he wrote. “But it is not going to all happen in the blink of an eye.”

Bear Case 2: They Won’t Work, Because They’ll Get Hacked

Every other computer thing occasionally gets hacked, so it's a near-certainty that self-driving cars will be hacked, too. The question is whether that intrusion—or the fear of it—will be sufficient to delay or even halt the introduction of autonomous vehicles.


The transportation reporter and self-driving-car skeptic Christian Wolmar once asked a self-driving-car security specialist named Tim Mackey to lay out the problem. Mackey "believes there will be a seminal event that will stop all the players in the industry in their tracks," Wolmar wrote. "We have had it in other areas of computing, such as the big-data hacks and security lapses and it will happen in relation to autonomous cars." Cars, even ones that don't drive themselves, have already proved vulnerable to hackers.

The obvious counterargument is that data lapses, hacking, identity theft, and a whole lot of other things have done basically nothing to slow down the consumer internet. A lot of people see these problems and shrug. However, the physical danger that cars pose is far greater, and maybe the norms developed for robots will be different from those prevalent on the internet, legally and otherwise, as the University of Washington legal scholar Ryan Calo has argued.

Bear Case 3: They Won’t Work as a Transportation Service

Right now most companies working on self-driving cars are developing them as the prelude to a self-driving-car service. So you wouldn't own your car; you'd just get rides from a fleet of robo-cars maintained by Waymo or Uber or Lyft. One reason for that is that the current transportation-service companies can't seem to find their way to profitability. In fact, they keep losing insane amounts of money. Take the driver out of the equation and maybe all of that money saved would put them in the black. At the same time, the equipment that's mounted on self-driving cars to allow them to adequately convert physical reality into data is extremely expensive. Consumer vehicles with all those lasers and computers on board would be prohibitively expensive. On top of that, the task of calibrating and maintaining all that equipment would be entrusted to people like me, who don't wash their car for months at a time.


Put these factors together and the first step in fully autonomous vehicles that most companies are betting on is to sell robo-car service, not robo-cars.

There is a simple rejoinder explaining why this might not work. George Hotz, who is himself attempting to build a DIY driving device, has a funny line that sums it up. "They already have this product, it's called Uber, it works pretty good," Hotz told The Verge. And what is a robo-car ride if not "a worse Uber"?

Bear Case 4: They Won’t Work, Because You Can’t Prove They’re Safe

Commercial airplanes rely heavily on autopilot, but the autopilot software is considered provably safe because it does not rely on machine-learning algorithms. Such algorithms are harder to test because they rely on statistical techniques that are not deterministic. Several engineers have questioned how self-driving systems based on machine learning could be rigorously screened. “Most people, when they talk about safety, it’s ‘Try not to hit something,’” Phil Koopman, who studies self-driving-car safety at Carnegie Mellon University, told Wired this year. “In the software-safety world, that’s just basic functionality. Real safety is, ‘Does it really work?’ Safety is about the one kid the software might have missed, not about the 99 it didn’t.”

Regulators will ultimately decide if the evidence that self-driving-car companies such as Waymo have compiled of safe operation on roads and in simulations meets some threshold of safety. More deaths caused by autonomous vehicles, such as an Uber's killing of Elaine Herzberg, seem likely to drive that threshold higher.

Koopman, for one, thinks that new global standards like the ones we have for aviation are needed before self-driving cars can really get on the road, which one imagines would slow down the adoption of the cars worldwide.
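Koopman's point about rare failures can be made quantitative with a rough, illustrative calculation that is not from the article itself: the baseline of roughly one fatality per 100 million vehicle-miles is an approximate U.S. figure, and the "rule of three" is a standard statistical shortcut which says that, with zero events observed in n independent trials, the 95% upper confidence bound on the event rate is about 3/n. Under those assumptions, a sketch of how many failure-free test miles would be needed just to claim parity with human drivers:

```python
# Back-of-the-envelope sketch: with zero fatalities observed in n test miles,
# the "rule of three" puts the 95% upper confidence bound on the fatality
# rate at roughly 3/n. To bound the rate at or below an assumed human
# baseline of ~1 death per 100 million vehicle-miles, solve 3/n <= baseline.
human_fatality_rate = 1 / 100_000_000   # assumed baseline: ~1 death per 100M miles
required_miles = 3 / human_fatality_rate
print(f"{required_miles:,.0f} failure-free test miles needed")  # → 300,000,000
```

The point of the sketch is scale, not precision: hundreds of millions of incident-free miles are needed before the statistics even begin to speak, which is why Koopman and others argue that road testing alone cannot settle the safety question.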

Bear Case 5: They’ll Work, But Not Anytime Soon

Last year, Ford announced plans to invest $1 billion in Argo AI, a self-driving-car company. So it was somewhat surprising when Argo's CEO, Bryan Salesky, posted a pessimistic note about autonomous vehicles on Medium shortly after. "We're still very much in the early days of making self-driving cars a reality," he wrote. "Those who think fully self-driving vehicles will be ubiquitous on city streets months from now or even in a few years are not well connected to the state of the art or committed to the safe deployment of the technology."

In truth, that’s the timeline the less aggressive carmakers have put forth. Most companies expect some version of self-driving cars in the 2020s, but when within the decade is where the disagreement lies.

Bear Case 6: Self-Driving Cars Will Mostly Mean Computer-Assisted Drivers

While Waymo and a few other companies are committed to fully driverless cars or nothing, most major carmakers plan to offer increasing levels of autonomy, bit by bit. That's GM's play with the Cadillac Super Cruise. Daimler, Nissan, and Toyota are targeting the early 2020s for incremental autonomy.


Waymo's leadership and Aurora's Chris Urmson worry that disastrous scenarios lie down this path. A car that advertises itself as self-driving "should never require the person in the driver's seat to drive. That hand back [from machine to human] is the hard part," Urmson told me last year. "If you want to drive and enjoy driving, God bless you, go have fun, do it. But if you don't want to drive, it's not okay for the car to say, 'I really need you in this moment to do that.'"

Bear Case 7: Self-Driving Cars Will Work, But Make Traffic and Emissions Worse

And finally, what if self-driving works, technically, but the system it creates only "solve[s] the problem of 'I live in a wealthy suburb but have a horrible car commute and don't want to drive anymore but also hate trains and buses,'" as the climate advocate Matt Lewis put it. That's what University of California at Davis researchers warn could happen if people don't use (electric-powered) self-driving services and instead own (gasoline-powered) self-driving cars. "Sprawl would continue to grow as people seek more affordable housing in the suburbs or the countryside, since they'll be able to work or sleep in the car on their commute," the scenario unfolds. Public transportation could spiral downward as ride-hailing services take share from the common infrastructure.

And that's not an unlikely scenario based on current technological and market trends. "Left to the market and individual choice, the likely outcome is more vehicles, more driving and a slow transition to electric cars," wrote Dan Sperling, the director of the UC Davis Institute of Transportation Studies, in his 2018 book, Three Revolutions: Steering Automated, Shared, and Electric Vehicles to a Better Future.

It would certainly be a cruel twist if self-driving cars managed to save lives on the road while contributing to climate catastrophe. But if the past few years of internet history have taught us anything, it is that any technology as powerful and society-shaping as autonomous vehicles will certainly have unintended consequences. And skeptics might just have a handle on what those could be.


Self-driving Cars and the Right to Drive

  • Research Article
  • Open access
  • Published: 23 June 2022
  • Volume 35, article number 57 (2022)


  • William Ratoff (ORCID: orcid.org/0000-0001-6129-5197)


Every year, 1.35 million people are killed on roads worldwide and even more people are injured. Emerging self-driving car technology promises to cut this statistic down to a fraction of the current rate. On the face of it, this consideration alone constitutes a strong reason to legally require — once self-driving car technology is widely available and affordable — that all vehicles on public roads be self-driving. Here I critically investigate the question of whether self-driving, or autonomous, vehicles should be legally mandated. I develop an argument — premised upon Mill’s Harm Principle — that any legislation mandating the use of self-driving vehicles on public roads is morally impermissible. The Harm Principle, under its most plausible interpretation, has it that the state is warranted in legislating against some activity only if that activity violates the rights of others. In brief, I argue that a human driver, who opts to drive herself on public roads rather than rely on self-driving technology, does not violate anyone’s rights when she so acts. Consequently, when granting the Harm Principle, it follows that the state is not warranted in mandating the use of self-driving vehicles on public roads. If I am correct, the proponent of a self-driving vehicle mandate must reject the Harm Principle. Given its intuitive plausibility and central place in liberal philosophical thought, this is a weighty cost.


1 Introduction

Every year, approximately 1.35 million people are killed in road-traffic accidents (WHO, 2018) and over 90% of these accidents are the result of human error (Anderson et al., 2016). However, emerging self-driving car technology promises to cut the former statistic down to a fraction of the current rate (Anderson et al., 2016; Garza, 2011). This consideration alone constitutes a strong reason to favor the development and use of self-driving cars (Hevalke & Nida-Rumelin, 2015). After all, even if widespread use of autonomous vehicles only reduced road traffic accidents by 10% — an extremely conservative estimate (Dorf, 2016) — that would still amount to around 135,000 lives saved a year.
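The arithmetic behind that conservative estimate can be checked directly; the only inputs are the WHO figure cited above and the paper's deliberately low-ball 10% reduction:

```python
# Lives saved per year under the paper's deliberately conservative assumption
# that self-driving cars cut road-traffic deaths by only 10%.
annual_road_deaths = 1_350_000     # WHO (2018) worldwide estimate, as cited in the text
conservative_reduction = 0.10      # the paper's "extremely conservative" reduction
lives_saved = annual_road_deaths * conservative_reduction
print(round(lives_saved))  # → 135000
```

More optimistic reduction estimates in the literature would scale this figure up proportionally, which is why even the conservative case is taken to ground a strong prima facie reason.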

A plurality of philosophers and legal scholars have now argued that there are sufficient reasons to legally require — once self-driving car technology is safer, widely available, and affordable — that all vehicles on public roads be self-driving (Dorf, 2016; Sparrow & Howard, 2017). For example, Michael Dorf (2016) has argued that considerations concerning the greater good suffice to justify a ban on human-driven cars on public roads. In his own words: "…the argument for banning human-driven cars is really quite simple: it would save many lives and avert many more serious injuries…" (Dorf, 2016). And Sparrow and Howard (2017), as I interpret them, suggest that human-driven cars can be banned from public roads on the grounds that individuals have a right not to be subject to unnecessary risks. As they put it: "…As long as driverless vehicles aren't safer than human drivers, it will be unethical to sell them (Shladover, 2016). Once they are safer than human drivers when it comes to risks to 3rd parties, then it should be illegal to drive them…" (Sparrow & Howard, 2017).

In this paper, I critically investigate the question of whether self-driving vehicles should be legally mandated on public roads. To wit, I argue — contra Dorf (2016) and Sparrow and Howard (2017) — that it would be morally wrong to legally mandate self-driving vehicles on public roads. I begin, in Sect. "The Harm Principle", by reminding the reader of Mill's Harm Principle, the doctrine that the state, or any individual, is warranted in coercively interfering with some activity only if that activity violates the rights of third-parties. After that, in Sect. "The Right to Drive", I formulate a classical liberal, or libertarian, argument from the Harm Principle against the moral permissibility of legislation mandating use of self-driving technology on public roads. In essence, my argument goes like this: when granting the Harm Principle, the state is warranted in legislating against some activity only if that activity violates the rights of third-parties. But a driver who chooses to drive herself on public roads rather than using self-driving technology violates the rights of no third-parties in so acting. Consequently, the state is not warranted in mandating use of self-driving car technology on public roads. Finally, in Sects. "A Right Not to be Subject to Unnecessary Risks?", "The Harm Principle Again", and "Proves Too Much?", I address various objections to my argument that may have occurred to the reader.

2 The Harm Principle

In his On Liberty, J. S. Mill asserts that "…the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant…" (Mill, 1859). This slogan — the Harm Principle — has been taken as a rallying cry by generations of philosophical liberals. In essence, the Harm Principle has it that people should be free to act however they please so long as their actions cause no harm to anybody else. The state — or anybody for that matter — has no business in regulating any activity that yields no (third-party) victims. The Harm Principle has been endorsed, in one form or another, by a plurality of leading moral, political, and legal philosophers (Feinberg, 1984; Hart, 1963; Raz, 1986).

The Harm Principle can be fruitfully conceived as an anti-paternalism doctrine (Holtug, 2002). It articulates a limit on the permissible reach of the law: the state is warranted in legislating against some activity only if that activity poses a harm to others. Consequently, any restriction on the behavior of an individual that yields no third-party victims is an instance of morally impermissible legislative overreach. Consider, for example, the legislation — both historical and contemporary — prohibiting (attempted) suicide. Although suicide is nearly always a terrible mistake, most of us regard legislation criminalizing (attempted) suicide as objectionably paternalistic. It is not the state's business to coercively weigh in on the matter of whether I should keep on living or rather end my own life. In contrast, murder is clearly an appropriate object of legislation. The Harm Principle offers us a (partial) explanation of why this should be so: the criminal prohibition of murder, but not the legislation against suicide, regulates an activity that constitutes a harm to third-parties (Charlesworth, 1993; Holtug, 2002).

Of course, the plausibility of the Harm Principle will turn upon how it is interpreted. In particular, upon how the notion of "harm to others" is to be understood. The dominant interpretation of the Harm Principle in the literature is the Rights Violation reading (Norris Turner, 2014). On this view, agent S's action A is a harm to others just when A violates the rights of some other agent or rights-holder S2. Consequently, the Harm Principle, on this reading, has it that the state can interfere with action A of agent S only if A violates the rights of some subject S2. This interpretation of Mill has been endorsed by a number of philosophers — including David Brink (1992), Alan Fuchs (2006), John Rawls (2007), and Wendy Donner (2009).

The Rights Violation reading draws support from the fact that, if "harm to others" is not restricted to rights-violations, then the Harm Principle cannot do the philosophical work for which it was intended (Holtug, 2002). After all, if "harm to others" is understood broadly to include lowering wellbeing, then the emotional suffering S causes to S* by divorcing him will constitute a harm. But it would be draconian for the state to deny S a divorce on such grounds. The Harm Principle was conceived as a statement articulating the limits of permissible intervention by the state, or any other third-party, into the lives of individuals. It is supposed to carve out a sphere of inviolable individual liberty. But if "harm to others" is understood so broadly — such that even the emotional suffering induced by the end of a romantic relationship counts as harm — then the state will be warranted, by the lights of the Harm Principle, in coercively regulating almost any aspect of life (Jacobson, 2000). So, for example, suicide could be criminalized on the grounds that it causes enormous distress to family, friends, and other loved ones. And this is surely not a plausible reading of Mill. Better to understand "harm to others" in a more demanding sense — as the Rights Violation reading does — such that the Harm Principle actually limns a sphere of inviolable personal liberty (Holtug, 2002). If "harms to others" are restricted to violations of the rights of others, then the emotional suffering induced by divorce, or by the suicide of a loved one, does not count as a harm that can be coercively regulated by the state (or any third-party). Rather, only harms to others that constitute rights-violating wrongs are appropriate objects of legislation. Aside from making for a more plausible reading of Mill, this formulation of the Harm Principle accords better with our moral intuitions.

Of course, on the Rights Violation interpretation, the prescriptions made by the Harm Principle depend upon what rights people have. Two philosophers, with very different conceptions of the nature of rights and of what rights we have, will both be able to affirm the Harm Principle, despite otherwise disagreeing significantly on matters of permissible government legislation. So, for example, a certain kind of libertarian who rejects the existence of any positive rights — that is, rights that entail the existence of duties on the part of others to aid the rights-holder — might hold that the Harm Principle renders morally impermissible a policy of taxing the rich to fund health care for the poor. After all, for this libertarian, no-one would be having any of their rights violated by the absence of universal healthcare. In contrast, another species of liberal might countenance such legislation on the grounds that individuals have a positive right to healthcare that does not leave them in a precarious financial position (Holtug, 2002). On such a view, the poor are having a right violated if they lack access to affordable healthcare. Consequently, for this Harm Principle-endorsing liberal, there is room for the government to coercively legislate taxation policies that fund universal health care.

If this is correct, then the Harm Principle, by itself, looks to tell us little of substance regarding which concrete instances of proposed government legislation are warranted (Holtug, 2002). After all, whether or not some piece of coercive regulation is warranted turns upon prior questions regarding what rights and duties the agents in question have. This observation has led some critics to claim that the Harm Principle is empty in the absence of a background theory of justice and rights. As Holtug (2002) puts it: "…the Harm Principle is of no use without a theory of justice, but if we have this theory, it seems that we have no need for the Harm Principle. It would seem that the theory of justice will settle the issue of coercion all by itself…" However, this criticism — even if valid — is of no significance for the dialectic that I will be developing here. All Harm Principle-endorsing liberals will agree that coercive legislation of some activity is unjust if that activity does not violate the rights of any third-party. And that is the state of affairs that obtains, I claim, in the case of hypothetical legislation mandating the use of self-driving car technology on public roads.

3 The Right to Drive

Now that we have reminded ourselves of the content of the Harm Principle, everything is in place for me to formulate my argument against the moral permissibility of legislation mandating use of self-driving technology on public roads. The chassis of my argument goes like this:

The state is warranted in legislating against some activity A only if that activity A violates the rights of third-parties.

A driver S who chooses to drive herself on public roads rather than using self-driving technology does not violate the rights of any third-parties in so acting.

Therefore, the state is not warranted in mandating use of self-driving car technology on public roads.

Premise (1) is simply the statement of the Harm Principle. Why think that it is true? Although Mill (1859) makes his case for the Harm Principle through appeal ultimately to his utilitarianism, a more plausible case for the Harm Principle can, by my lights, be made on deontological grounds. We autonomous rational agents have (natural) rights. These rights place firm limits on the scope of permissible interference by third-parties. As Robert Nozick has put it: “…Individuals have rights, and there are things no person or group may do to them (without violating their rights). So strong and far-reaching are these rights that they raise the question of what, if anything, the state and its officials may do…” (Nozick, 1974). We are not mere instruments who can be manipulated in any arbitrary way to suit the purposes of some other agent or to promote the greater good (Markovits, 2014). On the contrary, we are rights-holders. We should be free to act as we see fit — so long, that is, as we do not violate the rights of anyone else in so acting. Given that this is so, it looks to straightforwardly follow that the government is only warranted in legislating against some activity if that activity violates the rights of some third-party. Footnote 3

Premise (2) is the claim that a driver, who chooses to drive herself on public roads rather than using self-driving technology, does not violate the rights of any third-parties in so acting. Why think that this is the case? Well, it simply doesn’t seem like this driver is violating the rights of anyone else when she so acts. After all, which right would she be violating? In typical cases of wrongdoing, there is normally an obvious candidate. For example, if I killed you, I would be violating — amongst other things — your right to life. If I enslaved you or trapped you in my basement, I would be violating your right to liberty. If I punched you in the face, I would be violating your right to not suffer, or be at risk of suffering, significant bodily injuries. In contrast, when our driver chooses to drive herself on public roads, rather than relying on self-driving technology, it’s not obvious which right of other road-users, if any, she is violating. Given this, the burden of proof is on the proponent of the self-driving vehicle mandate to establish that there is a rights-violation going on when someone so acts. And, in the absence of any compelling reason to think that there is a rights-violation when someone chooses to drive herself, rather than use self-driving technology, the default or presumptive view, that we ought to affirm, is that there is no such rights-violation. Footnote 4

The above argument is clearly valid: the truth of the premises would guarantee the truth of the conclusion. Consequently, when granting the truth of both premises, it follows that my conclusion must likewise be true: the state is not warranted in mandating use of self-driving car technology on public roads. It should also be noted that my conclusion here is completely consistent with there being good moral reasons — or even a moral obligation — for people to voluntarily opt to use self-driving vehicles on public roads. The fact that it would be wrong for the government to mandate some course of action does not entail that that action is not morally required. For all I have said, morality may very well require people to use self-driving vehicles on public roads — for example, because so acting promotes the greater good.

4 A Right Not to Be Subject to Unnecessary Risks?

Of course, proponents of the mandate at hand are not going to be so quickly convinced that the state is not warranted in imposing such a mandate. They will reject one or other of the premises of the above argument. So, for example, some may reject the Harm Principle, and thus deny premise (1) — perhaps by endorsing the view that the government is warranted in regulating some activity that violates the rights of no third-parties when the stakes are high enough with respect to the common good. Others — in particular, Sparrow and Howard ( 2017 ) — will reject premise (2) on the grounds that a driver who chooses to drive herself, rather than use self-driving technology, does violate a right of third-party road-users — namely, their right not to be subject to unnecessary risks. In the rest of this section, I will consider this latter objection to my argument, and return to the former one in the next section.

In their (2017) paper, Sparrow and Howard argue that we can justify legislation banning human-driven cars from public roads on the grounds that (1) individuals have a right not to be subject to unnecessary risks and (2) once self-driving car technology is affordable, widely available, and safer than human-driven cars, human-driven cars will pose an unnecessary risk to said individuals. Consequently, (3) road-users are having their right not to be subject to unnecessary risks violated when our driver chooses to drive herself rather than use self-driving technology. In their words, “…Once vehicles without a human being at the controls become safer than vehicles with a human being at the controls, then the moment a human being takes the wheel they will place the lives of third-parties – as well as their own lives – at risk. Moreover, imposing this extra risk on third-parties will be unethical: the human driver will be the moral equivalent of a drunk robot…” (Sparrow & Howard, 2017 ).

Why should we join Sparrow and Howard in thinking that we have a right not to be subject to unnecessary risks? In brief, because this hypothesis explains our moral intuitions. It just seems wrong to impose a needless risk on some non-consenting third-party. Suppose, for example, that I drive recklessly fast through a family neighborhood at a speed such that I cannot properly control my car. Intuitively, I am wronging third-party road-users and pedestrians by so acting. In general, the facts about wrongdoings are explained by facts about moral rights-violations (Thomson, 1990 ). And, very plausibly, what explains the wrongness of my reckless speeding is the fact that it violates the rights of third-parties not to be subject to unnecessary risks — in this case, the risk of my losing control of my car and crashing into them. This case, and others, gives us good reason, I think, to hold that we have a right not to be subject to unnecessary risks.

What makes a risk unnecessary or needless? Let us say that activity A poses an unnecessary risk X just when there is some activity A* that possesses the benefits of A but that lacks risk X. The activity of choosing to drive yourself on public roads, rather than using self-driving technology, therefore counts as posing an unnecessary risk to third-parties. After all, there is an alternative to driving yourself — namely, using a self-driving vehicle — that possesses all the benefits we attribute to motor travel (such as transportation and convenience) and that poses a lower risk of injury or death to third-party road-users. The extra risk that your act of driving yourself poses to third-parties consequently counts as an unnecessary risk in the sense at hand. Granting that individuals have a right not to be subject to unnecessary risks, it looks to follow that a driver is violating a right of third-party road-users when she chooses to drive herself on public roads over using a self-driving alternative.

Is this rights-violation weighty enough to justify coercive legislation requiring the use of self-driving vehicle technology on public roads, as Sparrow and Howard have suggested? The right not to be subject to unnecessary risks appears to be weighty enough to justify some coercive legislation regulating the use of vehicles on public roads. Consider, for example, the legislation prohibiting driving under the influence of drugs or alcohol. The justification for this legislation seems to be our right not to be subject to unnecessary risks. Clearly, for any arbitrary driver, their driving under the influence ensures that they will pose a greater threat to the wellbeing of others than they otherwise would if driving sober. And this extra risk is unnecessary in the above defined sense: the goods we attribute to motor travel — transport and convenience — can all be had without drunk driving. (No-one needs to get drunk to drive!). Driving under the influence therefore constitutes an unnecessary risk. And the legislation banning it strikes us as being wholly just. In sum, this is a real-life case in which the right not to be subject to unnecessary risk seems to outweigh a driver’s presumptive liberty to use their vehicle in any way they please and to justify the existence of coercive legislation regulating driving on public roads. Footnote 5

Another instance of the right not to be subject to unnecessary risk justifying regulation of vehicles on public roads is indicator lights. In the early days of cars, people indicated which direction they were about to turn by signaling with their hands, or even with a small flag. Nowadays, however, vehicles are legally required to have lights that indicate the direction in which they are about to turn to both rear and oncoming traffic. We would regard it as unnecessarily dangerous to third-parties if a driver insisted on using hand signals, rather than their indicator lights, when turning their vehicle. Consequently, we are (nearly) all inclined to think that the right of any arbitrary third-party to not be subject to unnecessary risk trumps a driver’s presumptive liberty to use their light-less car on public roads and rely instead on hand signals. For this reason then, we are (nearly) all in agreement that legislation mandating the use of indicator lights on public roads is justified.

In sum, there is a precedent for the right not to be subject to unnecessary risks justifying the existence of coercive regulation of vehicles on public roads. Given this, it appears reasonable to think, as Sparrow and Howard ( 2017 ) have argued, that legislation mandating the use of self-driving vehicle technology on public roads could be justified through appeal to this same right.

However, I’m skeptical of this line of thought. On what grounds? In essence, I don’t think that a driver who chooses to drive herself, over relying on self-driving car technology, is violating other road-users’ right not to be subject to unnecessary risks. Why? Well, although it is indisputable that by choosing to drive herself she is creating extra unnecessary risk for third-party road-users, it’s far from clear that this behavior violates the right of these third-parties not to be subject to unnecessary risks. After all, it is intuitively obvious that not just any imposition of unnecessary risk violates this right. We impose unnecessary risks on others all the time without violating their right not to be subject to unnecessary risks — for example, when I exercise by going for a run in my neighborhood rather than using the treadmill in my garage. When running on sidewalks, I slightly increase the probability that some innocent third-party suffers a serious physical injury, or even death, from my accidentally running into them and knocking them to the ground. But we don’t regard my act of going for a run as morally wrong, or as violating anyone else’s right not to be subject to unnecessary risks. This suggests that individuals don’t have a blanket right not to be subject to unnecessary risks, but rather a right not to be subject to significant unnecessary risks — that is, unnecessary risks above some certain threshold.

Given this, the question of the permissibility of coercive regulation mandating the use of self-driving cars on public roads turns upon the issue of whether a driver who chooses to drive herself, rather than use self-driving technology, is imposing a significant unnecessary risk on third-parties when she so acts, one that suffices to violate their right against being subject to unnecessary risks. I will now argue that the risk such a driver imposes on third-party road-users by so acting does not reach the threshold for significance. Her action does not violate anyone else’s right not to be subject to significant unnecessary risks. The bones of my argument go like this:

If I violate your right to not be subject to significant unnecessary risks when I choose to drive my car rather than use a self-driving alternative, then I must violate that right when I go for an enjoyable spin in my car for no further purpose.

I don’t violate your right not to be subject to significant unnecessary risks when I go for an enjoyable spin in my car for no further purpose.

Therefore, I don’t violate your right to not be subject to significant unnecessary risks when I choose to drive my car rather than use a self-driving alternative.

Let’s consider premise (b) first. Why accept it? In a nutshell, it just seems intuitively obvious that I don’t violate anyone’s rights when I go for an enjoyable spin in my car for no further purpose. After all, if I were violating someone else’s right by so acting, then I would have been wronging them, since I would have been violating their rights without good excuse. Footnote 6 But it doesn’t seem like I am doing anything morally wrong, or wronging other drivers or pedestrians, when I go for an enjoyable spin in my car for no further purpose. Even though I am clearly subjecting others to some additional risk by so acting, the additional risk appears to be trivial — and certainly not substantial enough to violate anyone else’s right not to be subject to significant unnecessary risks. The probability of my killing or injuring anyone else on the road is minuscule. Most people go their whole lives without ever harming anyone else as a result of their driving. In the USA in 2014, 2,626,418 people died in total. Of these, 32,675 died in road-traffic accidents (USDT, 2016). Footnote 7 To be sure, that is 32,675 people too many. But, to put this number in perspective, all the vehicles in the USA in 2014 together travelled 3,026 billion miles. That means there was roughly one fatality for every 92.6 million vehicle miles travelled, or 1.08 fatalities per 100 million vehicle miles (USDT, 2016). The odds of my killing or injuring anyone else on my enjoyable spin around the city are tiny. (Indeed, even if I went on a cross-country road-trip, the probability of such an accident remains minute — especially if I am driving responsibly, and not under the influence of alcohol etc.). Given all this, we have good reason to think that I don’t violate anyone else’s rights, or wrong them, when I go for an enjoyable spin around the block — or, for that matter, an epic road-trip across the country — in my car.
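
The per-mile fatality rate implied by the cited 2014 US figures (USDT, 2016) can be checked in a few lines of Python:

```python
# Fatality rate implied by the cited 2014 US figures (USDT, 2016).
road_deaths = 32_675        # US road-traffic deaths in 2014
miles_travelled = 3_026e9   # total US vehicle miles travelled in 2014

miles_per_fatality = miles_travelled / road_deaths
per_100m_miles = road_deaths / (miles_travelled / 100e6)

print(f"{miles_per_fatality / 1e6:.1f} million vehicle miles per fatality")  # 92.6
print(f"{per_100m_miles:.2f} fatalities per 100 million vehicle miles")      # 1.08
```

The second figure, fatalities per 100 million vehicle miles travelled, is the form in which US road-safety statistics are conventionally reported.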

How about premise (a)? Why think that, if I am violating your right not to be subject to significant unnecessary risks when I choose to drive my car rather than use a self-driving alternative, then I must also be violating that right when I go for an enjoyable spin in my car for no further purpose? First, we should note that both activities do constitute unnecessary risks in the above defined sense. In each case, there is some equally good alternative activity that either poses no risk to third-parties or poses a lower risk to said third-parties. So, for example, rather than deriving pleasure by going for a spin in my car around town, I could satisfy my desire for pleasure by watching TV or playing my piano or going for a walk etc. Given these alternatives, my act of going for an enjoyable spin in my car for no further purpose poses an unnecessary risk to third-party drivers, passengers, and pedestrians. Likewise, given that I could use self-driving technology instead, my act of driving myself to my destination — for example, my place of work, my children’s school, or the hospital — poses an unnecessary risk to third-parties.

Second, we should observe that the extra unnecessary risk imposed on third-parties by my choice to drive myself, rather than using self-driving technology, is less than the extra unnecessary risk imposed on third-parties by my decision to go for an enjoyable spin in my car, when I could find enjoyment by staying home and watching TV etc. After all, when I choose to go for an enjoyable spin, over watching TV, the extra risk imposed on third-parties goes from approximately zero (the risk to others of my watching TV) to X (the risk to others of my driving around on public roads). So the amount of extra unnecessary risk I impose on third-parties by so acting is equal to X. And let us say that the amount of risk imposed on third-parties by my using self-driving vehicle technology on public roads is Y where Y < X (and Y is a non-zero positive real number). Consequently, when I choose to drive myself rather than using a self-driving car, the extra risk I impose on others goes from Y (the risk to others of my using a self-driving car) up to X (the risk to others of my driving a car). This means that the amount of extra unnecessary risk that I impose on third-parties by choosing to drive myself is equal to X minus Y. And, of course, X minus Y amount of unnecessary risk is less than X amount of unnecessary risk. In this way then, we can see why the unnecessary risk I impose on third-parties by choosing to drive myself, rather than relying on self-driving car technology, must be less than the unnecessary risk I impose on others through my decision to find enjoyment by going for a spin in my car, rather than by staying home and watching TV.
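
The risk bookkeeping in this paragraph can be made concrete with a toy calculation. The numerical values below are purely illustrative assumptions (only the ordering 0 < Y < X matters), not empirical estimates:

```python
# Toy model of the extra unnecessary risk imposed on third-parties.
# All values are made-up for illustration; only their ordering matters.
risk_staying_home = 0.0  # baseline: watching TV imposes ~zero road risk
X = 1.0e-6               # assumed risk to others of my driving myself
Y = 0.4e-6               # assumed risk to others of a self-driving car, Y < X

extra_risk_of_joyride = X - risk_staying_home  # going for a spin vs. TV: X
extra_risk_of_self_driving = X - Y             # driving myself vs. autonomy: X - Y

# Since Y > 0, X - Y < X: choosing to drive myself adds strictly less
# unnecessary risk than choosing a joyride over staying home does.
assert extra_risk_of_self_driving < extra_risk_of_joyride
```

Whatever values X and Y take, so long as the self-driving alternative carries some non-zero risk Y, the comparison X − Y < X holds, which is all premise (a) requires.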

We are now in a position to see why premise (a) is true. Whether or not some activity A of mine violates your right not to be subject to significant unnecessary risks is a function of how much unnecessary risk activity A imposes upon you. In particular, activity A only violates this right of yours if the amount of unnecessary risk A imposes on you has a value above some certain threshold Z. Given the above result — that my act of driving myself rather than using self-driving technology imposes less unnecessary risk on third-parties than my act of finding enjoyment by going for a drive rather than watching TV etc. — it straightforwardly follows that, if I violate your right not to be subject to significant unnecessary risks when I choose to drive my car rather than use a self-driving alternative, then I must violate that right when I choose to enjoy myself by going for a drive rather than by watching TV etc. In other words, we have established premise (a).

This completes my case for thinking that a driver who chooses to drive herself, rather than using self-driving technology, does not violate anyone else’s right not to be subject to unnecessary risks. Although she imposes some additional unnecessary risk on third-parties when she so acts, this extra unnecessary risk does not amount to a violation of said third-parties’ right not to be subject to significant unnecessary risks. The significance of this result for my initial argument against the permissibility of legislation mandating use of self-driving technology is straightforward: the only seemingly compelling reason to doubt premise (2) of this argument — that a driver who chooses to drive herself on public roads, rather than using self-driving technology, does not violate the rights of any third-parties in so acting — has been shown to fail. In the absence of any other compelling reason to doubt this premise, we should regard the initial appearances on this matter as being veridical: drivers do not violate the rights of other road-users or pedestrians when they choose to drive themselves rather than relying on self-driving technology. Granting the Harm Principle, it follows that the state is not warranted in coercively mandating the use of self-driving cars on public roads.

5 The Harm Principle Again

The only other option available to the proponent of mandating use of self-driving cars on public roads is to reject the Harm Principle and deny premise (1) of my argument. On this view, coercive legislation can be justified in the absence of any rights-violation — in particular, and most plausibly, when the stakes are high enough with respect to considerations of aggregate wellbeing or the common good.

The proponent of the mandate at hand might assert that this is the state of affairs that obtains in the case of self-driving cars. For example, Michael Dorf (2016) has argued that human-driven vehicles should be banned on public roads — once self-driving car technology is safer, widely available, and affordable etc. — on the grounds that such a law would make the world a better place. In such a world, there would be overall fewer deaths and serious injuries. Of course, in this world, the pleasures of driving for oneself etc. would be absent. But the loss of these goods would be (vastly) outweighed by the huge reduction in road-traffic deaths and injuries.

However, I don’t think we should be so quick to reject the Harm Principle in favor of the view that coercive legislation can be justified purely through appeal to considerations of the greater good. After all, it is highly plausible (almost platitudinous, by my lights) that we rational agents have rights — amongst other things, rights to life, liberty, and the ownership of justly acquired property (Locke, 1689 ; Nozick, 1974 ). We are not mere instruments, who can be manipulated or coerced in this way or that for the sake of the greater good, even when the stakes are high (Markovits, 2014 ). On the contrary, we are autonomous beings, who possess a (natural) right to act as we please and to own and responsibly use any artifact, such as a car, so long as we don’t violate the rights of anybody else. In general, our rights “trump” considerations of the greater good. Footnote 8 They place firm limits on the powers of any third party — such as the state — to coercively interfere with the conduct of any individual for the sake of the collective good. A concrete example should bolster this thought. Let’s suppose, for example, that the world would be, in aggregate, a far better place if (Amazon founder) Jeff Bezos had 99.9% of his property confiscated and redistributed against his will. Clearly, no third party — such as the government — would be warranted in seizing 99.9% of the assets of Jeff Bezos and redistributing all his property simply because it would make the world overall much better. Why? Because Jeff Bezos has a right to own (justly acquired) property. Footnote 9 And these rights are sufficiently normatively weighty to eclipse even enormous gains in the aggregate good. They can only be outweighed — very plausibly in practice, but perhaps even in principle — by competing rights and duties. In this way then, reflection on the nature of rights gives us good reason, I think, to endorse the Harm Principle.

Secondly, if the Harm Principle is rejected, and coercive legislation is held to be justifiable purely on the grounds that it promotes the greater good, then paternalistic legislation — such as the aforementioned legislation criminalizing (attempted) suicide — could be justified. So, for example, people who have survived a suicide attempt could be permissibly prosecuted and imprisoned for a period of time for their own safety (granting, of course, that such a policy would actually promote the good). However, the permissibility of such an enterprise is in conflict with our moral intuitions. Such paternalistic legislation does not strike us as being just or justifiable, even if it does promote the good. This consideration also gives us good reason, I believe, to endorse the Harm Principle. Given all this, it follows that if the proponent of mandating self-driving cars on public roads can only justify her position by denying the Harm Principle, then this is going to be a very weighty cost of her view.

6 Proves Too Much?

The last objection to my dialectic, which I will consider here, is that my argument from the Harm Principle against the permissibility of a self-driving vehicle mandate proves too much.

I have argued that such legislation is morally impermissible on the grounds that choosing to drive yourself, over using self-driving technology, violates the rights of no third-parties (including their right not to be subject to significant unnecessary risks) and that the state is not warranted in coercively regulating any activity that violates the rights of no third-parties. One might worry that if legislation mandating use of self-driving vehicles is morally impermissible on these grounds, then actual legislation mandating speed-limits and use of indicator lights, or outlawing driving under the influence of drugs or alcohol, should likewise be impermissible. After all, if the unnecessary risks imposed on third-parties by my decision to drive myself, rather than rely on self-driving technology, fail to violate said third-parties’ right not to be subject to significant unnecessary risks, then can we be confident that the unnecessary risks imposed on third-parties by my decision to speed, indicate only with my hands, or drive under the influence, do violate this right? As we saw before, legislation prohibiting these activities on the road is intuitively justified through appeal to the thought that they impose a sufficiently high unnecessary risk on third-parties that they violate our right not to be subject to unnecessary risks. And it seems obvious that the state is justified in mandating a speed-limit and use of indicator lights and in prohibiting driving under the influence.

However, this worry is unfounded. There is a clear asymmetry in the degree of unnecessary risk imposed on others by choosing to drive yourself, over using a self-driving vehicle, on the one hand, and the degree of unnecessary risk imposed on others by speeding, indicating with one’s hands, and driving under the influence, on the other. In essence, the extra unnecessary risk imposed by speeding relative to driving under the speed-limit, indicating with one’s hands relative to using indicator lights, and drunk driving relative to driving whilst sober, is, I think, significantly larger than the extra unnecessary risk imposed when you choose to drive yourself rather than use a self-driving vehicle. This should be fairly obvious. For example, most people simply cannot drive at 150 miles per hour (or whatever) without posing a very high risk to others (and themselves). If I drive at this speed on public roads, then I am — in my judgment — violating other people’s right not to be subject to unnecessary risks, since the odds of my causing a serious road-traffic accident increase very significantly relative to my driving at (say) 25 miles per hour. Speed-limits are set the way they are because legislators have judged that this is a speed that (nearly) all licensed drivers can safely drive at. Footnote 10 There is judged to be an acceptable level of risk and the speed-limit is set to be at the limit of this acceptable level.

Likewise, there would surely be far more serious accidents on roads if people sometimes indicated with lights but also sometimes with their hands, or with small flags, whenever they felt like it. And it should go without saying that driving under the influence significantly increases the risks of a road-traffic accident. Data supporting this conclusion comes from the fact that, in 2016, 10,497 people in the USA died in alcohol-impaired driving crashes — a figure that accounts for 28% of all traffic-related deaths in the USA (CDC, 2016 ). In sum, there is no good reason to think that the argument from the Harm Principle against the permissibility of legislation mandating use of self-driving vehicles on public roads proves too much by also ruling out legislation regulating speeding, use of indicator lights, and driving under the influence.
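
As a quick consistency check on the CDC (2016) figures cited above, the 28% share and the 10,497 alcohol-impaired deaths jointly imply a total of roughly 37,500 US traffic deaths in 2016:

```python
# Consistency check on the CDC (2016) figures cited above.
alcohol_impaired_deaths = 10_497
share_of_all_traffic_deaths = 0.28  # 28% of all traffic-related deaths

implied_total = alcohol_impaired_deaths / share_of_all_traffic_deaths
print(round(implied_total))  # prints 37489
```

That the two cited figures cohere in this way lends some confidence that they are being reported consistently.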

7 Conclusion

I have developed an argument — premised upon Mill’s Harm Principle — that any legislation mandating the use of self-driving vehicles on public roads is morally impermissible. The Harm Principle, under its most plausible interpretation, has it that the state is warranted in legislating against some activity only if that activity violates the rights of others. I argued that a human driver, who opts to drive herself on public roads rather than rely on self-driving technology, does not violate anyone’s rights when she so acts. Consequently, when granting the Harm Principle, it follows that the state is not warranted in mandating the use of self-driving vehicles on public roads. If I am correct, then the proponent of the self-driving car mandate must reject the Harm Principle. Given its intuitive plausibility and central place in liberal philosophical thought, this is a weighty cost of such a view.

Data Availability

Not applicable.

Suicide was a crime in England and Wales until 1961 (Neeleman, 1996). As of 2021, suicide is still treated as a crime in at least 20 countries worldwide (The Guardian, 2021).

Of course, others — such as Norris Turner ( 2014 ) — criticize and dissent from this reading.

I will return to the question of what reason we have to accept the Harm Principle in Sect. “The Harm Principle Again” below, where I provide further reason to believe it.

In Sect. “A Right Not to be Subject to Unnecessary Risks?” of this paper, I consider — and reject — the claim that someone acting in such a way violates the right of third-parties not to be subject to (significant) unnecessary risks. So, my argument for premise (2) is really only fully made by the end of Sect. “A Right Not to be Subject to Unnecessary Risks?”. In effect, most of the rest of this paper should be understood as providing reasons to accept the above outlined argument that the state is not warranted in mandating self-driving vehicles on public roads.

Sparrow & Howard (2017) make a related point thus: “…there is clear evidence in the history of the evolution of the law that the public resents the imposition of risk that is not reasonably understood to be necessary to securing the goods provided by motor travel (namely, transport and convenience). In particular, the robust public support for laws that prohibit driving whilst under the influence of drugs or alcohol testify to the fact that people don’t like it when drivers place them at an elevated risk of death or injury…” (Sparrow & Howard, 2017).

And, if it is claimed that my excuse is the pleasure that I gain from going for a drive, then let’s suppose that I am not motivated by the pleasure that I expect to experience, but rather by some overwhelming impulse that compels me to drive round and round the neighborhoods of my home city — a feature of my OCD that I have chosen, for whatever reason, not to have treated. Here I have no excuse or justification for my action of driving around. But, intuitively, I’m not doing anything morally wrong by so acting.

Furthermore, for any arbitrary year, approximately 30% of road-traffic accident deaths are alcohol related (CDC, 2016). This means that the probability of anyone killing anyone else whilst driving sober (a necessary condition on responsible driving) is significantly lower than the cited statistics would suggest.

In this paper, I am assuming that the contribution that respecting or violating a right makes to determining the permissibility of an action does not reduce to the contribution that respecting or violating that right makes to the overall goodness, or value, of that action. I take this to be the view of rights that accords most closely with commonsense (Nozick, 1974).

Some will no doubt hold that much of Jeff Bezos’s wealth is unjustly acquired in virtue of Amazon’s exploitative employment practices, etc. However, I suspect that few — beyond the most extreme anti-capitalists — will hold that Jeff Bezos has a right to only 0.1% of his wealth.

Other factors, such as fuel efficiency, also factor in. But I will bracket such considerations here.

Anderson, J. M., Nidhi, K., Stanley, K. D., Sorensen, P., Samaras, C., & Oluwatola, O. A. (2016). Autonomous vehicle technology: A guide for policymakers . Rand Corporation.

Brink, D. (1992). Mill’s deliberative utilitarianism. Philosophy and Public Affairs, 21 , 67–103.

Centers for Disease Control and Prevention. (2016). https://www.cdc.gov/transportationsafety/impaired_driving/impaired-drv_factsheet.html

Charlesworth, M. (1993). Bioethics in a liberal society. Cambridge University Press.

Donner, W. (2009). Autonomy, tradition, and the enforcement of morality. In C. L. Ten (Ed.), Mill’s On Liberty . Cambridge: Cambridge University Press.

Dorf, M. (2016). Should self-driving cars be mandatory? Verdict. https://verdict.justia.com/2016/10/05/self-driving-cars-mandatory

Feinberg, J. (1984). Offense to others . Oxford University Press.

Fuchs, A. (2006). Mill’s theory of morally correct action. In H. R. West (Ed.), The Blackwell guide to Mill’s ‘Utilitarianism’ . Malden, MA: Blackwell.

Garza, A. P. (2011). Look Ma, no hands: Wrinkles and wrecks in the age of autonomous vehicles. New England Law Review, 46 , 581–616.

Hart, H. L. A. (1963). Law, liberty, and morality . Oxford University Press.

Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science and Engineering Ethics, 21 (3), 619–630.

Holtug, N. (2002). The Harm Principle. Ethical Theory and Moral Practice, 5 (4), 357–389.

Jacobson, D. (2000). Mill on liberty, speech, and the free society. Philosophy and Public Affairs, 29 , 276–309.

Locke, J. (1689/1988). Two treatises of government . Ed. P. Laslett. Cambridge University Press.

Markovits, J. (2014). Moral reason . Oxford University Press.

Mill, J. S. (1859/1999). On liberty . Broadview Press.

Norris Turner, P. (2014). ‘Harm’ and Mill’s Harm Principle. Ethics, 124 (2), 299–326.

Nozick, R. (1974). Anarchy, state, and utopia . Basic Books.

Rawls, J. (2007). Lectures on the history of political philosophy . Harvard University Press.

Raz, J. (1986). The morality of freedom . Oxford University Press.

Shladover, S. (2016). The truth about “self-driving” cars. Scientific American, 314 (6), 52–57.

Sparrow, R., & Howard, M. (2017). When human beings are like drunk robots: Driverless vehicles, ethics, and the future of transport. Transportation Research Part C: Emerging Technologies, 80 , 206–215.

Thomson, J. J. (1990). The realm of rights . Harvard University Press.

United States Department of Transportation. (2016). 2014 motor vehicle crashes: Overview. https://crashstates.nhtsa.dot.gov

World Health Organization. (2018). Global Status Report on Road Safety 2018. World Health Organization, Switzerland. https://www.who.int/publications/i/item/9789241565684

Open Access funding provided by the IReL Consortium

Author information

Authors and Affiliations

Trinity College Dublin, Dublin, Ireland

William Ratoff

Contributions

The author (WR) conceived and wrote all of the manuscript.

Corresponding author

Correspondence to William Ratoff .

Ethics declarations

Competing Interests

The author declares no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Ratoff, W. Self-driving Cars and the Right to Drive. Philos. Technol. 35 , 57 (2022). https://doi.org/10.1007/s13347-022-00551-1

Received : 15 July 2021

Accepted : 11 June 2022

Published : 23 June 2022

  • Self-driving cars
  • Right to drive

Loyola University, Center for Digital Ethics & Policy

Self-Driving Car Ethics

October 10, 2018

Earlier this spring 49-year-old Elaine Herzberg was walking her bike across the street in Tempe, Ariz., when she was  hit and killed  by a car traveling at over 40 miles an hour.

There was something unusual about this tragedy: The car that hit Herzberg was driving on its own. It was an autonomous car being tested by Uber.

It’s not the only car crash connected to autonomous vehicles (AVs) as of late. In May, a Tesla on “autopilot” mode   accelerated briefly  before hitting the back of a fire truck, injuring two people.

The accidents unearthed debates that have long been simmering around the ethics of self-driving cars. Is this technology really safer than human drivers? How do we keep people safe while this technology is being developed and tested? In the event of a crash, who is responsible: the developers who create faulty software, the human in the driver’s seat who fails to recognize the system failure, or one of the hundreds of other hands that touched the technology along the way?

The need for driving innovation is clear: Motor vehicle deaths topped 40,000 in 2017 according to the National Safety Council. A recent study by RAND Corporation estimates that putting AVs on the road once the technology is just 10 percent better than human drivers could save thousands of lives. Industry leaders continue to push ahead with development of AVs: Over $80 billion has been invested so far in AV technology, the Brookings Institution estimated. Top automotive, rideshare and technology companies including Uber, Lyft, Tesla, and GM have self-driving car projects in the works. GM has plans to release a vehicle that does not need a human driver--and won’t even have pedals or a steering wheel--by 2019.

But as the above crashes indicate, there are questions to be answered before the potential of this technology is fully realized.

Ethics in the programming process

Accidents involving self-driving cars are usually due to sensor error or software error, explains Srikanth Saripalli, associate professor in mechanical engineering at Texas A&M University, in   The Conversation . The first issue is a technical one: Light Detection and Ranging (LIDAR) sensors won’t detect obstacles in fog, cameras need the right light, and radars aren’t always accurate. Sensor technology continues to develop, but there is still significant work needed for self-driving cars to drive safely in icy, snowy and other adverse conditions. When sensors aren’t accurate, it can cause errors in the system that likely wouldn’t trip up human drivers. In the case of Uber’s accident, the sensors identified Herzberg (who was walking her bike) as a pedestrian, a vehicle and finally a bike “with varying expectations of future travel path,” according to a National Transportation Safety Board (NTSB) preliminary   report  on the incident. The confusion caused a deadly delay--it was only 1.3 seconds before impact that the software indicated that emergency brakes were needed.

Self-driving cars are programmed to be rule-followers, explained Saripalli, but the realities of the road are usually a bit more blurred. In a 2017 accident in Tempe, Ariz., for example, a human-driven car attempted to turn left through three lanes of traffic and   collided  with a self-driving Uber. While there isn’t anything inherently unsafe about proceeding through a green light, a human driver might have expected there to be left-turning vehicles and slowed down before the intersection, Saripalli pointed out. “Before autonomous vehicles can really hit the road, they need to be programmed with instructions about how to behave when other vehicles do something out of the ordinary,” he writes.

However, in both the Uber accident that killed Herzberg and the Tesla collision mentioned above, there was a person behind the wheel of the car who wasn’t monitoring the road until it was too late. Even though both companies require that drivers keep their hands on the wheel and eyes on the road in case of a system error, this is a reminder that humans are prone to mistakes, accidents and distractions--even when testing self-driving cars. Can we trust humans to be reliable backup drivers when something goes wrong?

Further, can we trust that companies will be thoughtful--and ethical--about the expectations for backup drivers in the race for miles? Backup drivers who worked for Uber told CityLab that they worked eight- to ten-hour shifts with a 30-minute lunch and were often pressured to forgo breaks. Staying alert and focused for that amount of time is already challenging. With the false security of self-driving technology, it can be tempting to take a quick mental break while on the road. “Uber is essentially asking this operator to do what a robot would do. A robot can run loops and not get fatigued. But humans don’t do that,” an operator told CityLab.

The limits of the trolley scenario

Despite the questions that these accidents raise about the development process, the ethics conversation up to this point has largely been focused on the moment of impact. Consider the “ trolley problem ,” a hypothetical ethical brain teaser frequently brought up in the debate over self-driving cars. If an AV is faced with an inevitable fatal crash, whose life should it save? Should it prioritize the lives of the pedestrian? The passenger? Saving the most lives? Saving the lives of the young or elderly?

Ethical questions abound in every engineering and design decision, engineering researchers Tobias Holstein, Gordana Dodig-Crnkovic and Patrizio Pelliccione argue in their recent paper,   Ethical and Social Aspects of Self-Driving Cars , ranging from software security (can the car be hacked?) to privacy (what happens to the data collected by the car sensors?) to quality assurance (how often does a car like this need maintenance checks?). Furthermore, the researchers note that some ethics are directly at odds with the private industry’s financial incentives: Should a car manufacturer be allowed to sell cheaper cars outfitted with cheaper sensors? Could a customer choose to pay more for a feature that lets them influence the decision-making of the vehicle in fatal situations? How transparent should the technology be, and how will that be balanced with intellectual property that is vital to a competitive advantage?

The future impact of this technology hinges on these complex and bureaucratic “mundane ethics,” points out Johannes Himmelreich, interdisciplinary ethics fellow at Stanford University in   The Conversation . We need to recognize that big moral quandaries don’t just happen five seconds before the point of impact, he writes. Programmers could choose to optimize acceleration and braking to reduce emissions or improve traffic flow. But even these decisions pose big questions for the future of society: Will we prioritize safety or mobility? Efficiency or environmental concerns?

Ethics and responsibility

Lawmakers have already begun making these decisions. State governments and municipalities have scrambled to play host to the first self-driving car tests, in hopes of attracting lucrative tech companies, jobs and an innovation-friendly reputation. Arizona governor Doug Ducey has been one of the most vocal proponents,   welcoming Uber  when the company was kicked out of San Francisco for testing without a permit.

Currently there is a   patchwork  of   laws and executive orders  at the state level that regulate self-driving cars. Varying laws make testing and the eventual widespread roll-out more complicated and, as it is, it is likely that self-driving cars will need a   completely unique set  of safety regulations. Outside of the US, there has been more concrete discussion. Last summer Germany adopted   the world’s first ethical guidelines  for driverless cars. The rules state that human lives must take priority over damage to property and in the case of unavoidable human accident, a decision cannot be made based on “age, gender, physical or mental constitution,” among other stipulations.

There has also been discussion as to whether consumers should have the ultimate choice over AV ethics. Last fall, researchers at the European University Institute suggested the implementation of an “ethical knob,” as they call it, in which the consumer would set the software’s ethical decision-making to altruistic (preference for third parties), impartial (equal importance to all parties) or egoistic (preference for all passengers in the vehicle) in the case of an unavoidable accident. While their approach certainly still poses problems (a road in which every vehicle prioritizes the safety of its own passengers could create more risk), it does reflect public opinion. In a series of surveys, researchers found that people believe in utilitarian ethics when it comes to self-driving cars--AVs should minimize casualties in the case of an unavoidable accident--but wouldn’t be keen on riding in a car that would potentially value the lives of multiple others over their own.
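The "ethical knob" proposal can be made concrete with a small sketch. The three settings below follow the researchers' description, but the class names, weights, and function are invented for illustration and do not come from any real AV software.

```python
from enum import Enum

class EthicalSetting(Enum):
    """The three knob positions described in the proposal."""
    ALTRUISTIC = "preference for third parties"
    IMPARTIAL = "equal importance to all parties"
    EGOISTIC = "preference for the vehicle's passengers"

def passenger_weight(setting: EthicalSetting) -> float:
    """Hypothetical relative weight given to the vehicle's own passengers
    when outcomes are scored in an unavoidable-accident scenario.
    The numbers are placeholders, not values from any real system."""
    return {
        EthicalSetting.ALTRUISTIC: 0.25,
        EthicalSetting.IMPARTIAL: 0.50,
        EthicalSetting.EGOISTIC: 0.75,
    }[setting]

# An impartial setting weighs passengers and third parties equally.
print(passenger_weight(EthicalSetting.IMPARTIAL))  # 0.5
```

Even this toy version surfaces the problem noted above: a fleet in which every owner turns the knob to the egoistic setting raises risk for everyone outside the vehicle.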

This dilemma sums up the ethical challenges ahead as self-driving technology is tested, developed and increasingly driving next to us on the roads. The public wants safety for the most people possible, but not if it means sacrificing one’s own safety or the safety of loved ones. If people will put their lives in the hands of sensors and software, thoughtful ethical decisions will need to be made to ensure a death like Herzberg’s isn’t inevitable on the journey to safer roads.

Karis Hustad  is a Denmark-based freelance journalist covering technology, business, gender, politics and Northern Europe. She previously reported for  The Christian Science Monitor  and  Chicago Inno . Follow her on Twitter  @karishustad  and see more of her work at  karishustad.com .


Self Driving Cars - Free Essay Examples And Topic Ideas

Self-driving cars, a burgeoning field within automotive technology and artificial intelligence, promise to revolutionize transportation. Essays might explore the technological advancements enabling autonomous vehicles, the potential benefits regarding safety, efficiency, and environmental impact. Moreover, discussions might extend to the ethical, legal, and societal challenges posed by self-driving cars, such as data privacy, liability in case of accidents, and the potential displacement of jobs. We have collected a large number of free essay examples about self-driving cars, which you can find on the PapersOwl website. You can use our samples for inspiration to write your own essay, research paper, or just to explore a new topic for yourself.

Revolution in Technology – Self Driving Cars

Humans are distinguishable from all other life on Earth due to their remarkable intelligence and need to advance and revolutionize the world around them. Our ancestors have worked tirelessly to renovate and make the world that we are so familiar with today. As more and more time passes, the technological advancements that people are achieving are happening more rapidly and more groundbreaking than ever before. The true meaning of the word automobile, is a car that drives itself. Intelligent minds […]

Self Driving Cars – Waymo

A self-driving car can be defined as "a vehicle that can guide itself without human conduction" (Techopedia, n.d.). Various companies are building these vehicles, with Waymo at the forefront of the industry. Waymo is a company owned by Alphabet Inc., located in Phoenix, Arizona. They started as Google's self-driving car project, then became an independent company. Their most recent vehicles are completely driverless, no longer equipped with steering wheels or pedals. These autonomous cars are a major technological advance that […]

The Economic Impact of Self-Driving Cars

The automobile industry is drastically changing over the years. Due to advancements in technology, driverless cars are in the near future. A self-driving car is a motor vehicle that is capable of automated driving and navigating entirely without direct human input. Autonomous cars are able to use cameras, sensors, GPS location, and computer systems to operate accurately and efficiently. Driverless cars have quickly become the most discussed new technologies that will be arriving within the next few years. Once these […]

Self-driving Cars are Safer, Better and won’t Get Tired

As the thirst for new technology thrives, the market for the tech doubles in volume each year. A larger market needs a larger supply, and the tech giants are delivering on their promise. The arguably most socially disruptive tech company is Tesla, Inc., formerly Tesla Motors. Founded in 2003, by 2008 they released their first all-electric car, which kicked off the whole phenomenon; Tesla is now regarded as the most relevant tech giant. Self-driving cars are safer, better and won't […]

Self-Driving Cars Were Invented by a Man Named William Bertelsen

He was reported on by a popular science magazine for making a self-driving car in August. William Bertelsen was born in Moline, Illinois on May 20, 1920. He died in July 2009 at Rock Island, Illinois. William R. Bertelsen was an American inventor who pioneered in the field of air-cushion vehicles, and inventor of the Aeromobile, which was credited as the first hovercraft to carry a human over land and over water. In 1959, William Bertelsen became the unlikely […]

How Safe are Self-driving Cars?

Self-driving cars are harmful to society because they will decrease safety and cause confusion. There are many disadvantages that these autonomous vehicles hold, including the price. Most importantly, there could be unavoidable accidents caused by the lack of a brain in these driverless cars. Autonomous driving is no longer a futuristic dream, it is becoming a reality. Self-driving cars are automobiles which require little to no human involvement. Car companies, such as Tesla, have been equipping their vehicles with […]

A Great Impact of Technology on Cars

Technology has a great impact on our lives and it took over the world. It has quickly developed and changed people's lives. As new generations develop, technology grows. Some believe that technology has had a good impact on our lives. Others like to believe that technology brings a lot of negative effects to our personal and social life every day. We now depend on technology, with more and more things in life getting automated. We begin to use less of […]

Research on Self-driving Cars

I am doing my research on self-driving cars. A self-driving car is basically just a car that drives itself. It is a car or truck in which human drivers are never required to take control to safely operate the vehicle. Also known as autonomous or "driverless" cars, they combine sensors and software to control, navigate, and drive the vehicle. You might ask, well, who developed or invented this car? Well, German engineers led by scientist Ernst Dickmanns developed it. Decades before […]

Are Self-Driving Cars Good for the People, the Environment, and the Future?

Cars have changed history and transportation. But, will self-driving cars change the future also? Self-driving cars may have more of a chance to get lost; however, they can download the most updated maps. Technology can bring big changes into someone's everyday life. Now, the driverless car is also set to bring big changes. It too could become part of everyday life in the near future. Self-driving cars can change everything. "Imagine that instead of going in your car, […]

Self-Driving Cars Case Study

In March 2018, one of Uber's self-driving cars struck and killed an Arizona pedestrian. Uber reacted to the March incident by suspending the trial phase of their self-driving cars. Uber now plans to monitor drivers to make sure they are paying attention, have two human drivers who can manually operate the self-driving car, and have the car’s automatic braking system active at all times. Uber and self-driving cars rightfully received a lot of criticism after the fatal accident back in […]

Electric Cars Vs Gas Cars for Today’s Market

Abstract Today’s car market is vastly expanding, and with all the options, it is hard to figure out which car or truck to choose. This report compares the differences between electric and gasoline-powered cars. Using the cost, market, and practicality of each gasoline and electric motor, conclusions can be made on which power source makes more sense to invest in for today’s market. The cost to own and drive an electric car is about one-third the cost to drive a […]


How To Write an Essay About Self Driving Cars

Understanding Self-Driving Cars

Before writing an essay about self-driving cars, it's essential to comprehend what they are and the technology behind them. Self-driving cars, also known as autonomous vehicles, are cars or trucks in which human drivers are never required to take control to safely operate the vehicle. These vehicles use a combination of sensors, cameras, radar, and artificial intelligence to navigate and drive. Begin your essay by explaining the technology that enables these cars to operate, including machine learning and sensor fusion. Discuss the different levels of vehicle automation, from partially automated to fully autonomous, and the key companies and players in the field.

Developing a Thesis Statement

A strong essay on self-driving cars should be anchored by a clear, focused thesis statement. This statement should present a specific viewpoint or argument about autonomous vehicles. For example, you might discuss the potential impact of self-driving cars on safety and traffic, analyze the ethical implications of autonomous driving decisions, or explore the challenges facing widespread adoption. Your thesis will guide the direction of your essay and provide a structured approach to your analysis.

Gathering Supporting Evidence

Support your thesis with relevant data, research, and examples. This might include studies on the safety of self-driving cars, surveys on public opinion regarding autonomous vehicles, or real-world data on traffic efficiency and environmental impact. Use this evidence to support your thesis and build a persuasive argument. Remember to consider different perspectives and address potential counterarguments to your thesis.

Analyzing the Impact of Self-Driving Cars

Dedicate a section of your essay to analyzing the potential impact of self-driving cars. Discuss various aspects, such as the implications for road safety, changes in transportation infrastructure, and effects on industries like insurance and logistics. Explore both the potential benefits and drawbacks, ensuring a balanced view. For instance, consider how autonomous vehicles could reduce accidents caused by human error but might also lead to challenges in cybersecurity and data privacy.

Concluding the Essay

Conclude your essay by summarizing your main points and restating your thesis in light of the evidence and discussion provided. Your conclusion should tie together your analysis and emphasize the significance of self-driving cars in shaping the future of transportation. You might also want to highlight areas where further research or development is needed, or the potential for societal changes driven by the adoption of autonomous vehicles.

Reviewing and Refining Your Essay

After completing your essay, review and edit it for clarity and coherence. Ensure that your arguments are well-structured and supported by evidence. Check for grammatical accuracy and ensure that your essay flows logically from one point to the next. Consider seeking feedback from peers or instructors to further improve your essay. A well-crafted essay on self-driving cars will not only demonstrate your understanding of the topic but also your ability to engage with complex technological and societal issues.


IEEE Spectrum

What Self-Driving Cars Tell Us About AI Risks

5 conclusions from an automation expert fresh off a stint with the U.S. highway safety agency

This self-driving Cruise robotaxi got stuck at a crossroads in San Francisco in 2019, inconveniencing pedestrians.

In 2016, just weeks before the Autopilot in his Tesla drove Joshua Brown to his death , I pleaded with the U.S. Senate Committee on Commerce, Science, and Transportation to regulate the use of artificial intelligence in vehicles. Neither my pleading nor Brown’s death could stir the government to action.

Since then, automotive AI in the United States has been linked to at least 25 confirmed deaths and to hundreds of injuries and instances of property damage.

The lack of technical comprehension across industry and government is appalling. People do not understand that the AI systems that run vehicles—both the cars that operate in actual self-driving modes and the much larger number of cars offering advanced driving assistance systems (ADAS)—are based on the same principles as ChatGPT and other large language models (LLMs). These systems control a car’s lateral and longitudinal position—to change lanes, brake, and accelerate—without waiting for orders to come from the person sitting behind the wheel.

Both kinds of AI use statistical reasoning to guess what the next word or phrase or steering input should be, heavily weighting the calculation with recently used words or actions. Go to your Google search window and type in “now is the time” and you will get the result “now is the time for all good men.” And when your car detects an object on the road ahead, even if it’s just a shadow, watch the car’s self-driving module suddenly brake.
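The statistical guessing described here can be illustrated with a toy bigram model: count which word followed each word in some training text, then always emit the most frequent continuation. This is a deliberately minimal sketch; production LLMs and driving stacks use learned neural networks, not lookup tables.

```python
from collections import Counter, defaultdict

# Toy training text, echoing the "now is the time" example above.
corpus = "now is the time for all good men to come to the aid".split()

# For each word, count which words followed it in training.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word: str):
    """Guess the next word: the most frequent continuation seen in training."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("now"))   # "is" -- the only continuation seen after "now"
print(guess_next("good"))  # "men"
```

A steering or braking module built on the same principle emits whatever action co-occurred most often with inputs resembling the current sensor reading, which is why a shadow that resembles training images of obstacles can trigger a brake command.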

Neither the AI in LLMs nor the one in autonomous cars can “understand” the situation, the context, or any unobserved factors that a person would consider in a similar situation. The difference is that while a language model may give you nonsense, a self-driving car can kill you.

In late 2021, despite receiving threats to my physical safety for daring to speak truth about the dangers of AI in vehicles, I agreed to work with the U.S. National Highway Traffic Safety Administration (NHTSA) as the senior safety advisor. What qualified me for the job was a doctorate focused on the design of joint human-automated systems and 20 years of designing and testing unmanned systems, including some that are now used in the military, mining, and medicine.

My time at NHTSA gave me a ringside view of how real-world applications of transportation AI are or are not working. It also showed me the intrinsic problems of regulation, especially in our current divisive political landscape. My deep dive has helped me to formulate five practical insights. I believe they can serve as a guide to industry and to the agencies that regulate them.

1. Human errors in operation get replaced by human errors in coding

Proponents of autonomous vehicles routinely assert that the sooner we get rid of drivers, the safer we will all be on roads. They cite the NHTSA statistic that 94 percent of accidents are caused by human drivers. But this statistic is taken out of context and inaccurate. As the NHTSA itself noted in that report, the driver’s error was “the last event in the crash causal chain…. It is not intended to be interpreted as the cause of the crash.” In other words, there were many other possible causes as well, such as poor lighting and bad road design.

Moreover, the claim that autonomous cars will be safer than those driven by humans ignores what anyone who has ever worked in software development knows all too well: that software code is incredibly error-prone, and the problem only grows as the systems become more complex.

While a language model may give you nonsense, a self-driving car can kill you.

Consider these recent crashes in which faulty software was to blame: the October 2021 crash of a Pony.ai driverless car into a sign, the April 2022 crash of a TuSimple tractor trailer into a concrete barrier, the June 2022 crash of a Cruise robotaxi that suddenly stopped while making a left turn, and the March 2023 crash of another Cruise car that rear-ended a bus.

These and many other episodes make clear that AI has not ended the role of human error in road accidents. That role has merely shifted from the end of a chain of events to the beginning: to the coding of the AI itself. Because such errors are latent, they are far harder to mitigate. Testing, in simulation but predominantly in the real world, is the key to reducing the chance of such errors, especially in safety-critical systems. However, without sufficient government regulation and clear industry standards, autonomous-vehicle companies will cut corners to get their products to market quickly.

2. AI failure modes are hard to predict

A large language model guesses which words and phrases are coming next by consulting an archive assembled during training from preexisting data. A self-driving module interprets the scene and decides how to get around obstacles by making similar guesses, based on a database of labeled images—this is a car, this is a pedestrian, this is a tree—also provided during training. But not every possibility can be modeled, and so the myriad failure modes are extremely hard to predict. All things being equal, a self-driving car can behave very differently on the same stretch of road at different times of the day, possibly due to varying sun angles. And anyone who has experimented with an LLM and changed just the order of words in a prompt will immediately see a difference in the system’s replies.
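This sensitivity can be seen even in a toy model. The sketch below is purely illustrative (the classifier, feature values, and class names are invented, not any vendor's perception stack): a nearest-centroid classifier flips its label when one input feature shifts slightly, much as a change in sun angle can change what a perception module reports.

```python
# Illustrative toy only: a nearest-centroid "classifier" with two
# invented classes. Inputs near the decision boundary flip labels under
# tiny perturbations, which is one reason failure modes are hard to
# predict from training data alone.

def classify(features, centroids):
    """Return the label of the closest centroid (squared distance)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(features, centroids[label]))

# Hypothetical training centroids for two object classes.
centroids = {"pedestrian": (0.40, 0.60), "sign": (0.60, 0.40)}

# A borderline scene: a small change in one feature (say, apparent
# brightness under a different sun angle) flips the classification.
scene_morning = (0.49, 0.51)
scene_noon = (0.51, 0.49)

print(classify(scene_morning, centroids))  # pedestrian
print(classify(scene_noon, centroids))     # sign
```

Real perception stacks are vastly more complex, but the underlying issue is the same: near a decision boundary, small input changes can produce categorically different outputs, and the training data gives little warning about where those boundaries lie.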

One failure mode not previously anticipated is phantom braking. For no obvious reason, a self-driving car will suddenly brake hard, perhaps causing a rear-end collision with the vehicle just behind it and other vehicles further back. Phantom braking has been seen in the self-driving cars of many different manufacturers and in ADAS-equipped cars as well.

Ross Gerber, behind the wheel, and Dan O’Dowd, riding shotgun, watch as a Tesla Model S, running Full Self-Driving software, blows past a stop sign. Photo: The Dawn Project

The cause of such events is still a mystery. Experts initially attributed it to human drivers following the self-driving car too closely (often citing the misleading 94 percent statistic about driver error). However, an increasing number of these crashes have been reported to NHTSA. In May 2022, for instance, the agency sent a letter to Tesla noting that it had received 758 complaints about phantom braking in Model 3 and Y cars. This past May, the German publication Handelsblatt reported on 1,500 complaints of braking issues with Tesla vehicles, as well as 2,400 complaints of sudden acceleration. It now appears that self-driving cars experience roughly twice the rate of rear-end collisions as cars driven by people.

Clearly, AI is not performing as it should. Moreover, this is not just one company’s problem: every carmaker that leverages computer vision and AI is susceptible to it.

As other kinds of AI begin to infiltrate society, it is imperative for standards bodies and regulators to understand that AI failure modes will not follow a predictable path. They should also be wary of the car companies’ propensity to excuse away bad tech behavior and to blame humans for abuse or misuse of the AI.

3. Probabilistic estimates do not approximate judgment under uncertainty

Ten years ago, there was significant hand-wringing over the rise of IBM’s AI-based Watson, a precursor to today’s LLMs. People feared AI would very soon cause massive job losses, especially in the medical field. Meanwhile, some AI experts said we should stop training radiologists.

These fears didn’t materialize. While Watson could be good at making guesses, it had no real knowledge, especially when it came to making judgments under uncertainty and deciding on an action based on imperfect information. Today’s LLMs are no different: the underlying models simply cannot cope with missing information, nor can they assess whether their estimates are good enough to act on.

These problems are routinely seen in the self-driving world. The June 2022 accident involving a Cruise robotaxi happened when the car decided to make an aggressive left turn between two cars. As the car safety expert Michael Woon detailed in a report on the accident, the car correctly chose a feasible path, but halfway through the turn it slammed on its brakes and stopped in the middle of the intersection. It had guessed that an oncoming car in the right lane was going to turn, even though a turn was not physically possible at the speed that car was traveling. The uncertainty confused the Cruise vehicle, and it made the worst possible decision. The oncoming car, a Prius, was not turning, and it plowed into the Cruise, injuring passengers in both cars.

Cruise vehicles have also had many problematic interactions with first responders, who by default operate in areas of significant uncertainty. These encounters have included Cruise cars traveling through active firefighting and rescue scenes and driving over downed power lines. In one incident, a firefighter had to knock out a window of a Cruise car to remove it from the scene. Waymo, Cruise’s main rival in the robotaxi business, has experienced similar problems.

These incidents show that even though neural networks may classify a lot of images and propose a set of actions that work in common settings, they nonetheless struggle to perform even basic operations when the world does not match their training data. The same will be true for LLMs and other forms of generative AI. What these systems lack is judgment in the face of uncertainty, a key precursor to real knowledge.
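One standard engineering response is to gate actions on the model's own confidence and fall back to a conservative behavior. The sketch below is a hypothetical pattern, not Cruise's actual logic; the action names and threshold are invented.

```python
# Hedged sketch of a confidence gate: commit to a maneuver only when the
# model's estimated probability clears a threshold; otherwise fall back
# to a conservative action. Generic pattern, not any company's system.

SAFE_FALLBACK = "yield_and_wait"

def choose_action(candidate_action, estimated_prob, threshold=0.9):
    """Pick the candidate only if the model's own estimate is confident."""
    if estimated_prob >= threshold:
        return candidate_action
    return SAFE_FALLBACK

# A Cruise-style left turn: middling confidence that the oncoming car
# will turn should mean waiting, not committing halfway.
print(choose_action("complete_left_turn", estimated_prob=0.55))  # yield_and_wait
print(choose_action("complete_left_turn", estimated_prob=0.97))  # complete_left_turn
```

The catch, and the point of this section, is that such a gate is only as good as the model's confidence estimates: an out-of-distribution scene can yield a high estimate that is simply wrong, so the fallback never triggers.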

4. Maintaining AI is just as important as creating AI

Because neural networks can only be effective if they are trained on significant amounts of relevant data, the quality of the data is paramount. But such training is not a one-and-done scenario: Models cannot be trained and then sent off to perform well forever after. In dynamic settings like driving, models must be constantly updated to reflect new types of cars, bikes, and scooters, construction zones, traffic patterns, and so on.

In the March 2023 accident, in which a Cruise car hit the back of an articulated bus, experts were surprised, as many believed such accidents were nearly impossible for a system that carries lidar, radar, and computer vision. Cruise attributed the accident to a faulty model that had guessed where the back of the bus would be based on the dimensions of a normal bus; additionally, the model rejected the lidar data that correctly detected the bus.

Software code is incredibly error-prone, and the problem only grows as the systems become more complex.

This example highlights the importance of keeping AI models current. “Model drift” is a known problem in AI: it occurs when the relationships between input and output data change over time. For example, if a self-driving car fleet operates in one city with one kind of bus, and the fleet then moves to a city with different bus types, the underlying bus-detection model will likely drift, which could have serious consequences.

Such drift affects AI working not only in transportation but in any field where new results continually change our understanding of the world. This means a large language model can’t learn a new phenomenon until the phenomenon has lost its novelty and appears often enough to be incorporated into the training data. Maintaining model currency is just one of many ways in which AI requires periodic maintenance, and any discussion of future AI regulation must address this critical aspect.
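A minimal sketch of drift monitoring, assuming a single scalar feature and invented numbers, might look like this: compare the running mean of a deployed feature against the training mean and flag when it leaves the expected band. Production systems use far richer statistical tests; this shows only the monitoring idea.

```python
# Hedged sketch of a drift check: flag drift when the live mean of a
# feature (e.g., detected vehicle length) leaves the training mean
# plus-or-minus k standard deviations. Numbers below are invented.

from statistics import mean, stdev

def drifted(train_values, live_values, k=3.0):
    """Flag drift when the live mean leaves the training mean +/- k*sigma band."""
    mu, sigma = mean(train_values), stdev(train_values)
    return abs(mean(live_values) - mu) > k * sigma

# Training city: buses around 12 m long. New city: articulated buses ~18 m.
train_lengths = [11.8, 12.1, 12.0, 11.9, 12.2, 12.0]
live_lengths = [17.9, 18.2, 18.1, 18.0]

print(drifted(train_lengths, live_lengths))  # True
```

A check like this would not prevent a bad guess about a bus's rear, but it could flag that the deployed model is seeing vehicles its training data never covered, prompting retraining before an accident rather than after.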

5. AI has system-level implications that can’t be ignored

Self-driving cars have been designed to stop cold the moment they can no longer reason and no longer resolve uncertainty. This is an important safety feature. But as Cruise, Tesla, and Waymo have demonstrated, managing such stops poses an unexpected challenge.

A stopped car can block roads and intersections, sometimes for hours, throttling traffic and keeping out first-response vehicles. Companies have instituted remote-monitoring centers and rapid-action teams to mitigate such congestion and confusion, but at least in San Francisco, where hundreds of self-driving cars are on the road, city officials have questioned the quality of their responses.

Self-driving cars rely on wireless connectivity to maintain their road awareness, but what happens when that connectivity drops? One driver found out the hard way when his car became trapped in a knot of 20 Cruise vehicles that had lost their connection to the remote-operations center and caused a massive traffic jam.

Of course, any new technology can be expected to suffer growing pains, but if those pains become serious enough, they will erode public trust and support. Sentiment toward self-driving cars in tech-friendly San Francisco used to be optimistic, but it has taken a negative turn because of the sheer volume of problems the city is experiencing. That sentiment could harden into outright rejection if, say, a stopped autonomous vehicle prevented someone from reaching a hospital in time and caused a death.

So what does the experience of self-driving cars say about regulating AI more generally? Companies not only need to ensure they understand the broader systems-level implications of AI, they also need oversight—they should not be left to police themselves. Regulatory agencies must work to define reasonable operating boundaries for systems that use AI and issue permits and regulations accordingly. When the use of AI presents clear safety risks, agencies should not defer to industry for solutions and should be proactive in setting limits.

AI still has a long way to go in cars and trucks. I’m not calling for a ban on autonomous vehicles. There are clear advantages to using AI, and it would be irresponsible to call for a ban, or even a pause, on its development. But we need more government oversight to prevent unnecessary risks from being taken.

Yet the regulation of AI in vehicles isn’t happening. That can be blamed in part on industry overclaims and pressure, but also on a lack of capability among regulators. The European Union has been more proactive about regulating artificial intelligence in general and self-driving cars in particular. In the United States, we simply do not have enough people in federal and state departments of transportation who understand the technology deeply enough to advocate effectively for balanced public policies and regulations. The same is true for other types of AI.

This is not any one administration’s problem. Not only does AI cut across party lines, it cuts across all agencies and at all levels of government. The Department of Defense, Department of Homeland Security, and other government bodies all suffer from a workforce that does not have the technical competence needed to effectively oversee advanced technologies, especially rapidly evolving AI.

To engage in effective discussion about the regulation of AI, everyone at the table needs to have technical competence in AI. Right now, these discussions are greatly influenced by industry (which has a clear conflict of interest) or Chicken Littles who claim machines have achieved the ability to outsmart humans. Until government agencies have people with the skills to understand the critical strengths and weaknesses of AI, conversations about regulation will see very little meaningful progress.

Recruiting such people is not hard: improve pay and bonus structures, embed government personnel in university labs, reward professors for serving in government, provide advanced certificate and degree programs in AI for all levels of government personnel, and offer scholarships for undergraduates who agree to serve in government for a few years after graduation. Moreover, to better educate the public, college classes that teach AI topics should be free.

We need less hysteria and more education so that people can understand the promises but also the realities of AI.


Mary (Missy) L. Cummings, a senior member of IEEE, is a professor in the Department of Electrical and Computer Engineering and the Department of Computer Science, Duke Institute for Brain Sciences (DIBS), Duke University. As a specialist in systems automation and the way that people use it, she recently served as a safety consultant for the National Highway Traffic Safety Administration.

SHAIK ATIF

Despite the fact that autonomous transport largely solves the problem of a driver’s inattention on the road, we, alas, cannot yet speak of its complete safety. Below are the most common AI risks for self-driving cars and the automotive industry as a whole.

  • Secure AI Awareness
  • Secure AI Assessment
  • Secure AI Assurance

Michael C

One possible early solution is to implement diverse, redundant control systems, in which all the systems are compared for the quality of their resolution, or a second system constantly evaluates the performance of the first, providing a safety net of options when the primary algorithm is deadlocked or its conclusion unresolved.

Not a perfect answer, but an order of magnitude better than depending on a single actor to make every decision.
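The redundancy scheme described in this comment can be sketched as a simple majority voter with a safe fallback. This is an illustrative toy with invented action names, assuming discrete candidate actions from each controller, not a production arbitration design.

```python
# Sketch of diverse redundancy: run independent controllers, act on
# their majority vote, and fall back to a safe stop when no majority
# exists (the "deadlocked" case the comment describes).

from collections import Counter

def arbitrate(decisions, fallback="controlled_stop"):
    """Majority vote across redundant controllers; fallback on deadlock."""
    counts = Counter(decisions)
    action, votes = counts.most_common(1)[0]
    if votes > len(decisions) // 2:
        return action
    return fallback

print(arbitrate(["brake", "brake", "steer_left"]))  # brake
print(arbitrate(["brake", "steer_left", "coast"]))  # controlled_stop
```

The hard part in practice is ensuring the controllers fail independently; if they share training data or sensor inputs, they can all make the same mistake and the vote provides false assurance.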

Vaibhav Sunder

To understand commercial technology, the legacy matters. Recall that Microsoft Windows, the GUI-based operating system, crashed during its own introduction video. This is nearly exactly like that.

Because the code is so huge, and because the front end seems so intuitive, we falter at the need for post-launch pruning and hurry. There were jokes in Urdu about a man losing millions because his mobile phone got no network in Pakistan, around the time the Nokia 6600 was introducing Southeast Asia to digital media on phone memory cards. On Indian highways, people can now be seen tailing trucks. Don’t Google it. :)


Self-Driving Car Ethics

Earlier this spring, 49-year-old Elaine Herzberg was walking her bike across the street in Tempe, Ariz., when she was hit and killed by a car traveling at more than 40 miles an hour.

There was something unusual about this tragedy: The car that hit Herzberg was driving on its own. It was an autonomous car being tested by Uber.

It’s not the only car crash connected to autonomous vehicles (AVs) as of late. In May, a Tesla on “autopilot” mode accelerated briefly before hitting the back of a fire truck, injuring two people.

The accidents surfaced debates that have long been simmering around the ethics of self-driving cars. Is this technology really safer than human drivers? How do we keep people safe while the technology is being developed and tested? In the event of a crash, who is responsible: the developers who wrote faulty software, the human in the driver’s seat who failed to recognize the system failure, or one of the hundreds of other hands that touched the technology along the way?

The need for driving innovation is clear: Motor vehicle deaths topped 40,000 in 2017, according to the National Safety Council. A recent study by the RAND Corporation estimates that putting AVs on the road once the technology is just 10 percent better than human drivers could save thousands of lives. Industry leaders continue to push ahead with development of AVs: over $80 billion has been invested in AV technology so far, the Brookings Institution estimates. Top automotive, rideshare and technology companies including Uber, Lyft, Tesla and GM have self-driving car projects in the works. GM plans to release a vehicle that does not need a human driver--and won’t even have pedals or a steering wheel--by 2019.

But as the above crashes indicate, there are questions to be answered before the potential of this technology is fully realized.

Ethics in the programming process

Accidents involving self-driving cars are usually due to sensor error or software error, explains Srikanth Saripalli, associate professor in mechanical engineering at Texas A&M University, in The Conversation. The first issue is a technical one: Light Detection and Ranging (LIDAR) sensors won’t detect obstacles in fog, cameras need the right light, and radars aren’t always accurate. Sensor technology continues to develop, but significant work remains before self-driving cars can drive safely in icy, snowy and other adverse conditions. When sensors aren’t accurate, they can cause errors in the system that likely wouldn’t trip up human drivers. In the case of Uber’s accident, the sensors identified Herzberg (who was walking her bike) as a pedestrian, a vehicle and finally a bike, “with varying expectations of future travel path,” according to a National Transportation Safety Board (NTSB) preliminary report on the incident. The confusion caused a deadly delay--it was only 1.3 seconds before impact that the software indicated emergency braking was needed.
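A back-of-the-envelope calculation shows why that 1.3-second warning left no room to stop. The figures below are idealized assumptions (constant 0.7 g of hard braking, zero reaction delay), not values from the NTSB report.

```python
# Rough physics check: at ~40 mph, the distance covered during a
# 1.3-second warning window is about equal to the full braking distance
# at an assumed 0.7 g, so a collision was effectively unavoidable even
# with instantaneous braking.

def stopping_distance_m(speed_mps, decel_mps2):
    """Distance to brake from speed to rest at constant deceleration."""
    return speed_mps ** 2 / (2 * decel_mps2)

speed = 40 * 0.44704          # 40 mph in m/s (~17.9 m/s)
decel = 0.7 * 9.81            # assumed hard-braking deceleration
available = speed * 1.3       # distance covered in the 1.3 s window

print(round(available, 1))                          # 23.2 (m to impact)
print(round(stopping_distance_m(speed, decel), 1))  # 23.3 (m needed to stop)
```

Under these assumptions the car needed roughly as much road to stop as remained at the moment of the alert, which is why earlier, more stable classification, not harder braking, was the missing safety margin.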

Self-driving cars are programmed to be rule-followers, explained Saripalli, but the realities of the road are usually a bit more blurred. In a 2017 accident in Tempe, Ariz., for example, a human-driven car attempted to turn left through three lanes of traffic and collided with a self-driving Uber. While there isn’t anything inherently unsafe about proceeding through a green light, a human driver might have expected there to be left-turning vehicles and slowed down before the intersection, Saripalli pointed out. “Before autonomous vehicles can really hit the road, they need to be programmed with instructions about how to behave when other vehicles do something out of the ordinary,” he writes.

However, in both the Uber accident that killed Herzberg and the Tesla collision mentioned above, there was a person behind the wheel of the car who wasn’t monitoring the road until it was too late. Even though both companies require that drivers keep their hands on the wheel and eyes on the road in case of a system error, this is a reminder that humans are prone to mistakes, accidents and distractions--even when testing self-driving cars. Can we trust humans to be reliable backup drivers when something goes wrong?

Further, can we trust that companies will be thoughtful--and ethical--about the expectations for backup drivers in the race for miles? Backup drivers who worked for Uber told CityLab that they worked eight- to ten-hour shifts with a 30-minute lunch and were often pressured to forgo breaks. Staying alert and focused for that long is already challenging. With the false security of self-driving technology, it can be tempting to take a quick mental break while on the road. “Uber is essentially asking this operator to do what a robot would do. A robot can run loops and not get fatigued. But humans don’t do that,” an operator told CityLab.

The limits of the trolley scenario

Despite the questions that these accidents raise about the development process, the ethics conversation up to this point has largely focused on the moment of impact. Consider the “trolley problem,” a hypothetical ethical brain teaser frequently brought up in the debate over self-driving cars. If an AV is faced with an inevitable fatal crash, whose life should it save? Should it prioritize the pedestrian? The passenger? Saving the most lives? Saving the young over the elderly?

Ethical questions abound in every engineering and design decision, engineering researchers Tobias Holstein, Gordana Dodig-Crnkovic and Patrizio Pelliccione argue in their recent paper, Ethical and Social Aspects of Self-Driving Cars, ranging from software security (can the car be hacked?) to privacy (what happens to the data collected by the car’s sensors?) to quality assurance (how often does a car like this need maintenance checks?). Furthermore, the researchers note that some ethical concerns are directly at odds with private industry’s financial incentives: Should a car manufacturer be allowed to sell cheaper cars outfitted with cheaper sensors? Could a customer pay more for a feature that lets them influence the vehicle’s decision-making in fatal situations? How transparent should the technology be, and how will that be balanced with the intellectual property that is vital to a competitive advantage?

The future impact of this technology hinges on these complex and bureaucratic “mundane ethics,” points out Johannes Himmelreich, interdisciplinary ethics fellow at Stanford University, in The Conversation. We need to recognize that big moral quandaries don’t happen only five seconds before the point of impact, he writes. Programmers could choose to optimize acceleration and braking to reduce emissions or improve traffic flow. But even these decisions pose big questions for the future of society: Will we prioritize safety or mobility? Efficiency or environmental concerns?

Ethics and responsibility

Lawmakers have already begun making these decisions. State governments and municipalities have scrambled to play host to the first self-driving car tests, in hopes of attracting lucrative tech companies, jobs and an innovation-friendly reputation. Arizona governor Doug Ducey has been one of the most vocal proponents, welcoming Uber when the company was kicked out of San Francisco for testing without a permit.

Currently, a patchwork of state-level laws and executive orders regulates self-driving cars. The varying laws complicate testing and an eventual widespread rollout, and self-driving cars will likely need an entirely distinct set of safety regulations. Outside the U.S., there has been more concrete discussion. Last summer Germany adopted the world’s first ethical guidelines for driverless cars. The rules state that human lives must take priority over damage to property, and that in the case of an unavoidable accident involving people, a decision cannot be made based on “age, gender, physical or mental constitution,” among other stipulations.

There has also been discussion as to whether consumers should have the ultimate choice over AV ethics. Last fall, researchers at the European University Institute suggested implementing what they call an “ethical knob,” with which the consumer would set the software’s ethical decision-making to altruistic (preference for third parties), impartial (equal importance to all parties) or egoistic (preference for the vehicle’s passengers) in the case of an unavoidable accident. While their approach still poses problems (a road on which every vehicle prioritizes the safety of its own passengers could create more risk), it does reflect public opinion. In a series of surveys, researchers found that people believe in utilitarian ethics when it comes to self-driving cars--AVs should minimize casualties in an unavoidable accident--but wouldn’t be keen on riding in a car that would potentially value the lives of multiple others over their own.
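The knob proposal is easy to render as configuration. The sketch below is a toy with invented weights, not the researchers' actual formulation: each setting simply reweights whose harm counts for how much when scoring unavoidable outcomes.

```python
# Toy sketch of the "ethical knob" idea (illustrative only): the owner
# picks a setting that weights passenger versus third-party harm when
# scoring candidate outcomes in an unavoidable crash.

WEIGHTS = {
    "altruistic": (0.25, 1.0),  # (passenger weight, third-party weight)
    "impartial":  (1.0, 1.0),
    "egoistic":   (1.0, 0.25),
}

def outcome_cost(setting, passenger_harm, third_party_harm):
    """Lower cost = preferred outcome under the chosen setting."""
    wp, wt = WEIGHTS[setting]
    return wp * passenger_harm + wt * third_party_harm

# The same outcome scores differently under different settings.
print(outcome_cost("egoistic", passenger_harm=1, third_party_harm=2))    # 1.5
print(outcome_cost("altruistic", passenger_harm=1, third_party_harm=2))  # 2.25
```

Even this toy makes the survey finding concrete: an egoistic setting discounts harm to others, which is exactly the behavior people endorse for their own car but condemn in everyone else's.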

This dilemma sums up the ethical challenges ahead as self-driving technology is tested, developed and increasingly drives next to us on the roads. The public wants safety for as many people as possible, but not if it means sacrificing one’s own safety or the safety of loved ones. If people are to put their lives in the hands of sensors and software, thoughtful ethical decisions will need to be made to ensure that a death like Herzberg’s isn’t inevitable on the journey to safer roads.

Karis Hustad is a Denmark-based freelance journalist covering technology, business, gender, politics and Northern Europe. She previously reported for The Christian Science Monitor and Chicago Inno. Follow her on Twitter @karishustad and see more of her work at karishustad.com.


Copyright © Center for Digital Ethics & Policy 2010-2017.



Exploring the Ethics Behind Self-Driving Cars

How do you code ethics into autonomous automobiles? And who is responsible when things go awry?

August 13, 2015

Illustration by Abigail Goh

Imagine a runaway trolley barreling down on five people standing on the tracks up ahead. You can pull a lever to divert the trolley onto a different set of tracks where only one person is standing. Is the moral choice to do nothing and let the five people die? Or should you hit the switch and therefore actively participate in a different person’s death?

In the real world, the “trolley problem” first posed by philosopher Philippa Foot in 1967 is an abstraction most won’t ever have to actually face. And yet, as driverless cars roll into our lives, policymakers and auto manufacturers are edging into similar ethical dilemmas.

“One of the questions that comes up in class discussions is whether, as a driver, you should be able to program a degree of selfishness, making the car save the driver and passengers rather than people outside the car.” --Ken Shotts

For instance, how do you program a code of ethics into an automobile that performs split-second calculations that could harm one human over another? Who is legally responsible for the inevitable driverless-car accidents — car owners, carmakers, or programmers? Under what circumstances is a self-driving car allowed to break the law? What regulatory framework needs to be applied to what could be the first broad-scale social interaction between humans and intelligent machines?

Ken Shotts and Neil Malhotra, professors of political economy at Stanford GSB, along with Sheila Melvin, mull the philosophical and psychological issues at play in a new case study titled “‘The Nut Behind the Wheel’ to ‘Moral Machines’: A Brief History of Auto Safety.” Shotts discusses some of the issues here:


What are the ethical issues we need to be thinking about in light of driverless cars?

This is a great example of the “trolley problem.” You have a situation where the car might have to make a decision to sacrifice the driver to save some other people, or sacrifice one pedestrian to save some other pedestrians. And there are more subtle versions of it. Say there are two motorcyclists, one is wearing a helmet and the other isn’t. If I want to minimize deaths, I should hit the one wearing the helmet, but that just doesn’t feel right.

These are all hypothetical situations that you have to code into what the car is going to do. You have to cover all these situations, and so you are making the ethical choice up front.

It’s an interesting philosophical question to think about. It may turn out that we’ll be fairly consequentialist about these things. If we can save five lives by taking one, we generally think that’s something that should be done in the abstract. But it is something that is hard for automakers to talk about because they have to use very precise language for liability reasons when they talk about lives saved or deaths.


What are the implications of having to make those ethical choices in advance?

Right now, we make those instinctive decisions as humans based on our psychology. And we make those decisions erroneously some of the time. We make mistakes, we mishandle the wheel. But we make gut decisions that might be less selfish than what we would do if we were programming our own car. One of the questions that comes up in class discussions is whether, as a driver, you should be able to program a degree of selfishness, making the car save the driver and passengers rather than people outside the car. Frankly, my answer would be very different if I were programming it for driving alone versus having my 7-year-old daughter in the car. If I have her in the car, I would be very, very selfish in my programming.

Who needs to be taking the lead on parsing these ethical questions — policymakers, the automotive industry, philosophers?

The reality is that a lot of it will be what the industry chooses to do. But then policymakers are going to have to step in at some point. And at some point, there are going to be liability questions.

There are also questions about breaking the law. The folks at the Center for Automotive Research at Stanford have pointed out that there are times when normal drivers do all sorts of illegal things that make us safer. You’re merging onto the highway and you go the speed of traffic, which is faster than the speed limit. Someone goes into your lane and you briefly swerve into an oncoming lane. In an autonomous vehicle, is the “driver” legally culpable for those things? Is the automaker legally culpable for it? How do you handle all of that? That’s going to need to be worked out. And I don’t know how it is going to be worked out, frankly. Just that it needs to be.


Are there any lessons to be learned from the history of auto safety that could help guide us?

Sometimes eliminating people’s choices is beneficial. When seatbelts were not required equipment, automakers did not supply them; when wearing them was not mandatory, people did not wear them. On a cost-benefit analysis, seatbelts are incredibly cost-effective at saving lives, as is stability control. There are real benefits to mandating things like that so that people don’t have the choice not to buy them.

The liability system can also induce companies to include automated safety features. But that actually raises an interesting issue, which is that in the liability system, sins of commission are punished more severely than sins of omission. If you put in airbags and the airbag hurts someone, that’s a huge liability issue. Failing to put in the airbag and someone dies? Not as big of an issue. Similarly, suppose that with a self-driving car, a company installs safety features that are automated. They save a lot of lives, but some of the time they result in some deaths. That safety feature is going to get hit in the liability system, I would think.

What sort of regulatory thickets are driverless cars headed into?

When people talk about self-driving cars, a lot of the attention falls on the Google car driving itself completely. But this really is just a progression of automation, bit by bit by bit. Stability control and anti-lock brakes are self-driving–type features, and we’re just getting more and more of them. Google gets a lot of attention in Silicon Valley, but the traditional automakers are putting this into practice.

So you could imagine different platforms and standards around all this. For example, should this be a series of incremental moves, or should it be a big jump all the way to a Google-style self-driving car? Different regulatory regimes would favor one of those approaches over the other. I’m not sure whether it’s the right policy, but an incremental approach could be a good one. It also would be really good from the perspective of the auto manufacturers, and less good from the perspective of Google. And it could potentially be to a company’s advantage to influence the direction the standards take in a way that favors its technology. This is something that companies moving into this area have to think about strategically, in addition to thinking about the ethical questions.


What other big ethical questions do you see coming down the road?

At some point, do individuals get banned from having the right to drive? It sounds really far-fetched now. Being able to hit the road and drive freely is a very American thing to do. It feels weird to take away something that feels central to a lot of people’s identity.

But there are precedents for it. The one that Neil Malhotra, one of my coauthors on this case, pointed out is building houses. This used to be something we all did for ourselves with no government oversight 150 years ago. It’s a very immediate thing — your dwelling, your castle. But if you try to build a house in most of the United States nowadays, there are all sorts of rules for how you have to do the wiring, how wide this has to be, how thick that has to be. Every little detail is very, very tightly regulated. Basically, you can’t do it yourself unless you follow all those rules. We’ve taken that out of individuals’ hands because we judged there were beneficial consequences of doing so. That may well happen for cars.


Graphics sources: newyorkologist.org; oldcarbrocheres.com; National Museum of American History; Academy of Achievement; iStock/hxdbzxy; Reuters/Stephen Lam.


Related: “‘The Nut Behind the Wheel’ to ‘Moral Machines’: A Brief History of Auto Safety,” by Neil Malhotra, Ken Shotts, and Sheila Melvin.



Ethical Dilemmas Surrounding Self-Driving Cars Case Study


Introduction

Countries are embracing technology that is changing how organizations operate, particularly in dynamic and competitive environments. Through innovation, companies can raise their productivity, which allows them to sell higher-quality products at lower prices. Customers’ purchasing power has improved markedly, supporting economic growth in many regions.

In the recent past, the idea of self-driving cars was welcomed with much excitement, largely because such innovation promised to enhance safety by eliminating human error, the main cause of road accidents (Sam et al., 2016). However, during testing of Tesla’s system, a driver lost his life when the car failed to detect a truck crossing its path. In the Uber crash, a prototype vehicle struck and killed a woman crossing the road; the video taken showed that the safety driver in the car was shocked and could do nothing to save her (Lin, 2015). Based on these and other cases, the innovation now faces numerous dilemmas, forcing its proponents to pause and think through possible solutions to the underlying issues.

Empathetic drivers would instinctively swerve to avoid hitting a pedestrian crossing the road. In other scenarios, a motorist might choose to hit a cyclist wearing a helmet rather than one without, because less harm would result. For driverless cars, however, the question of who should take responsibility for accidents is still debated: it is not yet clear whether programmers, automakers, or policymakers should be blamed. Once autonomous cars are configured with coded programs, they make choices based on those instructions regardless of the circumstances. These vehicles rely on lidar and other sensors, such as cameras and radar, to detect hazards on roads (Yun et al., 2016). Currently, automakers’ systems are not standardized, as different choices are made depending on an organization’s core values and mission. The main concern is whether a self-driving car can make choices as good as or better than a human’s.

The Massachusetts Institute of Technology (MIT) has developed the Moral Machine, a platform that explores the choices autonomous cars might make on the road. Moral decisions are subject to bias, since people in a society hold different views (Awad, 2017). What a scientist prefers may appear immoral to a religious leader, and vice versa. Such differences are among the dilemmas challenging this innovation. For instance, when I played the Moral Machine game, the result suggested that a driverless car should hit pedestrians crossing the road wrongfully and save the passengers, rather than plunge into an oncoming truck. This outcome is clearly against my preferred ethical lens, the relationship lens. Drivers must adhere to ethical principles to ensure safety and good relationships; with an autonomous car this is a challenge, since robotic technology has no feelings or attitudes.

Under the relationship lens, individuals use reasoning, or rationality, to make decisions that result in fairness, justice, and equality in the community. When faced with a controversial situation, critical thinking is applied to seek the truth and to ensure the common good is achieved, as opposed to autonomy. People with this focus value their relationships with others in the community and strive to attain fairness and justice (Graham, 2018). The powerless and vulnerable are equal to everyone else, whatever the circumstances. Thus, when a decision must be made to hit pedestrians who are crossing wrongfully, to save the passengers in the car, or to strive to save everyone, the relationship lens would emphasize the last option.

Every life matters, which means both groups of stakeholders, the passengers and the pedestrians, rank equally. The results lens, by contrast, focuses on outcomes and encourages individuals to be mindful of their self-interest, or autonomy (West et al., 2016). Instinct, or sensibility, only aids in determining what is good for the individual; the common good is achieved by following one’s intuition and putting personal interests first. As such, this lens ranks passengers above pedestrians, since the driver or autonomous car should prioritize saving those in the car. Pedestrians are liable for breaking the law and should therefore bear the consequences of their wrong decision.

The relationship lens gives strong preference to rationality and equality in order to achieve justice and fairness in a community when faced with a dispute. The powerless or vulnerable are treated equally with everyone else, and ideal actions and behaviors are based on truth and impartiality. Policymakers establish unbiased systems for resolving disputes, and care is accorded to everybody regardless of status in society. Those found guilty of harming others are held to account, and rational reasoning is taken as the best perspective, as opposed to pettiness (Roorda & Gullickson, 2019). Autonomous cars should be programmed to apply rationality and equality to ensure the safety of all people. Fully implementing this lens would emphasize empathy, wisdom, and justice, enabling fair judgments that avoid adverse effects on either pedestrians or passengers. Safety was the basis for building automated cars; giving more focus to the relationship lens will help improve this innovation, as safety will be maximized, reducing accidents and deaths.

Conversely, applying the results lens in this scenario will ultimately lead to an unethical decision, because instincts and emotions influence behaviors and actions. To uphold self-interest, an autonomous car would hit pedestrians to save the lives of its passengers. Since everyone has free will to choose how to behave and act to achieve personal goals and the greater good, those in the car would seek their own safety before the pedestrians’ (Lin, 2015). The general belief is that people will make ethical decisions and take responsibility for their own actions. Consequently, since the pedestrians are in the wrong, the decision to hit them could be defended in court, and the autonomous car would not be held responsible for damages or injuries. This lens yields an unethical decision and may result in many accidents in the future. Humans have feelings and should use them to make fair and just decisions; killing pedestrians on the basis that they are in the wrong is immoral. Therefore, this lens should not be applied when establishing rules and regulations for automated cars.

In conclusion, to resolve the ethical dilemmas raised by introducing self-driving cars on the road, regulations should uphold the values of justice and equality. Every life matters, and no one counts for less than anyone else. Research is therefore needed by everyone involved, including policymakers, automakers, programmers, and philosophers, to create automated vehicles that offer safer rides than traditional drivers. Accidents unfold in a matter of seconds, which makes it hard for drivers to make suitable decisions. Since driverless cars can detect hazards on the road earlier, the decisions their programs make should protect everybody using the road. The technology currently uses lidar and other sensors to detect danger; the system should be enhanced further to notice oncoming vehicles far enough away for emergency braking to be applied in time. The cars should also be programmed to notify drivers to respond immediately in an emergency, especially in high-traffic areas. This would ensure that the common good, as emphasized by the relationship lens, is optimized.
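The point about detecting hazards "far enough" for emergency braking can be made quantitative with the standard stopping-distance formula, d = v·t_react + v²/(2a). The deceleration and reaction-time figures below are illustrative assumptions, not specifications of any real vehicle:

```python
def stopping_distance(speed_mps: float, decel_mps2: float = 8.0,
                      reaction_s: float = 0.1) -> float:
    """Distance covered during the system's reaction time plus the
    braking distance v^2 / (2a). Both defaults are assumptions:
    ~8 m/s^2 is hard braking on dry pavement, and 0.1 s an assumed
    computer reaction time."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

# At highway speed (30 m/s, about 108 km/h) the sensors must flag a
# hazard roughly 59 m ahead for an emergency stop to succeed.
print(f"{stopping_distance(30.0):.0f} m")  # → 59 m
```

A human driver adds roughly 1 to 1.5 seconds of reaction time, which at the same speed adds 30 to 45 m, one concrete sense in which earlier hazard detection translates into lives saved.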

Since ethical conflicts are best resolved by engaging the opposing parties, using an ethical lens inventory posed some challenges, as arriving at fair and legitimate decisions proved difficult. I struggled to analyze the dilemma because, in doing so, I had to choose one course as right while negating another; each choice felt right and wrong at the same time. The desire to reach a decision sometimes pushed me to overlook the facts, values, and opinions of other parties, making the whole process extremely difficult. However, choosing the most important values was easy, since no answer was right or wrong. Upon completion of the inventory, the evaluation tool automatically generates a printout describing the preferred lens.

Awad, E. (2017). Moral machines: Perception of moral judgment made by machines (Doctoral dissertation, Massachusetts Institute of Technology). Web.

Graham, P. (2018). Ethics in critical discourse analysis. Critical Discourse Studies, 15(2), 186–203. Web.

Lin, P. (2015). The ethical dilemma of self-driving cars. TED. Web.

Roorda, M., & Gullickson, A. M. (2019). Developing evaluation criteria using an ethical lens. Evaluation Journal of Australasia, 19(4), 179–194. Web.

Sam, D., Velanganni, C., & Evangelin, T. E. (2016). A vehicle control system using a time synchronized Hybrid VANET to reduce road accidents caused by human error. Vehicular Communications, 6, 17–28. Web.

West, D., Huijser, H., & Heath, D. (2016). Putting an ethical lens on learning analytics. Educational Technology Research and Development, 64(5), 903–922. Web.

Yun, J. J., Won, D., Jeong, E., Park, K., Yang, J., & Park, J. (2016). The relationship between technology, business model, and market in autonomous car and intelligent robot industries. Technological Forecasting and Social Change, 103, 142–155. Web.


IvyPanda. (2022, July 31). Ethical Dilemmas Surrounding Self-Driving Cars. https://ivypanda.com/essays/ethical-dilemmas-surrounding-self-driving-cars/



  8. Self-Driving Vehicles—an Ethical Overview

    The introduction of self-driving vehicles gives rise to a large number of ethical issues that go beyond the common, extremely narrow, focus on improbable dilemma-like scenarios. This article provides a broad overview of realistic ethical issues related to self-driving vehicles. Some of the major topics covered are as follows: Strong opinions for and against driverless cars may give rise to ...

  9. Essay: Self-driving cars

    General Motors went a step further and created a series of cars called "Firebirds", that were supposed to be self-driven cars that would be on the market by 1975. This became a popular topic in the media and led to many interested journalists and reporters to be allowed to test drive these cars.

  10. Long-term Impacts and Challenges for Self-driving Cars

    In conclusion, the future with self-driving cars presents a mix of opportunities and challenges that will shape our societies, economies, and environments in profound ways. By understanding and addressing the implications of widespread self-driving car adoption, we can harness the transformative potential of this technology while mitigating its ...

  11. Seven Arguments Against the Autonomous-Vehicle Utopia

    But in a duo of essays in 2017, ... The transportation reporter and self-driving car skeptic Christian Wolmar once asked a self-driving-car security specialist named Tim Mackey to lay out the problem.

  12. Conclusion

    Conclusion. As technology expands throughout the world, self-driving cars will become the future mode of transportation universally. The legal, ethical, and social implications of self-driving cars surround the ideas of liability, responsibility, and efficiency. Autonomous vehicles will benefit the economy through fuel efficiency, the ...

  13. Self-driving Cars and the Right to Drive

    Every year, 1.35 million people are killed on roads worldwide and even more people are injured. Emerging self-driving car technology promises to cut this statistic down to a fraction of the current rate. On the face of it, this consideration alone constitutes a strong reason to legally require — once self-driving car technology is widely available and affordable — that all vehicles on ...

  14. Self-Driving Car Ethics

    Self-driving cars are programmed to be rule-followers, explained Saripalli, but the realities of the road are usually a bit more blurred. In a 2017 accident in Tempe, Ariz., for example, a human-driven car attempted to turn left through three lanes of traffic and collided with a self-driving Uber. While there isn't anything inherently unsafe ...

  15. Self Driving Cars

    Words: 923 Pages: 3 7598. A self-driving car can be defined as ""a vehicle that can guide itself without human conduction"" (Techopedia, n.d.). Various companies are building these companies, with Waymo at the forefront of the industry. Waymo is a company owned by Alphabet Inc., located in Phoenix, Arizona.

  16. What Self-Driving Cars Tell Us About AI Risks

    AI has system-level implications that can't be ignored. Self-driving cars have been designed to stop cold the moment they can no longer reason and no longer resolve uncertainty. This is an ...

  17. Self-driving Cars: History, Advantages and Disadvantages

    Advantages of self-driving vehicles. The inclusion of assistive computer technology into vehicles, such as the use of GPS, cameras, stability control systems and assisted brakes, have been seen to improve the safety of passengers and the quality at which people drive [5]. Northern Australia has already adapted the use of a self-driving car ...

  18. Self-Driving Car Ethics

    Self-driving cars are programmed to be rule-followers, explained Saripalli, but the realities of the road are usually a bit more blurred. In a 2017 accident in Tempe, Ariz., for example, a human-driven car attempted to turn left through three lanes of traffic and collided with a self-driving Uber. While there isn't anything inherently unsafe ...

  19. Exploring the Ethics Behind Self-Driving Cars

    Stability control and anti-lock brakes are self-driving-type features, and we're just getting more and more of them. Google gets a lot of attention in Silicon Valley, but the traditional automakers are putting this into practice. So you could imagine different platforms and standards around all this.

  20. Essays on Self-driving Cars

    2 pages / 916 words. Introduction Self-driving cars have long been the subject of both fascination and skepticism, promising a future where vehicles navigate the roads autonomously, revolutionizing the way we travel. In 2023, the technology and industry surrounding self-driving cars have made significant strides towards mainstream adoption.

  21. Self Driving Cars Essay

    In conclusion self-driving cars are programmed enough to be driven. Self-driving cars the pose a potential threat U.S. roads should not be allowed to be driven. In the article, it claimed that if you were to get in an accident while driving your self-driving car you would be responsible. Wait a minute, if you weren't driving you technically ...

  22. Ethical Dilemmas Surrounding Self-Driving Cars Case Study

    Get a custom case study on Ethical Dilemmas Surrounding Self-Driving Cars. In the recent past, the idea of self-driving cars was welcomed with much excitement. This was major because such innovation would enhance safety by eliminating human errors, which are the main cause of accidents on roads (Sam et al., 2016).

  23. Self-Driving Cars Essay Examples

    Self-Driving Cars Essays. The Future of Supply Chain Management and Acquisition. ... Self-driving cars are gradually getting into the United States and other European Union countries. As a result, decision-makers like the government must decide how the law should be applied. When a self-driving vehicle causes harm, it is difficult to determine ...

  24. 13 Reasons Self-Driving Cars Are a Bad Idea (It's Not As Obvious As You

    The rise of self-driving cars may have a destructive effect on millions who depend on driving jobs such as cabs, buses, or trucks. Truck driving itself is the most standard source of income. If ...