
The Pros and Cons of Driverless Cars

In any discussion of road safety and keeping crash-related deaths down, you always come back to the human factor. More often than not, crashes are caused by people doing silly things, whether that's misjudging the speed for a corner in the wet, reading a text message and not noticing the car drifting, or getting behind the wheel when a bit tiddly. Is the answer, then, to eliminate the human factor altogether and adopt driverless cars, much as aircraft have adopted autopilot systems?

What Google’s driverless car looks like.

There are plenty of reasons why driverless cars (aka autonomous cars or self-driving cars) could be a good idea, and just as many why they might not be.

Arguments in favour of driverless cars include the following:

  • Robots and computer systems don’t get tired, drunk or distracted.
  • Computer systems can calculate the perfect speed to negotiate corners.
  • Autonomous cars automatically detect if they’re drifting out of a lane and correct it instantly (some cars do this already even if they’re driven by a real live human being).
  • In theory, computer systems don’t make mistakes, slip or get careless.

What we hoped driverless cars would look like.

In short, a driverless car eliminates the human factor. After all, the proverb "to err is human" has been around since before cars were invented. Computerised systems aren't subject to the limitations of being human and fallible.

However, a modern twist on the old proverb says that although to err may be human, to really mess things up, use a computer. This brings us neatly to the arguments against driverless cars:

  • All new software systems are prone to teething troubles, glitches and bugs when first released. This is mildly annoying on your office computer, but in a car it could be expensive at best and fatal at worst.
  • We all know that electronics seem to develop a mind of their own and do weird things we don't expect them to, at least unless we're super-geeks.
  • Artificial intelligence struggles with really busy situations. Busy car parks and places where pedestrians and cars share the road are particularly confusing for autonomous car systems. Just think of all the ways that people indicate "After you" in these situations – a wave of the hand(s) that can be big or small or in just about any direction, a quick jerk of the head, a smile, mouthing the words… Then you've got all those "You idiot!" gestures. A human recognises these instantly; computers often struggle.
  • Weather can affect the sensors, especially extreme weather such as snow or heavy rain where you really need to take care.
  • Autonomous systems need very detailed up-to-date maps so they “know” the right speed for corners and the best routes. This means continual updates are needed – hello, big data bills! And what happens when something’s changed unexpectedly on the road surface, such as oil spills, debris from a crash or gravel?
  • Computers can be hacked and jammed, sometimes remotely. Anybody seen Fast and Furious 8, where this happens? (Yes, I know it's fiction, but who hasn't had problems with viruses or unwanted remote access on a desktop computer? It's plausible!)
  • People may come to rely on automatic systems so much that they might not know how to react properly if the computer systems fail (and we all know that computers crash now and again).
  • Avoiding collisions with large animals on rural roads is harder than you'd think. Take the example of Volvo: their system worked fine on Swedish wildlife like elk and reindeer, but when they tried it out Down Under, it didn't recognise kangaroos as large animals to be avoided.
  • Autonomous systems probably can’t tell the difference between a dead hedgehog in the middle of the road (which you don’t mind hitting) and Mother Duck waiting for ducklings (which you want to stop for).
  • Taxi drivers and chauffeurs would be out of a job.

There are also a ton of ethical and moral issues involved with driverless cars. If a driverless car does crash and kill someone, who's responsible: the "driver", or the manufacturer of the computer systems and software? And how will a computer make decisions in the case of an unavoidable crash? For example, suppose the algorithm is set to minimise the harm caused and kill the fewest people, and it detects that it's about to hit a bus on a bridge. Will it decide that the "best" option is to go off the bridge, because that only kills the occupants of the driverless car rather than possibly everyone on the bus? (Just stop and imagine what that would be like for the car's occupants for a moment… and what if that bus is actually empty?)
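To see why this is so uncomfortable, the "minimise casualties" rule can be caricatured in a few lines of code. This is a purely hypothetical sketch – the function and scenario names are invented, and no real autonomous-vehicle system works anything like this simply – but it shows how a naive harm-minimising algorithm flips its decision based on a single input:

```python
# Hypothetical sketch of a naive "minimise casualties" crash algorithm.
# Purely illustrative: real systems are vastly more complex, and these
# names and numbers are invented for this example.

def choose_action(options):
    """Pick the action with the fewest expected deaths.

    `options` maps an action name to the number of people expected
    to be killed if that action is taken.
    """
    return min(options, key=options.get)

# The bridge dilemma: hitting the bus endangers all its passengers,
# while going off the bridge kills only the car's two occupants.
full_bus = {"hit the bus": 40, "go off the bridge": 2}
print(choose_action(full_bus))   # -> "go off the bridge"

# ...but if the bus turns out to be empty, the "best" choice flips,
# and the algorithm now sacrifices no one by hitting the bus.
empty_bus = {"hit the bus": 0, "go off the bridge": 2}
print(choose_action(empty_bus))  # -> "hit the bus"
```

The maths is trivial; the hard part is everything the numbers hide – how the car estimates occupancy, how it weighs its own passengers against strangers, and who answers for the outcome.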

What's more, horrible things like car bombings and jerks deliberately ramming crowds are bad enough, but at least the driver puts himself or herself at some risk. What's to stop a terrorist loading a driverless car with explosives and setting the vehicle to go all by itself?

On a lighter note, a lot of people simply enjoy driving. If we want a system that allows us to sit back and relax while we get to work that also cuts down on the need for parking spaces and reduces congestion, this already exists and it’s called “public transport” or at least “car pooling”. But that still includes the human factor…

At the moment, fully driverless cars in which the person in the front seat can more or less go to sleep or bury their head in the daily news aren't allowed on our roads. Even the most automated systems still require a driver who's alert and ready to take over if things get hairy, much like what happens in aircraft. But who knows which way things will go in the future?
