
The Top Seven Things Autonomous Cars Can’t Handle

 

My last post had some rather grim news to do with autonomous cars (aka driverless cars) not quite doing what they are supposed to do.  That was an example of things going badly wrong with the sensor systems that are supposed to make driverless cars so much safer and better than real live humans.  However, on a slightly lighter note, there are quite a few things that most of us drivers handle – sometimes daily – without much fuss, yet they make an autonomous car throw a full-on wobbly.

 

#1 Kangaroos

OK, so the design teams working on Volvo’s autonomous cars in Sweden had it all sorted for the sort of large animals that are likely to hang around on roads in Scandinavia.  The sensors can handle moose, elk and deer, detecting the beasties and stopping the car in time.  However, it’s a different story down here in Australia.  The system just can’t cope with kangaroos, which are the large animals we’re most likely to meet on country roads – they’re certainly the large animals involved in most animal-related crashes.  You see, the system doesn’t see an animal, recognise it, estimate the distance and take appropriate action the way a human does.  Instead, it uses the ground as a reference point to estimate the distance between the animal and the machine… and roos don’t stay on the ground when they’re on the move.  The sensors also have trouble recognising a kangaroo as a kangaroo, because from the perspective of a computer, a roo in motion and a roo resting quietly beside the road are completely different shapes and look like totally different things.  Then you’ve got the problem with roos that human drivers have to cope with as well: they can reach a top speed of 70 km/h and can seemingly explode out of nowhere right into your path.  If the roo has been behind a bush or something, then the sensors can’t see it and you can’t see it either, so you’d better have roo bars fitted.
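
To see why that ground-reference trick falls over, here’s a toy sketch of flat-ground distance estimation with a pinhole camera – the general idea described above, not Volvo’s actual code, and every number in it is invented for illustration:

```python
# Toy sketch of flat-ground distance estimation with a pinhole camera.
# Camera height, focal length and pixel values are invented for illustration.

CAMERA_HEIGHT_M = 1.4     # assumed sensor height above the road
FOCAL_LENGTH_PX = 1000.0  # assumed pinhole focal length, in pixels

def estimate_distance_m(pixels_below_horizon: float) -> float:
    """Estimate range by assuming the object's lowest point is on the road.

    For a point on flat ground that appears y pixels below the horizon,
    a pinhole camera gives distance = focal_length * camera_height / y.
    """
    return FOCAL_LENGTH_PX * CAMERA_HEIGHT_M / pixels_below_horizon

# A roo standing on the road 20 m away: its feet sit 70 px below the horizon.
print(estimate_distance_m(70.0))  # -> 20.0 m, spot on

# The same roo mid-hop, feet 0.7 m off the ground, still 20 m away: its lowest
# visible point now sits only 35 px below the horizon, so the flat-ground
# assumption projects it twice as far away as it really is.
print(estimate_distance_m(35.0))  # -> 40.0 m, dangerously wrong
```

A hopping roo breaks the one assumption the whole calculation rests on – that the animal is actually touching the ground.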

#2 Car Washes

Some people get a little bit phobic about those automated car washes, although others enjoy them.  There’s always that little moment when you see the big whirling brushes descend and you hope like mad that the sensors telling them when to stop aren’t going to fail, crushing the top of your vehicle, shattering your windscreen and thrashing you with hundreds of little rubber whips.  But what happens when an automatic car wash meets an autonomous car?

Well, an autonomous car can get into the car wash without any problems.  However, the vigorous action of the washer plus all the soapy foam don’t agree with the sensors, so getting out of the car wash and driving on may be another story.  You see, the sensors have to be clear of any grime or debris to work properly, and if there’s soap left on them, they can’t see.  And there is soap left on them afterwards.  At worst, the car wash knocks the sensors off or damages them, which makes for a very, very expensive fix.

You have to take your pick: is washing your car by hand every time worth the convenience of a car that drives itself?

#3 Bad Weather

Self-driving tech works nicely in fine, sunny weather.  However, put it in heavy rain, snow or ice and it throws a very, very big wobbly.  Humans know – or ought to know – that when it’s raining, you take it nice and slow around the corners, keep the speed down and watch out for pools of water that could get you aquaplaning.  Now, you’d think that because we have rain-sensing wipers, an autonomous car should be able to recognise that it’s raining and adjust itself accordingly.  Unfortunately, it can’t.  It probably can’t tell the difference between a light shower and a tropical monsoon.  Google hasn’t even put its self-driving cars through tests in heavy rain yet, but it already knows that snow is a big problem for autonomous cars, because they can’t see the road markings that help them stay in their lanes and get around corners.  As for ice, the cars have trouble detecting that as well.  Even though humans have trouble spotting black ice and frost on the road, we know that on a nippy day when you have to put on a nice woolly jersey, there’s likely to be a bit of ice on that corner where the trees cast a shadow on the road all day.
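
To put the lane-marking problem in concrete terms, here’s a minimal sketch of a confidence-gated lane-keeping step.  It assumes the vision system reports a confidence score for the markings it can see; the threshold and the messages are made up for the example, not any manufacturer’s actual logic:

```python
# Minimal sketch of confidence-gated lane keeping, assuming the vision stack
# reports how confidently it can see the painted markings (a number in 0-1).
# The threshold and return strings are invented for illustration.

def lane_keeping_step(marking_confidence: float, threshold: float = 0.6) -> str:
    """Steer from lane markings only while we can actually see them."""
    if marking_confidence < threshold:
        # Markings buried under snow or blurred by heavy rain: there is no
        # reference line to steer by, so hand control back to the human.
        return "disengage: driver, take the wheel"
    return "steer: centre the car between the detected markings"

print(lane_keeping_step(0.9))  # clear, dry road
print(lane_keeping_step(0.2))  # markings hidden under snow
```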

#4 Potholes

Apparently, the only holes in the road that a self-driving car can detect are the big ones made by your local road repair crew that have cones around them.  The little blips that are hard on your tyres and suspension aren’t picked up – they are below the surface of the road and they’re not on any of the mapping systems that these cars use.  So an autonomous car won’t dodge potholes.  Ouch.

#5 Newly Altered Road Layouts

Self-driving cars, especially the ones being worked on by Google, rely on really good maps to know (a) where in the world they are and (b) what the road is supposed to look like.  Don’t underestimate the latter bit – this is one way that driverless cars can pick out obstacles: some systems scan the area around them, compare this with an image of what the road and its surroundings usually look like (letterboxes, lamp posts, etc.) and react accordingly.  However, if they don’t have these detailed maps, then things get a bit fun.  If the local supermarket has decided to change the layout of the carpark, entrances and exits included – as happened recently in Arizona – a driverless car might still think that the best way to get out is via what is now a new set of stairs.  Self-drive vehicles also go to pieces in new subdivisions and places where massive road works and new road layouts are going on: drivers from Christchurch, New Zealand, report that your common or garden GPS throws a wobbly about all the new roads and other bits resulting from the post-earthquake reconstruction.
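
Here’s a minimal sketch of that scan-versus-map comparison, using a made-up labelled grid rather than the dense LIDAR data real systems use – none of these names come from any actual product:

```python
# Minimal sketch of the scan-versus-map comparison, using a made-up labelled
# occupancy grid. Real systems work on dense LIDAR point clouds; every name
# and cell here is invented for illustration.

# What the (possibly stale) stored map says is normally at each grid cell.
stored_map = {
    (0, 0): "road", (0, 1): "road",
    (1, 0): "letterbox", (1, 1): "road",
    (2, 0): "road", (2, 1): "lamp post",
}

# What the sensors actually see right now.
live_scan = {
    (0, 0): "road", (0, 1): "pedestrian",   # genuinely new: worth braking for
    (1, 0): "letterbox", (1, 1): "road",
    (2, 0): "stairs", (2, 1): "lamp post",  # carpark rebuilt: the map is stale
}

def find_obstacles(scan: dict, known_map: dict) -> dict:
    """Flag every cell where the live scan disagrees with the stored map."""
    return {cell: seen for cell, seen in scan.items()
            if known_map.get(cell) != seen}

print(find_obstacles(live_scan, stored_map))
# {(0, 1): 'pedestrian', (2, 0): 'stairs'}
```

Note that the pedestrian and the new stairs look identical to the comparison: it has no way to tell a passing obstacle from a permanent change of layout, and the stale map still insists the exit runs straight through those stairs.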

#6 Shared Areas

Shared areas – places where pedestrians can go on the road at the same time as cars – are touted as a way forward for cities of the future.  The trouble is that driverless cars are very rule-based, and when it comes to shared areas, there are no set rules.  Each interaction between driver and pedestrian, or between driver and driver, is a new situation.  Nobody’s got official right of way, so we use our social knowledge to ensure that everyone gets where they want to go without anyone getting hurt.  A human driver can see that the pair of pedestrians chatting with coffee in hand, eyes on each other, aren’t about to try crossing the road.  A robot/computer/self-driving car just sees human shapes; it can’t tell what they’re doing or predict what they’re about to do.  Similarly, there are tons and tons of ways that drivers and pedestrians go through the whole “After you”, “No, after you” exchange.  How we conduct these wordless conversations can be anything from a grand Italian-style gesticulation to a simple jerk of the head or a raised eyebrow.  It involves hands, arms, heads, facial expressions and mouthed words on the part of both parties – or just the driver, if he/she spots a mum struggling with a pram, a cantankerous toddler and a bunch of shopping bags.  Our gestures and our decisions depend on how we’re feeling, our stress levels and the other party involved (the puzzled-looking tourist versus the businessperson striding forward in a rush while talking on the phone versus the bunch of teenage girls fooling around).  And in some places, a human driver can recognise a familiar face, stop, wind down the window and have a wee chat.  All these variables are simply too complex, too individual and too unpredictable to be programmed into a machine.

#7 Pesky Human Beings

As an old road safety campaign put it, humans are unpredictable (and so are some animals, like the idiot dogs who stand there all dopey in the middle of the road, staring at you as you brake and yell at them).  A computer system relies on the situations and the appropriate courses of action that have been programmed into it.  The trouble is that not everything people do goes according to the rules – and don’t we just know it!

Here are a few examples of pesky human behaviours and situations – all of which a human driver can recognise and deal with – that would throw a driverless car:

  • A cop on point duty directing traffic because of an accident on the road ahead or something similar: a person standing there waving their arms is not something a computer system is used to.
  • A ball bouncing out into the road: if a human sees this, he/she knows that some child might dash onto the road to retrieve it, but a computer sensor can’t tell a ball from a plastic bag flying loose and won’t react… it certainly won’t start keeping an extra lookout for kids.
  • Kids coming out of school: they’re supposed to be sensible on the roads and not do anything silly, but there’s always that occasional child who rushes across the road shouting “Mummy!” unexpectedly.  Most of us know to slow down and keep an extra lookout around schools at certain times of day.
  • Hitchhikers: we know what the backpack, the extended thumb and the cardboard sign reading “Gold Coast” mean, and we can also make split-second judgements about how dodgy the hitchhiker looks, how much space we’ve got in the car, where we’re going and how urgent our journey is, and use all this to decide whether or not to pick them up.
  • Situational ethics: it doesn’t happen very often, but what about when you’ve got a choice between two evils?  This comes down to morals, ethics and the value of life.  Sometimes, for a human, the choice is comparatively easy: given a choice between hitting Granny and hitting a stray dog, most of us would swerve to take the dog out.  Similarly, if you have to negotiate a flock of sheep with the farmer and his/her sheepdog, we know that if things get really bad, you avoid the dog and the farmer at all costs, but you can hit the sheep.  At the moment, sensors have trouble getting beyond “Obstacle A” versus “Obstacle B” (see the sketch after this list).  Even if they can tell people from animals, can they go further?  Can they distinguish one human from another?  And if so, how do they decide who not to hit?
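
As promised, here’s a toy sketch of that “Obstacle A” versus “Obstacle B” problem.  Every field and the size-based rule are invented for illustration – no manufacturer publishes logic like this, and the fact that nothing better is available is rather the point:

```python
# Toy sketch of how little a rule-based chooser has to work with when the
# perception stack only reports generic obstacles. Every field and the
# size-based tie-breaker are invented for illustration.

detections = [
    {"id": "A", "kind": "obstacle", "width_m": 0.5},  # Granny? A dog? Unknown.
    {"id": "B", "kind": "obstacle", "width_m": 1.6},  # A sheep? The farmer?
]

def choose_what_to_hit(obstacles: list) -> dict:
    """Swerve towards whichever obstacle is 'cheapest' to hit.

    With only generic labels available, size is about the only proxy on
    offer: aim for the smaller thing. Granny-versus-dog and farmer-versus-
    sheep decisions need classifications the sensors can't yet provide.
    """
    return min(obstacles, key=lambda o: o["width_m"])

print(choose_what_to_hit(detections))  # picks "A" purely because it's smaller
```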

