Self-driving cars are only in beta and humans are already attacking them

Image credit: marat marihal

ITEM: Humans are attacking self-driving cars in California. Well, two of them, anyway.

A number of autonomous-car tests are underway on the streets of San Francisco and elsewhere in Silicon Valley. So far in 2018 there have been six reported collisions involving self-driving cars – two of them with humans who apparently struck the cars intentionally, reports The Guardian:

On 10 January, a pedestrian in San Francisco’s Mission District ran across the street to confront a GM Cruise autonomous vehicle that was waiting for people to cross the road, according to an incident report filed by the car company. The pedestrian was “shouting”, the report states, and “struck the left side of the Cruise AV’s rear bumper and hatch with his entire body”.

No injuries occurred, but the car’s left tail light was damaged.

In a separate incident just a few blocks away on 28 January, a taxi driver in San Francisco got out of his car, approached a GM Cruise autonomous vehicle and “slapped the front passenger window, causing a scratch”.

Granted, two incidents hardly indicate a wave of pedestrian backlash against driverless cars. That said, the Guardian story does place them in the context of San Franciscans also attacking Knightscope security robots on the street:

While most residents simply complained about the robot’s presence, one person reportedly “put a tarp over it, knocked it over and put barbecue sauce on all the sensors”.

This isn’t new – BoingBoing relates an incident from a couple of years ago where someone sent a telepresence robot into a bar, upon which patrons began angrily throwing things at it.

Again, these are isolated, anecdotal incidents, at least one of which involved excessive alcohol consumption. They hardly represent a vanguard movement to overthrow our robot overlords before they have a chance to take over.

On the other hand, it’s worth noting how at least some humans react to the presence of machines operating themselves in public spaces (by which I mean places that aren’t home or a workplace setting, where Roombas or warehouse robots wouldn’t seem out of place).

That could simply be the product of techno-paranoia fueled by too many Hollywood movies about robots gone wrong, or media reports about AI and robots taking over human jobs. Perhaps it’s also a case of people feeling a bit unsettled by the sight of a machine out on the street doing things on its own.

But it’s interesting that the animosity towards automated machines extends to self-driving cars. Certainly many people would have trouble trusting a car with no human driver, whether as a pedestrian, a passenger or a driver sharing a city street or freeway with it. But how many would actually attack an autonomous car?

Not many, perhaps. But then self-driving cars aren’t very common yet. Once they become commercially available – even if the main customer segment is initially limited to transport companies creating ride-hailing fleets – we’ll probably see more incidents like this, whether from drunks, disgruntled employees, psychopaths, teenage punks, kids who don’t know better, or the sort of mean people who do terrible things to pets for amusement. Of which, unfortunately, there are many in the world.

On the less extreme side, we may also see people reject the notion of self-driving cars because it just doesn’t seem right somehow. Maybe they’ll be nervous enough to complain to city councillors or mayors who may then decide to Do Something About It.

Or not. But I would never discount the dark side of human nature as we see self-driving cars and other autonomous machines enter the mainstream. As MOV.AI founder and CEO Limor Schweitzer pointed out at MWC last week, AI-powered robots are safe as long as you engineer them properly – and as long as humans don’t abuse them in ways the engineers didn’t anticipate.

That’s not a new idea, either. Recent research papers have explored the consequences of humans bullying robots – particularly robots with AI capabilities – including the risk that humans get hurt when robots respond in unexpected ways. And while a self-driving car isn’t likely to intentionally run you over in retaliation, human vandalism could unintentionally cause it to do something it’s not supposed to do – like run a red light or mount a sidewalk full of people.

Presumably the various car makers and their vendor partners are taking such factors into account and engineering failsafes to prevent such things from happening. Still, as Douglas Adams once wrote:

A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools.
