Self-driving cars ‘more likely’ to drive into black people

07 March 2019

Technology used in self-driving cars has a racial bias that makes autonomous vehicles more likely to drive into black people, a new study claims.

Researchers at the Georgia Institute of Technology found that state-of-the-art detection systems, such as the sensors and cameras used in self-driving cars, are better at detecting people with lighter skin tones.

That makes them less likely to spot black pedestrians, and therefore less likely to stop before hitting them, the authors note.

The researchers said they undertook the study after observing higher error rates for certain demographics by such systems.

Tests on eight image-recognition systems confirmed the bias: on average, the systems were five per cent less accurate at detecting people with darker skin.

To test the hypothesis, the scientists divided a large pool of pedestrian images into lighter-skinned and darker-skinned groups using the Fitzpatrick scale, a standard classification of human skin tones.

Even when the researchers accounted for the time of day or for pedestrians being partially obstructed from the detection systems' view, the accuracy gap persisted.
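The comparison the researchers describe can be sketched in a few lines of Python. This is an illustrative reconstruction, not the study's actual code: the class fields, the threshold between "lighter" and "darker" groups, and the sample numbers are all assumptions.

    from dataclasses import dataclass

    @dataclass
    class PedestrianAnnotation:
        fitzpatrick_type: int  # 1-6 on the Fitzpatrick scale (assumed labelling)
        detected: bool         # whether the detector found this pedestrian

    def accuracy_by_skin_group(annotations):
        """Return detection rates for the lighter and darker groups."""
        lighter = [a for a in annotations if a.fitzpatrick_type <= 3]
        darker = [a for a in annotations if a.fitzpatrick_type >= 4]
        rate = lambda group: sum(a.detected for a in group) / len(group)
        return rate(lighter), rate(darker)

    # Made-up sample mirroring the roughly five-point gap reported above
    sample = (
        [PedestrianAnnotation(2, True)] * 90
        + [PedestrianAnnotation(2, False)] * 10
        + [PedestrianAnnotation(5, True)] * 85
        + [PedestrianAnnotation(5, False)] * 15
    )
    light_acc, dark_acc = accuracy_by_skin_group(sample)
    print(f"lighter: {light_acc:.0%}, darker: {dark_acc:.0%}")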

"We hope this study provides compelling evidence of the real problem that may arise if this source of capture bias is not considered before deploying these sort of recognition models," the study concluded.

AI researcher Kate Crawford, who was not involved in the study, highlighted the dangers of such systems if these issues are not addressed by the companies developing self-driving cars.

“Pedestrian deaths by self-driving cars are already here – but they're not evenly distributed,” she tweeted.

Other AI experts responded to her tweet by pointing out that the paper did not use the datasets autonomous vehicle developers rely on, so its findings may not reflect the accuracy of real-world systems.

“In an ideal world, academics would be testing the actual models and training sets used by autonomous car manufacturers,” she responded.

“But given those are never made available (a problem in itself), papers like these offer strong insights into very real risks.”

It is not the first time that machine learning and vision systems have been shown to have an in-built bias. 

In January, researchers at the Massachusetts Institute of Technology (MIT) found that Amazon's facial recognition software Rekognition had a harder time identifying a person's gender if they were female or darker-skinned.

The Independent