Motorola's New Body Cam AI Stops Just Short of Dystopian Surveillance

Unsurprisingly, the latest AI advancement in body camera technology comes no closer to increasing police accountability or officer transparency. As the public’s push for body cameras has died down, tech companies are now pitching the devices to the police departments that buy them on new terms: sharper, smarter surveillance.

On Monday, Motorola Solutions (the legal successor to the defunct Motorola, Inc.) announced a new partnership with machine learning startup Neurala to bring object recognition AI to its Si500 body cameras. Neurala’s tech introduces recognition capabilities the companies say could help officers find missing children or suspects at large. Civil libertarians were immediately concerned it would be used for face recognition, a long-dreaded event horizon for personal privacy in which officers could instantly ascertain the identity of anyone they come across, at any time. Motorola and Neurala have denied the technology will be used this way, though concerns remain.

As Neurala co-founder Max Versace explained, the software will be trained to recognize objects worn by a suspect or missing person, not to find their face. Object recognition and face recognition both pose serious privacy concerns, but they aren’t the same.

“You’re looking for a child in a blue shirt, red shoes and white pants and black hair, [the AI] can learn [that] and then push it to all the devices,” Versace told Gizmodo. “Or, if you have [a] specific picture of that person wearing those items, you can do that.”

As Versace imagines, a parent could give officers a photo of their child. The AI is then trained to recognize the objects in the photo and an update is pushed to all CCTV and body cameras to flag those objects in real time.
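
Neurala hasn’t published implementation details, so the sketch below is purely illustrative of that “train once, push to every camera” flow. The class and function names are hypothetical placeholders, not Neurala’s or Motorola’s actual API:

    from dataclasses import dataclass
    from typing import List, Optional, Set, Tuple


    @dataclass
    class TargetDescription:
        """Objects the system is asked to watch for, e.g. taken from a parent's photo."""
        labels: Tuple[str, ...]  # e.g. ("blue shirt", "red shoes", "white pants")


    @dataclass
    class CameraNode:
        """Stands in for one body camera or CCTV feed on the network."""
        camera_id: str
        targets: Optional[TargetDescription] = None

        def receive_update(self, targets: TargetDescription) -> None:
            # In a real deployment, retrained model weights would be pushed to the
            # device; here we simply record the target description.
            self.targets = targets

        def scan_frame(self, detected_objects: Set[str]) -> bool:
            # Flag the frame if every target object shows up among this frame's
            # detections (which an object-recognition model would produce upstream).
            if self.targets is None:
                return False
            return set(self.targets.labels) <= detected_objects


    def push_to_fleet(cameras: List[CameraNode], targets: TargetDescription) -> None:
        """Broadcast the newly learned targets to every connected device."""
        for cam in cameras:
            cam.receive_update(targets)


    # "A thousand eyes looking for a particular object."
    fleet = [CameraNode(f"cam-{i}") for i in range(1000)]
    push_to_fleet(fleet, TargetDescription(("blue shirt", "red shoes", "white pants")))
    print(fleet[0].scan_frame({"blue shirt", "red shoes", "white pants", "backpack"}))  # True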

“You can have a thousand eyes looking for a particular object,” Versace said.

This isn’t the same as face recognition, which is a three-step process: the AI scans your face, calculates the unique measurements (the width of your eyes, the length of your forehead, etc.) that make up your “faceprint,” and then compares that faceprint against a database of stored faceprints. “Recognition” occurs if there’s a match.
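
To make the contrast concrete, here is a minimal sketch of that three-step pipeline using the open-source face_recognition library. The file paths are hypothetical, and this is not Motorola’s or Neurala’s code, which neither company has published:

    import face_recognition  # open-source library built on dlib

    # Step 1: scan the probe face and compute its "faceprint" -- a 128-dimensional
    # encoding derived from facial measurements.
    probe_image = face_recognition.load_image_file("probe_frame.jpg")    # hypothetical path
    probe_encodings = face_recognition.face_encodings(probe_image)

    # Step 2: load the stored faceprints to compare against -- in a police
    # deployment these might be precomputed from license or passport photos.
    known_image = face_recognition.load_image_file("license_photo.jpg")  # hypothetical path
    known_encodings = face_recognition.face_encodings(known_image)

    # Step 3: "recognition" occurs if a comparison falls within the match threshold.
    if probe_encodings and known_encodings:
        matches = face_recognition.compare_faces(known_encodings, probe_encodings[0])
        print("Match" if any(matches) else "No match")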

Cops have access to millions of faceprints sourced from official documents like driver’s license photos and passports. Neurala says it won’t scan faces or match them to databases, and, furthermore, most children aren’t in any ID database. But that doesn’t mean there’s nothing troubling about the company’s proposal.

As Versace explained, Neurala would match objects in real time, requiring cops to have their cameras constantly on and scanning. That’s likely to give privacy experts pause, as they generally agree that cops should be required to (a) inform people when they’re being recorded, and (b) turn off the cameras when people ask. Most of the people recorded by always-on body cameras won’t be suspects of any crime, yet they will be filmed and analyzed—without their knowledge or consent.

Additionally, the objects cops are searching for aren’t necessarily neutral. Imagine cops training software to flag anyone wearing a hijab or burqa. Object recognition can automate targeted profiling, making it easier for cops to surveil certain communities. Versace is aware of the potential for AI to be misused, but said it’s not the technology we should be wary of.

“Human beings are more dangerous than AI,” Versace said. “We should worry about us first and then worry about AI.”
