EN: Opinion — Artificial Intelligence in video surveillance

[Image: Surveillance.jpeg]

This past 7 January, SIC Notícias (a Portuguese news network) broadcast a short (two-and-a-half-minute) piece on an initiative, promoted by the Portuguese Minister of Home Affairs, Eduardo Cabrita, to install an artificial intelligence (AI)-powered video surveillance system in the cities of Leiria and Portimão. In Leiria, the existing 19 cameras would be complemented with a further 42; in Portimão, the system would be installed from scratch with 61 cameras. The AI system would be able to identify details such as “hair colour and clothing” and would bring “important advances to security and the perception of security” among the population, according to Minister Cabrita.

[Image: HAL.jpg]

The piece further notes that the Portuguese National Commission for Data Protection (CNPD, in its Portuguese acronym) has issued an official statement disapproving of the initiative, arguing that it presents a “high risk to citizens’ privacy, given the amount and type of information the system allows to collect”, and that “at no point is it justified to employ this technology and functionality”.

Halfway through my dinner, I turned up the volume on the TV. The mayors of both cities stated that the use of AI had been promoted by the Public Security Police (Portugal’s national civil preventive police force), but it turns out the project offered no justification at all in this regard.

Let me be clear: I don’t have anything against video surveillance as a policing measure. Well implemented (and there is a fantastically well-argued statement by the CNPD on the subject), this tool can increase a region’s perception of security, even though the mere fact of having cameras aimed at us can, by itself, change the way we behave in public.

What bothered me was the unjustified use of a technology that sounds like something out of a futuristic movie, but consists solely of software that analyses the captured video, identifies individuals and behaviours, and produces reports that help human beings make decisions (such as increasing the number of police officers in certain areas).
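
To make that concrete, here is a minimal sketch of such a pipeline. Every name and data structure in it is hypothetical: a stand-in detector plays the role of the vision model, and the output is the kind of aggregate report a human operator might act on.

```python
from collections import Counter

def detect(frame: dict) -> list[dict]:
    # Stand-in for a vision model: a real system would infer attributes such
    # as "hair colour and clothing" from pixels; here each frame is already
    # annotated so the example stays self-contained.
    return frame["people"]

def build_report(frames: list[dict]) -> Counter:
    # Aggregate the flagged behaviours into the kind of summary a human
    # would use to decide, e.g., where to send more officers.
    behaviours = Counter()
    for frame in frames:
        for person in detect(frame):
            behaviours[person["behaviour"]] += 1
    return behaviours

frames = [
    {"people": [{"behaviour": "loitering"}, {"behaviour": "walking"}]},
    {"people": [{"behaviour": "loitering"}]},
]
print(build_report(frames))   # Counter({'loitering': 2, 'walking': 1})
```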

Artificial intelligence systems used by police forces tend to be set up in a pretty straightforward way: they flag questionable behaviours, ask a human being “do you think this is dangerous?”, and learn from the answers what to look out for. If they make a mistake (marking a dubious behaviour as dangerous, only for the deployed officers to find it harmless, or the other way around), no human being reports back “you got it wrong; next time, be a little more conservative”. These systems become automated versions of the unconscious biases of whoever trained them.
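
A small sketch of that asymmetry, under my own assumptions rather than any vendor’s actual design: the model absorbs the trainer’s judgements during training, but there is no channel through which field outcomes ever correct it.

```python
from dataclasses import dataclass, field

@dataclass
class BehaviourClassifier:
    # Score per behaviour label: above zero means "flag as dangerous".
    weights: dict = field(default_factory=dict)

    def train(self, behaviour: str, human_says_dangerous: bool) -> None:
        # Supervised step: absorb the trainer's (possibly biased) judgement.
        delta = 1 if human_says_dangerous else -1
        self.weights[behaviour] = self.weights.get(behaviour, 0) + delta

    def flags(self, behaviour: str) -> bool:
        return self.weights.get(behaviour, 0) > 0

model = BehaviourClassifier()

# Training phase: a human operator labels the behaviours shown to them.
model.train("loitering near an ATM", human_says_dangerous=True)
model.train("jogging at night", human_says_dangerous=False)

# Deployment phase: the model flags a behaviour and officers are sent out...
print(model.flags("loitering near an ATM"))   # True -> officers deployed

# ...but when the officers find the behaviour harmless, nothing flows back.
# There is no model.correct("loitering near an ATM", was_harmless=True) step,
# so the trainer's original judgement stays frozen in place.
```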

In the USA, the PredPol system finds geographical patterns in crime reports and suggests heavier policing of the affected areas. More officers in an area lead to more reports, of both big and small crimes, and more reports lead to more policing. The cycle is self-sustaining, with areas deemed increasingly worse without anybody really understanding why.
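
The loop is easy to reproduce in a toy model. The numbers and rules below are my own illustration, not PredPol’s actual algorithm: two districts with identical real crime, patrols sent to whichever district has more reports, and new reports generated only where officers patrol.

```python
true_crime = {"district A": 10, "district B": 10}   # identical underlying crime
reports    = {"district A": 11, "district B": 10}   # A starts one report ahead

for year in range(1, 6):
    # The system recommends patrolling the district with the most reports.
    hotspot = max(reports, key=reports.get)
    # Assumption: patrolling officers observe and report about half of the
    # local crime, while unpatrolled districts generate no new reports.
    reports[hotspot] += true_crime[hotspot] // 2
    print(f"year {year}: {reports}")

# District A's report count climbs every year while district B's stays frozen,
# so A looks "increasingly worse" despite identical real crime rates.
```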

[Image: Bunny cop.png]

In South Africa, similar systems employed by private security companies tend to further entrench racial segregation (by pushing crime out of privileged “white” neighbourhoods and into mostly “black” slums). The systems, trained by professionals with unconscious racial biases, automatically identify black mailmen, black construction workers and black power company technicians as dangerous, triggering alerts to guards, while problematic behaviour by white people doesn’t trigger warnings nearly as often; the systems are never told of their mistakes, so the pattern persists.

These systems can definitely improve the quality of policing and public safety if well implemented, and if they are designed to learn from their own mistakes. But faced with a total absence of arguments in favour of this technology, an absence that signals ignorance of the matter on the part of everyone involved, it seems that, for now, we are better off with bored human beings blankly staring at CCTV screens.