The government has created its own AI tech to spot extremist content online


The UK government has created its own artificial intelligence tool to help identify extremist content online in its latest bid to force tech companies to tackle the issue.

The technology is expected to help smaller companies identify and remove content that promotes terrorism, an area where the government has criticised their record. Larger tech companies such as Facebook have already begun using their own AI tools to remove terror content.

Read more: Unilever threatens to pull advertising spend with tech giants

But in an interview with the BBC, home secretary Amber Rudd said she would not rule out legally compelling tech companies to use the technology.

The tool, developed with UK company ASI Data Science at a cost of £600,000, is claimed to identify 94 per cent of the online activity of Islamic State (IS), with 99 per cent accuracy. Humans then have to assess the content and make a decision on removing it.

"We know that automatic technology like this, can heavily disrupt the terrorists’ actions, as well as prevent people from ever being exposed to these horrific images. This government has been taking the lead worldwide in making sure that vile terrorist content is stamped out," said Rudd on a visit to Silicon Valley to meet with tech companies.

Read more: Government to review how Facebook pays for news content

According to analysis by the Home Office, IS used more than 400 platforms last year, 145 of them new since July.

Unilever, one of the world's biggest advertisers, yesterday piled pressure on tech companies, threatening to pull its advertising from their online platforms if they fail to tackle issues such as fake news and a toxic culture.
