
How AI is helping to tackle online hatred

Updated 2023.11.15 20:35 GMT+8
Ken Browne in Lisbon

Racial abuse is the most prevalent form of online hate crime, and sporting figures have been particular targets. But artificial intelligence (AI) is helping sport's governing bodies uncover those responsible and bring abusers to justice.

Manchester United's Marcus Rashford, Arsenal's Bukayo Saka and Real Madrid's Vinicius Jr are among the high-profile footballers who know what it's like to be targeted with hate simply because of the color of their skin.

Social media has provided a new platform for that rage, along with the apparent anonymity to get away with it. After England's loss on penalties in the Euro 2020 final, Rashford and Saka suffered such shocking racial attacks that the Professional Footballers' Association (PFA) decided to act.

A huge study analyzed some 6 million online posts over 12 months, and researchers found that offenders are not as anonymous as they think.

Thanks to AI, online abusers aren't as anonymous as they think. /Igor Stevanovic/500px/CFP

‌"The conclusion that we came to was that on the one hand, unlike what some of the companies have said, it is technically possible to identify who is behind certain comments," says Maheta Molango, CEO of the UK's PFA.

"You can report the behavior to the police, which is what the PFA is doing. And second, you can also report that behavior to the clubs, meaning that the club could potentially adopt strong measures against the perpetrator. Preventing them or banning them from a stadium sometimes is as powerful as a deterrent."


The key to this kind of study is scale: AI can identify and categorize hundreds of millions of posts, automatically flagging dangerous content and alerting teams to threats in the real world. This "augmented AI" approach pairs automated screening with human experts, who escalate the highest-risk cases to the authorities.
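To make that triage pattern concrete, here is a minimal, purely illustrative Python sketch. It is not the PFA's or Signify's actual system: the keyword-based abuse_score function is a crude stand-in for a real machine-learning classifier, and the thresholds, account names and sample posts are all hypothetical. It simply shows the general shape of the workflow, with automatic flagging first and repeat senders then queued for human review.

```python
# Illustrative sketch only: a toy "flag, then escalate" pipeline.
# abuse_score() stands in for a real ML toxicity model; all data,
# thresholds and account names below are hypothetical.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Post:
    account: str  # pseudonymous author handle
    text: str     # message content


ABUSIVE_TERMS = {"hate", "threat"}  # placeholder vocabulary, not a real lexicon


def abuse_score(text: str) -> float:
    """Stand-in for an ML classifier: fraction of words that look abusive."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in ABUSIVE_TERMS for w in words) / len(words)


def triage(posts: list[Post], flag_threshold: float = 0.2, repeat_limit: int = 3):
    """Automatically flag risky posts, then queue repeat senders for human review."""
    flagged = [p for p in posts if abuse_score(p.text) >= flag_threshold]
    per_account = Counter(p.account for p in flagged)
    # Accounts sending many flagged messages are escalated to human experts,
    # who decide whether to report them to clubs or law enforcement.
    review_queue = [acct for acct, n in per_account.items() if n >= repeat_limit]
    return flagged, review_queue


if __name__ == "__main__":
    sample = [
        Post("fan_01", "Great goal last night!"),
        Post("troll_99", "pure hate hate hate"),
        Post("troll_99", "a threat for you"),
        Post("troll_99", "more hate for you"),
    ]
    flagged, queue = triage(sample)
    print(f"{len(flagged)} posts flagged; accounts for human review: {queue}")
```

In a real deployment the scoring step would be a trained model run over millions of posts, but the division of labor is the same: software does the bulk screening, and humans make the final call on escalation.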

‌"There was one case where an individual, a U.S. athlete at the Tokyo Olympics, was on our top list for most abuse being sent," says Jonathan Hirshler, co-founder and CEO of data science company Signify. "We noticed that hundreds of messages were coming from the same account – and some quite nasty. 

"We recognized the person's account was in the vicinity of where the athlete had been training. The threat level was raised to much higher. Law enforcement then gathered further evidence and there was a prosecution."

While AI alone cannot solve the problem of online abuse, developments in this field can certainly help unmask offenders who try to hide behind online anonymity.
