By Andrew Warshaw
October 22 – A new report into the extent to which professional footballers are attacked online has thrown up damning and disturbing findings, with 43% of English Premier League players surveyed having experienced “targeted racist abuse”.
The study was carried out by England’s Professional Footballers’ Association (PFA) in partnership with data science company Signify Group, using machine learning systems to analyse messages sent publicly via Twitter to 44 high-profile current and former players.
During the final six weeks of last season’s Premier League following the launch of ‘Project Restart’, Signify analysed 825,515 tweets directed at the selected players, identifying over 3,000 explicitly abusive messages, 56% of which were racist.
Abusive emoji posts constituted 29% of the racist messages players received, while a disturbing 50% of the total online abuse recorded was directed at just three players.
“I don’t know how many times I need to say this, but football and the social media platforms need to step up, show real leadership and take proper action in tackling online abuse,” said England and Manchester City forward, Raheem Sterling, who has been abused on several occasions. “The technology is there to make a difference, but I’m increasingly questioning if there is the will.”
“This report confirms what we have known for a while – that social media can be a battleground of hate with few consequences for abusers,” said Sanjay Bhandari, chair of leading anti-discrimination charity Kick It Out.
“We need government, law enforcement, the leagues and clubs to commit to working together to fill in those cracks in the enforcement system. We need better government regulation and improved sharing of data and intelligence to scratch below the surface, in order to understand and address root causes. Crucially, we need fans and social media organisations to be part of the solution. This is a behavioural and technological problem. We need behavioural and technological solutions.”
Simone Pound, head of equalities at the PFA, agreed that concerted action was needed. “Social media companies must do more to address abuse on their channels and not consider it an expected experience,” she said.
As a result of Signify’s forensic capability to source where abuse is coming from, thousands of culprits can now be named and shamed for discriminatory insults, threats and images that have caused pain and hurt to so many players and their families, often for years on end.
Signify uses a mixture of Artificial Intelligence and human expertise to process huge amounts of publicly available data and uncover the real identities of online abusers, crucially without breaking any privacy laws.
Commenting on the latest findings, Signify’s CEO, Jonathan Hirshler, said: “This has been an important initiative, developed with the PFA, to support and protect players, staff and their families online through tangible action.
“It is a continuation of our work to help move the default response to online abuse from reactive to proactive. Driven by our proprietary threat monitoring service – Threat Matrix – we have been able to identify, analyse, source and de-anonymise targeted online abuse.”
Contact the writer of this story at firstname.lastname@example.org