Thursday, 24 January 2019
The Automated Speech Police

Teaching computers to weed out online hate speech is a terrible idea.

by Paula Boddington

There are plenty of reasons to worry about the concept of ‘hate speech’. There are also specific concerns about the notion of Islamophobia, especially in light of controversial recent moves by the All Party Parliamentary Group (APPG) on British Muslims to produce a definition. Both concepts are subjective and hard to pin down. But it gets worse. For around the globe, a cottage industry is springing up, attempting to devise ways to automate the detection of online ‘hate speech’ in general, and of ‘Islamophobia’ in particular.

The aura of scientific objectivity that goes along with the computerised detection of ‘hate’ online is very dangerous. You can’t make a loose and fuzzy idea rigorous by getting complicated algorithms and sophisticated statistical analysis to do your dirty work for you. But you can make it look that way. And worryingly, many of those working to automate ‘hate speech’ detection have direct influence on governments and tech firms.

Those working on such tools often see ‘hate speech’ as a problem worsened by technology. Hence they assume that the solution is more technology. For example, the Anti-Defamation League (ADL) is ‘teaching machines to recognise hate’ by working to produce an Online Hate Index. The ADL argues that a combination of a team of human assessors and a ‘constantly evolving process of machine learning’ can help us learn more about ‘hate’ online and ‘push for the changes necessary to ensure that online communities are safe and inclusive spaces’.

But in truth projects like this will only fuel attempts by social-media platforms and governments to chill debate. The APPG on British Muslims at least recognised that any definition of Islamophobia must not rule out genuine criticism of religion. But it is not at all clear that those working on the automated detection of Islamophobia and other ‘hate speech’ are taking steps to protect legitimate criticism and opinion.

In a recent article, two researchers at the Oxford Internet Institute, Bertie Vidgen and Taha Yasseri, discuss a tool they have built to ‘detect the strength of Islamophobic hate speech on Twitter’. Their work merits more scrutiny, not least because anything produced within prestigious universities, like Oxford, may have disproportionate influence on policy and practice. While, again, they nod in the piece to the difficulty of defining and detecting Islamophobia, they steam on regardless.

The researchers took samples from the Twitter accounts of four mainstream British political parties: UKIP, the Conservatives, the Liberal Democrats and Labour. They then incorporated 45 additional ‘far right’ groups, drawn from anti-fascist group Hope Not Hate’s ‘State of Hate’ reports. For academics trying to be rigorous, this is unfortunate, since it is not always clear how consistently Hope Not Hate applies the label ‘far right’. What’s more, Hope Not Hate has a regrettable habit of calling people ‘wallies’, which hardly makes its work appear rigorous or impartial.

Islamophobia is defined, in this study, as ‘any content which is produced or shared which expresses indiscriminate negativity against Islam or Muslims’. Attempting to introduce a degree of nuance, a distinction is made between ‘strong Islamophobia’ and ‘weak Islamophobia’.

The methodology Vidgen and Yasseri used is similar to that of the ADL – they had humans assess tweets, then used machine learning to train computers to continue the work. The first weak spot is, of course, the human assessors. The authors report that three unnamed ‘experts’ graded tweets from ‘strong Islamophobia’ to ‘weak Islamophobia’ to ‘no Islamophobia’. I’d be willing to bet a fiver that not one of these ‘experts’ is critical of the concept of hate speech. Broad agreement on grading between these ‘experts’ is hailed as proof of their rigour – but it may simply be proof that they share certain biases. The subsequent application of machine learning would only magnify such bias.
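To see why machine learning magnifies rather than corrects annotator bias, consider a toy sketch (this is not the authors’ actual pipeline, and the training data and labels below are invented for illustration): a tiny bag-of-words Naive Bayes classifier trained on human-labelled tweets. If the annotators systematically label any mention of Islam as Islamophobic, the model dutifully learns that rule and applies it at scale.

```python
# Toy illustration of bias inheritance in supervised text
# classification. NOT the authors' pipeline; examples are invented.
from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label). Returns per-label word counts."""
    counts = defaultdict(Counter)
    totals = Counter()
    for text, label in examples:
        for w in tokenize(text):
            counts[label][w] += 1
            totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes with add-one smoothing and a uniform prior."""
    vocab = {w for c in counts.values() for w in c}
    scores = {}
    for label in counts:
        logp = 0.0
        for w in tokenize(text):
            logp += math.log((counts[label][w] + 1) /
                             (totals[label] + len(vocab)))
        scores[label] = logp
    return max(scores, key=scores.get)

# Simulated biased annotation: every tweet mentioning Islam was
# tagged 'islamophobic', regardless of its actual content.
training = [
    ("islam is evil and muslims should leave", "islamophobic"),
    ("a scholarly critique of islam and its history", "islamophobic"),
    ("lovely weather in london today", "neutral"),
    ("the lib dems announced a new housing policy", "neutral"),
]
counts, totals = train(training)

# A factual, legitimate tweet is now flagged: the model has learned
# the annotators' bias, not 'hate'.
print(classify("new book on the early history of islam", counts, totals))
```

The classifier flags the neutral tweet as ‘islamophobic’ simply because the word appears in it. No amount of statistical sophistication downstream can recover a distinction the labellers never drew.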

Worse still, there are no examples given here of tweets and their classification. Instead we just have an illustration of ‘weak’ Islamophobia, as ‘sharing a news story about a terrorist attack and explicitly foregrounding the fact that the perpetrator is a Muslim’. This is flawed. After all, in the wake of a terrorist attack, it is a reflex of some on social media to deny that it has any connection to Islam, even when the evidence suggests otherwise. In response, other social-media users often point out that the attack definitely does have something to do with Islam. And besides, simply highlighting the apparent ideology of a terrorist is hardly hateful in itself.

Also absent from Vidgen and Yasseri’s analysis are the accounts of any prominent atheists, secularists, Muslim reformers or ex-Muslims. Accounts devoted to scholarly critique of Islam might reasonably be presumed to have some basis in fact and reason, and would surely be useful in training data for machine learning. There are plenty of generalised truths about any religion which can be expressed in negative terms. In the case of Islam, these could register as ‘strong Islamophobia’. But no attempt appears to have been made to exempt such legitimate criticisms.

Vidgen and Yasseri, like so many others, fail to distinguish between people and ideas, between Muslims and Islam. This is a strange, but widespread, phenomenon. From this perspective, to attack someone’s beliefs is to attack their very essence. People are ideas, ideas are people, and critiquing one is a body blow to the other. This subjectivist, relativist position feeds into the concept of hate speech, and the claim that offensive speech is harmful. But in the context of Islam this produces a particularly curious spectacle: a subjectivist worldview being used to defend a religion that is supposedly based on the teachings of an eternal, unchanging deity.

In the end, the issue of hate speech is far more complex than many researchers might like to make out. Policing hate speech is often about deciding whose opinions need protection and whose don’t. At the very least, let’s not hand over that process to machines.

First published in Spiked.

Paula Boddington is a senior research fellow at Cardiff University. From 2015 to 2018, she was a senior researcher in the department of computer science at the University of Oxford, working on the ethics of artificial intelligence. Her latest book is Towards a Code for Artificial Intelligence.

Comments
24 Jan 2019
Howard Nelson
Let's wake up, folks, and realize that there is no such thing as Islamophobia, the irrational, unjustified fear of Islam and its expressions by its adherents and devotees.

What is irrational about fearing an ideology that promotes the subjugation, mutilation, and murder of its non-members, and fearing the devotees who perform those heinous acts? Hundreds of such acts in the name of Islam are executed annually. Commands for such barbaric behavior are engraved in Islam's immutable Koran, Hadiths, and everyday Sharia law.

What we do have, justifiably, is Islammetusia: the rational fear of Islam and its vicious expressions against non-Muslims.

The APPG definition of Islamophobia fails to adequately define phobia, racism, or expressions of Muslimness, and reduces to an exercise in taqiyya, kitman, obfuscation, and banal BS.
