Facebook admits flaw in image moderation after BBC report

A Facebook executive has admitted to MPs its moderating process “was not working” following a BBC investigation.

BBC News reported 100 posts featuring sexualised images of, and comments about, children, but 82 were deemed not to “breach community standards”.

Facebook UK director Simon Milner told MPs the problem was now fixed.

He was speaking to the Commons Home Affairs Select Committee alongside bosses from Twitter and Google as part of an inquiry into online hate crime.

The BBC investigation reported dozens of posts through the website’s own reporting tool, including images from groups where users were discussing swapping what appeared to be child abuse material.

When journalists went back to Facebook with the images that had not been taken down, the company reported them to the police and cancelled an interview, saying in a statement: “It is against the law for anyone to distribute images of child exploitation.”

On Tuesday, Mr Milner, who is the firm’s head of policy, told the Commons Home Affairs Select Committee the reports had exposed a flaw in its content moderation process.

“We welcome when a journalist or a safety organisation contacts us and says we think there is something going wrong on your platform,” he said.

“We welcome that because we know that we do not always get it right.”

The executive said there had been an issue in connecting images with the comments written alongside them, but that the company had “now fixed that problem”.

The content flagged up by the BBC had since been reviewed and removed from Facebook, he said.

‘Money out of hate’

Labour MP Chuka Umunna focused his questioning on Google-owned YouTube, which he accused of making money from “videos peddling hate” on its platform.

A recent investigation by the Times found adverts were appearing alongside content from supporters of extremist groups, earning those posters around £6 per 1,000 viewers as well as making money for the company.

Mr Umunna said: “Your operating profit in 2016 was $30.4bn.

“Now, there are not many business activities that somebody openly would have to come and admit… that they are making money and people who use their platform are making money out of hate.

“You, as an outfit, are not working nearly hard enough to deal with this.”

Peter Barron, vice-president of communications and public affairs at Google Europe, told the committee that only “very small amounts” of money had been made from the videos in question, but added that the firm was “working very hard in this area” to stop it happening again.

Yvette Cooper, who is chairwoman of the committee, turned her attention to Twitter.

The former shadow home secretary said she had personally reported a user who had tweeted a “series of racist, vile and violent attacks” against political figures such as German Chancellor Angela Merkel and London Mayor Sadiq Khan, but the user had not been removed.

Nick Pickles, head of public policy and government for Twitter in the UK, said the company acknowledged it was “not doing a good enough job” at responding to reports from users.

“We don’t communicate with the users enough when they report something, we don’t keep people updated enough and we don’t communicate back enough when we do take action,” he said.

“I am sorry to hear those reports had not been looked at. We would have expected them to have been looked at certainly by the end of today, particularly for violent threats.”

When the BBC checked the account after the committee session, it had been suspended.

‘Terrible reputation’

Ms Cooper said she found none of the responses from the executives to her questions “particularly convincing”.

She added: “We understand the challenges that you face and technology changes very fast, but you all have millions of users in the United Kingdom and you make billions of pounds from these users, [yet] you all have a terrible reputation among users for dealing swiftly with content even against your own community standards.

“Surely when you manage to have such a good reputation with advertisers for targeting content and for doing all kinds of sophisticated things with your platforms, you should be able to do a better job in order to be able to keep your users safe online and deal with this type of hate speech.”
