YouTube removed five times more videos and channels for hate speech in the second quarter of 2019

YouTube removed five times more individual videos and channels for hate speech in the second quarter of 2019 than in the previous quarter, an increase the company attributes to the update of its hate speech policy in June.

In early June, the company introduced an update to its policies focused on hate speech, with the aim of clarifying what it considers hate speech and reinforcing the fight against supremacist content and content that denies the Holocaust.

This policy was complemented by measures aimed at reducing the spread of content that sits at the "limit of what is allowed", such as videos claiming that the Earth is flat, YouTube explained at the time.

This update has had a "profound impact" that is reflected in the figures of its latest Community Guidelines Enforcement Report, covering the second quarter of 2019.

In this period, the company removed more than 100,000 individual videos, five times more than in the previous quarter. The number of channels removed also increased fivefold, exceeding 17,000. As for comments, the company removed "nearly double" the previous quarter's figure, more than 500 million.

This increase in removals for hate speech is due "in part, to the removal of older comments, videos and channels that were previously permitted", the company indicated, but which ceased to be permitted under the stricter policies introduced in the June update.

The update was also aimed at reducing the visibility of prohibited videos. To this end, and with the help of a team of more than 10,000 people, the company removed about 30,000 videos "in the last month", which together generated only 3 percent of the views that knitting videos received in the same period.

The company also noted on its official blog that 87 percent of the nine million videos it removed from the platform for violating its policies were first detected by its artificial intelligence system.

YouTube began using artificial intelligence in 2017 to detect content that potentially violates the platform's policies; flagged content is subsequently reviewed by a human team.

This system is "particularly effective at detecting content that looks alike, such as 'spam' or adult content," as well as hate speech and violent content, but the human factor makes it possible to "make nuanced decisions" that depend on context.

Part of these efforts focuses on identifying and removing content that violates policies before users ever see it. In this regard, the company indicated that more than 80 percent of the automatically detected videos were removed before receiving a single view.

© 2019 Europa Press.

