
YouTube removes five times more videos and channels for hate speech in the second quarter of 2019

YouTube removed five times more individual videos and channels for hate speech in the second quarter of 2019 than in the previous quarter, an increase the company attributes to the update of its hate speech policy in June.

In early June, the company introduced an update to its policies focused on hate speech, aimed at clarifying what it considers hate speech and at strengthening the fight against supremacist content and Holocaust denial.

The policy was complemented by measures aimed at reducing the spread of "borderline" content, such as videos claiming that the Earth is flat, YouTube explained at the time.

The update has had a "profound impact", reflected in the figures of its latest Community Guidelines Enforcement Report, covering the second quarter of 2019.

In this period, YouTube removed more than 100,000 individual videos, five times more than in the previous quarter. The number of channels removed also grew fivefold, exceeding 17,000. As for comments, the company removed "nearly double" the previous quarter's figure, more than 500 million.

This increase in removals for hate speech is due "in part, to the removal of older comments, videos and channels that were previously permitted", the company indicated, but which no longer were once the stricter policies of the June update took effect.

The update was also aimed at reducing the visibility of disallowed videos. To that end, working with a team of more than 10,000 people, the company removed about 30,000 videos "in the last month", which together generated 3 percent of the views that knitting videos received over the same period.

The company also noted on its official blog that 87 percent of the nine million videos it removed from the platform for violating its policies were first detected by its artificial intelligence systems.

YouTube began using artificial intelligence in 2017 to flag content that potentially violates the platform's policies; flagged content is subsequently reviewed by a human team.

The system is "particularly effective at detecting content that looks the same, such as spam or adult content", as well as hate speech or violent content, but the human factor allows for "nuanced decisions" that depend on context.
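The workflow described above amounts to a two-stage pipeline: automated detection surfaces likely violations, and human reviewers make the final, context-dependent call. The minimal sketch below illustrates that general pattern only; the names, scores, and threshold are illustrative assumptions and do not represent YouTube's actual system.

```python
# Hypothetical two-stage moderation pipeline: an automated classifier flags
# candidate videos, then a human reviewer makes the final decision.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Video:
    video_id: str
    views: int
    classifier_score: float  # assumed probability of a policy violation

FLAG_THRESHOLD = 0.8  # illustrative cutoff for sending a video to human review

def automated_flagging(videos: List[Video]) -> List[Video]:
    """Stage 1: machine detection selects likely violations for review."""
    return [v for v in videos if v.classifier_score >= FLAG_THRESHOLD]

def moderate(videos: List[Video], reviewer: Callable[[Video], bool]) -> List[str]:
    """Stage 2: a human reviewer confirms or rejects each flagged video.

    Returns the IDs of removed videos; ideally most are caught before any views.
    """
    removed = []
    for video in automated_flagging(videos):
        if reviewer(video):  # nuanced, context-aware human decision
            removed.append(video.video_id)
    return removed
```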

Part of these efforts focuses on identifying and removing content that violates policies before users see it. In this regard, the company indicated that more than 80 percent of automatically detected videos were removed before receiving a single view.

© 2019 Europa Press.
