An article by Mark Ledwich (software engineer) and Anna Zaitsev (University of California, Berkeley, USA), published in First Monday, Volume 25, Number 3 (March 2020)
Abstract
The role that YouTube and its behind-the-scenes recommendation algorithm play in encouraging online radicalisation has been suggested by journalists and academics alike.
This study directly quantifies these claims by examining the role that YouTube’s algorithm plays in suggesting radicalised content. After categorising nearly 800 political channels, we were able to differentiate between political schemas in order to analyse the algorithm’s traffic flows out of and between each group.
After conducting a detailed analysis of recommendations received by each channel type, we refute the popular radicalisation claims. On the contrary, these data suggest that YouTube’s recommendation algorithm actively discourages viewers from visiting radicalising or extremist content.
Instead, the algorithm is shown to favour mainstream media and cable news content over independent YouTube channels, with a slant towards left-leaning or politically neutral channels.
Our study thus suggests that YouTube’s recommendation algorithm fails to promote inflammatory or radicalised content, as previously claimed by several outlets.
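The core of the method described in the abstract is grouping channels into categories and then measuring how recommendation traffic flows out of and between those groups. The minimal sketch below illustrates that kind of aggregation; it is not the authors' actual pipeline, and the channel names, category labels, and recommendation counts are hypothetical stand-ins.

```python
from collections import Counter, defaultdict

# Hypothetical channel -> category labels (stand-ins, not the study's data).
channel_category = {
    "ChannelA": "Partisan Left",
    "ChannelB": "Mainstream News",
    "ChannelC": "Partisan Right",
    "ChannelD": "Mainstream News",
}

# Hypothetical recommendation impressions: (source channel, recommended channel, count).
recommendations = [
    ("ChannelA", "ChannelB", 120),
    ("ChannelC", "ChannelB", 300),
    ("ChannelC", "ChannelD", 80),
    ("ChannelB", "ChannelA", 40),
]


def category_flows(recs, labels):
    """Aggregate recommendation counts into a category-to-category flow table."""
    flows = Counter()
    for src, dst, count in recs:
        if src in labels and dst in labels:
            flows[(labels[src], labels[dst])] += count
    return flows


def outflow_shares(flows):
    """Express each flow as a share of the source category's total outgoing recommendations."""
    totals = defaultdict(int)
    for (src_cat, _), count in flows.items():
        totals[src_cat] += count
    return {pair: count / totals[pair[0]] for pair, count in flows.items()}


if __name__ == "__main__":
    flows = category_flows(recommendations, channel_category)
    for (src_cat, dst_cat), share in sorted(outflow_shares(flows).items()):
        print(f"{src_cat} -> {dst_cat}: {share:.0%}")
```

Normalising by each source category's total outgoing recommendations is what lets one ask questions like the paper's: whether recommendations issued from a given group of channels disproportionately point towards mainstream or towards fringe content.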
Full text (HTML) with lots of graphs and charts to explain the words to people like me who learn best visually.
Labels:
YouTube, recommendation_algorithm, radicalisation