Abstract: YouTube’s “up next” feature algorithmically selects, suggests, and displays videos to watch after the one that is currently playing. This feature has been criticized for limiting users’ exposure to diverse media content and information sources; meanwhile, YouTube has reported that it has implemented various technical and policy changes to address these concerns. However, there is little publicly available data to support either the existing concerns or YouTube’s claims of having addressed them. Drawing on the idea of “platform observability,” this article combines computational and qualitative methods to investigate the types of content that the algorithms underpinning YouTube’s “up next” feature amplify over time, using three keyword search terms associated with sociocultural issues where concerns have been raised about YouTube’s role: “coronavirus,” “feminism,” and “beauty.” Over six weeks, we collected the videos (and their metadata, including channel IDs) that were highly ranked in the search results for each keyword, as well as the highly ranked recommendations associated with those videos. We repeated this exercise for three steps in the recommendation chain and then examined patterns in the recommended videos (and the channels that uploaded them) for each query, as well as their variation over time. We found evidence of YouTube’s stated efforts to boost “authoritative” media outlets, but found that misleading and controversial content nonetheless continues to be recommended. We also found that while algorithmic recommendations offer diversity in videos over time, there are clear “winners” at the channel level that are given a visibility boost in YouTube’s “up next” feature. However, these impacts are attenuated differently depending on the nature of the issue.
Keywords: algorithms; automation; content moderation; digital methods; platform governance; YouTube
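The multi-step recommendation-chain crawl described in the abstract can be sketched as follows. This is a minimal toy illustration, not the study's actual pipeline: the recommendation graph, the video IDs, and the `get_recommendations` helper are hypothetical stand-ins for whatever scraping or API access the authors used, which the abstract does not specify.

```python
from collections import Counter

# Hypothetical, fixed toy recommendation graph standing in for live
# YouTube "up next" data: each video ID maps to its top recommendations.
TOY_RECOMMENDATIONS = {
    "seed1": ["a", "b"],
    "seed2": ["b", "c"],
    "a": ["d"],
    "b": ["d", "e"],
    "c": ["e"],
    "d": [],
    "e": [],
}


def get_recommendations(video_id, top_n=2):
    """Stand-in for fetching the highly ranked recommendations
    shown alongside a given video."""
    return TOY_RECOMMENDATIONS.get(video_id, [])[:top_n]


def crawl_chain(seed_videos, steps=3):
    """Follow recommendations outward from the seed videos for a fixed
    number of steps, counting how often each video is recommended.
    High counts indicate the 'winners' the abstract refers to."""
    counts = Counter()
    frontier = list(seed_videos)
    for _ in range(steps):
        next_frontier = []
        for vid in frontier:
            recs = get_recommendations(vid)
            counts.update(recs)       # tally every recommendation seen
            next_frontier.extend(recs)
        frontier = next_frontier      # descend one step in the chain
    return counts
```

In the study itself the seeds would be the top-ranked search results for each keyword (“coronavirus,” “feminism,” “beauty”), the crawl would run to three steps, and the per-video counts would then be aggregated by channel ID to surface channel-level visibility boosts.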