Negotiated Autonomy: The Role of Social Media Algorithms in Editorial Decision Making

Social media platforms have increasingly become an important way for news organizations to distribute content to their audiences. As news organizations relinquish control over distribution, they may feel the need to optimize their content to align with platform logics to ensure economic sustainability. However, the opaque and often proprietary nature of platform algorithms makes it hard for news organizations to truly know what kinds of content are preferred and will perform well. Invoking the concept of algorithmic ‘folk theories,’ this article presents a study of in-depth, semi-structured interviews with 18 U.S.-based newsroom professionals.


Introduction
On January 11, 2018, Facebook co-founder and CEO Mark Zuckerberg announced that the company would change its newsfeed algorithm to feature more posts from close friends and family and fewer posts from public brands and media. In the press release accompanying Zuckerberg's announcement, Adam Mosseri, Facebook's Head of News at the time, gestured to the implications of these changes for news and media brands, writing: "As we make these updates, Pages may see their reach, video watch time and referral traffic decrease. The impact will vary from Page to Page, driven by factors including the type of content they produce and how people interact with it" (Mosseri, 2018). The announcement sent shockwaves through newsrooms that relied on the social media platform as a way to distribute content to audiences, highlighting the complexity and precarity of using third-party platforms as a main avenue for news distribution (Zantal-Wiener, 2019).
In order to survive in a competitive market, news organizations may feel the need to optimize their content to fit with the logic of social media platforms' distribution algorithms in ways that potentially conflict with normative principles of journalism. This study seeks to elaborate how the editorial practices of gatekeeping and news selection are influenced by journalists' understandings of social media distribution algorithms. While journalism scholars have previously considered this question on a theoretical level (e.g., Caplan & boyd, 2018; Poell & van Dijck, 2014), this research takes an empirical approach, conducting 18 in-depth, semi-structured interviews with current newsroom professionals to explicate: 1) how they make sense of the proprietary algorithms that power social media platforms and 2) in what ways they perceive these understandings to influence their decision-making process when selecting news items for coverage. Our findings suggest that journalists understand social media distribution algorithms as filters that decide whether or not their audiences see their content based on a variety of factors, including but not limited to engagement or engageability of content, publisher size, payment, and political ideology. Further, our findings indicate that while journalists' understandings of how these platform algorithms function have become a new consideration in gatekeeping practices, the extent to which these algorithmic understandings influence their gatekeeping practices is often negotiated against traditional journalistic conceptions of newsworthiness and journalistic autonomy.

Literature Review
Gatekeeping theory was first developed by Kurt Lewin (1947) as a way to explain the forces that impact food consumption. The concept was introduced into communications studies by David Manning White (1950) in his study of the various forces that influenced small-town newspaper editor Mr. Gates' decisions on whether or not to turn an event on the wire service into a news item. White concluded that gatekeeping in journalism was highly subjective, based on the editor's personal preferences and valuation of events (White, 1950). Subsequent studies challenged White's conclusions, arguing that an individual editor's subjectivity was often influenced by the larger structural and organizational constraints of the newsrooms and corporations for which they worked (Gieber, 1956; McNelly, 1959; Shoemaker & Vos, 2009). Shoemaker and Reese (2014) developed a five-level hierarchical model for thinking about how media content is shaped, in which they argued that media content is influenced by individual workers, routines, organizational structures, social institutions surrounding media organizations, and ideological hegemony.
The basis for determining what does and does not become news is predicated on whether an occurrence meets a certain standard of newsworthiness (Shoemaker & Vos, 2009). Several studies have evaluated news output as a means of determining what kinds of content are considered newsworthy, finding that various news values are considered, such as: timeliness, geographic location and proximity to audience, sensationalism, extreme valence, novelty, celebrity, and controversy (Galtung & Ruge, 1965; Harcup & O'Neill, 2001, 2017). Recent scholarship has reconsidered these traditional understandings of gatekeeping and newsworthiness in the age of digital media (Heinderyckx & Vos, 2016). Traditional practices of gatekeeping were constructed in an age where news reporters and editors had little to no direct contact with their audiences, and decisions were based on normative assumptions about the role of journalism in society (Tandoc & Vos, 2016). However, the increased use of the web for news distribution has given readers new opportunities to exert influence in the gatekeeping process. Not only can news readers directly amplify certain stories online after publication (Singer, 2014), but analytic tools allow their news consumption habits to be tracked and fed back into professional gatekeeping decisions and determinations of newsworthiness (Anderson, 2011; Tandoc, 2014; Vu, 2014).
Social media platforms have also come to play an increasingly important role in shaping gatekeeping practices (Bell & Owen, 2017; Shearer & Grieco, 2019) and determining what news stories actually reach audiences post-publication (Hermida, 2020; Thorson & Wells, 2016). Tandoc and Vos (2016) argue that the use of social media by newsrooms renegotiates traditional understandings of journalistic autonomy, as journalists increasingly look to audiences to assess and reaffirm a story's newsworthiness. Further, other scholars have suggested that shareability, the likelihood that a story will be shared or commented on via social media, and whether or not a topic is trending on social media, have become new metrics for assessing newsworthiness (Harcup & O'Neill, 2017; Welbers & Opgenhaffen, 2018).
Social media's influence on editorial decision making is implicitly linked to journalists' attempts to understand and navigate the private and proprietary algorithms on which these platforms are built. An algorithm, broadly, is a series of encoded procedures or rules that translates information input to solve a problem or achieve a desired information output (Knuth, 1968). These systems assert both epistemological and ideological power through patterns of inclusion, prioritization, filtering, classification, and association (Diakopoulos, 2019; Gillespie, 2014). When news editors and reporters incorporate ideas of social media success into their gatekeeping processes, these power dynamics between the platform's imperatives and news organizations, as platform users, may manifest themselves in the news production process. Caplan and boyd (2018) suggest that algorithmically-driven technologies such as social media platforms and search engines structure the industries that use them through isomorphism; as news organizations become increasingly dependent on these platforms to reach their audiences, what these platforms consider relevant or newsworthy may begin to structure what newsrooms see as newsworthy. Similarly, Vos and Russell (2019) argue that through social and search platforms, Silicon Valley, as an institution, asserts regulatory and normative pressures on gatekeeping by structuring understandings of newsworthiness through algorithms and the ideological imperatives beneath them. In this way, the unique decoupling of news production and distribution facilitated by social media platforms has the potential to threaten normative understandings of newsworthiness that may come into tension with more algorithmically driven understandings (Napoli, 2019).
Recent theoretical models of digital gatekeeping have attempted to tease out how algorithms may factor into modern gatekeeping practices. While not all of these models specifically consider the role of platform algorithms in shaping how journalists understand newsworthiness (Thorson & Wells, 2016; Wallace, 2018), those models that do often argue that gatekeeping norms are increasingly oriented to what news is popular on these platforms (Heinderyckx & Vos, 2016; Poell & van Dijck, 2014) and that journalistic conceptions of these algorithmic filters may further mediate the production of news, even prior to actual distribution on platforms (Napoli, 2019).
Teasing this out is often made difficult by the opaque and often proprietary nature of these algorithms (Diakopoulos, 2019). Thus, to understand how algorithmically-driven platforms may shape editorial decisions in modern newsrooms, this study employs a concept which previous scholars have termed algorithmic 'folk theories' (Bucher, 2017; DeVito, Birnholtz, Hancock, French, & Liu, 2018; DeVito, Gergle, & Birnholtz, 2017; Eslami et al., 2016). Eslami et al. (2016) argue that since users of these platforms are unable to truly know how these platforms function, they develop 'folk theories' as a way to conceptualize, understand, and navigate their behavior on these platforms. These 'folk theories' are key in shaping how users interact with algorithmically-driven platforms (Bucher, 2017; Eslami et al., 2015). These informal beliefs do not act as a measure of accuracy, but rather as a metric against which researchers can evaluate how non-authoritative understandings guide users' behaviors through these systems in ways that may differ from the actual technological functions these users seek to understand (Eslami et al., 2016; French & Hancock, 2017). DeVito et al. (2018) found that these 'folk theories' can be drawn from a diverse set of information sources, including both endogenous information, such as individuals' own experiences on the platform and platform features, and exogenous information, such as information gained through press and conversations within social and familial networks.
While algorithmic 'folk theories' have been examined in a variety of platform and computer-mediated contexts, to our knowledge they have yet to be applied specifically to the field of journalism. Unlike everyday social media platform users, journalism professionals may gain additional insight into how platform algorithms function through the explicit use of social media analytic tools, yet they can still never truly know if these understandings are accurate due to the proprietary and opaque nature of these algorithms. Thus, their understandings of the algorithm are still non-authoritative and are not necessarily congruent with the actual technological systems on which they are based, fitting within the definition of 'folk theories.' This study investigates what 'folk theories' journalists use to understand how social media distribution algorithms function and how they may or may not use these understandings to guide their behavior in optimizing their content for success on these platforms. Drawing on hierarchical understandings of gatekeeping, this article also aims to understand how algorithmic 'folk theories' may impact gatekeeping at different levels (Shoemaker & Reese, 2014). Thus, the main research questions posed by this project are:
RQ1: What algorithmic 'folk theories' permeate journalistic practices?
RQ2: How and to what extent do journalists perceive these algorithmic 'folk theories' to influence their editorial decision making and gatekeeping practices at various levels?

Methods
To answer our research questions, we conducted qualitative, semi-structured interviews with professional journalists across a range of U.S.-based news organizations.

Recruitment
Potential interviewees were recruited using three main strategies. First, we searched the professional networking site, LinkedIn, using a series of keywords for jobs relating to news gatekeeping including: 'editor,' 'editorial,' 'editorial producer,' 'booking producer,' 'social media editor,' 'audience engagement editor,' 'analytics editor,' and 'content strategist.' Initial keywords were based on past literature on gatekeeping, and were iteratively expanded based upon the jobs returned in searches. Potential participants were contacted with a link to a short screening survey, and subsequently asked for an interview if appropriate for the study. The second recruitment strategy used was snowball sampling. After completing interviews, we asked participants to suggest colleagues in similar roles at their own or other news organizations, who we similarly screened. The third recruitment strategy was public posts via Facebook, Twitter, and relevant professional Slack channels. A link to the screening survey was included in the public post. If an individual filled out the screening survey and indicated that editorial decision making was part of their job, they were contacted for an interview.

Participants
Recruitment culminated in interviews with 18 professional journalists (denoted P1-P18 for attribution in findings), all currently working for news organizations in the U.S., conducted between August 2019 and February 2020. On average, interviews lasted 65 minutes. Participants were asked about their general gatekeeping practices, the role of social media in influencing these gatekeeping practices, their understanding of social media algorithms, and how they believed their conceptions of these social media algorithms influenced their gatekeeping practices. Interviewees' positionality ranged across the field of journalism, both in terms of the market orientation of the news organizations they worked for and the positions they held within those news organizations, allowing us to gather a broad cross section of views. Three sets of two interviewees worked for the same news organization in different roles, allowing us to compare their perspectives. Further, some of the participants had experience working at more than one kind of newsroom and were able to speak to the different ways social media functioned in the various newsrooms they had worked in (see Table 1). The broad representation of roles and news organization types present in our sample allowed us to compare a range of 'folk theories' to reveal overarching patterns and reach saturation with respect to understanding emergent themes.

Analysis
Interviews were coded using qualitative thematic coding. Themes were derived both deductively from the posed research questions and inductively as new themes arose across the interviews (Gibbs, 2007). Interviews were constantly compared to elucidate new themes and patterns occurring across interviews (Strauss & Corbin, 1997).

What Algorithmic 'Folk Theories' Permeate Newsrooms?
Of the 18 journalists interviewed for this study, 15 were actively aware that social media platforms were operated by an algorithm of some type and had given thought to how these algorithms function in relation to news distribution. These understandings came from a mix of information sources, including: direct communication from social media platforms in the form of press releases and company representatives or 'point people' who communicated directly with editors about what kinds of content the platforms wanted; experimentation to see what types of content would perform well or be 'liked' by the algorithm on different social media platforms; and discussions in public discourse. Overwhelmingly, interviewees understood social media distribution algorithms as filters that did or did not allow audiences to be exposed to their content. Interviewees positioned social media algorithms as a critical intermediary in getting news to their audiences. What varied amongst interviewees' 'folk theories' were the elements that led the algorithm to boost or limit exposure of a post, including engagement, publisher attributes, and the specific platform they were using.

Engagement
The main factor interviewees cited in deciding what content social media algorithms did and did not surface was engagement. Participants believed the more a news story was engaged with by users, the more likely it was to make it into more people's newsfeeds. However, there was no clear consensus on how engagement was measured. For instance, while P9 thought the algorithm measured all the various facets of engagement, such as liking, sharing, or commenting, "coming up with some kind of a score for the likelihood that you'll like something similar," P3 said that at different times the algorithm may favor one form of engagement over others.
Understandings of why engagement was the main metric by which algorithmic decisions were made also varied. While some journalists thought the algorithm mainly used engagement to help bring users content they were likely to be interested in, others saw engagement as a way for platforms to manage content distribution. As P6 explained: Let's say [Facebook] exposes a post to 10,000 people within our network, if all 10,000 started to click on it and not only click but comment, it became apparent that Facebook would open up that post in a way to more people and we would see hundreds of thousands of our users start getting exposed to that post.
In this way, P6 understood the algorithm's basis of engagement as a means of deciding not just what an individual user would want to see, but what users generally would want to see.
Many participants also viewed engagement measures as imbued with the corporate impulses of the companies that create these algorithms. They suggested that these algorithms were driven by engagement to fulfill social media companies' goals to increase their own advertising revenue and keep users on their platform for as long as possible. As P14 put it: What the platforms are trying to do is to keep people on their apps for as long as possible and to entice them to come back to those apps over and over and over again….What they want for us to do to help them do that is to provide those users with content that they want to engage with regularly.
Along these lines, some participants suggested that these algorithms were inherently friendlier to certain kinds of content that would produce these engaged behaviors in users: "I've seen kind of like gruesome things do really well….Obviously, the platforms are incentivized to like keep you coming back to the platform as a user….Outrage is a powerful emotion and works for a lot of these platforms" (P12).
Referring specifically to content medium, P14 also noted, "The Facebook algorithm prefers video and advertising prefers images…the sort of the hierarchy and format…from most engaging to least engaging is videos, images, text, links."

Publisher Attributes
Participants also noted that attributes of the publishers themselves may influence whether or not a story shows up in users' news feeds. For instance, some interviewees noted that social media algorithms may be friendlier to larger publishers, because they are more willing to pay for their content to be promoted or to 'pay to play.' As P7 suggested: The algorithm is more friendly towards larger publishers than its smaller publishers…they're probably forcing more publishers to pay to get that visibility, so they've pretty much cut down on the visibility that most publishers have on the web in an attempt to make them pay…for more of a visibility experience and for more of a chance to reach readers.
P7's comment speaks to the way some participants believed the algorithm may favor some news organizations over others based on the amount of economic and social capital they are seen as possessing within society. As P2 put it: Big news organizations like The New York Times and The Washington Post and even things like Mother Jones have better relationships with Facebook and Twitter than do smaller places. So even that level of it just having to do with how established a news organization is and how much time, especially Facebook gives them, contributes to the bias on the part of the programmers.
Some participants also suggested the algorithm may promote content based on the political ideology of a news outlet. During our conversation, P2 recalled a recent event that had led them to consider the way social media algorithms may take political ideology into account when promoting content: A big sort of underlying current has to do with a lot of complaints from people from the right-end of the spectrum in media…about the suppression quote unquote of conservative media and conservative voices….Facebook actually partnered, or were going to partner with The Daily Wire, which is Tucker Carlson's website to combat that suppression, which many of us in on the progressive side of things believe wasn't happening….I don't know how much of this actually happened because once the story came out, everyone kind of freaked out, but even just that public release of that statement kind of suggested a bias on the part of the people creating the algorithms.
Similarly, P11 argued that social media algorithms put people into ideological filter bubbles, and thus inherently take publisher ideology into account when promoting or suppressing content in users' newsfeeds.

Differences across Platforms
While interviewees spoke to general understandings of how social media algorithms worked, many of them noted that there were key differences in how they conceptualized the algorithm across various social media platforms. For instance, a handful of respondents noted that specifically on Facebook, a story would be demoted if it was posted twice within a short period of time: Facebook is so algorithmically interesting. We will not post the same story on Facebook within 48 hours of each other. That's kind of our tried and true rule because the way that the algorithm is, what I post right now, you could see in six hours. So that 48 hours kind of keeps it from having you being served the same story twice. (P15) As P12 elaborated, for this reason, they believed the Facebook algorithm would "penalize" their content if they posted the same story too close together. These comments speak to a general trend across our interviews of positioning the Facebook algorithm as more heavy-handed when compared to other social media algorithms.
Comparing the newsfeed algorithm on Facebook and the homepage algorithm on Reddit, one interviewee noted: Facebook, you have to do a number of things before you post the content to make sure that it's seen by enough people….If not, Facebook is just kind of not going to do it. Whereas a Reddit, which is a more like kind of like user-generated forum I guess you would say, it's more about knowing how to approach the different communities to engage with your content based on the rules and parameters that they're setting up. One is like, you know, computer generated, one is user generated. (P5) As this comment points to, many of the participants conceptualized a clear distinction between the Facebook algorithm and other social media algorithms. Even though both platforms are based on user-generated content, Facebook's newsfeed algorithm was seen as automated, whereas the Reddit algorithm was seen as more dependent on actions of the users, due to its structure and use of subreddits. Making a similar comparison between the Facebook and Twitter algorithms, P3 noted: The reason that they've [Twitter] been able to sort of like skate under the radar is that their algorithm is a much lighter touch. It's always been dependent on what the people you follow are doing. So, when I'm approaching it from a content point of view I know that someone will see this if someone in their timeline retweets it, like that's a much different conversation than like Facebook wants shares, like we must write a thing that will be shared.
In these ways, interviewees saw the Facebook algorithm in particular as being more pointed and opaque when compared to platforms such as Twitter and Reddit.

Influence on Editorial Decision Making
Six participants noted that they did not actively consider social media algorithms in any capacity in their editorial practices. Five of these journalists attributed this to the fact that either their newsroom is not concerned with social media audiences due to a niche news focus or they personally are not directly involved with or responsible for their newsroom's use of social media to distribute content. The last journalist who did not actively consider social media algorithms in their editorial decisions, P14, attributed this to the fact that they tend to find success on the platform, not through chasing the algorithm, but rather through focusing on the specific needs of their audience across various platforms. As they noted: We base our distribution decisions on the audiences that are built there. When we put things in those platforms, they tend to be successful because they're geared for that platform. They're just not geared toward that platform for the reason of the algorithm. They are geared for that platform for the benefit of the audience.
On the other end of the spectrum, only two participants said they had been explicitly told not to cover a story because the content would not perform well on social media due to the algorithm's basis on engagement. In one instance P2 recalled: There'd be a lot of pushback about investigations that I wanted to do on white supremacist stuff, nothing had been published about it or very little had been published about it…[it] would have been good journalism for the website to publish, but because it might not draw as much social media engagement, it was turned down.
Though their story met more traditional news values of timeliness, novelty, and importance, because their editors presumed that it would not be engaging on social media, and thus that the algorithm would be unfriendly to the content, P2 was not allowed to cover the story. Similarly, P6 was once told by their editor that they could not pursue a story on homelessness in the surrounding region because the topic would not perform well with their audiences on social platforms.
It is important to note that in both these instances P2 and P6 were answering to more senior editors, and thus may not have been aware of the exact thought processes and factors that may have influenced these decisions, and in actuality, these decisions may not have been made due to algorithmic considerations. Further, because we did not talk to their editors, we have no way to confirm their interpretation of these events. However, we suggest that because these participants perceived that these stories were killed due to the algorithm's basis on engagement, their own algorithmic 'folk theories' structured their understanding of these events, and in turn, their future sense-making practices around editorial decision making.
For the rest of the participants, social media algorithms influenced the editorial decision-making process in more complex and subtle ways. For instance, in some newsrooms, guidelines for social media platform usage issued to news organizations became a factor in the editorial process. In a few instances, interviewees mentioned that these guidelines were reinscribed into their own newsroom's editorial guidelines. If there were certain kinds of language or imagery these platforms explicitly said would be suppressed by the algorithm, editors made a note to exclude this content from their reporting at large.
Some interviewees also mentioned that if they believed a story did not perform well due to the algorithm, this influenced future newsroom editorial conversations about how resources may be allocated to covering a similar topic in the future. In one instance P16, who works for a local news publication, said that due to low levels of engagement on a national political story, their newsroom shifted reporting resources to focus on a more prevalent local political story which garnered more audience engagement. They subsequently only used wire copy to cover the national story. Similarly, other interviewees noted that more resources, especially from social media teams, may be invested in stories they presumed would perform well on social. Thus, while feedback from the algorithm does not necessarily foreclose reporters and editors from pursuing important stories, in some cases, it may potentially make them more hesitant to consider coverage of these stories in the future or shift how they cover such stories because of a presumed low return on investment.

Influence on Content Presentation and Framing
The majority of participants said the main way social media algorithms influenced their reporting was on the level of content framing. As P3 noted, "I think a good story across platforms is a good story. I think that the way you present the story…that's what changes." All of our participants noted that they would not reject a story outright because it does not align with their understanding of what content is preferred by platform algorithms. Rather, they try to find "different strategies to get Facebook's algorithm to cooperate" (P5), in terms of how they frame the story, tweaking headlines to be more engaging to readers, and being deliberate about the photos and videos they post alongside the stories. However, even to this point, some participants noted that they would only think about the algorithm in their framing to the extent that it aligned with their own editorial judgement. P14 noted how potential tension between traditional journalistic norms and social media algorithms' preferences can limit the extent to which they reframe a story to be preferable to the algorithm: Facebook['s algorithm] really shares and engages with strongly worded arguments, but that doesn't necessarily…sometimes it does, but not always align with our editorial style and editorial angle. I'm not going to manufacture that kind of framing for a story that isn't really in line with that just because that's what Facebook's algorithm likes.
Thus, while some journalists believe social media algorithms have the ability to detect and promote more engaging content, they refuse to let this understanding completely dictate their editorial decision making.

Limitation of Influence
Despite these instances in which interviewees cited algorithmic 'folk theories' as influencing aspects of their editorial decision making, all participants agreed that if they believed a story was worth covering, they personally would continue to pursue the story, whether or not they thought it would be promoted by the distribution algorithm. As P8 stated: Platforms are never telling you what to do week to week. It's more how the algorithms work….We tend to stay guided very much by editorial principles. So, we're trying to grow, we're trying to optimize, we're trying to find ways to engage…but not at the expense of our editorial identity.
Thus, while journalists' 'folk theories' of distribution algorithms may influence various aspects of their editorial decision-making process, they often do not supersede journalists' autonomy to ultimately decide what is and what is not newsworthy. In part, this seems to be due to the opaque nature of these social media algorithms. As P17 aptly stated: It'd be a mistake to try to draw too many conclusions from the performance of any given [story]. It's a very messy ecosystem. So I don't think any of us are saying, "that didn't do well because of this algorithm and this is why"….I actually don't have very clear feedback on why things do well and why they don't.
Thus, the opaque nature of social media distribution algorithms may make journalists less inclined to give them weight, especially in light of other, perhaps better-understood, means of reaching their audiences online, such as their own homepage, news aggregators, and SEO.

Moderating Factors in Algorithmic Influence
Drawing on Shoemaker and Reese's hierarchy of influences model (2014), here we elaborate some of the factors involved with journalists' understanding of algorithmic impacts on editorial decision making at different levels.

Organizational Level
On the organizational level, a media organization's business model, size/brand, medium, and editorial focus were all key factors moderating the extent to which journalists considered social media algorithms in their editorial practices. Journalists who worked for commercial (i.e., for-profit) news organizations whose financial success was heavily reliant on online advertising revenue were more attentive to social media algorithms in their editorial practices than those who worked for non-profit, donor-funded news organizations. As P12 noted, platform algorithms were rarely brought up in the editorial process at their current non-profit news organization, unlike at the for-profit newspaper where they previously worked, where the Facebook algorithm was often discussed because of the high amounts of traffic the paper received from the platform.
Interviewees who worked for larger, well-known news organizations often noted that their newsroom was less concerned with considering social media algorithms in its gatekeeping practices. Talking about their current experience working at a large, mainstream national news outlet, P6 said, "It's probably the first and only organization I've been at where they have such a global and national brand that the need to go viral through social media is not as important." Some interviewees also noted how different mediums may be more or less likely to treat social media as a primary channel for reaching their audiences. For instance, P1 said the reason they do not think about social media algorithms is because, "I don't work on those mediums. I worked for the broadcast medium, right? So, my priority and my goal is to service content that works for that medium. Anything else is secondary." Finally, the journalists interviewed for this study who have a more niche focus to their reporting attributed their lack of focus on social media algorithms to the fact that they are not on social media to reach a general audience, but a very specific one.

Routine Level
On a routine level, the influence of social media algorithms seemed to be moderated by how much importance was placed on satisfying these algorithms within day-to-day newsroom practices. Journalists whose news organizations put more importance on social media felt more pressure to consider the algorithm in their editorial practices. Only four participants said they had not encountered discussions about these topics in their newsroom, and these four participants also did not think about social media algorithms in their editorial decision-making process.
Many interviewees who worked in social media and audience engagement roles noted how they would often be tasked with helping editors and reporters translate content made for their primary platform(s) into forms more satisfying to social media algorithms, often in indirect ways. As P4, who worked in audience engagement, put it: Algorithms are harder for people to grasp if they're not doing this every day. So, I might explain findings and I might explain an algorithm in the simplest way I can to get people to understand why our strategy is the way that it is, but it's not core to the conversations we're having with reporters and editors.
In this way, these specialized editors become intermediary gatekeepers who both control and temper other journalists' algorithmic understanding and thus the influence of algorithms on editorial decisions. This intermediary role underscores the social dynamics inherent to routine practices, which may end up privileging the algorithmic 'folk theories' of some, such as specialized or senior editors, in making final editorial decisions.

Individual Level
Participants' individual role in their news organization, as well as their individual principles regarding their journalistic practices, were key modifiers of the extent to which their understanding of social media algorithms influenced their editorial decision making. For example, P9 noted they did not think about social media algorithms at all on their editorial team; however, this may be because they report for a highly specialized division of their news organization. Consistent with this suspicion, P15, who worked in a role directly related to social media at the same news outlet, did consider the algorithm in their editorial practices and would occasionally prioritize sharing stories on social media they thought the algorithm would favor.
Individual principles also played a role for some participants in the extent that they considered social media algorithms in their editorial decisions. Interviewees expressed that while some of their colleagues may be inclined to write a story because it would be algorithmically successful, they would not, due to their commitment to the normative principles of journalism.

Discussion and Conclusion
This study took an empirical approach to the question of how social media algorithms influence modern-day gatekeeping practices by professional journalists. We utilized the concept of algorithmic 'folk theories' to ascertain journalists' understandings of social media distribution algorithms and, further, the extent to which they perceive these understandings to shape and influence their editorial decision-making process. Empirically examining current theoretical models that position these algorithmic imaginaries as key influencers in shaping newsroom editorial decisions (Heinderyckx & Vos, 2016; Napoli, 2019), our findings suggest that, while ideas about social media algorithms have become a new element influencing gatekeeping practices, especially with regard to content framing and the allocation of resources to stories, they do not completely capture journalists' editorial decision-making process. Their influence is often limited to the extent that algorithmic newsworthiness aligns with traditional understandings of newsworthiness. Further, we found that social media algorithms in particular are becoming less influential in overall editorial decision making, as journalists turn to other, often clearer channels to reach audiences off-platform, such as content aggregators and SEO.
Yet these algorithmic considerations still pose new sources of tension for journalists as they attempt to consider these new editorial impulses alongside traditional norms of editorial decision making. As P5 put it, news organizations can no longer "force feed" readers the content they believe is important, and may in some instances find their understanding of newsworthiness being more reactive to what their audience will and will not engage with. In this way, the process of gatekeeping becomes a contested ground in which journalistic autonomy, to varying degrees, is negotiated alongside impulses to create content in line with understandings of social media distribution algorithms. Journalists may feel pressure to focus on covering a narrowly defined category of 'quantifiably' engaging content, over content that may hold traditional values of newsworthiness and be important for people to know.
We use the term 'quantifiable' here to distinguish the forms of content we see these algorithms pointing to from engagement as understood within the emerging field of engaged journalism. Where engaged journalism pushes journalists to engage deeply with the communities they serve to bring them topics and content that are important to their social realities, or "giving the audience what they want" (Ferrucci, Nelson, & Davis, 2020, p. 1588), algorithms define engagement through the narrow lens of quantifiable metrics (Diakopoulos, 2019), which push journalists to focus on content audiences will click on. Further, while audiences may click on a story because it is something they care about, they may also click on a story because it is sensational or evokes curiosity. In other words, what is 'quantifiably' engaging is a much more reductive and often superficial understanding of engagement than that of engaged journalism. Modern-day news editors must constantly weigh what they know as journalists to be important for creating an informed public against what they think the audience will find 'clickably' engaging, and thus promoted through the algorithm. However, as noted, our findings suggest that these new impulses are rarely perceived to impede journalists from fulfilling their normative role of providing important social and political content.
Building on Shoemaker and Reese's (2014) hierarchical model of how media content is shaped, this article also argues that the degree to which journalists' algorithmic 'folk theories' influence gatekeeping differs at the organizational, routine, and individual levels. Importantly, in line with previous research at the organizational level, algorithmic understandings are often translated to larger newsrooms by social media or audience engagement editors (Assmann & Diakopoulos, 2017). In some instances, these editors tried to temper the degree of algorithmic influence on editors' and reporters' editorial decisions.
This study contributes to the growing literature on how digital circulation is changing and contesting traditional, normative ideas of journalistic gatekeeping and autonomy. As social media increasingly becomes an important news distribution channel, scholars have pointed to how traditional journalistic gatekeeping roles are continually contested and re-negotiated both pre-publication (Tandoc & Vos, 2016) and post-publication (Hermida, 2020). Interrogating gatekeeping practices pre-publication, our findings illuminate how professional journalists' understandings of social media algorithms act as a new force against which they must negotiate their traditional understandings of journalistic autonomy in their editorial practices. More broadly, findings from this study may inform journalism scholars, newsroom professionals, and even platform operators in understanding how conceptions of distribution algorithms are shaping the journalistic field.

Limitations
Findings from this study, while robust, are still based on a relatively small sample of journalists drawn from U.S.-based newsrooms, where the news media has historically been positioned as an institution that promotes western democratic ideals of knowledge, political participation, and free speech. This culturally specific positioning of the news media heavily influenced the theoretical frameworks and the normative principles of journalism that our study interrogates, limiting the generalizability of our findings to other, especially non-western, cultural contexts. Future research might expand upon these findings, especially in different cultural contexts, using methods such as representative surveys and quantitative data analyses.
Further, as an interview study, our data can only speak to what journalists perceive themselves to be doing, not what they actually do. Research on journalistic role performance (e.g., Mellado et al., 2020; Mellado & Van Dalen, 2014) has shown there is often a gap between journalists' perceptions of their performance and their actual performance at both the individual and organizational levels. We cannot account for response bias on the part of the journalists we interviewed, who, despite the anonymous nature of the interviews, may not have been open to expressing, or even aware of, the full extent to which their algorithmic 'folk theories' influence their editorial decisions, especially in ways that would violate their own understandings of their normative journalistic roles. Future work should also consider complementary methods such as participant observation or content analysis in order to triangulate practices and outcomes relating to algorithmic 'folk theories.'