Watching the Watchdogs: Using Transparency Cues to Help News Audiences Assess Information Quality

The myriad of information sources available online can make it hard for the average reader to know whether a piece of content is credible. This research examines whether the public’s assessment of the credibility of information could be made more accurate with the help of transparency features that act as heuristic cues under the elaboration likelihood model and the heuristic-systematic model, and whether the cues increase cognitive absorption. Two between-subjects studies were performed, one with a young demographic (N = 68) and another with a representative sample of the adult population (N = 325). The stimuli contained information boxes designed to indicate that the story was not written in a traditional journalistic style (message cues) and missing background information on the author (source cues). Results show significant effects of the cues on credibility assessment and cognitive absorption.


Introduction
Even though the decline of public trust in news is not homogeneous around the world, the trend has affected several nations (Hanitzsch et al., 2018). In a study conducted by the Reuters Institute on 46 markets, only 42% of participants said "they trust most news most of the time" (Newman et al., 2022, p. 10). Americans show even lower numbers. According to data by Gallup, a mere 34% of respondents said they trust mass media to report the news "fully, accurately and fairly" (Brenan, 2022). The reasons for distrust include suspicion that the media is pushing the economic and political agendas of the powerful (Newman & Fletcher, 2017), as well as a lack of transparency on news production, source selection, and funding (Gottfried et al., 2020).
In an attempt to regain public trust, news outlets observed at the end of the 20th century that there was a public demand for transparency in politics, business, and international relations, among other fields, and began to advocate for transparency as a replacement for objectivity (Craft & Vos, 2021). Objectivity had been the "moral philosophy" guiding journalism since the 1920s, when journalists affected by First World War propaganda and the rise of public relations shifted toward fact-based reporting (Schudson, 1978). This approach emphasized practices such as verifying information by consulting multiple sources and balancing the various sides mentioned in a story (Kovach & Rosenstiel, 2001). In recent years, some in the field of journalism have suggested that fact-based, objective reporting is not enough (Chadha & Koliska, 2015; Masullo et al., 2022).
The push for transparency in news takes objectivity a step further and is an attempt to give audiences insights into the quality of the reporting as well as the quality of the news organizations and individuals who did the reporting (Chadha & Koliska, 2015). The purpose of this research is to explore the ways that transparency cues can be used to help audiences better evaluate the quality of news. Even though transparency has been conceptualized in a number of different ways, most of the approaches have focused on how journalists and newsrooms can make changes to their routines in ways that are more transparent (Craft & Vos, 2021). We take a different approach to transparency that explores the use of a more algorithmic form of transparency, meaning that the transparency features were not put in place by the news outlet but instead by a third-party algorithm. In our approach, we exposed participants to news stories that have been analyzed by an algorithm that provides readers with information about both the message and source quality of the news story. We explore how such algorithms might influence attitudes toward the credibility of the news and whether the presence of transparency cues increases the level of cognitive absorption of the story.

Literature Review
The literature suggests that transparency in the news typically falls under a broad umbrella of at least two categories: transparency practices in the newsroom and tools implemented by journalists to demonstrate transparency in news content (Chadha & Koliska, 2015). For example, newsrooms can adopt practices like clarifying news outlet affiliations, maintaining newsroom blogs, or providing explanations about the editorial process (Heikkilä et al., 2014). Journalists, on the other hand, often practice transparency by including specific features in their stories such as external links to the primary sources of information, the embedding of original documents, the author's email, corrections, a space for reader commentary, or detailed time stamps of when the story was published and updated (Karlsson, 2010).
The effectiveness of these targeted practices has been debated. Karlsson and Clerwall (2018) found that the public did not think of transparency as an aspect of good and credible journalism, while Bhuiyan et al. (2021) found a more mixed set of results. In a series of qualitative interviews, the researchers found that a few participants believed that journalists could improve their credibility by being open about their biases, yet other participants preferred transparency tools that indicated the objectivity and evidentiary basis of the information. Karlsson (2020, p. 1808) suggests that this could happen because the transparency features are provided by the same source of information as the news story. In other words, all current attempts at providing transparency still rely on journalists or newsrooms providing additional details about their practices. Our approach does not ask the public to trust the journalist or the newsroom; instead, it uses algorithms to provide audiences with quality indicators about the story itself. Our study focuses on transparency at the news item level with two features: an indicator of the quality of the source of the story and a second indicator of the quality of the message or news content.

The Importance of Source and Message Cues for Trust
When it comes to the evaluation of the credibility of a piece of news, researchers have suggested that audiences evaluate the credibility of both the news source and the message (i.e., story features). Research on source credibility includes a study that found no effect on credibility perception of information indicating the author's gender (Henke et al., 2021) and another that found positive effects of explanations of the journalist's stance on the issue (Karlsson et al., 2014). Specifically related to message credibility, Peacock et al. (2022) compared the effect of labels that indicated whether the story was news, analysis, opinion, or an advertisement at the top or in the middle of the text, finding that neither had an impact, while Masullo et al. (2022) found no significant effect of an information box that explained how and why the story was created.
A study that combined source and message characteristics (author's bio with a picture, additional information about the story, footnotes, and the aforementioned label) found a significant effect; however, only 32 of the 613 test group respondents interacted with the features (Curry & Stroud, 2021). It is important to note that all of the studies mentioned so far considered the audience as a monolithic group. Karlsson (2020) segmented participants by demographic characteristics as well as differences in previous trust in media, the channel of information, and news consumption habits, finding that features increased credibility perception for those who already had a positive attitude toward news media. Prochazka et al. (2018) found that skepticism toward media was an important factor that led comments to have a positive or negative impact on the quality assessments of a news media brand.

Theorizing How Transparency Features Work
Under the theoretical framework of dual-processing models of information, such as the elaboration likelihood model (ELM) proposed by Petty and Cacioppo (1986) and the heuristic-systematic model (HSM) proposed by Chaiken (1987), we theorize that transparency features on news outlet websites work as cues to stimulate more critical evaluations of both the source and the message content of a news story. Various dual-processing models (Liu & Shrum, 2009; Metzger, 2007) indicate that there are two ways in which people process information: by analyzing the message critically in a systematic way or by using heuristics, meaning external characteristics, to make snap judgments about the information they are receiving. In the ELM, these are the central and peripheral routes to persuasion. In the HSM, they are called systematic processing and heuristic processing; cues appeal to heuristics, which are previously established rules in the person's mind.
In Chaiken's work in particular, the impact of various source and message cues was found to have both differential and co-occurring effects on individuals' attitudes toward messages (Maheswaran & Chaiken, 1991). Specifically, the authors found that when source cues (e.g., the perceived expertise of a person, their education, or appearance) and message cues (e.g., the perceived quality of the rhetoric, syntax, and arguments in a given message) appeared high in credibility, individuals used both systematic and heuristic processes to evaluate the credibility of a source. In contrast, when source and message cues seemed to call into question the validity of the information, individuals evaluated content using a systematic process only. Simply put, when cues call into question the validity of a claim, people tend to analyze the claims more carefully using a systematic approach. The same research also found that in cases where people were more highly involved with the information and the information's credibility was called into question, people used a systematic processing route to evaluate the validity of a claim.

Source and Message Cues and Their Impact on Cognitive Absorption
Research has indicated that a number of affordances of digital media can impact what audiences focus their attention on in a given product. Attributes like information boxes, blinking elements, drop-down menus, and highlights have had both positive and negative effects on what audiences learn about a piece of information and the degree to which they are involved in messages (Oh et al., 2018; Sundar, 2008). The state of "deep involvement with software" is known as cognitive absorption (Agarwal & Karahanna, 2000). While not every cue aids information processing, research suggests that cues can help people become more cognitively absorbed in the information they are processing online. As a result, Oh et al. suggest that cues can "trigger more systematic user engagement with content" (Oh et al., 2018, p. 45). In the context of a news story, it is common for both legitimate and fake news sites to include a number of cues designed to trigger heuristic processes. These cues typically help audiences make snap judgments about the quality of the source and message. Common cues might include a website name that looks like a legitimate news source (e.g., PatriotNews.com, NBCNewNow.com) or news banners designed to appear like mainstream news. Other news sites might embed photos or use other cues that are designed to encourage audiences to think less critically about the content and see the news story as legitimate. Transparency cues are designed to instead encourage further scrutiny and analysis. In those instances where the cues suggest that a piece of information is less trustworthy, cues can highlight what the algorithm perceives to be flaws in a story and lead readers to engage in more critical or systematic thought processes.

Transparency Cues as a Form of Explainability
Several studies have suggested that in order for audiences to accept algorithmic decisions, audiences must have some way of assessing how that algorithm made its decisions. In other words, algorithms that are explainable are more likely to be perceived as legitimate. Algorithmic explainability has been conceptualized as the extent to which an algorithmically driven system can provide users with insights into how the algorithm arrived at decisions or how it provided recommendations to the user (Arrieta et al., 2020; Shin, 2021, 2022; Shin et al., 2022). Shin has found that explainability features are an important attribute in making an algorithmic choice more transparent and that they can influence whether an individual trusts the algorithmic recommendations (Shin, 2021). In practice, explainability has been operationalized in a number of ways, such as pop-up information boxes that explain how an algorithm made a decision or visualizations that aid the audience in understanding how the algorithm made a choice (Shin et al., 2022; Weitz et al., 2021). We note that much of the work in explainable algorithmic research has focused on explaining how an algorithm provides recommendations (e.g., a recommended film or piece of news) to audiences rather than showing how an algorithm might have analyzed a piece of content, which is our purpose here. To that end, we experiment with a series of source and message cues that are designed to indicate the quality of the content. In contrast to some algorithms that might simply provide the user with feedback about whether a piece of content is true or false, our algorithm seeks to show audiences how it arrived at its conclusions by providing both visual and textual cues that help audiences see for themselves how the analysis of the story was conducted.

Research Questions
Our study investigates whether algorithmic transparency features can act as cues to stimulate the use of systematic processing in credibility assessment. In order to explore this, we provide individuals with a piece of news that has been analyzed by an algorithm. The news was intentionally written to be of poor quality to mimic some types of information individuals might encounter on social media. The algorithm was designed to provide participants with information cues about the story's source and message. We theorize that the use of such cues will help participants think more critically about the content. We report the results of two separate studies below. The first tested our algorithmic cues on a college-aged population. The second study used a sample that was representative of US voters. Our research questions are:
RQ1: Does the incorporation of algorithmic transparency cues (source vs. message) change the perceived credibility of a story?
RQ2: Does the incorporation of transparency cues (source vs. message) in a story increase the level of cognitive absorption?

Participants for Study 1 and Study 2
A total of 90 undergraduate students from a major university in the Southeastern US were recruited to participate in an online experiment for extra credit. Out of these 90 participants, 68 provided complete responses and passed the corresponding attention checks. Respondents were 63.2% female (n = 43). In terms of political leanings, 55.9% of the participants described themselves as liberals (n = 38), 25% as moderates (n = 17), and 19.1% as conservatives (n = 13).
Following our analysis of the initial student population, we sought to confirm our results in a more general pool of adults. A total of 402 participants from a representative adult population pool in the US were recruited through Prolific, a company that provides sample populations to researchers. To incentivize participation, participants were paid $3 for completing the survey, which took an estimated 15 minutes. Of these 402 participants, 325 provided complete responses and passed the corresponding attention checks. Male and female respondents represented 49.2% each (n = 160), and 1.5% of respondents identified as non-binary (n = 5). In terms of political leanings, 54.5% of the participants described themselves as liberals (n = 177), 19.1% as moderates (n = 62), 26.2% as conservatives (n = 85), and 0.3% preferred not to answer (n = 1).

Sample Size and Power Analysis
Regarding the sample size of Study 1 (S1), we note that the size of the student sample was determined by the availability of participants (i.e., convenience sampling), since the main purpose of this study was to serve as a pilot for Study 2 (S2), which we planned to conduct with a more representative sample of the US population. Following S1 and the preliminary results obtained from the student sample, we performed an a priori power analysis to determine the minimum sample size needed for our second study. For these purposes, we computed the sample size for a one-way ANCOVA assuming a medium effect size (Cohen's f = 0.25), a power of 0.8, and a significance level of 0.05. With these parameters, the minimum sample size required for S2 was 179.
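This kind of a priori computation can be sketched from the noncentral F distribution (assuming SciPy; the exact result depends on how the covariate degrees of freedom are counted, so treat this as an illustrative reconstruction rather than the tool we used):

```python
from scipy import stats

def ancova_sample_size(f=0.25, k_groups=4, n_covariates=1,
                       alpha=0.05, target_power=0.8):
    """Smallest total N reaching the target power for a one-way ANCOVA,
    using the noncentral F distribution with noncentrality f^2 * N."""
    n = k_groups + n_covariates + 2        # smallest N with positive error df
    while True:
        df1 = k_groups - 1                 # numerator df (between groups)
        df2 = n - k_groups - n_covariates  # error df after the covariate
        nc = (f ** 2) * n                  # noncentrality parameter
        f_crit = stats.f.ppf(1 - alpha, df1, df2)
        power = stats.ncf.sf(f_crit, df1, df2, nc)
        if power >= target_power:
            return n
        n += 1

n_required = ancova_sample_size()          # close to the 179 reported above
```

Larger assumed effects shrink the required sample, which is why a conservative medium effect size (f = 0.25) was assumed before S2.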

Procedures
An experiment with a 2 (source cues: no/yes) × 2 (message cues: no/yes) between-subjects factorial design was conducted, with three parts and a duration of approximately 12 minutes. In the first part, participants completed a survey about news consumption habits and preexisting attitudes toward news using five-point Likert scales.
For the second part, the participants were randomly assigned to one of four conditions: control (S1: n = 24, S2: n = 78), message cues (S1: n = 14, S2: n = 85), source cues (S1: n = 13, S2: n = 81), or both types (S1: n = 17, S2: n = 81). They watched a video tutorial, lasting approximately one minute, on how to use the provided website, and were asked to read the article and pay attention to the transparency features on the screen.
The third part consisted of two series of five-point Likert scale questions: an adaptation of Gaziano and McGrath's (1986) credibility scale and an investigation of cognitive absorption following the technology acceptance model (Davis, 1989). The technology acceptance model evaluation included an assessment of perceived usefulness and perceived ease of use. However, due to space constraints, we do not report these results and focus exclusively on credibility and cognitive absorption.
All participants read the same news story, which was taken from an original news source and modified by professional journalists, about a concert venue requiring individuals to provide proof of Covid-19 vaccination to be allowed inside at an upcoming concert. Stimuli varied according to the groups, possibly including source or message cues (see Figure 1).
Participants in the control condition only needed to read the story and answer our follow-up questions.
Journalists are taught to include basic facts in their news leads, called the 5Ws and 1H: who, what, when, where, why, and how. The message cues condition consisted of a drop-down list of the algorithm's assessment of whether those items were present in the story. We intentionally left several message characteristics blank to highlight the low quality of the story's reporting process. This condition also included corresponding highlights in the text, with missing event descriptors highlighted in red on the side panel. Furthermore, the website provided detailed tooltips when hovering over the corresponding message cues (e.g., providing more details or indicating that the element was missing), which are summarized in a table in Appendix 3 of the Supplementary File.
The source cues condition consisted of a drop-down list of background information about the author of the news article: name, main field of expertise, number of years in journalism, known retractions, and other places where the author had been published. We intentionally highlighted the fact that the author had an inconsistent expertise area (politics, in a non-politics article), an unknown number of years in journalism, and publications in biased news sources. Furthermore, the website provided detailed tooltips when hovering over the corresponding source cues (e.g., explaining what each particular element meant). Appendix 3 of the Supplementary File presents a table with the tooltips shown to the users.
The fourth condition combined both the message and source cue treatments. This design allowed us to explore the effects of each type of cue individually as well as their combined effect. The website in this condition included the tooltips for both types of cues when hovering over the corresponding credibility cues.
Each group also had an interactive attention check for the tasks. Participants in the control group needed to click on a button to read the text, which was blurred. Participants in the other condition groups needed to click on a button to see the corresponding transparency cues. Each click and hover over the tooltips was tracked. We discarded participants who did not click, because this meant that they responded to the follow-up questions without reading the article or the associated transparency cues. In the first study, 22 responses were discarded, and another 77 were rejected in the second study.

Preexisting Attitudes Toward News Media
Participants' preexisting attitudes toward news media were measured before exposure to the stimulus with six items using 5-point Likert scale questions taken from Williams (2012) and Tsfati (2002). Attitude toward news media was determined by averaging the responses to the six items. Both studies showed a high level of internal consistency for preexisting attitudes (Cronbach's α, S1: 0.84, S2: 0.95).
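The internal consistency statistic used here and for the scales below is Cronbach's alpha, which compares the sum of the item variances to the variance of the respondents' total scores. A minimal sketch with made-up ratings (not our data):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of columns, one per scale item,
    each holding that item's scores across all respondents."""
    k = len(items)
    sum_item_var = sum(statistics.variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    return (k / (k - 1)) * (1 - sum_item_var / statistics.variance(totals))

# Three hypothetical 5-point Likert items answered by five respondents;
# the items track each other closely, so alpha is high.
items = [
    [1, 2, 3, 4, 5],
    [1, 2, 3, 4, 5],
    [2, 2, 3, 4, 5],
]
alpha = cronbach_alpha(items)  # ~0.99
```

Items that move together inflate the total-score variance relative to the item variances, which is exactly what pushes alpha toward 1.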

Perceived Credibility (RQ1)
The perceived credibility of the article was measured with four Likert-type items (see Figure 2), modified from Gaziano and McGrath (1986). Participants were asked whether the article was fair, complete, accurate, and trustworthy. The final perceived credibility value was determined by averaging the responses to the four items (Cronbach's α, S1: 0.85, S2: 0.89). Additional items in Gaziano's original scale, such as biased, subjective, and sensationalistic, had lower factor loadings (<0.6) in preliminary factor analyses, representing an additional factor that was neither reliable (Cronbach's α, S1: 0.53, S2: 0.73) nor relevant to our experiment.

Cognitive Absorption (RQ2)
Cognitive absorption is a measure that depends on five dimensions: temporal dissociation, focused immersion, heightened enjoyment, control, and curiosity (Agarwal & Karahanna, 2000). We analyzed cognitive absorption as one factor with parsimony in mind, averaging the responses to nine items (Cronbach's α, S1: 0.90, S2: 0.92). We note that the nine items we selected were a subset of the original cognitive absorption scale, as not all elements were relevant to our evaluation. We show the evaluation items for cognitive absorption in Figure 3.

Duration
The time taken by each participant to complete the task could influence the results for cognitive absorption. Therefore, we measured the duration in seconds and applied a logarithmic transform to the duration of the task (Dragicevic, 2016), which mitigates outliers and corrects for the positive skewness in time measurements (Keene, 1995; Sauro & Lewis, 2010).
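To illustrate why the log transform helps, consider a toy example with hypothetical timings (not our measured durations): completion times are bounded below but not above, so a single slow participant stretches the right tail far more than the left, and taking logs pulls that tail back in.

```python
import math
import statistics

# Hypothetical task durations in seconds; the last participant is an outlier.
durations = [95, 110, 130, 150, 170, 210, 260, 900]
log_durations = [math.log(d) for d in durations]

def right_tail_ratio(xs):
    """Distance of the max above the median, relative to the distance of the
    min below it; values far above 1 indicate strong right skew."""
    med = statistics.median(xs)
    return (max(xs) - med) / (med - min(xs))

raw_ratio = right_tail_ratio(durations)      # ~11.4: heavily right-skewed
log_ratio = right_tail_ratio(log_durations)  # ~3.3: much more symmetric
```

After the transform, the outlier no longer dominates group means, which is what makes log-duration a better-behaved covariate in the analyses below.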

Results
RQ1 examined whether the use of source cues and message cues had an effect in terms of perceived credibility.
The results of a one-way ANCOVA with preexisting attitude toward news media and politics as control variables revealed a statistically significant difference between the four groups in both studies (S1: p < 0.001, S2: p < 0.001) with a large effect size (S1: η² = 0.247, S2: η² = 0.140). Tukey's honest significant difference (HSD) test revealed that all three conditions with transparency cues (message, source, and both) are statistically different from the control condition in terms of perceived credibility (S1: p < 0.05, S2: p < 0.001), as shown in Table 1. The use of source cues on their own was associated with the lowest perceived credibility (S1: M = 2.853, SD = 0.713; S2: M = 2.543, SD = 0.881), followed by the combination of both cues (S1: M = 2.882, SD = 0.801; S2: M = 2.750, SD = 0.885) and the message cues on their own (S1: M = 2.929, SD = 0.654; S2: M = 2.753, SD = 0.764). In contrast, the control group had the highest perceived credibility (S1: M = 3.701, SD = 0.822; S2: M = 3.413, SD = 0.774). However, we note that the differences among the groups with transparency cues are not statistically significant according to Tukey's HSD test.
We note that preexisting attitude had a significant effect (S1: p < 0.05, S2: p < 0.05) on perceived credibility, with a more negative attitude being associated with lower perceived credibility ratings. The effect size of attitude for the student population was large, but it was small for the representative adult population (S1: η² = 0.160, S2: η² = 0.015). Finally, we highlight that political leaning also had a significant effect in the adult population (p < 0.05), while it did not have a significant influence in the student population, which could be caused by the higher political diversity in the representative US population sample compared to the student sample. In both samples, the effect size of politics was small (S1: η² = 0.001, S2: η² = 0.037).
RQ2 examined whether the use of source cues and message cues had any effect in terms of cognitive absorption. The results of a one-way ANCOVA with preexisting attitude and log-duration as control variables revealed a statistically significant difference between the four groups in both studies (S1: p < 0.05, S2: p < 0.001) with a large effect size in the student sample and a medium effect size in the representative adult sample (S1: η² = 0.145, S2: η² = 0.084). Details of the results are in Table 2.
For the student sample, Tukey's HSD test revealed a statistically significant difference (p < 0.05) in cognitive absorption when comparing the both-cues (M = 3.902, SD = 0.686) and source cues (M = 3.470, SD = 0.570) groups with the message cues (M = 3.254, SD = 0.403) and control (M = 3.286, SD = 0.777) groups. For the adult population sample, Tukey's HSD test revealed that all three conditions with transparency cues (message, source, and both) are statistically different from the control condition in terms of cognitive absorption (p < 0.001). Participants with both cues had the highest cognitive absorption (M = 3.679, SD = 0.823), followed closely by the message cues on their own (M = 3.600, SD = 0.821), then the source cues on their own (M = 3.561, SD = 0.695), and finally the control group at the bottom (M = 3.091, SD = 0.739). Furthermore, both log-duration (p < 0.05, η² = 0.015) and attitude (p < 0.001, η² = 0.038) had significant, although small, effects on cognitive absorption, unlike in the student sample.
We measured and analyzed the time taken by users to complete the survey to further verify that our transparency cues improved participants' engagement with systematic processing. In particular, we make the assumption that taking a long time to complete the task is associated with a higher level of engagement. Thus, we examined whether the use of source cues and message cues had any effect on the time taken to complete the task. We performed a one-way ANOVA with log-duration as the response, with the results shown in Table 3. This analysis revealed a statistically significant difference between the groups in our second study with the adult population with a medium effect size (p < 0.001, η² = 0.092), but no significant difference in our study with the student population. Tukey's HSD test revealed a statistically significant difference in task duration between the control group and the rest of the conditions (p < 0.01 for each pairwise comparison).
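For readers who want to run this style of pairwise group comparison, Tukey's HSD can be sketched as follows (assuming SciPy 1.8+; the ratings below are synthetic random draws loosely echoing the group means and SDs reported above, not the study's data):

```python
import numpy as np
from scipy.stats import tukey_hsd

rng = np.random.default_rng(0)

# Synthetic credibility ratings for the four conditions (n = 80 each);
# purely illustrative draws, not participant responses.
control = rng.normal(3.41, 0.77, 80)
message = rng.normal(2.75, 0.76, 80)
source = rng.normal(2.54, 0.88, 80)
both = rng.normal(2.75, 0.89, 80)

result = tukey_hsd(control, message, source, both)
# result.pvalue is a 4x4 matrix of adjusted pairwise p-values;
# row 0 holds the control-vs-cue comparisons.
```

With differences of this size relative to the spread, the control group separates clearly from every cue condition while the cue conditions do not separate from one another, mirroring the pattern in Table 1.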

Discussion
We note that despite obtaining significant results and relevant effect sizes, our research is not without limitations. First, we note that our evaluation did not take into account whether the participants had any previous knowledge of the topic of the news article, which could influence their use of the cues or their perception of credibility. Second, regarding the generalizability of our studies, we note that the first study with the student population was limited by its relatively small sample size and lack of representativeness of the general population of the US. However, we found statistically significant results and large effect sizes in this sample. Moreover, we addressed these issues in the second study by using a representative sample of the US population and defining an appropriate sample size that ensured high power. However, we note that these findings might not apply directly to non-US contexts.
In addition, around 20% of participants did not engage with the transparency cues (S1: 22, S2: 77). To better understand the behavior of these participants with regard to credibility assessment and cognitive absorption, we performed a follow-up statistical analysis on the perceived credibility of this group and compared it with the control group of the representative sample of S2. There was no statistically significant difference from the control group (M = 3.361, SD = 0.884). Thus, it would be possible to consider these participants as part of the control group as well, since they did not interact with the cues. However, we have interpreted the failure to engage with the transparency cues as an attention issue (i.e., failing an additional attention check), even though these participants did not fail the regular attention checks included in the survey. Thus, we removed these participants from the final statistical analysis presented in this article.
Despite the limitations, the results reflect the potential of source and message cues to do more than just appeal to heuristic processes, instead encouraging the use of the central route or systematic processing of information (Maheswaran & Chaiken, 1991). Regarding RQ1, results indicate that the use of transparency cues was consistently associated with lower perceived credibility in comparison to the absence of cues throughout both studies. Source cues had the most impact, followed by both cues and message cues; however, the differences between the effects of the cues are not statistically significant according to Tukey's HSD test. The results demonstrate an effect similar to those found by Maheswaran and Chaiken (1991): when transparency cues suggest incongruence with other available information, they may invite more systematic processing. In our specific instance, we theorize that certain news cues invite heuristic processes. These might include items that suggest all the typical trappings of a quality news story: headers, news flags, titles that seem like an official news source, the presence of an author's/journalist's name, and so on. Such cues make it easy to engage heuristic processes that lead to quick snap judgments about the quality of a piece of news. However, when readers are presented with additional transparency cues, those cues may call for more careful scrutiny of the piece. It is also important to note that, in both studies, we found a large effect size for the influence of transparency cues on perceived credibility, higher than those reported in similar work with significant results (Curry & Stroud, 2021; Karlsson et al., 2014).
Regarding the effect of the transparency cues given the nature of the news article that we showed participants, we note that the contents of the article were actually true, just incomplete or attributed to an inconsistent source. However, the algorithmic transparency cues were designed to reduce perceived credibility, focusing mostly on the article's inconsistencies and disregarding the true nature of the article. In this context, this result raises the question of what effect the cues would have on an actually false news article. Thus, future work could include studying the effect of the cues in articles with different actual levels of truth (e.g., a completely fake article, an inconsistent or slightly biased and misleading article, and a factual article with minimal bias). Exploring how the transparency cues influence different types of articles would also be of interest. Following this line of thought, if transparency cues can alter the perception of an article's credibility in a significant way, this raises the ethical consideration of potential misuse through misleading cues that promote misinformation.
Regarding RQ2, cognitive absorption levels were positively affected by source and message cues, with a slightly higher impact for both cues combined, meaning that users who engaged with the cues had a significantly deeper involvement with the story. We considered this using two data points. First, self-report data suggest that participants in our student sample who experienced the cues generally reported higher levels of cognitive absorption, with the exception of those in the message-cues-only condition. In our larger representative sample, however, all conditions with cues yielded higher levels of cognitive absorption and thus a deeper involvement with the story.
Our second indicator came from our analysis of task duration, which revealed that participants in the representative adult sample who used the transparency cues had longer task durations. Moreover, considering only participants who engaged with transparency cues in the representative adult sample, those who took longer than the mean of 9.13 minutes to complete the task reported slightly lower perceived credibility (M = 2.628, SD = 0.869, n = 113) than those who finished faster (M = 2.729, SD = 0.827, n = 134). Although the difference is not statistically significant, these results suggest that engaging with the cues and dedicating more time to the task may lead participants to be more critical of the article, detecting the inconsistencies and thus reporting lower perceived credibility.
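As a sanity check, the summary statistics reported above (means, SDs, and group sizes) are enough to reconstruct a two-sample comparison. The sketch below is illustrative only: the group values are taken from the text, but the choice of Welch's t-test is our assumption, not necessarily the authors' exact analysis.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and approximate (Welch-Satterthwaite)
    degrees of freedom, computed from summary statistics."""
    v1, v2 = s1**2 / n1, s2**2 / n2          # per-group variance of the mean
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Slower group (above-mean task duration) vs. faster group, as reported:
t, df = welch_t(2.628, 0.869, 113, 2.729, 0.827, 134)
print(round(t, 2), round(df))  # prints: -0.93 234
```

With |t| ≈ 0.93, well below the ~1.97 critical value at α = .05 for these degrees of freedom, the sketch agrees with the paper's statement that the difference is not statistically significant.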
All results align with the purpose of the cues, which is to encourage readers to pause and reflect on the story. Following the HSM, the contradiction between the cues and the message created an "attenuating effect" (Chaiken & Ledgerwood, 2012) that increased involvement with the content, leading to the use of systematic processing (in HSM terms) or the central route (in ELM terms). In contrast, the control group and the participants who did not interact with the cues would rely on heuristic processing (HSM) or the peripheral route (ELM). Besides leading participants to conduct a more thorough assessment of credibility, engaging in systematic processing would cause them to take more time and result in higher cognitive absorption.
We argue that the use of an algorithm is another important aspect of the success of the cues, for two specific reasons. First, our algorithm offers visual and textual cues that allow users to assess the quality of the reporting on their own, instead of operating as a block that simply states whether the news item is credible or not. In this way, it follows the principles of explainability that encourage the audience to grant it legitimacy (Shin, 2021). Second, because we did not attribute the origin of the cues to the same source as the story, we are not asking the reader to trust the institution that is providing the news (Karlsson, 2020, p. 1808).

Conclusions
We have proposed the use of algorithmic transparency cues that highlight missing information and inconsistencies in the authorship of a story to help news readers judge the quality of a news item through the quality of its information. The research has two statistically significant results: first, a large effect size of the cues on the assessment of source and message credibility; second, a positive impact on cognitive absorption, a measure of involvement with software (Agarwal & Karahanna, 2000). In addition, users who engaged with the cues took longer to complete the task. Together, these results support our hypothesis that the transparency cues encouraged readers to engage in systematic processing of information (Chaiken & Ledgerwood, 2012), and consequently to think more critically about the message they received. Future research could explore whether cognitive absorption mediates the effect of the cues on perceived credibility, which would provide insight into the mechanisms by which the cues influence credibility judgments.
We have also extended the concept of explainability in algorithmic journalism beyond the context of news recommendation. Research has shown that explainability is a component of credibility in algorithmic journalism recommendations (Shin, 2021, p. 1060). We believe it can serve the same function in journalistic analysis, in this case by analyzing the news item itself and showing the results directly to the audience so they can decide for themselves whether the story is credible, thus providing them with actionable insight.
Finally, we highlight the potential ethical implications of transparency cues influencing the perceived credibility of a news article. As previously mentioned, a malicious actor could use misleading cues to promote misinformation instead of using the cues as intended.
Tanu Mitra is an assistant professor at the University of Washington's Information School. She studies and builds large-scale social computing systems to understand and counter problematic information online. Her research spans auditing online systems for misinformation and conspiratorial content, unraveling narratives of online extremism and hate, and building technology to foster critical thinking online. Her work employs interdisciplinary methods from the fields of human-computer interaction, data mining, machine learning, and natural language processing.

Figure 1. News story with source and message cues. Notes: The Author Details section covers information about the source, while the Event Summary section shows which of the 5W and 1H questions are answered in the text; information in red indicates that the information is missing or inconsistent with the topic of the story.

Table 2. Effects of the cues on cognitive absorption in S1 and S2.