Digital Civic Participation and Misinformation during the 2020 Taiwanese Presidential Election

From fact-checking chatbots to community-maintained misinformation databases, Taiwan has emerged as a critical case study for citizen participation in politics online. Due to Taiwan's geopolitical history with China, the 2020 Taiwanese Presidential Election brought fierce levels of online engagement led by citizens from both sides of the strait. In this article, we study misinformation and digital participation on three platforms: Line, Twitter, and Taiwan's Professional Technology Temple (PTT, Taiwan's equivalent of Reddit). Each of these platforms presents a different facet of the election. Results reveal that the greatest level of disagreement occurs in discussion about incumbent President Tsai. Chinese users demonstrate emergent coordination and selective discussion around topics like China, Hong Kong, and President Tsai, whereas topics like Covid-19 are avoided. We discover an imbalance in political presence on Twitter in Tsai's favor, which suggests partisan practices in disinformation regulation. The cases of Taiwan and China point toward a growing trend in which regular citizens, enabled by new media, can both exacerbate and hinder the flow of misinformation. The study highlights an overlooked aspect of misinformation studies, beyond the veracity of information itself: the clash of ideologies, practices, and cultural histories that matter to democratic ideals.


Introduction
Taiwan is one of the freest regions in Asia from a sociopolitical standpoint, and yet it receives some of the highest concentrations of online disinformation, due to its geopolitical history with China (Monaco, 2017). With waning trust in the traditional media over recent years (Hu, 2017), Taiwan has turned to grassroots cyber-interventions, spearheaded by its community of civic 'hacktivists' (Fan et al., 2019; Rowen, 2015). The 2020 Taiwanese Presidential Election has thus presented a fierce battleground that re-examines democratic values.
Citizen participation in support of and detrimental to democratic ideals is not a new phenomenon. In Dark Participation, Quandt contrasted the utopian vision and dark realities of citizen news making (Quandt, 2018). Benign participation involves citizen-journalists who selflessly take part in democratic deliberation. Dark participation, in contrast, describes negative contributions to news production, including "trolling," piggybacking off untruths, and the dissemination of disinformation. Recent studies have linked dark participation with growing populism in the West, a political trend also observed in Taiwan. Han Kuo-Yu, the presidential candidate running against the incumbent President Tsai, is frequently compared to US President Trump in both his rise as a businessman-turned-politician and his use of politicized rhetoric (Cole, 2019).
Two years after dark participation was first characterized, the political media ecosystem has evolved. The case of Taiwan encapsulates two gray areas in this light-dark dichotomy. First, both the diffusion of and the defense against misinformation are citizen-driven. Disinformation has been well documented to arise from nationalistic citizens in China (Yang, Chen, Shih, Bardzell, & Bardzell, 2017), rather than from government-sponsored campaigns. Rather than the light and dark participation characterized by Quandt, the elections reflect the clash of divergent political ideologies rooted in seven decades of history.
Second, the use of digital tools to fight disinformation is a double-edged sword. The Anti-infiltration Act, passed two weeks prior to the elections, caused significant controversy, with critics worrying it was too partisan. The former head of the National Communications Commission, who allegedly resigned over disagreements about this act, stated that although "disinformation is the enemy of an open and democratic society…they [might] lose that open society by fighting against it" (Aspinwall, 2020a). The use of technology to promote certain political discourses against foreign interference may appear positive while simultaneously diminishing the vibrancy of domestic discourse.

Research Questions and Contributions
This article presents the case study of the 2020 Taiwanese Presidential Election, told through the lens of three widely adopted platforms: Line, Twitter, and Taiwan's Professional Technology Temple (PTT, Taiwan's equivalent of Reddit). Each platform reveals a unique dimension to the election's discourse. We draw primarily from two theoretical framings. First, we postulate the influx of disinformation as a threat to three democratic normative goods: self-determination, accountable representation, and public deliberation (Tenove, 2020). Second, we consider the modal actors of media regulation, specifically from political parties to grassroots volunteers.
As we will see in Section 2, disinformation has always played a part in Taiwanese elections. A walk through history shows interference techniques morphing from direct displays of military power to subtle digital manipulation efforts. However, pinpointing the sources of misinformation is a difficult task, which depends on each platform's accessibility. Instead, we focus on understanding the discourse topics, the users involved, and how they engage in discussion over these democratic normative goods. Our research questions and hypotheses are as follows:
RQ1: How do discourse, user behavior, and political intent vary across and within platforms?
• H1a: Twitter will contain higher levels of foreign users, consistent with known percentages of platform usage.
RQ2: On Twitter, do we observe instances of geopolitical divisions and transnational solidarity?
• H2a: High levels of transnational support exist between Taiwan and Hong Kong.
• H2b: There will be higher levels of bot-like behavior from mainland Chinese users.
RQ3: Which democratic normative goods appear vulnerable to misinformation?
• H3: Posts about Tsai, and hence engagement with issues of accountable representation, will produce high levels of disagreement from Chinese users and rural areas.
RQ4: What is the role of the traditional media in spreading disinformation?
• H4: A sizable proportion of news articles will contain misinformation, consistent with the distrust in the traditional media.
By answering these questions, we aim to contribute to critical literature on the next phase of the light and dark debate, specifically how citizens respond collectively to address dark participation. Like those of other Asian countries, Taiwan's social media ecosystem is dominated by chatroom-based communication, a distinction from the West. As misinformation spreads behind these closed doors, the power of government policies is limited. Disinformation regulation in these cases becomes less a matter of policy and more a community norm, in line with recently proposed theoretical frameworks (Starbird, Arif, & Wilson, 2019). Taiwan's case study enables us to understand communal commitments to maintain the quality of public deliberation.

The 2020 Taiwanese Elections: The Political Backdrop
The conflict between Taiwan and China dates to 1949, when, facing defeat by the Communist Party, General Chiang Kai-Shek retreated to the island; Taiwan has since been a "de facto independent nation" (Monaco, 2017; Hsiao, 2016).
A year prior to the election, Tsai was projected to lose to Han for a few reasons. First, wage stagnation, public pension reform, and same-sex marriage led to general discontent with her presidency. As a result, she suffered an astounding defeat in the 2018 local elections. However, the Hong Kong Anti-extradition Bill protests triggered a change in sentiment across the island. After Chairman Xi Jinping gave a hardline speech on the one-China policy, polls showed a dramatic improvement for Tsai's campaign (Hsiao, 2019).
Han differed from previous candidates in his unconventional background and rapid rise to political power. Starting a political career from unknown origins, he became the mayor of Kaohsiung, the third-largest city in Taiwan. No one expected him to win: Kaohsiung had been a DPP stronghold for more than 20 years, and KMT chairman Wu Den-yih sent Han to contest the city with no expectation of victory (Jansen, 2019). Yet he won in a landslide, owing to a surge of popular support the media called the 'Han wave.' His slogan was simple: Get Rich! His iconic bald head earned him the nickname 'the Bald Guy.' Within six months of his election, he declared his run for the presidency (Reichenbach, 2020).
Importantly, the Han Wave bears many similarities to US President Trump's Make America Great Again movement (Cole, 2019). As the Han Wave swept across the island, he accrued a large group of dedicated 'Han fans,' estimated at 1.2 million. He appealed to rural voters, employed brash, economy-focused rhetoric, and, most critically, he entertained. Similar to the way the media latched onto President Trump's tweets, Han appeared frequently on social media, in the news, and in discourse led by supporters from China. It was against this backdrop, with a dark horse candidate having flipped the DPP's strongest city, that the 2020 Taiwanese Election was held.

Taiwan's History of Foreign Interference
Foreign interference from China is intimately tied to Taiwan's elections, first taking the form of military exercises. Before Taiwan's first presidential election in 1996, the People's Liberation Army fired missiles into the water around the island in a show of intimidation. As an early form of information warfare, radio stations and large speakers projected sound across the strait to influence the elections.
In recent years, interference from China has taken a different form. Chinese trolling has often been described as decentralized, arising from netizens (Internet citizens). Diba, a sizable group of Chinese nationalists, is known to circumvent China's Great Firewall to troll Taiwanese political leadership (Yang et al., 2017). Interestingly, Diba violates the People's Republic of China's legal norms in order to spread pro-PRC messages on the Internet, in a manner ironically similar to movements on self-determination.
To contextualize what misinformation looks like in Taiwan, we present two recent cases. The first occurred after Typhoon Jebi hit Japan and knocked out Osaka's Kansai International Airport: a report on PTT claimed that China had evacuated Chinese nationals from the airport, and that if Taiwanese citizens identified themselves as Chinese they would also be evacuated. Taiwan's foreign ministry representative Su Chii-cheng, facing waves of criticism that he had failed to protect Taiwanese citizens during the disaster, committed suicide. After his death, it was revealed that the Chinese Communist Party (CCP) had also been unable to evacuate Chinese citizens, and that the original message, shared repeatedly online and amplified by legacy media outlets, was fabricated. The message was eventually traced back to Weibo (China's main microblogging site). The second case occurred during the 2018 mid-term elections, when a widespread 'ghost island' meme spread across social media, stoking fear of opportunity loss, economic stagnation, and government corruption. The term first arose on PTT as self-deprecating criticism of Taiwan, but was successfully used by Chinese users to agitate feelings of emptiness and pessimism toward Taiwan's economic future.
While false news may originate in mainland China, its amplification is often a direct result of Taiwan's traditional media. In these cases, although the CCP helped stoke fears by supporting these stories, the primary spread arose from sensationalist journalism practices in Taiwan itself.
Prior theories on the organization of disinformation campaigns hold that the modal actors in authoritarian regimes are central governments, whereas in democracies this role is taken up by political parties (Bradshaw & Howard, 2018). Monaco (2017) delineates propaganda in Taiwan in two primary forms: 1) internal campaigns, domestic political propaganda on social issues, usually between the two major parties, where the modal actors are the political parties; and 2) cross-strait campaigns, propaganda originating from the mainland to promote unification discourse, where the modal actor is the central government (CCP).
As modal actors of regulation are also political parties, attempts to stymie disinformation may become internal campaigns of propaganda. In other words, the modal actor for defending against foreign disinformation may become the perpetrator domestically. Additionally, the case of Diba contradicts this framework, as it is not centralized and organized, but decentralized and spontaneous. To understand this gray area in greater depth, we review Taiwan's regulation of media platforms against misinformation.

The 2020 Elections: Working Together with Social Media Companies
On December 31, 2019, the highly controversial Anti-Infiltration Act was passed in the Legislative Yuan. The law regulates the influence of entities deemed foreign hostile forces in Taiwan (Aspinwall, 2020b). Containing 12 articles, it bars people from accepting money from or acting on the instruction of such forces. Penalties are severe: violations carry fines of up to $10 million NTD ($333,167 USD) and five years in prison.
The passage of the law drew criticism, with the KMT accusing the incumbent DPP of forcing it through the legislature. As mentioned earlier, the former director of the National Communications Commission believed it would negatively impact domestic free speech. However, although the nature and substance of misinformation were debated, both parties agreed that foreign interference on social media should be regulated.
Information travels fast in Taiwan, one of the most technologically integrated countries, with an 86% Internet penetration rate and a 78.8% smartphone penetration rate (Thomala, 2020). Moreover, around 60% of Taiwanese use social media to source news, particularly for civic and political engagement (Chen, Chan, & Lee, 2016). Table 1 shows the overall usage rates for platforms in Taiwan.
Leading up to the elections, Facebook and Line came under scrutiny for the different ways they function. Facebook is a more open, profile-based social network; Line is a chatroom service, and therefore more 'private.' Dr. Puma Shen, a key member of the misinformation task force, categorized misinformation as acting in three modalities (Hioe, 2020a):
1. Online and digital: on public social media platforms like Facebook.
2. Offline and digital: apps such as Line disseminating messages directly from user to user.
3. Offline and physical: local gangs, temples, and village leaders have long taken illicit payments; for example, many payments were routed through off-shore, Chinese-Malaysian companies.
The importance of Facebook became apparent in the 2014 Taipei mayoral election. Ko Wen-je, a physician with mild Asperger's, became the first alternative candidate to be elected mayor. A believer in quantitative analytics, he drove his campaign with an in-depth analysis of 11-14 million Facebook profiles, in a country of 23 million. In response, Facebook set up a 'war room' to help regulate content (Huang, 2019). Due to distrust in the traditional media, Taiwan has turned to third-party cyber-solutions to help decide which sources of news are credible. Since chatrooms in Line are not open to public moderation, misinformation flourishes there. The Cofacts chatbot was created to counter chatroom-based misinformation (Han, 2018). Developed by g0v (gov-zero), a grassroots civic hacker group in Taiwan, it lets users who receive questionable messages forward them to the chatbot. The message is then added to a database and fact-checked by editors before a verdict is returned to the user. Future sightings of the same article then receive the reply automatically.
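The Cofacts loop described above, forward, look up, queue for editors, auto-reply on repeats, can be sketched as a minimal lookup table keyed by message text. This is an illustrative sketch, not the real Cofacts API; all function and field names here are invented.

```python
# Minimal sketch of a Cofacts-style fact-checking loop (illustrative names,
# not the actual Cofacts API): forwarded messages are looked up in a database;
# known messages get an editor's verdict back immediately, unknown ones are
# queued for human fact-checkers.
fact_check_db = {}   # message text -> editor verdict
pending_queue = []   # messages awaiting an editor

def handle_forwarded_message(text):
    """Return a stored verdict, or queue the message for fact-checking."""
    if text in fact_check_db:
        return fact_check_db[text]          # auto-reply on repeat sightings
    pending_queue.append(text)
    return "pending: forwarded to editors"

def editor_reply(text, verdict):
    """An editor labels a queued message; future forwards auto-reply."""
    fact_check_db[text] = verdict
    if text in pending_queue:
        pending_queue.remove(text)

# First sighting is queued; after an editor labels it, repeats auto-reply.
first = handle_forwarded_message("example rumor text")
editor_reply("example rumor text", "misinformation")
second = handle_forwarded_message("example rumor text")
```

The key design point, which the sketch mirrors, is that each message is fact-checked once by humans, while every later sighting is answered automatically from the database.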
Taiwan is not unique in its attempts to fact-check, since Brexit and the 2016 US Presidential Election revealed the impact of misinformation. Ahead of the polls in July 2018, 90 media outlets in Mexico banded together in collaborative, journalistic fact-checking of election misinformation (Terceros, 2018). Singapore has a state-run fact-checker called Factually, and Indonesia holds weekly misinformation briefings. However, the entirely citizen-driven approach is unique to Taiwan, though it exists alongside governmental solutions and official resources. This crowd-sourced approach addresses centralized shortcomings and is consistent with advantages shown by Pennycook and Rand, particularly with regard to source credibility (Epstein, Pennycook, & Rand, 2020), quality (Pennycook & Rand, 2019), and publisher credibility (Dias, Pennycook, & Rand, 2020).
While Line and Facebook are conduits, PTT has emerged as an important source of Chinese misinformation. PTT accounts have reportedly been sold on Chinese auction sites, with the most influential accounts fetching up to $6,500 USD. As with the case of Typhoon Jebi, many journalists use PTT to source information, which causes false claims to be repeated via the traditional media.

How Disinformation Harms Democracy: Normative Goods Threatened by Disinformation
Here, we distinguish between misinformation and disinformation. The primary distinction rests on intent. Misinformation denotes false information that is shared, regardless of an intent to mislead (Karlova & Fisher, 2013). It is generally accepted as a fallible aspect of human nature, given our propensity to misremember, mishear, and share sensational information.
Disinformation denotes false information disseminated as a hostile act or for political subversion; it is the intentional diffusion of misinformation. Assertions about how the spread of disinformation harms democratic societies are often vague, so it is valuable to discuss the specific loci of damage. Tenove (2020) identifies three democratic normative goods threatened by disinformation, each requiring different policy responses.
Self-determination refers to the ability of a democratic population to enact collective rules to govern itself. Threats to it are primarily addressed through security policies at the international and domestic levels. This good is perhaps most salient to Taiwan's governance. However, many contemporary democratic theorists maintain that foreign influence can be beneficial to self-determination: in a globalized world, the actions of one state influence another. Policies of disinformation regulation thus draw the limit to which foreign actors can influence domestic politics.
Self-determination and Taiwan's sovereignty lie at the center of every election, this time often refracted through Hong Kong. Solidarity between Taiwan and Hong Kong is not new; modern support can be traced to the Sunflower Movement in 2014. The slogan "Today's Hong Kong, Tomorrow's Taiwan" emerged then, casting Hong Kong as a constant measure of what happens if Taiwan loses its democratic freedom. The projection goes both ways: Hong Kong often frames Taiwan as a political utopia, posed as a "lost historical possibility" (Hioe, 2020b). As we will see, the Hong Kong protests played a decisive role in shaping discourse during the elections.
Accountable representation refers directly to the procedures of elections. In these cases, disinformation challenges citizen trust in elected representatives (European Commission, 2018). Classic examples include false claims as to where and when voting occurs, as demonstrated in the 2016 US Presidential election (DiResta et al., 2019).
Another example includes false stories targeting specific candidates. In the 2020 Taiwanese Presidential election, two major stories emerged to discredit Tsai. The first claimed that Tsai had faked her college degree and was secretly homosexual, seeking to corrupt Taiwanese children. The second claimed that Tsai wanted to sell the country out to Japan and the US.
Public deliberation addresses the quality of public discourse. Rather than addressing actors themselves, as national security and election policies do, public discourse is protected via media regulation. According to theories of deliberative democracy, well-informed public decision making requires communicative exchanges among citizens (Habermas, 1996). Here, disinformation threatens to undermine deliberative systems by increasing the quantity of false claims, diminishing engagement and opportunities to participate in public discussion.
The aims of our analysis now take shape: we wish to understand how these three normative goods emerge during the 2020 Taiwanese Elections. The same piece of misinformation can act on all three goods simultaneously. Next, we consider the specifics of the dataset and the methods of our analysis.

Data
We use three main sources of data: Twitter, PTT, and Cofacts. First, we scraped Twitter using a keyword list pertaining to the elections, including the names of the three primary candidates (Tsai Ing-wen, Han Kuo-Yu, and James Soong) and their parties. We also tracked broad election terms such as Taiwan2020, ComeHomeAndVote, and TaiwanVote.
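The keyword-based collection described above can be sketched as a simple case-insensitive filter. The keyword set below only illustrates the terms named in the text; the actual tracking list used for collection may differ.

```python
# Sketch of a keyword filter for collecting election-related tweets.
# The keyword set is illustrative, drawn from terms mentioned in the text.
KEYWORDS = {"taiwan2020", "comehomeandvote", "taiwanvote",
            "tsai ing-wen", "han kuo-yu"}

def matches_election_keywords(tweet_text):
    """True if any tracked keyword appears in the tweet (case-insensitive)."""
    text = tweet_text.lower()
    return any(kw in text for kw in KEYWORDS)

sample = ["#Taiwan2020 is trending today",
          "Lunch was great",
          "Remember to #ComeHomeAndVote this weekend"]
kept = [t for t in sample if matches_election_keywords(t)]
```

In practice such filtering is done server-side by the streaming API's `track` parameter, but the matching logic is essentially substring-based as sketched here.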
As an overview of the dataset, Table 2 shows the general distribution of tweet languages. Since we filtered using Taiwan as a necessary keyword, this data set is topically bound to discourse about the island. We observe a high level of Japanese and English tweets, which reflects the high Twitter usage in the West and Japan. In Japan, Twitter has 45 million monthly users (35% penetration), the highest of all social media platforms (Yoshida, 2018); Facebook, in comparison, has only 22% penetration ("Kokunai mau 2,800 man-ri toppa," 2017).
Second, we scraped PTT, often described as the Reddit of Taiwan. It was founded in 1995 by students at National Taiwan University. With more than 1.5 million registered users, up to 150,000 users can be online during peak hours, and around 500,000 comments are posted every day. The structure of PTT is similar to that of Twitter, as users can like and reply to threads. However, reactions can be positive (推, 'push') or negative (噓, 'boo').
In this article, we scraped 11,116 unique bulletin posts between November 1, 2019, and January 21, 2020, filtered on posts relating to the elections.
Third, and most importantly for misinformation, we analyzed discourse on Line. We use the Cofacts database, a public, crowd-sourced data set of misinformation, drawing on the four relevant data tables listed below:
1. Articles: specific links, texts, or articles that users forward.
2. Article replies: a table that aggregates replies to an article, with a score of 1 or -1, indicating True or False.
3. Replies: an editor's reply to an article, with four possible outcomes: a) misinformation, the article contains potential misinformation or is unverified; b) opinion, the article contains information that is an opinion; c) non-rumor, the article is factual; d) not-an-article, the article does not pertain to Cofacts.
4. Reply requests: includes the article ID, but also the reason why it was included.
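One plausible way to combine the article and reply tables into a single label per article is a majority vote over editor verdicts. This is a sketch under assumed, simplified field names (`article_id`, `verdict`), not the actual Cofacts schema.

```python
# Sketch of joining Cofacts-style tables: each article is labeled with the
# most common verdict among its editor replies. Field names and sample data
# are illustrative, not the real Cofacts schema.
from collections import Counter

articles = [{"id": "a1", "text": "forwarded rumor"},
            {"id": "a2", "text": "forwarded news item"}]
replies = [  # editor verdicts: misinformation / opinion / non-rumor / not-an-article
    {"article_id": "a1", "verdict": "misinformation"},
    {"article_id": "a1", "verdict": "misinformation"},
    {"article_id": "a1", "verdict": "opinion"},
    {"article_id": "a2", "verdict": "non-rumor"},
]

def label_articles(articles, replies):
    """Map article id -> majority editor verdict (None if no replies)."""
    verdicts = {}
    for r in replies:
        verdicts.setdefault(r["article_id"], Counter())[r["verdict"]] += 1
    return {a["id"]: (verdicts[a["id"]].most_common(1)[0][0]
                      if a["id"] in verdicts else None)
            for a in articles}

labels = label_articles(articles, replies)
```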
Each of these data sources reveals a different aspect of misinformation during the 2020 Taiwanese Elections. Twitter shows coverage of the elections by actors domestic and abroad. PTT shows the domestic discourse and, because it exposes commenters' IP addresses, the geo-local distribution of that discourse. Cofacts shows the types of posts that arouse suspicion, including fact-checked labels for whether they contain misinformation, opinion, or fact. The primary forms of our analysis are time-series and network analysis, with cross-sectional analysis of volume.
Figure 1 gives an overview of participation volume across all three channels: Twitter in blue, PTT in purple, and Line in green. Immediately, we observe a rise in volume approaching the day of the election, January 11, 2020. However, on the day itself, levels dip on PTT and Line, while Line's levels are also more consistent throughout and increase after January 20th. This likely reflects two factors. First, it is against the law in Taiwan to post about the elections on the day ballots are counted. Since Line is closely tied to one's individual account, levels there remained constant. We see a similar dip on PTT, though posts were still being written, which may reflect the semi-anonymous nature of the platform. This suggests that electoral regulation was enacted unequally across platforms, and partially answers RQ1. Second, given Twitter's low usage in Taiwan and higher penetration in Japan and the West, the spike we observe is likely due to foreign coverage. Next, we consider each of these platforms in detail.

Twitter
We set out to establish what fraction of the discussion on Twitter is organic versus posted by automated accounts (a.k.a. bots). We used Botometer, a state-of-the-art bot detection tool designed for Twitter, to quantify bots' prevalence. In line with recent literature (Ferrara, 2020; Yang et al., 2019), we used the top and bottom 10 percentiles of users to set apart likely human users from automated accounts, with bot scores of 0.06 and below for humans and 0.67 and above for bots. This yielded 14,948 human accounts (responsible for 30,365 tweets, 3.4% of the total tweets) and 14,929 bots (with 34,020 tweets, 3.9% of the total tweets). The total number of accounts scored was 141,929 (with 389,851 tweets). Table 3 shows the differences in tweet types between humans and bots. Somewhat expectedly, humans post original tweets almost twice as much and post more quoted tweets. Bots, on the other hand, retweet without comment at almost 10% higher propensity, consistent with general characteristics of bot behavior.
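The percentile rule described above can be sketched as follows: score every account, take the bottom 10% as likely humans and the top 10% as likely bots, and leave the middle unlabeled. The scores below are synthetic stand-ins for Botometer output, and the nearest-rank percentile helper is one simple choice among several conventions.

```python
# Sketch of percentile-based bot labeling: bottom 10% of bot scores are
# treated as likely humans, top 10% as likely bots, middle 80% unlabeled.
# Scores are synthetic stand-ins for Botometer output.
import random

random.seed(0)
scores = [random.random() for _ in range(1000)]  # stand-in bot scores in [0, 1)

def percentile(values, p):
    """p-th percentile via a simple nearest-rank-style sorted index."""
    ordered = sorted(values)
    idx = min(int(p / 100 * len(ordered)), len(ordered) - 1)
    return ordered[idx]

low_cut = percentile(scores, 10)    # at or below: likely human
high_cut = percentile(scores, 90)   # at or above: likely bot

humans = [s for s in scores if s <= low_cut]
bots = [s for s in scores if s >= high_cut]
```

With real Botometer scores, the cutoffs land near the 0.06 and 0.67 values reported in the text; here they simply split off roughly 10% of accounts at each tail.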
A more pronounced difference can be observed in language type. Table 4 shows the distribution of simplified and traditional Chinese; simplified Chinese is used in China and traditional Chinese in Taiwan. Only 7.4% of Chinese-writing human users write in simplified Chinese, whereas 92.6% use traditional Chinese. In contrast, among Chinese-writing bots, 31.5% use simplified Chinese and 68.5% use traditional Chinese. This indicates a much stronger chance that an account using simplified Chinese is a bot. We corroborate this by considering the location of users: the bottom row shows that most of the tweets arise from non-local sources, which together affirms H1a. We conclude that much of the chatter on Twitter about Taiwan arises from outside of Taiwan.
The prevalence of English and Japanese over Chinese in Table 2 is of great interest. We find that around 50,000 of the 81,000 Japanese tweets are in response to President Tsai. While Tsai's dominant presence is largely expected, Han by comparison has very few mentions, with no tweets from his own account. Tsai also tweets frequently and intentionally in Japanese, to the point that some simplified-Chinese users accused her of being "bought by Japan." Since much of the discourse occurs among international users, the sites of democratic harm in this regard are primarily self-determination and, to a lesser extent, accountable representation. This becomes clearer in the network graph shown in Figure 2, which portrays the semantic space of our Twitter data through top hashtags. Nodes are hashtags, and edges are their co-occurrences. The network was produced by tabulating all co-occurrences, processing them in NetworkX, and plotting with Gephi (Bastian, Heymann, & Jacomy, 2009; Hagberg, Swart, & Chult, 2008).
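The co-occurrence tabulation behind a hashtag network like Figure 2 can be sketched in a few lines: every unordered pair of hashtags appearing in the same tweet becomes a weighted edge. The sample tweets below are invented; in the article's pipeline, such edges would then be loaded into NetworkX and laid out in Gephi.

```python
# Sketch of hashtag co-occurrence tabulation: each unordered pair of hashtags
# in the same tweet contributes 1 to that edge's weight. Sample data invented.
from collections import Counter
from itertools import combinations

tweets = [
    ["#Taiwan2020", "#TsaiIngwen"],
    ["#Taiwan2020", "#TsaiIngwen", "#HongKong"],
    ["#HongKong", "#nnevvy"],
]

def cooccurrence_edges(tagged_tweets):
    """Count how often each unordered hashtag pair co-occurs in a tweet."""
    edges = Counter()
    for tags in tagged_tweets:
        for a, b in combinations(sorted(set(tags)), 2):
            edges[(a, b)] += 1
    return edges

edges = cooccurrence_edges(tweets)
```

The resulting `(hashtag_a, hashtag_b) -> weight` mapping is exactly the weighted edge list a graph library expects.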
The nodes in purple (center left) show the general discourse in traditional Chinese. Tsai takes up a large central role in setting the agenda. We also note the large cluster of Japanese responses (in orange) to Tsai. In contrast, Han and the Kuomintang are mentioned much less (bottom left, in brown). To the left (in dark green) is a cluster of hashtags supportive of the DPP and Tsai's camp that does not take a central role in the semantic network; one possibility is that these are pro-DPP campaign users who did not achieve traction. The tangible imbalance between the DPP and KMT in self-determination discourse answers RQ2.
The network also provides insight regarding transnational solidarity. We observe a distinct division of language in the network structure. The election's discourse in English (green) is much better connected to other international themes, such as Hong Kong (purple, right), human rights issues in China (cyan), and, by then, mentions of the novel coronavirus. The lack of trending hashtags and keywords in simplified Chinese indicates that while Chinese trolls may directly attack Tsai and her online campaign, their collective behavior on Twitter is decentralized. This is a shift away from Bradshaw and colleagues' characterization of centralized campaigns, and consistent with Yang et al.'s (2017) results. Of note, the red cluster at the top denotes coverage from Thailand. The relationship between Taiwan, Hong Kong, and Thailand came under the spotlight during the Covid-19 pandemic. In April 2020, after a Thai celebrity drew outrage from Beijing viewers (McDevitt, 2020) and received large volumes of malicious trolling, users from Taiwan and Hong Kong began defending her online, alongside Thai users. The hashtag #nnevvy began trending, and an online community eventually known as the Milk Tea Alliance, with users from Taiwan, Thailand, and Hong Kong, was born. Seen against the Thai coverage in the semantic network for Taiwan's election, the emergence of the Milk Tea Alliance is not sudden but part of a larger trend of growing solidarity between the three user bases.
While literature on transnational solidarity between Taiwan and Thailand is sparse, activism between Hong Kong and Thailand can be traced back to 2016. When students in Thailand invited Hong Kong student activist Joshua Wong to share his experiences from the 2014 Umbrella Movement, as a speaker commemorating the 1976 massacre of Thai student protesters, he was detained at the Bangkok airport (Phoborisut, 2019). Protests calling for his release emerged across Hong Kong and Bangkok, which laid foundations for solidarity today.
We also note the important role of Apple Daily, a popular digital-native newspaper in Hong Kong and Taiwan. Its co-occurrence with so many trending hashtags suggests that, compared to other newspapers, it disseminates articles by carefully tracking top trending keywords. In sum, results for Twitter suggest higher levels of automation, or bot-like behavior, among simplified-Chinese accounts. However, the lack of trending terms suggests an absence of coordinated attacks, in contrast to the discourse from Taiwan, which was largely set by Tsai and the DPP.

PTT
While Twitter provides insight into discourses of self-determination on the international front, it lacks detail on public deliberation domestically. To recap, PTT is widely regarded as the 'Reddit' of Taiwan. New events are often posted here first, to the extent that journalists have used it as a first-line source of information. Although competing platforms such as Dcard have risen in popularity in recent years, PTT remains a good representative of local discourse, and its straightforward interface enables analysis of public discussion.
To get a sense of the discourse on PTT, the top terms, after removing candidate names, are presented in Table 5. We observe words that speak to a democratic process: freedom, vote, democracy, and government. The Chinese Communist Party (CCP) and Hong Kong are explicitly mentioned. Since this data set is conditioned on being election-related, these keywords indicate that the protests in Hong Kong and shifting attitudes toward China played a large role in shaping discourse on self-determination. Common keywords in the comments section included the elderly and sugarcane farmers, where 'sugarcane farmers' refers to the rural common folk. We also see PTT-specific terms, such as bucket (水桶): 'cooling down in a bucket of cold water' emerged as a euphemism for being suspended. 'Cockroach' and 'trash' are derogatory terms endemic to PTT's common vocabulary. Ko Wen-je, the mayor of Taipei, is the fourth most mentioned term. Two major Taiwanese cities are mentioned, Taipei and Kaohsiung; as expected, Taipei appears in conjunction with Ko, and Kaohsiung with Han.
Keywords reveal only a shallow interplay within online communication. Next, we consider the comments section; specifically, we quantify the level of disagreement within each post. With P the number of commendations and N the number of dislikes, we define the disagreement score as D = N / (P + N). The choice of variables reflects negative (N) and positive (P) reactions. This measure scales with the share of disagreements (with respect to the initial post) while also capturing the diversity of commenting participants: a score of 0.5 indicates an equal number of users agreeing and disagreeing. Upon tagging the disagreement scores per article, we consider whether disagreement relates to specific discourse topics. We first subset all posts with disagreement scores greater than 0.5. Figure 3 shows the proportion of such articles by topic.
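The disagreement score can be computed directly from PTT's reaction counts. The formula below, D = N / (P + N), is a reconstruction consistent with the text's description (D = 0.5 at an even split); the sample posts are invented.

```python
# Sketch of the per-post disagreement score: with P commendations ("push")
# and N dislikes ("boo"), D = N / (P + N), so D = 0.5 means an even split.
# Sample posts are invented for illustration.
def disagreement(p, n):
    """Fraction of negative reactions among all reactions to a post."""
    total = p + n
    return n / total if total else 0.0

posts = [
    {"topic": "Tsai", "push": 40, "boo": 60},
    {"topic": "Han",  "push": 90, "boo": 10},
    {"topic": "DPP",  "push": 50, "boo": 50},
]
for post in posts:
    post["D"] = disagreement(post["push"], post["boo"])

# As in the analysis, subset posts whose disagreement score exceeds 0.5.
contested = [p["topic"] for p in posts if p["D"] > 0.5]
```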
We observe a disproportionate level of disagreement within the topics of Tsai and the DPP. To understand where these disparities arise, we cross-section the geo-local dimension of online engagement with certain topics. We leverage the IP addresses shown in the comment sections to analyze engagement across the urban-rural divide and between Taiwanese, Chinese, and other foreign commenting participants. Figure 4(a) shows urban and rural user participation on PTT. There is little variation across the two cohorts, with rural users engaging slightly more with Han- and China-related discourse.
The discourse across international borders tells a much more compelling story. Figure 4(b) shows the topical distribution for Taiwanese, Chinese, and international IP addresses. Users with Chinese IPs disproportionately target posts about China, Hong Kong, Tsai, and the KMT, whereas they engage with Han at a much lower level. Relative to the domestic and international cohorts, there is little to no posting about the Covid-19 pandemic.
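The cohort assignment can be sketched as follows. This is a hypothetical illustration: the prefix-to-country table is a stub standing in for a real GeoIP database, and all names and addresses are invented for the example.

```python
from collections import Counter

# Stub lookup: first two IP octets -> ISO country code. A real analysis
# would resolve full addresses against a GeoIP database instead.
GEO_STUB = {"1.160": "TW", "36.224": "TW", "114.86": "CN", "73.92": "US"}

def cohort(ip: str) -> str:
    """Bucket a commenter as domestic (TW), chinese (CN), or international."""
    country = GEO_STUB.get(".".join(ip.split(".")[:2]), "??")
    if country == "TW":
        return "domestic"
    if country == "CN":
        return "chinese"
    return "international"

# Tally cohort participation over a toy set of comment IPs.
comment_ips = ["1.160.4.2", "114.86.9.1", "73.92.0.5", "36.224.1.1"]
counts = Counter(cohort(ip) for ip in comment_ips)
```

Cross-tabulating these cohort labels against the discourse topics of the posts they comment on yields distributions of the kind shown in Figure 4(b).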
Table 6 further shows that posts involving Chinese users lead to higher levels of disagreement. This is most pronounced in stories about Han, which produce high polarization between Chinese and Taiwanese users. Together, these results answer RQ3 and confirm H3 with regard to discourse about Tsai.
To summarize the results from PTT, we observe that accountable representation is a likely locus of disinformation. This also shows that individuals, rather than political parties, seem to be the target of choice. The selective engagement of Chinese citizens with topics of self-determination, such as Hong Kong, China, and Tsai, while avoiding topics such as Covid-19, demonstrates the phenomenon of emergent coordination.
However, analyses of discourse give us limited insight into disinformation directly. Next, we consider Line and the misinformation aggregated in the Cofacts database.

Line and Cofacts
The Cofacts database includes user-forwarded Line posts and/or links that may contain misinformation. We similarly tagged the database with discourse topics, adding a category for medicine.
The amount of misinformation is high compared to the number of factual claims. Incumbent President Tsai attracts the highest level of misinformation, both in proportion and in raw volume. This is consistent with our observations from PTT and Twitter, where Tsai attracts the highest levels of controversy, and thus completes our answer to RQ3 and confirmation of H3. Interestingly, we observe a low correlation between the volume of reported cases for the DPP and for Tsai. We offer two potential explanations. First, in the wake of Tsai's perceived failures during the midterm elections, fissures appeared between the DPP and Tsai; Tsai thus became the primary target of hoaxers, rather than the DPP itself, and the decoupling became more evident closer to the election. The second explanation follows a hypothesis presented earlier: individuals are the more likely target of misrepresentation, at least in the case of foreign interference.
Finally, we consider the sources of misinformation and attempt to answer RQ4 regarding the traditional media's role in spreading it. Table 7 shows the top linked sources within the database and the percentage of misinformation per source.
We have two main takeaways. First, the primary inter-platform links are with social media and digital platforms such as Facebook and YouTube; together these make up just over one-third of the reported links. Second, there is a high proportion of misinformation on dominant digital news platforms. For instance, almost 50% of the hyperlinks to Google News contain misinformation.
Although it is technically difficult to ascertain the ultimate hosting domains in these cases, other digital news sources also score poorly: United Daily (0.28), KK News (0.33), Apple Daily (0.28), and ET Today (0.25). Only Liberty Times scores low on misinformation (0.05). While there may be selection bias, as these are articles suspected of containing misinformation in the first place, the fact that verified news sources contain misinformation at all is particularly concerning. Our findings confirm H4 and past observations (Monaco, 2017) that the traditional media is often responsible for amplifying misinformation.
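Per-source scores of this kind can be sketched by grouping reported links by domain and taking the fraction labeled as misinformation; the records and field names below are illustrative, not Cofacts' actual schema, and the toy rates do not reproduce Table 7.

```python
from collections import defaultdict

# Illustrative reported links; "misinfo" marks the fact-checkers' verdict.
reports = [
    {"domain": "facebook.com", "misinfo": True},
    {"domain": "facebook.com", "misinfo": False},
    {"domain": "youtube.com", "misinfo": True},
    {"domain": "ltn.com.tw", "misinfo": False},
]

tally = defaultdict(lambda: [0, 0])  # domain -> [misinfo count, total links]
for r in reports:
    tally[r["domain"]][0] += r["misinfo"]  # True counts as 1
    tally[r["domain"]][1] += 1

# Misinformation rate per source, analogous to the scores reported above.
rates = {domain: m / t for domain, (m, t) in tally.items()}
```

Note that the denominator is the number of *reported* links per source, which is precisely where the selection bias discussed above enters.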

Conclusion
By 8 PM on January 11th, 2020, the results of the Taiwanese election were clear: Tsai had defended her presidency and won by the greatest margin in Taiwanese history. Despite the large amount of disinformation surrounding her candidacy, the outcome of the election seemed to indicate that it was ineffective. An explanation may be dissonance: as Templeman recently postulated, due to high levels of distrust in Chinese media, large segments of the population are inoculated against pro-Chinese sentiment (Templeman, 2020). However, despite the growing emphasis on domestic issues like wage growth and LGBT rights, elections never stray far from the China problem. We began this study by discussing two gray areas regarding the frame of light and dark participation. The first and longer-standing issue is the clash of political ideologies between China and Taiwan. The second, and more important, is that the use of digital tools like bots and group removal to fight misinformation may limit the domestic diversity of political voices.
The first goal of this study was to understand the different facets of the elections as communicated across these three platforms, through a thorough analysis of each. The second, and more important, goal was to understand what the discourse and citizen participation say about the tension inherent in employing digital tools to fight disinformation.
On Twitter, we found Tsai and the DPP dominant in the digital campaign. Her engagement focuses on the international front, with users from anglophone countries and Japan. We observe more bot-like behavior coming from Chinese users, and transnational solidarity between Hong Kong, Taiwan, and Thailand. The high volume of Tsai's content suggests that the counter-discourse against Chinese trolls is partisan.
On PTT, although Han is the most popular topic of discussion, it is Tsai and the DPP that elicit the most disagreement. A closer look at the geo-local origins reveals Chinese participation on issues such as Hong Kong, Tsai, and the KMT, alongside avoidance of Covid-19. This also indicates that discussion surrounding Han arises predominantly domestically. These results suggest that citizen participation from China focuses on discrediting Tsai and hence challenges accountable representation. We affirm this by considering Line, where stories about Tsai are the most frequently reported. Lastly, a concerning level of misinformation arises from the traditional news media.
The high volume of Tsai-related misinformation, particularly from Chinese sources, may have justified the strong terms of the Anti-Infiltration Act. On December 13, 2019 alone, Facebook removed 118 fan pages, 99 groups, and 51 accounts that supported Han; one of these pages counted 155,443 members. While some of these may have violated community standards or shown traces of foreign interference, the hardline approach certainly silenced legitimate support for Han. Perhaps Han would have fared better had China not explicitly backed his campaign. Political bots can be used to promote democratic discourse and to wash out foreign propaganda, but if the modal actors are political parties, the same technologies can stamp out the diversity of political opinion. This is especially dangerous in two-party systems, as in the case of Taiwan.
Digital tools alone do not determine the dark or light shade of a campaign; rather, it is whether their use violates the ideals of deliberative democracy. In the domain of media regulation, the case of Cofacts may point toward a solution: to avoid partisan censorship of political information, crowdsourced approaches promise more equity and a greater diversity of voices. However, it is important to ensure that a representative committee of volunteers carries out the fact-checking.
The case of Taiwan presents comparisons across the most salient axes in democratic theory: government-driven vs. citizen-driven solutions, and authoritarian control vs. self-determination. Peter Dahlgren, in his canonical work, lays out four pillars upon which civic culture rests: knowledge, loyalty to democratic values, practices and routines, and identity as citizens (Dahlgren, 2000). Amidst rising populism, the ability of citizens not only to participate in news making, but also to verify facts and build sociotechnical infrastructure, brings forth an optimism toward citizen-led democracy and public deliberation. For Taiwan, it is important to continuously refine its interpretation of free speech, not as a comparison with its neighbor across the strait, but as a set of procedural and accountable standards.