The Limits of Policy Analytics: Early Examples and the Emerging Boundary of Possibilities

Policy analytics has emerged as a modification of traditional policy analysis, in which the discrete stages of the policy cycle are reformulated into a continuous, real-time system of big data collection, data analytics, and ubiquitous, connected technologies. This system provides the basis for more precise problem definition, policy experimentation that reveals detailed insights into system dynamics, and ongoing assessment of the impact of micro-scale policy interventions designed to nudge behaviour towards desired policy objectives. Theoretical and applied work in policy analytics research and practice is emerging that offers a persuasive case for the future possibilities of a real-time approach to policymaking and governance. However, policy problems often operate on long time cycles, where the effects of policy interventions on behaviour and decisions can be observed only over long periods, and often only indirectly. This article surveys examples in the policy analytics literature, infers from those examples some characteristics of the policy problems and settings that lend themselves to a policy analytics approach, and suggests the boundaries of feasible policy analytics. Rather than imagining policy analytics as a universal replacement for the decades-old policy analysis approach, a sense of this boundary will allow us to consider more effectively the appropriate application of real-time policy analytics.


Introduction
Policy analytics has emerged in recent years as a modification of the traditional policy analysis approach, where the discrete stages of the policy cycle are being reformulated into a continuous, real-time system of big data collected from ubiquitous, connected technologies, assessed using advanced data analytics. Technological developments now provide policymaking with access to massive amounts of real-time data about policy problems and system conditions. When coupled with growing capacities in data analytics, policy analytics provides a basis for more precise problem definition, detailed insights into system dynamics, and ongoing assessment of the impact of micro-scale policy interventions to nudge behaviour towards desired policy objectives (Daniell, Morton, & Insua, 2016; De Marchi, Lucertini, & Tsoukiàs, 2016; Höchtl, Parycek, & Schöllhammer, 2016; Kitchin, 2014; Lazer et al., 2009; Mergel, Rethemeyer, & Isett, 2016; Tsoukias, Montibeller, Lucertini, & Belton, 2013).
Policy analytics presents a mix of technology and expertise that could result in important advances in the science of policymaking (Giest, 2017). However, despite some early successes and enthusiasm for the possibilities of policy analytics, a number of questions and barriers to its use have emerged, principally issues related to privacy risks, data biases, and the need to clarify the relationship between the technocratic accuracy of policy analytics and the challenges of decision-making in a diverse democracy (Noveck, 2018). Our focus here is on a specific concern that remains underexplored: to identify where the strengths of policy analytics live up to its billing, consider the range of plausible applications, and begin to assess the limits of policy analytics for addressing public policy problems. Our guiding research question asks what types of policy problems are amenable to 'fast' feedback control systems facilitated by big data and analytics, and which require a deeper, more patient, 'slower', more deliberative approach to problem definition, analysis, decision-making, implementation, and evaluation (Kahneman, 2011). To pursue this question, we undertake a survey of the literature in policy analytics theory and practice, deriving from it the features of policy problems and their settings that characterize the range of policy issues to which policy analytics can reasonably be applied, leading towards a sketch of the boundaries of policy analytics. Rather than imagining policy analytics as a universal replacement for the decades-old policy analysis approach, understanding this boundary will allow researchers and practitioners to more effectively consider the appropriate application of a real-time policy analytic approach. Our claim is that policy analytics complements and supports democratic deliberation and civic engagement; with agreement on operational objectives, policy analytics built on big data makes effective feedback control feasible.
We start by defining what we mean by policy analytics as distinct from policy analysis, sketch the emergence of the technological possibilities that have given rise to policy analytics, and outline some concerns that have emerged. We then present a scan of recent policy analytic examples, leading to the identification of some characteristics of policy issues that are amenable to a policy analytics approach and, by extension, some types of policy issues that are not suitable to a continuous, real-time system of big data and data analytics, concluding with some guidance as to when policy analytics might be considered an appropriate approach. This boundary around the possibilities of policy analytics should supplement the broader need to consider the appropriate place for a policy analytic approach in the context of representative and deliberative democracy, social justice and equity considerations, social diversity, and citizen privacy rights, concerns that should temper any unexamined enthusiasm for policy analytics.

The Emergence of Policy Analytics within the Policy Sciences
The modern policy analysis movement is based on an integrated, multidisciplinary approach to the study of public problems and the development of rational solutions based on careful analysis of evidence (Lerner & Lasswell, 1951). Decisions based on the best available evidence and rigorous analysis should be better positioned to address public problems than those based on anecdote, unsupported belief, or inaccurate data (Quade, 1975). From those origins, policy analysts have traditionally been tasked with precisely defining policy problems, collecting and analyzing data and evidence, supporting political decision-making with advice, guiding faithful implementation of those decisions, and objectively overseeing the evaluation of how effective those policy interventions were.
During the first quarter century of the policy analysis movement, quantitative techniques became staples of the theory and practice of policy analysis (Quade, 1980; Radin, 2000). Despite these significant advances and successes, debates over the perceived and proposed role of policy analysis have persisted in the profession's later years (Dryzek, 1994; Stone, 1988). While technical, empirical, quantitative policy analysis became increasingly sophisticated during the 1970s and after, high-profile failures and the perceived inability to solve complex public problems exposed the limits of positivist policy analysis (May, 1992). Critics of positivism argued that the attempt to model social interactions using mathematical models was misguided (Amy, 1984), that policy analysis was much more than data analysis (Meltsner, 1976; Wildavsky, 1979), and that positivism was fundamentally incapable of dealing with complex problems in a democracy (Fischer, 1995). A "malaise...of the policy sciences" crept into the discipline as its positivist, neo-classical economics orientation seemed incapable of understanding human behaviour, accommodating the democratic expectations of citizens, or remedying the increasing complexity of policy problems (Deleon, 1994, p. 82). The positivist policy analysis hegemony was also undermined by limitations in data availability and the tools of analysis (Morgan, Henrion, & Small, 1992). Analysts inclined towards quantitative methods longed for even more robust data, greater computational power, and the development of more technically sophisticated policy analysis throughout government and wider policy circles (Morçöl, 2001). Some of those goals appear to have been attained in the digital era, with the growth of big data arising from the ubiquitous deployment of networked computing devices throughout society and increased data analytic capacity to manage the resulting flood of data.
Definitions of 'big data' abound (Dutcher, 2014; Fredriksson, Mubarak, Tuohimaa, & Zhan, 2017; Ward & Barker, 2013), with most focusing on its characteristics (especially the large volume of data, its continuous flow at high velocity, and the variety of data available) and others pointing to the complexity of combined data sets and their value in revealing previously undetectable patterns. What emerges from the policy analytics literature, however, is a frequent conflation of 'big data' with 'large' data collections such as a census. While this reflects the current state of the art, our concept of big data draws especially on the velocity and variety (and, consequently, the large volume) of data as the foundation for a policy analytic approach that centres on real-time understanding of, and interaction with, the policy environment.
With the emergence and expansion of the Internet and the range of digital technologies that have been deployed in recent years, analysts now have access to a wide range of policy-relevant big data. These technologies and their users generate a variety of signals through devices like mobile smartphones, Internet of Things (IoT) devices, personal wearables, electronic transaction cards, in situ sensors, web search and web traffic, and social media. Massive amounts of data are now generated continuously through the daily activities of individuals, from their interactions with web services and social media platforms, purchasing behaviour and transportation choices revealed through electronic transaction cards, movement and interaction captured through mobile smartphones and wearables, behavioural choices measured through IoT consumer products, a range of measurements captured by sensors, satellite remote sensing, counters, and smart meters, and the interactions of people and devices with control technology. The accumulation of these signals, and associated metadata such as geolocation information and time stamps, results in a previously unimaginable amount of data, precisely measured from multiple perspectives, and captured in real time. Advances in data storage technologies now make it possible to preserve increasing amounts of data, and faster data transfer rates allow for cloud computing at low cost. We can now capture, store, and process data in vast volumes, from ubiquitous sources, with continuous flow, observed through multiple channels, and we have increased capacity to manage, analyze, and understand these new data sources (Lazer et al., 2009). Nor is it only the volume of data and our ability to analyze it that have changed.
The same technologies that allow for real-time data capture from the field provide a mechanism to communicate policy signals outward to actors, agents, and those devices, serving again to gather further data that measure reaction to those signals. With the stages thus joined up, policy formulation can be connected with implementation and evaluation processes in a continuous and real-time cycle of ideation, experimentation, evaluation, and reformulation (Pirog, 2014). New digital tools, platforms, and the data they generate allow for a seamless linking of the discrete stages of the policy cycle into a continuous, real-time feedback cycle of problem identification, tool modification, system monitoring, and evaluation. This technology revolution offers the potential to revive and extend the positivist tradition in policy analysis and offer improved support for policymaking through an approach we call 'policy analytics'.
To be certain, there are competing conceptualizations of what policy analytics implies (Daniell et al., 2016; De Marchi et al., 2016; Tsoukias et al., 2013). While referred to inter alia as 'big data' applied to public policy and administration (Einav & Levin, 2014a; Giest, 2017; Höchtl et al., 2016; Kim, Trimi, & Chung, 2014; Kitchin, 2014; Mergel et al., 2016), 'computational social sciences' (Lazer et al., 2009), and 'policy informatics' (Johnston, 2015), the term policy analytics is used here to emphasize the combination of new sources and forms of policy-relevant big data with the use of new analytic techniques and capacity to affect policymaking throughout the entire policy cycle. Some stretch the definition of 'big data' to include traditional, albeit very large, government 'large data' collections such as censuses, taxation data, social security records, health information, and survey data (Daniell et al., 2016). Some perspectives emphasize this supplementing of large data with big data, where datasets are linked with the aim of identifying previously undiscovered patterns and correlations at the problem identification and analysis stages (Höchtl et al., 2016; Janssen & Kuk, 2016a). Others focus on high-volume real-time big data, combined with highly structured administrative large data, for deriving insights for operations and public service delivery (Joseph & Johnson, 2013; Mergel et al., 2016).
The harvesting of big data, coupled with advances in technology and scientific developments for managing that data, emerged first in the private sector under the heading 'business analytics', with analytics serving as an umbrella term for methods and approaches including statistics, data mining, machine learning, business intelligence, knowledge management, decision support systems, operations research, and decision analysis. Key to the development of business intelligence was the recognition that intelligence is useful if it leads to immediate action whose impact is measurable (Longo, 2018; McAfee, Brynjolfsson, Davenport, Patil, & Barton, 2012). When eventually applied to public policy problems, this led to the concept of 'policy analytics' denoting the development and application of data analytic skills, methods, and technologies, supporting stakeholders with meaningful and informative analysis at any stage of the policy cycle (De Marchi et al., 2016; Tsoukias et al., 2013). Pirog (2014) envisions the extension of previously developed quantitative methods through the linking of government administrative records, data from natural science fields such as biology and neuropsychology, and geospatial data, ushering in a dramatic advance in policy research. Giest (2017) gives examples from different policy domains (health, education, climate change, and crisis management) and identifies a mix of data, technology, and expertise that could result in important advances in the science of policymaking. Thus, based on the literature that has emerged to date from both business analytics and policy analytics, we define policy analytics as the use of new sources and forms of policy-relevant big data combined with advanced analytics techniques and capacity, taking advantage of ubiquitous communication methods to reduce the time delay amongst stages of the policy cycle, aimed at better addressing public problems.
In adopting the tools of policy analytics, governments are mirroring the actions of private sector firms that use big data to better understand people's behaviour. Examples include encouraging users to return to a webpage, click on an ad, buy a product and then a related one, purchase a service, or watch a movie because they watched a similar one (McAfee et al., 2012). Data analytics can also be used to judge who is a worthy credit risk, who would be a good person to hire, and who would make an ideal mate (Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016; Tufekci, 2014). Despite these early successes and enthusiasm for the possibilities of policy analytics, a number of questions and barriers to its use have emerged that should temper any unexamined enthusiasm (Noveck, 2018). Among these are concerns over privacy and security of citizens' data (Kim et al., 2014), proper and efficient permissioning to facilitate use by public servants (Welch, Hinnant, & Moon, 2005), weak data skills among public servants and a reliance on external consultants and contract data analysts (Dunleavy, Margetts, Bastow, & Tinkler, 2006), faulty analysis where strong correlations are valued over preliminary causal explanations (Harford, 2014), questions about big data representativeness as new digital divides emerge that undermine the possible democratizing effects of policy analytics (Longo, Kuras, Smith, Hondula, & Johnston, 2017), establishing the provenance of big data so that stakeholders and decision makers can understand where the evidence came from (Javed, McClatchey, Khan, & Shamdasani, 2018), opacity in policymaking by algorithm (Kitchin, 2017; Mittelstadt et al., 2016; Pasquale, 2015), bias in algorithms and machine learning (Koene, 2017), an over-reliance on data for decision-making in situations where values are important (Majone, 1989; Shulock, 1999), and its inverse, ignoring data in decision-making (Harsin, 2015; Tingling & Brydon, 2010).
Policy analytics represents a persuasive combination of advanced digital technology and modern behavioural science. But it has emerged alongside volatile and untrustworthy information and communications technologies that reshape perceptions, redirect beliefs, and drive the evolving preferences that must be reflected in contested metrics for signalling social welfare and community wellbeing. In assessing this challenge, it is necessary to consider what kinds of public reasons can legitimately support the authoritative exercise of delegated public power in a political setting marked by a lack of consensus within a divided society.
As the potential dangers of the big data industry begin to be revealed and slowly understood (Persily, 2017), the question that must be asked of government is whether the benefits of policy analytics outweigh the potential downsides (Boyd & Crawford, 2012). This challenge is, of course, just one facet of the broader social question of what it means to retain meaningful human control of technocratic instruments, including autonomous and intelligent systems, in a world where the exercise of human agency is increasingly distanced from consequences and individual responsibility.

Policy Analytics in Practice
Policy analytics can take a range of approaches. Perhaps the simplest, first line of analysis lies in social media monitoring or 'social listening' to analyze and respond to citizens' preferences, experiences, articulated values, and behaviours (Charalabidis, Loukis, Androutsopoulou, Karkaletsis, & Triantafillou, 2014; Grubmüller, Götsch, & Krieger, 2013; Prpić, Taeihagh, & Melton, 2015). Social listening involves searching and monitoring social media for words, phrases, hashtags, or mentions of government accounts or persons. This approach is becoming increasingly popular with governments seeking to gauge public perception and better appreciate why citizens hold the attitudes they do and how these attitudes change over time (Longo, 2017; Paris & Wan, 2011). Further analysis can centre on the assessment of sentiment and meaning, clustering opinion to reveal network properties and make sense of public opinion (Till, Longo, Dobell, & Driessen, 2014).
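The mechanics of such keyword-based social listening can be sketched in a few lines of Python. This is a hypothetical illustration only: the posts, the tracked hashtag, and the tiny sentiment lexicons are invented, and a production system would stream posts from a platform API and use trained sentiment models rather than word lists.

```python
from collections import Counter

# Invented posts; in practice these would stream from a social media API.
POSTS = [
    "The new transit line is great #citytransit",
    "Waited 40 minutes again, terrible service #citytransit",
    "Love the bike lanes, great work",
    "Terrible potholes on Main Street",
]

# Tiny illustrative lexicons; real systems use trained sentiment models.
POSITIVE = {"great", "love", "good"}
NEGATIVE = {"terrible", "bad", "worst"}

def social_listen(posts, track_tag):
    """Count mentions of a tracked hashtag across a stream of posts,
    and score crude lexicon-based sentiment over the whole stream."""
    mentions = 0
    sentiment = Counter()
    for post in posts:
        words = post.lower().split()
        if track_tag in words:
            mentions += 1
        for w in words:
            if w in POSITIVE:
                sentiment["positive"] += 1
            elif w in NEGATIVE:
                sentiment["negative"] += 1
    return mentions, sentiment

mentions, sentiment = social_listen(POSTS, "#citytransit")
```

Tracking mention counts and the positive/negative balance over time is what lets analysts observe how attitudes change, as described above; the clustering and network analysis mentioned in the text would build on this same token stream.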
Venturing deeper, predictive analytics can serve as an input into framing a policy problem before it is apprehended as such, indicating where a need is being unmet or where an emerging problem might be countered early. As a big data analytics form of forecasting (Sims, 1986), now referred to as nowcasting (Choi & Varian, 2012), predictive analytics is based on the argument that analysis of past performance can reveal the probable outcome of continuing to pursue the same approach (i.e., doing nothing). Some recent initiatives show the possibilities for success, for example in reducing administrative failures (Behavioural Insights Team, 2012) and understanding social dynamics (Bond et al., 2012). The combination of digital signals and new analytic techniques can help in understanding and predicting behaviour in contexts such as crime (Chan & Bennett Moses, 2015), energy use (Zhou & Yang, 2016), migration (e.g., email, social media, web search, and geolocation data have been used to infer migration flows; see Gerland, 2015; Raymer, Wiśniowski, Forster, Smith, & Bijak, 2013; Verhulst & Young, 2018), urban planning (Kitchin, 2014), and public health (Khoury & Ioannidis, 2014; Murdoch & Detsky, 2013).
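The core logic of nowcasting, fitting the historical relationship between a fast-moving proxy signal (such as search-query volume) and a slower official indicator and then applying it to the latest signal, can be sketched as follows. All figures are invented for illustration; real nowcasts use far richer models and data.

```python
# Invented historical pairs of (proxy signal, later-observed official figure),
# e.g., weekly search-query volume paired with a reported indicator.
history = [(120, 4.1), (150, 5.0), (90, 3.2), (200, 6.4), (170, 5.6)]

def fit_line(pairs):
    """Ordinary least squares fit of y = a + b*x over (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    b = (sum((x - mx) * (y - my) for x, y in pairs)
         / sum((x - mx) ** 2 for x, _ in pairs))
    a = my - b * mx
    return a, b

def nowcast(pairs, current_signal):
    """Estimate the not-yet-published indicator from the latest signal."""
    a, b = fit_line(pairs)
    return a + b * current_signal

# The proxy signal is available immediately; the official figure is not.
estimate = nowcast(history, 160)
```

The point of the sketch is the timing, not the statistics: the proxy is observable in real time, so the estimate is available long before the official figure, which is what makes nowcasting useful for early problem framing.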
Policy experimentation builds on the idea of policy incrementalism (Lindblom, 1959), with a long history of trials, experiments, and pilots of varying scale and precision, and renewed enthusiasm in jurisdictions from the United Kingdom (Breckon, 2015) to Canada (Monafu, Chan, & Turnbull, 2018). Real-time experimental policy analytics takes advantage of new big data sources, coupled with data analytics techniques, bringing all the discrete stages of the policy cycle together into one continuous process. While a policy problem is being observed, interventions would also be underway using the same devices used to collect the data, with their impact on the problem becoming part of the evidence base for further modifying the policy variables. These further modifications would also be observed for their impact, as the system response to the policy intervention moved closer to the policy target or equilibrium (Esperanza & Dirk, 2014). An intriguing application of policy analytics from transportation management can be seen in the evolution from high-occupancy vehicle (HOV) lanes to high-occupancy smart toll (HOST) lanes (Longo & McNutt, 2018). Shi, Ai and Cao (2017) argue that some policy analytic methods are better suited to particular stages of the policy cycle than others, and provide several examples to support their claim: cognitive mapping, text mining, and understanding public attitudes through geo-specific Google search-query data (Lee, Kim, & Lee, 2016) are applicable to the agenda-setting phase; participatory planning to the decision phase; and remote sensing, smart metering, and participatory GIS to the monitoring and evaluation phases.
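The continuous observe-intervene-observe loop described above can be sketched as a simple proportional feedback controller, here loosely modelled on a HOST lane adjusting its toll towards a target traffic speed. The gain, toll bounds, and traffic-response model are all hypothetical, invented for illustration; a real system would estimate the response from data rather than assume it.

```python
def host_lane_toll(target_speed, observed_speed, toll,
                   gain=0.05, floor=0.5, cap=10.0):
    """One feedback step: raise the toll when the lane runs slower than
    the target speed, lower it when the lane is free-flowing, and keep
    the toll within posted bounds."""
    error = target_speed - observed_speed  # positive => lane too congested
    toll = toll + gain * error
    return max(floor, min(cap, toll))

# Simulate repeated observe-intervene cycles against a crude, invented
# model of how lane speed responds to the toll (higher tolls deter entry,
# raising speed; the lane adjusts gradually rather than instantly).
toll, speed, target = 2.0, 45.0, 60.0
for _ in range(50):
    toll = host_lane_toll(target, speed, toll)
    speed = speed + 0.8 * (40.0 + 3.0 * toll - speed)
```

Each pass through the loop is one turn of the policy cycle: the intervention (the toll) is modified in light of the measured system response, and the modification is itself observed on the next pass, with the system settling towards the policy target.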
Decision support systems to collect, manage, and analyze data (e.g., a space-air-ground big data traffic system that includes people, vehicles, and road conditions using data from satellite sensing, aerial photography, aerial drone sensors, cameras, transponders, and smartphones) can support overall transportation policy implementation, law enforcement, and emergency response (Xiong et al., 2016). A groundwater web portal that combines legacy data, community-sourced groundwater information, and government open data provides real-time information to the public, and tools for data querying and visualization to support decision-making and community engagement (Dahlhaus et al., 2016). A big data archive covering more than 43 million soldiers, veterans, and their family members provides a foundation for the examination of the causes and consequences of PTSD (Vie, Griffith, Scheier, Lester, & Seligman, 2013).
In some cases, policy evaluation can be undertaken using policy analytics. Lu, Chen, Ho and Wang (2016) analyze 2 million construction waste disposal records to assess the disparity between public and private operator performance, with contractors operating in public projects performing better than those in private projects. In transportation management, cases from the Netherlands and Sweden show that automated smartcard and vehicle positioning data provide for better understanding of passenger needs and behaviours, system performance, and real-time conditions in order to support planning and operational processes (Van Oort & Cats, 2015).
Participatory policy analytics can take the form of sentiment analysis, mined from Internet content including social media, used to gauge how the public values alternative outcomes. Beyond simplistic exercises such as counting 'likes' and 'mentions', the example of mining Yelp restaurant reviews as a supplement to (and potential replacement for) public health inspections (Kang, Kuznetsova, Luca, & Choi, 2013) shows that mining large volumes of text contributions from citizens concerning government policies can extract opinions and knowledge useful for policy purposes (Maragoudakis, Loukis, & Charalabidis, 2011). Poel, Meyer and Schroeder (2018) present the results of a recent project that scanned for big data policymaking examples, noting the heightened interest in big data for policymaking in recent years, though acknowledging that there are still few good examples available. They analyze 58 data-driven cases, with a focus on national and international policy initiatives, and highlight persistent challenges: data representativeness, validity of results, gaps in citizen engagement, and weak data analysis skills. While most examples do not draw on personally identifiable data, privacy protection remains a concern due to re-identification/de-anonymization risks (de Montjoye, Hidalgo, Verleysen, & Blondel, 2013; Narayanan & Shmatikov, 2008). More generally, using big data for policy purposes revives concerns about technocracy, technoscience, policy-based evidence making, and the influence of lobby groups. The most prominent area Poel et al. (2018) identify centres on government transparency initiatives supported by the publication of open data on procurement, with the objective of revealing government corruption. A smaller number of initiatives focus on operational policy areas such as budgeting, economic and financial affairs, and transportation. Remaining initiatives cover policy areas such as health, education, research, justice, and social affairs.
Almost half of the initiatives scanned focus primarily on the early stages of the policy cycle (e.g., sentiment analysis via Twitter to support agenda setting and problem analysis), with others supporting policy design, implementation, and monitoring. Observing traffic patterns via sensors and mobile phone data, and using administrative data to monitor transportation and environmental policies, were also highlighted. However, as most of the projects scanned in Poel et al. (2018) use data formats such as spreadsheets, and data analysis is limited to descriptive statistics or occasional visualizations with few examples of techniques such as machine learning or algorithmic response, the boundary in this survey between 'large data' and 'big data' appears fluid. Schintler and Kulkarni (2014) review the range of arguments for and against the use of big data in policy analysis, and offer examples to illustrate some of the positive features. The 'Billion Prices Project' uses web-sourced price information from retailers across multiple countries and sectors to generate daily estimates of inflation, providing a real-time price index as opposed to the periodic figures produced by national statistical agencies (Cavallo & Rigobon, 2016). The 'Global Forest Watch' project processes hundreds of millions of satellite images as well as data from people on the ground to generate real-time estimates of tree loss that are more precise than those produced by other approaches (Hartmann et al., 2018). The near real-time data are available freely online, and have been used to measure global deforestation rates, detect illegal clearing activity and burning, and monitor corporate sustainable forestry commitments. Daniell et al. (2016) point towards examples of policy analytics for formulation or delivery in the areas of health resource allocation (Aringhieri, Carello, & Morale, 2016), sentiment analysis and opinion mining (Alfaro, Cano-Montero, Gómez, Moguerza, & Ortega, 2016), using behavioural information to encourage energy efficiency, precision government services, identifying social service and public information 'deserts' (Entwistle, 2003), and promoting smart cities (Kumar, Nguyen, & Teo, 2016). Additional examples are being tested, and stand as potential opportunities for applied policy analytics, from using smart electricity meters to incentivize conservation behaviour and reduce peak-load demand (Blumsack & Fernandez, 2012; Newsham & Bowker, 2010), to possibilities such as creating on-demand local public transportation services (Murphy, 2016). The Joint Statistical Research Program of the US Internal Revenue Service enables studies that use long panels of tax returns to observe individuals over time with a view to revealing potential policy initiatives (Jarmin & O'Hara, 2016).
The principles of nudge theory are being applied in dynamic ways that take advantage of the powerful devices ubiquitously moving around us to measure the environment, along with individual behaviour and health conditions, and to intervene by sending information to individuals via devices such as their smartphones in order to change a behaviour (Katapally et al., 2018). Smart devices can be deployed to monitor behaviour in teams to improve performance (Pentland, 2012), or to monitor student engagement to improve learning outcomes (Crosby & Ikehara, 2015).
The recent advances in Artificial Intelligence (AI) that we are currently experiencing (e.g., autonomous vehicles, facial recognition) have accelerated due to the combined developments of big data and analytics, especially machine learning. However, the origins of AI, and concerns over its adoption in public policy and administration, run much deeper. The early promise of AI in public sector practice centred on providing decision support for public managers (e.g., Barth & Arnold, 1999; Hadden, 1986; Hurley & Wallace, 1986; Jahoda, 1986; Masuch & LaPotin, 1989) but failed to materialize in any meaningful way. While the early promises of AI went unfulfilled, there have been dramatic advances in AI in recent years (Russell & Norvig, 2009) that could have important consequences for public management and governance. A key contributing factor to the increasing maturity of AI technologies, and to the viability of applying AI to public policy and administration, is the increased availability of data that can be used to further machine learning. As algorithms become more widely used, increasingly autonomous, and invisible to those affected by their decisions, their status as impartial public servants becomes more difficult to monitor for bias or discrimination (Janssen & Kuk, 2016b). Today, AI systems are being used to detect irregularities, with aims such as reducing fraud and errors in service processing (Maciejewski, 2017). An even more speculative example (Death, 2015) addresses challenges of watershed governance, envisaging the application of AI to the continuous monitoring of complex streamflow dynamics and water chemistry and quality as part of decision support systems for communities concerned with environmental flows as well as crucial water supply.
Whether Artificial Intelligence could, autonomously, offer better decisions than the community itself might make in resolving conflicts over the vital tradeoffs among the many interests at stake, human and ecological (as well, perhaps, as the rights of the river itself), is a topic of ongoing debate. Relatedly, the question of meaningful human involvement in decisions related to problems of human security has been addressed in a recent report on the role of AI in nuclear war (Geist & Lohn, 2018).

Discussion
Given the scan of examples of policy analytics in practice, where does this revision to the policy analysis model fit in the modern governance toolkit, and what do the examples of successful policy analytics applications tell us about the possibilities for its future, and the limitations it will likely face?
We must be careful not to overstate what policy analytics can tell us. Take, for example, the rhetoric around predictive analytics (Gandomi & Haider, 2015), which can serve as an input into framing a policy problem before it is apprehended as such. In 'predictive policing', where potential crimes, offenders, and victims are identified a priori, police resources can be directed proactively (Brayne, 2017;Perry, McInnis, Price, Smith, & Hollywood, 2013). The inherent complexities of social, economic, and behavioural phenomena, however, make policy prediction essentially impossible (Sawyer, 2005). While modeling for purposes of forecasting (Sims, 1986) and related approaches such as backcasting (Robinson, 1982) can serve as useful tools in policy analysis, and these techniques have improved with the increase in available data and growth in analytical capacity (Einav & Levin, 2014b;Wang, Kung, & Byrd, 2018), there are obvious limits on our ability to predict the future. Predictive models are necessarily abstractions from reality, and cannot feasibly include all individual and system factors. More likely are qualitative statements (including probability statements as to likelihood) about the direction of predicted change, including indications about possible unexpected outcomes. These are useful for policy analysis, especially in highly uncertain environments where unlikely events may still yield catastrophic outcomes.
It should be obvious that the proposed policy analytic approach will not solve all policy challenges. Despite the power of modern digital technology, a number of limitations and caveats remain. While more, and more accurate, evidence can improve our understanding and form the basis for better policy, we should not conflate the volume of big data with its representativeness. Despite the mesh of sensors that act as the collection net for policy-relevant data, there is the risk that those without the right devices or not engaged in the targeted behaviour may be rendered "digitally invisible" in the movement towards rapid policy design (Longo et al., 2017, p. 76). There are also a number of technical limits to assembling robust big data sets, including challenges in data acquisition (especially where much of the really valuable data is closely guarded by private companies; Golder & Macy, 2014; Verhulst & Young, 2018), data interoperability problems (Miller & Tucker, 2014), and legitimate privacy protections that place prohibitions on the sharing of data outside of programs or departments, or even on combining datasets behind protective firewalls. Even if data coverage is comprehensive, big data hubris can produce policy errors (Lazer, Kennedy, King, & Vespignani, 2014). Traditional social science designs research instruments to collect data in order to test a hypothesis, whereas big data analytics seeks to identify relationships (Wigan & Clarke, 2013). And the risk of apophenia-the seeing of patterns in random data-can lead policymakers to identify correlations that are easily mistaken for causal relationships (Boyd & Crawford, 2012). Shi et al. (2017, p. 552) note that "only a few government decisions have already benefited from the systematic use of masses of data and evidence, and of cutting-edge modelling", with the norm being reliance on traditional forms of policy analysis. Several challenges are noted, centring on the democratic underpinnings of policy analysis.
Since public sector problems typically involve making decisions on behalf of society at large, involving the allocation of public resources (Lerner & Lasswell, 1951), policy analytics must balance the need for robust analysis with the need to satisfy legitimacy expectations, transparency requirements, and opportunities for citizen participation.
Thus the policy analytics model-of the rapid prototype based on a digitally enabled system of communication, feedback, analytics and tool modification-does not apply across a wide range of policy problems or domains. Many policy areas are not amenable to minor policy tool modifications that can be communicated digitally. Few policy systems form such a tight linkage between a minor modification of a policy signal and an immediately detectable response from the system under observation; most instead operate across long timescales between policy intervention and system response. Policy analytics is well suited to the digital realm of approaches such as A/B testing of government citizen service websites (Longo, 2018), whereas many policy decisions entail actions that have significant consequences diffused over many sectors. More often than not, the policy environment will be complex beyond the capabilities of even the most advanced analytics. The possibility of policy experimentation will apply in a limited set of circumstances, especially where legitimate ethical concerns could be raised.
Consider the four-quadrant diagram in Figure 1, with the horizontal axis running from the micro or local scale on the left, through the regional or meso scale, to the global scale on the right, and the vertical axis running from certainty as to system structure and environment at the bottom to profound uncertainty at the top. In the top right quadrant (high uncertainty, global scale) one finds 'wicked problems', 'messes', and the concerns of post-normal science, facing all the challenges of affective conflict and democratic dissent. Examples might be climate change, the global hydrological cycle, and poverty and inequality. But even in these challenging settings one can look to rapidly increasing computational capacity to develop decision support systems. To the extent that agreement can be achieved on appropriate policy objectives and instruments, there can be realistic ambitions for real-time policy analytic systems.
In the top left quadrant, more inclusive community engagement and deliberation, building on increasingly sophisticated decision-support systems, is feasible, but again expectations of integrated data analytic/policy analytic systems running on a real-time basis must rest on hopes for inclusive and collaborative policy formation processes building agreement on legitimate and acceptable policy objectives and norms of implementation. The lower right quadrant might be thought largely empty for the moment: there appear to be few global scale challenges for which one can have reasonable certainty around system structure and environment, except perhaps international agreements on classification systems or the like. But even here, as international agreements grow in number and specificity, policy analytic methods for monitoring and certifying compliance are increasingly significant.
Nevertheless, it seems that it is in the lower left quadrant, with reasonable certainty around the nature and context of micro or local scale problems, that big data, data analytics and policy analytics can best support ongoing experimentation, continuous learning, policy formation, and adaptive management with effective implementation, monitoring and enforcement. Focusing on this quadrant, how might its boundaries evolve and expand? Evidently the operational problems faced in managing the direct provision of services at the local level are more amenable to such experimentally based adaptive control and self-regulation than are the problems that must be addressed through cooperative federalism or similar institutional arrangements for negotiation among authoritative political units at different scales. Although the professional effort to differentiate the 'policy design' product from the more traditional language calls attention more to the implementation end of the cycle than to the formulation portion, the bigger challenge for the rapid adaptation of design in response to user experience lies in the varied and slow instruments for implementing change in the operations of representative government, legitimately and with ongoing accountability. The fuzzy boundaries that separate a summative evaluation cycle for Cabinets or executive authorities from a formative evaluation cycle for management exercising delegated authority in decisions at small (how small?) scale might suggest limits to policy analytics-but they also suggest the potential of machine learning and autonomous and intelligent systems in pushing those boundaries far outward. The science fiction aspects of Joe AI, analyst, or Jane AI, authoritative decisionmaker-and the challenges of teaching her/it in new schools of public policy-may be with us much sooner than expected, with a consequent rapid advance in the spread of policy analytics as an integrated system.

Conclusion
This article began with a quotation from a leading business magazine in 1973 that enthused about the possibilities of a policy supertool that then appeared imminent. That quotation was cited in a commentary from the Honourable C. M. Drury (then President of the Treasury Board of Canada-the agency charged with the development of tools for policy analysis and decision support in the Government of Canada at the time) in the inaugural issue of the journal Canadian Public Policy. In reaction to the fantastic possibilities envisaged, the Minister suggested that "While we may all have our occasional doubts about the advice offered by our traditional public servants, I am certainly not yet ready to trade them in on the strength of this promise!" (Drury, 1975, p. 91).
Almost a half-century later, does policy analytics represent the delayed realisation of that promised policy supertool, or yet another misplaced enthusiasm? Daniell et al. (2016, p. 11) conclude their special issue on policy analytics in practice with the concern "that analytics have been somehow oversold", as though political decision making could be supplanted by masses of data and deep analytics producing automated solutions to any public problem. While evidence is important, decision making still requires judgment. New initiatives can be informed by past experience, but still require careful experimentation to avoid large implementation failures.
The emerging examples may be persuasive in their particular domains. But many of the problems confronted by policy analysts are indeed wicked problems involving differing time scales in complex systems, where the effects of policy interventions on decisions and behaviour are unclear, uncertain, and of unknown duration. More crucially, agreement on the objectives or purposes of policy is usually lacking, and interests around the nature or instruments of policy intervention are conflicted. Not all policy environments are compatible with the policy analytics model. Much work remains to be done before we find the proper place for this promising development in an increasingly post-positivist, post-fact, post-truth world.