{"title":"An agenda for responsible technology policy in Canada","authors":"Sam Andrey, Joe Masoodi, Tiffany Kwok, André Côté","doi":"10.1111/capa.12535","DOIUrl":null,"url":null,"abstract":"<p>The launch of ChatGPT and the explosion of interest and concern about generative artificial intelligence (AI) has again reinforced that rapid advancements in technology outpace policy and regulatory responses. There is a growing need for scholarship, engagement, and capacity-building around technology governance in Canada to unpack and demystify emerging digital innovations and equip decision-makers in government, industry, academia and civil society to advance responsible, accountable, and trustworthy technology. To help bridge this gap, this note contextualizes today's challenges of governing the design, deployment and use of digital technologies, and describes a new set of secure and responsible technology policy movements and initiatives that can inform and support effective, public interest-oriented technology policymaking in Canada. We conclude by outlining a potential research agenda, with multi-sector mobilization opportunities, to accelerate this critical work.</p><p>The arrival of the internet in 1991 brought hopes for a freer society. Early narratives surrounding the expansion of internet technologies were largely positive, driven by a libertarian spirit highlighting the beneficial features of information technology for democracy, social and economic progress (Korvela, <span>2021</span>). By the mid-2010s, this optimism had dissipated due to several international developments, including the Snowden revelations of widespread state surveillance of citizens, and the Cambridge Analytica scandal resulting from the unwitting exposure of vast troves of personal data collected by social media platforms to influence electoral outcomes.</p><p>Against this backdrop, there have been calls for more policy and regulation in Canada and elsewhere, and several new pieces of legislation aimed at regulating digital technologies and platforms have been introduced.1 Policymakers are also recognizing the need to grow their “tech policy literacy.” For example, two Canadian federal legislators recently launched the Parliamentary Caucus on Emerging Technology, which aims to serve as a cross-party forum linking parliamentarians with experts to fill the knowledge gaps on new and emerging technology (Rempel Garner, <span>2023</span>). Yet, accelerating digital innovation continues to race ahead of the capacity and timeliness of policymakers and public institutions in governing technology. What has emerged is a narrative about government and policymakers lacking understanding and expertise about emerging technology, and therefore incapable of effectively regulating it.</p><p>Former Google CEO Eric Schmidt recently argued against government regulation of AI technologies, noting “there's no way a nonindustry person can understand what's possible” (Hetzner, <span>2023</span>). This comment can be seen as self-serving, reflecting the tech giants' continuing preference for self-regulation. It also reflects a broader view that digital technology is solely the domain of technologists and industry experts—particularly from the science, technology, engineering and math (STEM) disciplines—who are the guardians of society's, or even humanity's, best interests (Allison, <span>2023</span>). 
Not only is this problematic when viewed through the lens of liberal-democratic representation, it also ignores historical precedents.</p><p>Long before the internet and consolidated online platforms, scholars studied the impacts of technology on society (see for e.g., Adorno & Horkheimer, <span>1972</span>; Arendt, <span>1958</span>; Ellul, <span>1964</span>; Heidegger, <span>1977</span>; Marx, <span>1976</span>). Such works paved the way to scholarship viewing technology as a social process influenced by interests and public processes, with human biases embodied in its design (Bijker et al., <span>1989</span>; Caro, <span>1974</span>; Winner, <span>1986</span>). This scholarship has acted as a springboard for contemporary studies questioning the effects of today's data-driven society on rights, freedoms, race, power, equity and democracy. For instance, research has shed light on the black-boxes of today's digital technologies, including by revealing the inherent biases embedded by human developers in AI large language models (LLMs), which produce discriminatory and racist outcomes, harming marginalized and vulnerable groups of populations (Benjamin, <span>2019</span>; Buolamwini & Gebru, <span>2018</span>; Noble, <span>2018</span>). Such works offer new ways of conceptualizing and theorizing current tech policy issues with direct applications to policymaking, such as calls for moratoriums on police use of facial recognition technologies or regulating online platforms (McPhail, <span>2022</span>; Owen, <span>2019</span>).</p><p>The past five years have also seen the emergence and acceleration of secure and responsible technology movements and initiatives within and across Western jurisdictions. Applying various labels—from “tech for good” and “privacy/security by design” to “ethical AI” and “tech stewardship”—they typically share a common aim: to better align the development and deployment of digital technology with values and principles of an open, inclusive, equitable, and democratic society. While offering significant promise for advancing principles-based tech policymaking in Canada and internationally, these efforts remain nascent, uncoordinated and relatively limited in Canada.</p><p>A lengthy list of secure and responsible technology (SRT) initiatives has recently emerged in Canada and internationally. Generally, such initiatives are focused at the intersection of technological development and culture with human values (MIT Technology Review Insights, <span>2023</span>), seeking to ensure the design and deployment of digital technologies align with democratic values and human rights. A recent G7 Communique, for instance, outlined these as including fairness, accountability, transparency, safety, protection from online harassment, and respect for privacy and the protection of personal data (White House, <span>2023</span>). While a powerful statement, it offers little guidance about what forms of tech policy these values should be applied to. We introduce a basic approach for considering these SRT initiatives that aim to influence tech policy, developed for a new Secure and Responsible Tech Policy professional education program offered at Toronto Metropolitan University.</p><p>The concept of “tech policy” itself is not commonly defined or understood. 
As policymaking for today's digital technologies clearly extends well beyond the scope of “public policy” formulated and advanced primarily by governments and public institutions, tech policy can be defined as the public, industry, and civil society policies and initiatives that set the conditions, rules and oversight for the development, use and impact of digital technology in Canada and globally. To help differentiate and organize the various types of tech policies, we developed a framework outlining these on a spectrum (see Figure 1, below). It progresses from ideas and advocacy initiatives meant to influence tech policy, to voluntary commitments and discretionary organizational actions, and finally legally-enforceable requirements.</p><p>An initial scan across this landscape reveals several types of initiatives. <b>Thought-leadership and activism</b> includes the wide-ranging work of nonprofits, academia and civil society seeking to inform, influence or create public accountability for a shift towards SRT in policy and practice. In Canada, there are research and policy institutes like the Centre for Media, Technology and Democracy (McGill University) and the Schwartz Reisman Institute for Technology and Society (University of Toronto), which apply SRT-aligned approaches to research and policy convening activities. Others, such as US-based non-profit All Tech is Human, focus on responsible tech community-building that bridges disciplines, from the technologists in engineering and data science to fields including law, economics and anthropology. Of course, corporate lobbying also plays a significant role in contributing to the development of technology policy, which can raise concerns about undue power and influence exerted on legislators (Beretta, <span>2019</span>).</p><p>The second category of <b>technologist frameworks, toolkits and training</b> aims to raise awareness of secure and responsible tech principles and equips and trains technologists and companies to apply these principles and practices in the development, evaluation, and monitoring of technology. The Tech Stewardship Practice Program, for instance, trains undergraduate students at Canadian universities in engineering and technology-related programs to think critically about the social, ethical, and environmental impacts of their work. Some Canadian universities have introduced programs to teach and train students on responsible tech including, for instance, the Master of Public Policy in Digital Society degree (McMaster University) and the Responsible AI program (Toronto Metropolitan University) as well as the Responsible AI course offered at Concordia University as part of its larger certificate program in AI Proficiency.</p><p><b>Multilateral and cross-sector declarations</b> are formal announcements or statements of shared commitment to a set of principles or actions, developed through a multi-party process that can include governments, businesses or industry groups, civil society organizations, and others. Participation or signing typically represent a moral, corporate or political commitment in the public sphere, but not a legally binding commitment. The Montreal Declaration for a Responsible Development of AI, launched in 2017 through the Université de Montreal, is a Canadian example. 
Another is the Canada Declaration on Electoral Integrity Online—a voluntary code developed by the federal government with online platforms including Facebook, Twitter, Google, TikTok, and LinkedIn—to promote responsible governance of platforms during elections (Government of Canada, <span>2021</span>). The Government of Canada and Canadian organizations are signatories to many international initiatives, such as the recent Paris Call for Trust and Security in Cyberspace, through which major states and companies have signaled a commitment to a set of nine principles.</p><p>Companies and industry sectors can play an important role in the self-governance of tech through the design of their products and policies to self-regulate, though these can also give rise to conflicts between business and societal interests. <b>Corporate policies and product design</b> seek to monitor their own adherence to legal, ethical, or safety standards, rather than being subject to an outside, independent entity or governmental regulator to monitor and enforce those standards. Meta's content policies and moderation activities for the Facebook and Instagram platforms are an obvious and controversial example (Medzini, <span>2022</span>). Apple's App Tracking Transparency policy, allowing users to limit third party tracking of their online activities, is another.</p><p>Typically voluntary processes, <b>industry standards are developed for products, practices, or operations, while certifications</b> are developed for individuals, organizations or products who choose to abide by a set of occupational, industry or other technical requirements, established by a reputable organization through expert-led processes. While there are many initiatives by tech skills certification and standards-setting organizations, the Digital Governance Council, a national forum bringing together the country's CIOs and executive technology leaders, is leading an important initiative to set technical industry standards for digital tech in Canada in 14 areas such as data governance, cyber security, AI and digital credentials. Certifications like the Responsible AI Institute (RAII) Certification qualify AI systems and support practitioners as they navigate the development of responsible AI.</p><p>Progressing to more legally-enforceable categories, <b>public directives</b> provide guidance or set rules for public, private or civil society actors that are not created by a legislative body or enshrined in law. For example, the Government of Canada has introduced a Directive on Automated Decision Making to guide use of AI in government institutions. These incorporate principles like transparency and accountability and outline policy, ethical, and legal considerations (Bitar et al., <span>2022</span>). As with the investigation of OpenAI by Canada's information and privacy commissioners, quasi-judicial public agencies or independent officers of Parliament also play a growing role as <b>oversight bodies</b> in the digital economy in areas like digital privacy (Office of the Privacy Commissioner of Canada, <span>2023</span>). Some of these bodies, however, lack powers to hold organizations to account including legal authority to proactively audit, investigate, and make orders (e.g., Information and Privacy Commissioner of Ontario, <span>2021</span>).</p><p>The final category is <b>government laws and regulations</b>. 
This includes legislative initiatives directly aimed at enhanced technology governance, like Canada's proposed Bill C-27, which includes the <i>AI and Data Act</i> (AIDA) (Parliament of Canada, <span>2022</span>), or its proposed bill for addressing online harms on social platforms (Government of Canada, <span>2023</span>). Emerging examples of policies being informed and influenced by SRT movements focus on user protection (Green, <span>2021</span>; Standards Council of Canada, <span>2021</span>). Arnaldi et al. (<span>2015</span>) note that the influence of responsible tech has grown, increasingly reflected in government policy and strategic documents. For example, the direct contribution of the Ethics Guidelines for Trustworthy AI process led to the formation of the EU's Artificial Intelligence Act (Stix, <span>2021</span>).</p><p>Some question the effectiveness of these SRT movements and initiatives in producing real, substantive change in policies or corporate product design and practice. The movement's inclusion of a broad range of stakeholders, including governments and global technology firms, is seen as a positive outcome (World Economic Forum, <span>2019</span>), but others criticize their commitments as performative and not reflecting genuine commitment to SRT principles (or “ethics washing”) (Green, <span>2021</span>). Others have pointed to the lack of clarity in some guidelines leading to different interpretations. Although many responsible tech documents reflect similar principles like transparency, justice, and fairness, it can be unclear how organizations operationalize such principles (Mittelstadt, <span>2019</span>; Stix, <span>2021</span>). There are also ongoing debates on definitions, like “ethical AI,” which further contributes to uncertainty in application.</p><p>Closing the gap between digital innovation and technology governance with a responsible technology ethos urgently requires equipping actors in government, industry and civil society with the knowledge and skills to effectively shape technology policies in their various forms. This calls for a scholarship agenda focused on developing further research insights about SRT and how such movements can effectively influence changes in tech policy and practice. It also requires efforts to mobilize actors across key tech policy communities in Canada—governments and regulators, tech firms and industry, academic researchers and civil society, and citizens at-large—to grow knowledge, capacity and common agendas for secure and responsible technology governance.</p><p>Although some important efforts are belatedly underway, by Canadian governments, industry actors and civil society organizations, to establish guardrails for disruptive technologies like social media platforms or cryptocurrencies, there remain significant challenges in the Canadian technology policy landscape. Chief among these concerns is how Canadian businesses can on the one hand contribute to equitable and sustainable economic growth, and on the other, be organized around principles of responsible tech. 
It will only be through concerted efforts to grow tech policymaking capacity in Canada, grounded in evidence-based research and guided by shared democratic values, that we will effectively govern our increasingly digital society.</p>","PeriodicalId":46145,"journal":{"name":"Canadian Public Administration-Administration Publique Du Canada","volume":"66 3","pages":"439-446"},"PeriodicalIF":1.1000,"publicationDate":"2023-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/capa.12535","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Canadian Public Administration-Administration Publique Du Canada","FirstCategoryId":"91","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/capa.12535","RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PUBLIC ADMINISTRATION","Score":null,"Total":0}
Abstract
The launch of ChatGPT and the explosion of interest and concern about generative artificial intelligence (AI) have again reinforced that rapid advancements in technology outpace policy and regulatory responses. There is a growing need for scholarship, engagement, and capacity-building around technology governance in Canada to unpack and demystify emerging digital innovations and equip decision-makers in government, industry, academia and civil society to advance responsible, accountable, and trustworthy technology. To help bridge this gap, this note contextualizes today's challenges of governing the design, deployment and use of digital technologies, and describes a new set of secure and responsible technology policy movements and initiatives that can inform and support effective, public interest-oriented technology policymaking in Canada. We conclude by outlining a potential research agenda, with multi-sector mobilization opportunities, to accelerate this critical work.
The arrival of the internet in 1991 brought hopes for a freer society. Early narratives surrounding the expansion of internet technologies were largely positive, driven by a libertarian spirit highlighting the beneficial features of information technology for democracy and for social and economic progress (Korvela, 2021). By the mid-2010s, this optimism had dissipated due to several international developments, including the Snowden revelations of widespread state surveillance of citizens and the Cambridge Analytica scandal, in which vast troves of personal data collected by social media platforms were exposed without users' knowledge and used to influence electoral outcomes.
Against this backdrop, there have been calls for more policy and regulation in Canada and elsewhere, and several new pieces of legislation aimed at regulating digital technologies and platforms have been introduced. Policymakers are also recognizing the need to grow their “tech policy literacy.” For example, two Canadian federal legislators recently launched the Parliamentary Caucus on Emerging Technology, which aims to serve as a cross-party forum linking parliamentarians with experts to fill knowledge gaps on new and emerging technology (Rempel Garner, 2023). Yet accelerating digital innovation continues to race ahead of the capacity of policymakers and public institutions to govern technology in a timely way. What has emerged is a narrative that government and policymakers lack understanding and expertise about emerging technology, and are therefore incapable of effectively regulating it.
Former Google CEO Eric Schmidt recently argued against government regulation of AI technologies, noting “there's no way a nonindustry person can understand what's possible” (Hetzner, 2023). This comment can be seen as self-serving, reflecting the tech giants' continuing preference for self-regulation. It also reflects a broader view that digital technology is solely the domain of technologists and industry experts—particularly from the science, technology, engineering and math (STEM) disciplines—who are the guardians of society's, or even humanity's, best interests (Allison, 2023). Not only is this view problematic through the lens of liberal-democratic representation, but it also ignores historical precedent.
Long before the internet and consolidated online platforms, scholars studied the impacts of technology on society (see, e.g., Adorno & Horkheimer, 1972; Arendt, 1958; Ellul, 1964; Heidegger, 1977; Marx, 1976). Such works paved the way for scholarship viewing technology as a social process influenced by interests and public processes, with human biases embodied in its design (Bijker et al., 1989; Caro, 1974; Winner, 1986). This scholarship has acted as a springboard for contemporary studies questioning the effects of today's data-driven society on rights, freedoms, race, power, equity and democracy. For instance, research has shed light on the black boxes of today's digital technologies, including by revealing the biases that human developers embed in AI large language models (LLMs), which produce discriminatory and racist outcomes that harm marginalized and vulnerable populations (Benjamin, 2019; Buolamwini & Gebru, 2018; Noble, 2018). Such works offer new ways of conceptualizing and theorizing current tech policy issues with direct applications to policymaking, such as calls for moratoriums on police use of facial recognition technologies or for regulating online platforms (McPhail, 2022; Owen, 2019).
The past five years have also seen the emergence and acceleration of secure and responsible technology movements and initiatives within and across Western jurisdictions. Though they go by various labels—from “tech for good” and “privacy/security by design” to “ethical AI” and “tech stewardship”—they typically share a common aim: to better align the development and deployment of digital technology with the values and principles of an open, inclusive, equitable, and democratic society. While these efforts offer significant promise for advancing principles-based tech policymaking, they remain nascent, uncoordinated and relatively limited in Canada.
A lengthy list of secure and responsible technology (SRT) initiatives has recently emerged in Canada and internationally. Generally, such initiatives focus on the intersection of technological development and culture with human values (MIT Technology Review Insights, 2023), seeking to ensure that the design and deployment of digital technologies align with democratic values and human rights. A recent G7 communiqué, for instance, outlined these values as including fairness, accountability, transparency, safety, protection from online harassment, and respect for privacy and the protection of personal data (White House, 2023). While a powerful statement, it offers little guidance about the forms of tech policy to which these values should be applied. We introduce a basic approach for considering these SRT initiatives that aim to influence tech policy, developed for a new Secure and Responsible Tech Policy professional education program offered at Toronto Metropolitan University.
The concept of “tech policy” itself is not commonly defined or understood. Policymaking for today's digital technologies clearly extends well beyond the scope of “public policy” formulated and advanced primarily by governments and public institutions. Tech policy can therefore be defined as the public, industry, and civil society policies and initiatives that set the conditions, rules and oversight for the development, use and impact of digital technology in Canada and globally. To help differentiate and organize the various types of tech policies, we developed a framework outlining these on a spectrum (see Figure 1, below). It progresses from ideas and advocacy initiatives meant to influence tech policy, to voluntary commitments and discretionary organizational actions, and finally to legally enforceable requirements.
An initial scan across this landscape reveals several types of initiatives. Thought-leadership and activism includes the wide-ranging work of nonprofits, academia and civil society seeking to inform, influence or create public accountability for a shift towards SRT in policy and practice. In Canada, research and policy institutes like the Centre for Media, Technology and Democracy (McGill University) and the Schwartz Reisman Institute for Technology and Society (University of Toronto) apply SRT-aligned approaches to research and policy convening activities. Others, such as the US-based non-profit All Tech is Human, focus on responsible tech community-building that bridges disciplines, from technologists in engineering and data science to fields including law, economics and anthropology. Of course, corporate lobbying also plays a significant role in the development of technology policy, which can raise concerns about undue power and influence exerted on legislators (Beretta, 2019).
The second category, technologist frameworks, toolkits and training, aims to raise awareness of secure and responsible tech principles and to equip and train technologists and companies to apply these principles and practices in the development, evaluation, and monitoring of technology. The Tech Stewardship Practice Program, for instance, trains undergraduate students in engineering and technology-related programs at Canadian universities to think critically about the social, ethical, and environmental impacts of their work. Some Canadian universities have also introduced programs to teach and train students on responsible tech, including the Master of Public Policy in Digital Society degree (McMaster University), the Responsible AI program (Toronto Metropolitan University), and the Responsible AI course offered at Concordia University as part of its larger certificate program in AI Proficiency.
Multilateral and cross-sector declarations are formal announcements or statements of shared commitment to a set of principles or actions, developed through a multi-party process that can include governments, businesses or industry groups, civil society organizations, and others. Participation or signing typically represents a moral, corporate or political commitment in the public sphere, but not a legally binding one. The Montreal Declaration for a Responsible Development of AI, launched in 2017 through the Université de Montréal, is a Canadian example. Another is the Canada Declaration on Electoral Integrity Online—a voluntary code developed by the federal government with online platforms including Facebook, Twitter, Google, TikTok, and LinkedIn—to promote responsible governance of platforms during elections (Government of Canada, 2021). The Government of Canada and Canadian organizations are also signatories to many international initiatives, such as the recent Paris Call for Trust and Security in Cyberspace, through which major states and companies have signaled a commitment to a set of nine principles.
Companies and industry sectors can also play an important role in the self-governance of tech through the design of their products and through self-regulating policies, though these can give rise to conflicts between business and societal interests. Through corporate policies and product design, companies monitor their own adherence to legal, ethical, or safety standards, rather than being subject to an outside, independent entity or governmental regulator that monitors and enforces those standards. Meta's content policies and moderation activities for the Facebook and Instagram platforms are an obvious and controversial example (Medzini, 2022). Apple's App Tracking Transparency policy, which allows users to limit third-party tracking of their online activities, is another.
Typically voluntary, industry standards are developed for products, practices, or operations, while certifications are developed for individuals, organizations or products that choose to abide by a set of occupational, industry or other technical requirements established by a reputable organization through expert-led processes. While there are many initiatives by tech skills certification and standards-setting organizations, the Digital Governance Council, a national forum bringing together the country's CIOs and executive technology leaders, is leading an important initiative to set technical industry standards for digital tech in Canada in 14 areas, such as data governance, cyber security, AI and digital credentials. Certifications like the Responsible AI Institute (RAII) Certification qualify AI systems and support practitioners as they navigate the development of responsible AI.
Progressing to more legally enforceable categories, public directives provide guidance or set rules for public, private or civil society actors without being created by a legislative body or enshrined in law. For example, the Government of Canada has introduced a Directive on Automated Decision-Making to guide the use of AI in government institutions; it incorporates principles like transparency and accountability and outlines policy, ethical, and legal considerations (Bitar et al., 2022). As with the investigation of OpenAI by Canada's information and privacy commissioners, quasi-judicial public agencies and independent officers of Parliament also play a growing role as oversight bodies in the digital economy in areas like digital privacy (Office of the Privacy Commissioner of Canada, 2023). Some of these bodies, however, lack the powers to hold organizations to account, including the legal authority to proactively audit, investigate, and make orders (e.g., Information and Privacy Commissioner of Ontario, 2021).
The final category is government laws and regulations. This includes legislative initiatives directly aimed at enhanced technology governance, like Canada's proposed Bill C-27, which includes the Artificial Intelligence and Data Act (AIDA) (Parliament of Canada, 2022), or its proposed bill for addressing online harms on social platforms (Government of Canada, 2023). Emerging examples of policies informed and influenced by SRT movements focus on user protection (Green, 2021; Standards Council of Canada, 2021). Arnaldi et al. (2015) note that the influence of responsible tech has grown and is increasingly reflected in government policy and strategic documents. The EU's Ethics Guidelines for Trustworthy AI process, for example, contributed directly to the formation of the EU's Artificial Intelligence Act (Stix, 2021).
Some question the effectiveness of these SRT movements and initiatives in producing real, substantive change in policies or in corporate product design and practice. The movements' inclusion of a broad range of stakeholders, including governments and global technology firms, is seen by some as a positive outcome (World Economic Forum, 2019), but others criticize their commitments as performative, reflecting “ethics washing” rather than genuine commitment to SRT principles (Green, 2021). Others have pointed to a lack of clarity in some guidelines, which leads to divergent interpretations: although many responsible tech documents reflect similar principles like transparency, justice, and fairness, it can be unclear how organizations should operationalize them (Mittelstadt, 2019; Stix, 2021). There are also ongoing debates over definitions, like that of “ethical AI,” which further contribute to uncertainty in application.
Closing the gap between digital innovation and technology governance with a responsible technology ethos urgently requires equipping actors in government, industry and civil society with the knowledge and skills to effectively shape technology policies in their various forms. This calls for a scholarship agenda focused on developing further research insights about SRT and about how such movements can effectively influence changes in tech policy and practice. It also requires efforts to mobilize actors across key tech policy communities in Canada—governments and regulators, tech firms and industry, academic researchers and civil society, and citizens at large—to grow knowledge, capacity and common agendas for secure and responsible technology governance.
Although some important efforts by Canadian governments, industry actors and civil society organizations are belatedly underway to establish guardrails for disruptive technologies like social media platforms and cryptocurrencies, significant challenges remain in the Canadian technology policy landscape. Chief among these is how Canadian businesses can, on the one hand, contribute to equitable and sustainable economic growth and, on the other, be organized around principles of responsible tech. It will only be through concerted efforts to grow tech policymaking capacity in Canada, grounded in evidence-based research and guided by shared democratic values, that we will effectively govern our increasingly digital society.
Journal introduction:
Canadian Public Administration/Administration publique du Canada is the refereed scholarly publication of the Institute of Public Administration of Canada (IPAC). It covers executive, legislative, judicial and quasi-judicial functions at all three levels of Canadian government. Published quarterly, the journal focuses mainly on Canadian issues but also welcomes manuscripts which compare Canadian public sector institutions and practices with those in other countries or examine issues in other countries or international organizations which are of interest to the public administration community in Canada.