This guide outlines key responsibilities for public servants using AI in any policy, program, or service. It is grounded in federal directives, standards, and legislation, with a focus on identifying and removing bias and barriers.

=== Core Principles for Responsible AI ===
Every public servant involved with an AI project must understand and apply these foundational principles:

* Human Oversight: A human must always have the final say in decisions that affect a person's rights or well-being.
* Fairness by Design: Equity is not an add-on or a nice-to-have. Legal obligations require users to proactively identify, challenge, and mitigate bias at every stage.
* Data Integrity: The quality and fairness of an AI system depend entirely on the quality and fairness of the data it is trained on.
* Accountability & Redress: Clear mechanisms must exist for people to challenge AI-driven decisions and seek recourse.

=== 1. Validating & Managing Data ===
AI systems learn from data. If that data reflects historical or societal bias, the AI will learn, perpetuate, and even amplify that bias.

* Audit Your Data Sources: Before any development, rigorously analyze your datasets for historical biases. For example, if historical data shows a certain demographic was disproportionately denied a service, using that data without correction will teach the AI to continue that discriminatory pattern.
* Ensure Data Representativeness: Ensure data reflects the diversity of people in Canada. If there are gaps (e.g., underrepresentation of Northern communities or persons with disabilities), develop a strategy to address them before proceeding.
* Practice Data Minimization: Only collect and use the data that is strictly necessary for the system's purpose. Every extra data point increases the risk of introducing bias and privacy violations.
* Establish Clear Data Governance: Assign clear ownership and accountability for the data's quality, lifecycle, and ethical use.

==== Key Resources: ====

* [https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32601 Guideline on Service and Digital (See Appendix C: Standard on Enterprise Data)]
* [https://www150.statcan.gc.ca/n1/pub/12-586-x/12-586-x2017001-eng.htm Statistics Canada's Quality Assurance Framework]
* [https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/p_principle/ PIPEDA fair information principles - Office of the Privacy Commissioner of Canada]

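The data-audit steps above can be sketched in code. This is a minimal illustration only, not a prescribed method: the record structure, the field names ("region", "approved"), and the benchmark shares are all hypothetical, and a real audit should use authoritative population benchmarks and proper statistical testing.

```python
# Illustrative pre-development data audit: check whether groups are
# represented in proportion to a benchmark, and compare historical
# positive-outcome rates across groups to surface embedded bias.
from collections import Counter

def audit_representation(records, field, benchmarks, tolerance=0.05):
    """Flag groups whose share of the data deviates from a benchmark share."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in benchmarks.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

def audit_outcomes(records, group_field, outcome_field):
    """Compare positive-outcome rates across groups in the historical data."""
    totals, positives = Counter(), Counter()
    for r in records:
        totals[r[group_field]] += 1
        positives[r[group_field]] += int(r[outcome_field])
    return {g: round(positives[g] / totals[g], 3) for g in totals}

# Tiny hypothetical dataset: one region is underrepresented and was
# never approved, so both audits should flag it.
records = [
    {"region": "South", "approved": 1}, {"region": "South", "approved": 1},
    {"region": "South", "approved": 1}, {"region": "North", "approved": 0},
]
print(audit_representation(records, "region", {"North": 0.5, "South": 0.5}))
print(audit_outcomes(records, "region", "approved"))
```

A gap surfaced here is a prompt for the mitigation strategy the guidance calls for (re-collection, re-weighting, or correction), not something to patch silently.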
=== 2. Developing & Updating Policies ===
When creating or changing policies that involve AI, assess its impact on people from the very beginning.

* Conduct an Algorithmic Impact Assessment (AIA) to determine the system's risk level. This must include an explicit assessment of the proposed data sources for potential bias.
* Incorporate foundational legislation such as the Accessible Canada Act, the United Nations Declaration on the Rights of Indigenous Peoples Act and the Employment Equity Act into the policy analysis.
* Challenge the policy's underlying assumptions, particularly in areas like risk scoring and eligibility determination, where they could lead to discriminatory outcomes.

==== Key Resources: ====

* [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html Algorithmic Impact Assessment (AIA) Tool]
* [https://laws-lois.justice.gc.ca/eng/acts/a-0.6/ Accessible Canada Act]
* [https://www.justice.gc.ca/eng/declaration/index.html United Nations Declaration on the Rights of Indigenous Peoples Act]
* [https://laws-lois.justice.gc.ca/eng/acts/e-5.401/ Employment Equity Act]
* [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/guide-peer-review-automated-decision-systems.html Guide to Peer Review of Automated Decision Systems - Canada.ca]

=== 3. Designing Programs & Services ===
Embed fairness directly into the architecture of any AI-powered program.

* Ensure meaningful human oversight and provide plain-language notices to users explaining how the AI works and how to challenge a decision.
* Include systemically marginalized groups in all phases, from initial design to final testing and implementation.
* Audit all AI tools for equity, especially internal systems, to ensure they do not perpetuate bias and barriers.
* Embed accessibility and bias mitigation throughout design, testing, and implementation.

==== Key Resources: ====

* [https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592 Directive on Automated Decision-Making]
* [https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32603 Policy on Service and Digital]
* [https://accessible.canada.ca/creating-accessibility-standards/accessible-and-equitable-artificial-intelligence-systems-technical-guide Accessible and Equitable Artificial Intelligence Systems - Accessibility Standards Canada]
* [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/guide-peer-review-automated-decision-systems.html Guide to Peer Review of Automated Decision Systems]

=== 4. Procuring Technology & AI Systems ===

* Require conformance with accessibility standards in all procurement contracts, including both general ICT (hardware/software) standards and AI-specific standards.
* Require potential vendors to disclose the sources of their training data, their data-cleaning methods, and the steps they took to mitigate bias in their models.
* Mandate an external, independent peer review for any high-impact AI system before a contract is finalized and before deployment.

==== Key Resources: ====

* [https://accessible.canada.ca/creating-accessibility-standards/canasc-en-301-5492024-accessibility-requirements-ict-products-and-services CAN/ASC - EN 301 549:2024 Accessibility requirements for ICT products and services (EN 301 549:2021, IDT) - Accessibility Standards Canada]
* [https://accessible.canada.ca/creating-accessibility-standards/asc-62-accessible-equitable-artificial-intelligence-systems ASC-6.2 Accessible and Equitable Artificial Intelligence Systems - Accessibility Standards Canada]
* [https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592 Directive on Automated Decision-Making]

=== 5. Managing & Supervising Teams ===
As a leader, help build your team's capacity to work with AI ethically and inclusively.

* Obtain training on bias, equity, and accessible design principles.
* Actively engage employees from systemically marginalized groups to gather feedback on AI tools and processes.

=== 6. Working in Human Resources (HR) ===
Exercise caution to prevent AI from creating discriminatory barriers in recruitment, promotion, or talent management.

* Do not use AI in hiring or promotion unless:
*# The AI training data has been audited and corrected for biases related to gender, race, disability, and other protected grounds.
*# The system has been independently audited for equity impacts in a Canadian context.
*# Interfaces are fully bilingual and accessible.
* Ensure AI-enabled learning or assessment platforms are barrier-free and have been co-designed through meaningful consultation with systemically marginalized groups.
* Conduct an Algorithmic Impact Assessment for any system that automates decisions affecting employees' rights or careers.

==== Key Resources: ====

* [https://www.canada.ca/en/public-service-commission/services/appointment-framework/guides-tools-appointment-framework/ai-hiring-process.html Artificial intelligence in the hiring process - Canada.ca]
* [https://www.canada.ca/en/public-service-commission/services/public-service-hiring-guides/enhance-fairness-reduce-bias-assessment-tools.html Enhance fairness and reduce bias in the content of assessment tools - Canada.ca]
* [https://laws-lois.justice.gc.ca/eng/acts/e-5.401/ Employment Equity Act]
* [https://accessible.canada.ca/creating-accessibility-standards/accessible-and-equitable-artificial-intelligence-systems-technical-guide Accessible and Equitable Artificial Intelligence Systems - Accessibility Standards Canada]
* [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html Algorithmic Impact Assessment (AIA) Tool]

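One common way to put the HR audit requirement above into practice is a selection-rate parity check. The sketch below uses the "four-fifths" heuristic (a group's selection rate below 80% of the highest group's rate warrants investigation); the heuristic, group labels, and data are illustrative assumptions, not a legal test or a prescribed federal method.

```python
# Hypothetical selection-rate parity check for an AI-assisted hiring tool.
def selection_rates(applicants):
    """applicants: list of (group, was_selected) pairs -> rate per group."""
    totals, selected = {}, {}
    for group, was_selected in applicants:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_flags(applicants, threshold=0.8):
    """Flag groups whose rate falls below `threshold` of the top group's rate."""
    rates = selection_rates(applicants)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if top and r / top < threshold}

# Group A selected 8/10 (rate 0.8); group B selected 4/10 (rate 0.4).
applicants = ([("A", True)] * 8 + [("A", False)] * 2
              + [("B", True)] * 4 + [("B", False)] * 6)
print(adverse_impact_flags(applicants))  # group B flagged at ratio 0.4/0.8
```

A flag here is a trigger for the independent equity audit the guidance requires, not an automatic conclusion of discrimination; small samples in particular need statistical care.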
=== 7. Supporting Indigenous Rights & Self-Determination ===
Ensure AI systems respect the rights and data sovereignty of Indigenous Peoples.

* Align AI systems with the principles of the United Nations Declaration on the Rights of Indigenous Peoples Act and the OCAP® Principles (Ownership, Control, Access, and Possession).
* Include Indigenous Peoples, leaders, and networks in the design, procurement, and governance of any AI system that may affect them.
* Avoid automated systems that could reinforce or create new systemic inequities for Indigenous Peoples.

==== Key Resources: ====

* [https://fnigc.ca/ocap-training/ The First Nations Principles of OCAP®]
* [https://www.justice.gc.ca/eng/declaration/index.html United Nations Declaration on the Rights of Indigenous Peoples Act]

=== 8. Monitoring, Evaluating & Auditing ===
Continuously assess the real-world impact of AI systems to ensure they remain fair and effective over time.

* Assess AI-related impacts using GBA Plus assessments, program evaluations, and privacy and accessibility audits.
* Continuously monitor system outputs for unexpected or inequitable results. If an AI system starts flagging a specific demographic at a higher rate, it requires immediate investigation.
* Report transparently on AI risks, mitigation efforts, and any updates made to the system.
* Establish clear feedback and redress mechanisms so users can challenge an automated decision.

==== Key Resources: ====

* [https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592 Directive on Automated Decision-Making - Canada.ca]
* [https://www.canada.ca/en/women-gender-equality/gender-based-analysis-plus.html Gender-based Analysis Plus (GBA Plus) - Canada.ca]

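The continuous-monitoring step above can be sketched as a simple drift check: compare each group's current flag rate against a baseline period and surface any group whose rate has risen beyond a tolerance. The group names, rates, and the 10-percentage-point tolerance are invented for illustration; production monitoring would also test statistical significance and track trends over many periods.

```python
# Minimal monitoring sketch: alert when a group's flag rate rises
# well above its baseline, signalling possible emerging inequity.
def drift_alerts(baseline_rates, current_rates, tolerance=0.10):
    """Return groups whose flag rate increased by more than `tolerance`."""
    alerts = {}
    for group, current in current_rates.items():
        baseline = baseline_rates.get(group)
        if baseline is not None and current - baseline > tolerance:
            alerts[group] = {"baseline": baseline, "current": current}
    return alerts

baseline = {"group_a": 0.12, "group_b": 0.11}
current = {"group_a": 0.13, "group_b": 0.27}  # group_b's rate has jumped
print(drift_alerts(baseline, current))
```

An alert should feed directly into the investigation, transparent reporting, and redress mechanisms this section describes.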
=== 9. Engaging the Public & Partners ===
Foster public trust through clear communication and meaningful collaboration.

* Provide plain-language explanations of what AI tools do and how they affect people.
* Ensure all outreach is culturally relevant, linguistically accessible, and inclusive of marginalized communities.
* Co-design AI systems with systemically marginalized groups, recognizing that persons with disabilities must be involved in creating the policies and services that affect them.

==== Key Resources: ====

* [https://accessible.canada.ca/creating-accessibility-standards/asc-62-accessible-equitable-artificial-intelligence-systems ASC-6.2 Accessible and Equitable Artificial Intelligence Systems - Accessibility Standards Canada]

=== 10. Designing Training & Communications ===
Create educational materials that are accessible and inclusive.

* Use inclusive and accessible formats such as screen-reader-compatible documents, captioned videos, and translated materials.
* Co-create content with systemically marginalized groups to ensure it is relevant and respectful.
* Go beyond "performative" training. Invest in meaningful education that helps public servants understand and confront systemic racism, ableism, and colonialism.

==== Key Resources: ====

* [https://accessible.canada.ca/creating-accessibility-standards/asc-62-accessible-equitable-artificial-intelligence-systems ASC-6.2 Accessible and Equitable Artificial Intelligence Systems - Accessibility Standards Canada]

=== 11. For All Public Servants: Your Personal Responsibility ===
Regardless of your role, you have a part to play in ensuring the responsible use of AI.

* Seek out training on data literacy, AI ethics, and unconscious bias. Understand the basics so you can ask critical questions.
* If you are asked to work on an AI project, ask "What are the risks of bias in this data?" and "Who might be negatively impacted by this system?"
* Actively consult with systemically marginalized groups and colleagues from different levels, locations, and classifications. Their insights are invaluable for spotting potential issues.

==== Key Resources: ====

* [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/gc-ai-strategy-overview.html AI Strategy for the Public Service]
* [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/principles.html Guiding principles for the use of AI in government]
* [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/guide-use-generative-ai.html Guide on the use of generative artificial intelligence]
__NOTOC__