* [https://www150.statcan.gc.ca/n1/pub/12-586-x/12-586-x2017001-eng.htm Statistics Canada's Quality Assurance Framework]
* [https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/p_principle/ PIPEDA fair information principles - Office of the Privacy Commissioner of Canada]

=== 2. Developing & Updating Policies ===
When creating or changing policies that involve AI, assess its impact on people from the very beginning.

Conduct an Algorithmic Impact Assessment (AIA) to determine the system's risk level. This must include an explicit assessment of the proposed data sources for potential bias.
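
As a minimal illustration of screening a proposed data source, the sketch below compares group shares in the data against reference benchmarks; the group names, figures, and tolerance are invented for illustration and do not come from the AIA tool itself.

<syntaxhighlight lang="python">
# Minimal sketch: screen a proposed data source for representational bias
# by comparing group shares against a reference benchmark (e.g., census data).
# All names and figures here are illustrative placeholders, not real data.

def representation_gaps(source_shares, benchmark_shares, tolerance=0.05):
    """Flag groups whose share in the data source deviates from the
    benchmark by more than `tolerance` (absolute difference)."""
    flags = {}
    for group, benchmark in benchmark_shares.items():
        observed = source_shares.get(group, 0.0)
        gap = observed - benchmark
        if abs(gap) > tolerance:
            flags[group] = round(gap, 3)
    return flags

# Illustrative shares only; a real AIA would cite documented benchmarks.
benchmark = {"women": 0.50, "indigenous": 0.05, "disability": 0.22}
proposed  = {"women": 0.38, "indigenous": 0.01, "disability": 0.09}

print(representation_gaps(proposed, benchmark))
# {'women': -0.12, 'disability': -0.13}
</syntaxhighlight>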

Incorporate foundational legislation like the Accessible Canada Act, the United Nations Declaration on the Rights of Indigenous Peoples Act, and the Employment Equity Act in the policy analysis.

Challenge assumptions in areas like risk scoring and eligibility determination that could lead to discriminatory outcomes.

==== Key Resources: ====

* Algorithmic Impact Assessment (AIA) Tool
* Accessible Canada Act
* United Nations Declaration on the Rights of Indigenous Peoples Act
* Employment Equity Act
* Guide to Peer Review of Automated Decision Systems - Canada.ca

=== 3. Designing Programs & Services ===
Embed fairness directly into the architecture of any AI-powered program.

Ensure meaningful human oversight and provide plain-language notices to users explaining how the AI works and how to challenge a decision.
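
As an illustration only, not an approved template, a plain-language notice might cover what the tool considered, the human review step, and how to seek recourse; all field values below are placeholders.

<syntaxhighlight lang="python">
# Minimal sketch of a plain-language notice for an AI-assisted decision.
# Field names and wording are illustrative, not an approved template.

NOTICE = (
    "This decision about your {service} application was made with help "
    "from an automated tool. The tool looked at: {factors}. "
    "A person reviewed the result before it was sent to you. "
    "If you disagree, you can ask for a review within {days} days by "
    "contacting {contact}."
)

print(NOTICE.format(
    service="benefits",
    factors="your reported income and household size",
    days=30,
    contact="recourse@example.gc.ca",  # placeholder address
))
</syntaxhighlight>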

Include systemically marginalized groups in all phases, from initial design to final testing and implementation.

Audit all AI tools for equity, especially internal systems, to ensure they do not perpetuate bias or barriers.
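
One way to make such an audit concrete is a selection-rate comparison. The sketch below applies the common "four-fifths" heuristic; the threshold, group names, and counts are illustrative assumptions rather than a Canadian legal standard.

<syntaxhighlight lang="python">
# Minimal sketch of an equity audit on a tool's outcomes: compare each
# group's favourable-outcome rate to the most favoured group's rate
# (the "four-fifths" heuristic). Data below is illustrative only.

def adverse_impact_ratios(outcomes_by_group):
    """outcomes_by_group: {group: (favourable_count, total_count)}"""
    rates = {g: fav / tot for g, (fav, tot) in outcomes_by_group.items()}
    top = max(rates.values())
    return {g: round(r / top, 2) for g, r in rates.items()}

audit = adverse_impact_ratios({
    "group_a": (80, 100),   # 80% favourable outcomes
    "group_b": (55, 100),   # 55% favourable outcomes
})
print(audit)                          # {'group_a': 1.0, 'group_b': 0.69}
flagged = {g for g, r in audit.items() if r < 0.8}
print("review needed for:", flagged)  # {'group_b'}
</syntaxhighlight>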

Embed accessibility and bias mitigation throughout design, testing, and implementation.

==== Key Resources: ====

* Directive on Automated Decision-Making
* Policy on Service and Digital
* Accessible and Equitable Artificial Intelligence Systems - Accessibility Standards Canada
* Guide to Peer Review of Automated Decision Systems

=== 4. Procuring Technology & AI Systems ===
Require conformance with accessibility standards in all procurement contracts, including both general ICT standards and AI-specific standards.

Require potential vendors to disclose the sources of their training data, their data-cleaning methods, and the steps they took to mitigate bias in their models.
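
One possible way to operationalize this is a structured disclosure record that can be validated for completeness before award; the field names below are hypothetical, not a prescribed procurement schema.

<syntaxhighlight lang="python">
# Minimal sketch of a structured vendor disclosure record, so bias-related
# answers can be required and checked before contract award. The fields
# are illustrative assumptions, not a prescribed procurement schema.

from dataclasses import dataclass, field

@dataclass
class VendorAIDisclosure:
    training_data_sources: list = field(default_factory=list)
    data_cleaning_methods: str = ""
    bias_mitigation_steps: list = field(default_factory=list)

    def missing_fields(self):
        """List required disclosures the vendor has not provided."""
        gaps = []
        if not self.training_data_sources:
            gaps.append("training_data_sources")
        if not self.data_cleaning_methods:
            gaps.append("data_cleaning_methods")
        if not self.bias_mitigation_steps:
            gaps.append("bias_mitigation_steps")
        return gaps

d = VendorAIDisclosure(training_data_sources=["licensed survey data"])
print(d.missing_fields())  # ['data_cleaning_methods', 'bias_mitigation_steps']
</syntaxhighlight>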

Mandate an external, independent peer review for any high-impact AI system before a contract is finalized and before deployment.

==== Key Resources: ====

* CAN/ASC - EN 301 549:2024 Accessibility requirements for ICT products and services (EN 301 549:2021, IDT) - Accessibility Standards Canada
* ASC-6.2 Accessible and Equitable Artificial Intelligence Systems - Accessibility Standards Canada
* Directive on Automated Decision-Making (see section 6.3.3.6 on GBA Plus)

=== 5. Managing & Supervising Teams ===
As a leader, help build your team's capacity to work with AI ethically and inclusively.

Obtain training on bias, equity, and accessible design principles.

Actively engage employees from systemically marginalized groups to gather feedback on AI tools and processes.

=== 6. Working in Human Resources (HR) ===
Exercise caution to prevent AI from creating discriminatory barriers in recruitment, promotion, or talent management.

Do not use AI in hiring or promotion unless:

* The AI training data has been audited and corrected for biases related to gender, race, disability, and other protected grounds (see the sketch after this list).
* The system has been independently audited for equity impacts in a Canadian context.
* Interfaces are fully bilingual and accessible.

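As one illustration of what "corrected for biases" can involve, the sketch below applies reweighing, a published pre-processing technique (Kamiran & Calders) that balances group and outcome frequencies; the records and group labels are invented.

<syntaxhighlight lang="python">
# Minimal sketch of one common correction technique, "reweighing":
# give each (group, label) combination a weight so that group membership
# and the outcome label become statistically independent in training.
# Records below are illustrative; real audits need domain review too.

from collections import Counter

def reweigh(records):
    """records: list of (group, label) pairs; returns a weight per pair."""
    n = len(records)
    group_counts = Counter(g for g, _ in records)
    label_counts = Counter(y for _, y in records)
    pair_counts = Counter(records)
    return {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (cnt / n)
        for (g, y), cnt in pair_counts.items()
    }

data = [("a", 1)] * 40 + [("a", 0)] * 10 + [("b", 1)] * 10 + [("b", 0)] * 40
for pair, w in sorted(reweigh(data).items()):
    print(pair, round(w, 2))
# Under-represented pairs like ('a', 0) get weights above 1;
# over-represented pairs like ('a', 1) get weights below 1.
</syntaxhighlight>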

Ensure AI-enabled learning or assessment platforms are barrier-free and have been co-designed in meaningful consultation with systemically marginalized groups.

Conduct an Algorithmic Impact Assessment for any system that automates decisions affecting employees' rights or careers.

==== Key Resources: ====

* Artificial intelligence in the hiring process - Canada.ca
* Enhance fairness and reduce bias in the content of assessment tools - Canada.ca
* Employment Equity Act
* Accessible and Equitable Artificial Intelligence Systems - Accessibility Standards Canada
* Algorithmic Impact Assessment (AIA) Tool

=== 7. Supporting Indigenous Rights & Self-Determination ===
Ensure AI systems respect the rights and data sovereignty of Indigenous Peoples.

Align AI systems with the principles of the United Nations Declaration on the Rights of Indigenous Peoples Act and the OCAP® Principles (Ownership, Control, Access, and Possession).

Include Indigenous Peoples, leaders, and networks in the design, procurement, and governance of any AI system that may affect them.

Avoid automated systems that could reinforce or create new systemic inequities for Indigenous Peoples.

==== Key Resources: ====

* The First Nations Principles of OCAP®
* United Nations Declaration on the Rights of Indigenous Peoples Act

=== 8. Monitoring, Evaluating & Auditing ===
Continuously assess the real-world impact of AI systems to ensure they remain fair and effective over time.

Assess AI-related impacts using GBA Plus assessments, program evaluations, and privacy and accessibility audits.

Continuously monitor system outputs for unexpected or inequitable results: if an AI system starts flagging a specific demographic at a higher rate, that requires immediate investigation.
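
A minimal sketch of such a check, assuming hypothetical baseline and current flag rates and an arbitrary alert threshold.

<syntaxhighlight lang="python">
# Minimal sketch of ongoing output monitoring: compare each group's
# flag rate in the current period against a baseline and raise an alert
# when the rate ratio crosses a threshold. Numbers are illustrative.

def disparity_alerts(baseline_rates, current_rates, ratio_threshold=1.25):
    """Return groups whose current flag rate exceeds baseline * threshold."""
    alerts = {}
    for group, base in baseline_rates.items():
        cur = current_rates.get(group, 0.0)
        if base > 0 and cur / base > ratio_threshold:
            alerts[group] = round(cur / base, 2)
    return alerts

baseline = {"group_a": 0.10, "group_b": 0.11}
current  = {"group_a": 0.10, "group_b": 0.19}
print(disparity_alerts(baseline, current))  # {'group_b': 1.73}
</syntaxhighlight>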

Report transparently on AI risks, mitigation efforts, and any updates made to the system.

Establish clear feedback and redress mechanisms so users can challenge an automated decision.

==== Key Resources: ====

* Directive on Automated Decision-Making - Canada.ca (section 6.3.3.6)
* Gender-based Analysis Plus (GBA Plus) - Canada.ca

=== 9. Engaging the Public & Partners ===
Foster public trust through clear communication and meaningful collaboration.

Provide plain-language explanations of what AI tools do and how they impact people.

Ensure all outreach is culturally relevant, linguistically accessible, and inclusive of marginalized communities.

Co-design AI systems with systemically marginalized groups, recognizing that persons with disabilities must be involved in creating policies and services that affect them.

==== Key Resources: ====

* ASC-6.2 Accessible and Equitable Artificial Intelligence Systems - Accessibility Standards Canada

=== 10. Designing Training & Communications ===
Create educational materials that are accessible and inclusive.

Use inclusive and accessible formats like screen-reader-compatible documents, captioned videos, and translated materials.
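
Part of this can be spot-checked automatically. The sketch below scans an HTML page for images missing alt text using only Python's standard library; it supplements rather than replaces testing with assistive technologies.

<syntaxhighlight lang="python">
# Minimal sketch of an automated accessibility spot check: scan an HTML
# document for <img> tags with missing or empty alt text using only the
# standard library. Input HTML below is illustrative.

from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr = dict(attrs)
            if not attr.get("alt"):        # alt absent or empty
                self.missing.append(attr.get("src", "<no src>"))

checker = AltTextChecker()
checker.feed('<p><img src="chart.png"><img src="logo.png" alt="Logo"></p>')
print(checker.missing)  # ['chart.png']
</syntaxhighlight>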

Co-create content with systemically marginalized groups to ensure it is relevant and respectful.

Go beyond "performative" training. Invest in meaningful education that helps public servants understand and confront systemic racism, ableism, and colonialism.

==== Key Resources: ====

* ASC-6.2 Accessible and Equitable Artificial Intelligence Systems - Accessibility Standards Canada

=== 11. For All Public Servants: Your Personal Responsibility ===
Regardless of your role, you have a part to play in ensuring the responsible use of AI.

Seek out training on data literacy, AI ethics, and unconscious bias. Understand the basics so you can ask critical questions.

If you are asked to work on an AI project, ask "What are the risks of bias in this data?" and "Who might be negatively impacted by this system?"

Actively consult with systemically marginalized groups and colleagues from different levels, locations, and classifications. Their insights are invaluable for spotting potential issues.

==== Key Resources: ====

* AI Strategy for the Public Service
* Guiding principles for the use of AI in government
* Guide on the use of generative artificial intelligence