When creating or changing policies that involve AI, assess its impact on people from the very beginning.

* Conduct an Algorithmic Impact Assessment (AIA) to determine the system's risk level. This must include an explicit assessment of the proposed data sources for potential bias.
* Incorporate foundational legislation like the Accessible Canada Act, the United Nations Declaration on the Rights of Indigenous Peoples Act and the Employment Equity Act in the policy analysis.
* Challenge policy assumptions in areas like risk scoring and eligibility determination that could lead to discriminatory outcomes.

==== Key Resources: ====

* [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html Algorithmic Impact Assessment (AIA) Tool]
* [https://laws-lois.justice.gc.ca/eng/acts/a-0.6/ Accessible Canada Act]
* United Nations Declaration on the Rights of Indigenous Peoples Act
* Employment Equity Act
Embed fairness directly into the architecture of any AI-powered program.

* Ensure meaningful human oversight and provide plain-language notices to users explaining how the AI works and how to challenge a decision.
* Include systemically marginalized groups in all phases, from initial design to final testing and implementation.
* Audit all AI tools for equity, especially internal systems, to ensure they do not perpetuate bias and barriers.
* Embed accessibility and bias mitigation throughout design, testing, and implementation.

==== Key Resources: ====

=== 4. Procuring Technology & AI Systems ===

* Require conformance with accessibility standards in all procurement contracts, including both hardware/software (ICT) standards and AI-specific standards.
* Require potential vendors to disclose the sources of their training data, their data-cleaning methods, and the steps they took to mitigate bias in their models.
* Mandate an external, independent peer review for any high-impact AI system before a contract is finalized and before deployment.

==== Key Resources: ====
As a leader, help build your team's capacity to work with AI ethically and inclusively.

* Obtain training on bias, equity, and accessible design principles.
* Actively engage employees from systemically marginalized groups to gather feedback on AI tools and processes.

=== 6. Working in Human Resources (HR) ===
Exercise caution to prevent AI from creating discriminatory barriers in recruitment, promotion, or talent management.

* Do not use AI in hiring or promotion unless:
*# AI training data was audited and corrected for biases related to gender, race, disability, and other protected grounds.
*# The system has been independently audited for equity impacts in a Canadian context.
*# Interfaces are fully bilingual and accessible.
* Ensure AI-enabled learning or assessment platforms are barrier-free and have been co-designed with meaningful consultation from systemically marginalized groups.
* Conduct an Algorithmic Impact Assessment for any system that automates decisions affecting employees' rights or careers.

==== Key Resources: ====

* Employment Equity Act
* Accessible and Equitable Artificial Intelligence Systems - Accessibility Standards Canada
* [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html Algorithmic Impact Assessment (AIA) Tool]

=== 6. Supporting Indigenous Rights & Self-Determination ===
Ensure AI systems respect the rights and data sovereignty of Indigenous Peoples.

* Align AI systems with the principles of the United Nations Declaration on the Rights of Indigenous Peoples Act and the OCAP® Principles (Ownership, Control, Access, and Possession).
* Include Indigenous Peoples, leaders, and networks in the design, procurement, and governance of any AI system that may affect them.
* Avoid automated systems that could reinforce or create new systemic inequities for Indigenous Peoples.

==== Key Resources: ====

* [https://fnigc.ca/ocap-training/ The First Nations Principles of OCAP®]
* [https://www.justice.gc.ca/eng/declaration/index.html United Nations Declaration on the Rights of Indigenous Peoples Act]

=== 7. Monitoring, Evaluating & Auditing ===
Continuously assess the real-world impact of AI systems to ensure they remain fair and effective over time.

* Assess AI-related impacts using GBA Plus assessments, program evaluations, and privacy and accessibility audits.
* Continuously monitor system outputs for unexpected or inequitable results. If an AI system starts flagging a specific demographic at a higher rate, it requires immediate investigation.
* Report transparently on AI risks, mitigation efforts, and any updates made to the system.
* Establish clear feedback and redress mechanisms so users can challenge an automated decision.
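
The "flagging a specific demographic at a higher rate" check can be made concrete as a simple rate comparison. The following is a minimal, illustrative sketch only, not part of any Government of Canada standard: the group labels, the sample decision log, and the 1.25 alert ratio (the inverse of the common "four-fifths" heuristic) are assumptions chosen for the example.

```python
from collections import defaultdict

def flag_rates(records):
    """records: iterable of (group, was_flagged) pairs -> per-group flag rate."""
    flagged = defaultdict(int)
    totals = defaultdict(int)
    for group, was_flagged in records:
        totals[group] += 1
        flagged[group] += bool(was_flagged)
    return {g: flagged[g] / totals[g] for g in totals}

def disparity_alerts(records, max_ratio=1.25):
    """Return groups whose flag rate exceeds the overall flag rate by more
    than max_ratio (1.25 is the inverse of the common four-fifths rule)."""
    records = list(records)
    overall = sum(bool(f) for _, f in records) / len(records)
    rates = flag_rates(records)
    return sorted(g for g, r in rates.items() if overall and r / overall > max_ratio)

# Hypothetical decision log: (demographic group, was the case flagged?)
log = [("A", True), ("A", False), ("A", False), ("A", False),
       ("B", True), ("B", True), ("B", True), ("B", False)]
print(disparity_alerts(log))  # → ['B']  (B is flagged at 3x A's rate)
```

A real monitoring pipeline would add confidence intervals and intersectional breakdowns; the point here is only that "monitor for inequitable results" can be operationalized as a recurring, automated comparison rather than an ad hoc review.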

==== Key Resources: ====
Foster public trust through clear communication and meaningful collaboration.

* Provide plain-language explanations of what AI tools do and how they impact people.
* Ensure all outreach is culturally relevant, linguistically accessible, and inclusive of marginalized communities.
* Co-design AI systems with systemically marginalized groups, recognizing that persons with disabilities must be involved in creating policies and services that affect them.

==== Key Resources: ====

* [https://accessible.canada.ca/creating-accessibility-standards/asc-62-accessible-equitable-artificial-intelligence-systems ASC-6.2 Accessible and Equitable Artificial Intelligence Systems - Accessibility Standards Canada]

=== 9. Designing Training & Communications ===
Create educational materials that are accessible and inclusive.

* Use inclusive and accessible formats like screen-reader-compatible documents, captioned videos, and translated materials.
* Co-create content with systemically marginalized groups to ensure it is relevant and respectful.
* Go beyond "performative" training. Invest in meaningful education that helps public servants understand and confront systemic racism, ableism, and colonialism.

==== Key Resources: ====

* [https://accessible.canada.ca/creating-accessibility-standards/asc-62-accessible-equitable-artificial-intelligence-systems ASC-6.2 Accessible and Equitable Artificial Intelligence Systems - Accessibility Standards Canada]

=== 10. For All Public Servants: Your Personal Responsibility ===
Regardless of your role, you have a part to play in ensuring the responsible use of AI.

* Seek out training on data literacy, AI ethics, and unconscious bias. Understand the basics so you can ask critical questions.
* If you are asked to work on an AI project, ask "What are the risks of bias in this data?" and "Who might be negatively impacted by this system?"
* Actively consult with systemically marginalized groups and colleagues from different levels, locations and classifications. Their insights are invaluable for spotting potential issues.

==== Key Resources: ====

* [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/gc-ai-strategy-overview.html AI Strategy for the Public Service]
* [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/principles.html Guiding principles for the use of AI in government]
* [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/guide-use-generative-ai.html Guide on the use of generative artificial intelligence]