Artificial Intelligence Tip Sheet
This guide outlines key responsibilities for public servants using AI in any policy, program, or service. It is grounded in federal directives, standards, and legislation, with a focus on identifying and removing bias and barriers.
Core Principles for Responsible AI
Every public servant involved with an AI project must understand and apply these foundational principles:
- Human Oversight: A human must always have the final say in decisions that impact a person's rights or well-being.
- Fairness by Design: Equity is not an add-on or a nice-to-have. Legal obligations require users to proactively identify, challenge, and mitigate bias at every stage.
- Data Integrity: The quality and fairness of AI systems depend entirely on the quality and fairness of the data they are trained on.
- Accountability & Redress: Clear mechanisms must exist for people to challenge AI-driven decisions and seek recourse.
1. Validating & Managing Data
AI systems learn from data. If this data reflects historical or societal bias, the AI will learn, perpetuate, and even amplify that bias.
- Audit Your Data Sources: Before any development, rigorously analyze your datasets for historical biases. For example, if historical data shows a certain demographic was disproportionately denied a service, using that data without correction will teach the AI to continue that discriminatory pattern (a short code sketch after this list shows one way to start such a check).
- Ensure Data Representativeness: Ensure data reflects the diversity of people in Canada. If there are gaps (e.g., underrepresentation of Northern communities or persons with disabilities), develop a strategy to address them before proceeding.
- Practice Data Minimization: Only collect and use the data that is absolutely necessary for the system’s purpose. Every extra data point increases the risk of introducing bias and privacy violations.
- Establish Clear Data Governance: Appoint clear ownership and accountability for the data's quality, lifecycle, and ethical use.
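As a starting point for the audit and representativeness checks above, here is a minimal Python sketch using pandas. It assumes a hypothetical dataset "service_applications.csv" with an "approved" outcome column and a "region" demographic column; the file name, column names, the 10-point gap threshold, and the reference population shares are illustrative assumptions only, not prescribed values or an official method.

import pandas as pd

# Hypothetical dataset: one row per application, with an outcome and a demographic attribute.
df = pd.read_csv("service_applications.csv")

# Audit for historical bias: compare each group's approval rate to the overall rate.
overall_rate = df["approved"].mean()
group_rates = df.groupby("region")["approved"].mean()
flagged = group_rates[(group_rates - overall_rate).abs() > 0.10]  # illustrative 10-point gap
print("Overall approval rate:", round(overall_rate, 3))
print("Groups needing review:")
print(flagged)

# Representativeness check: compare each group's share of the dataset to a reference share.
reference_shares = {"North": 0.03, "Prairies": 0.18, "Other": 0.79}  # hypothetical figures
dataset_shares = df["region"].value_counts(normalize=True)
for group, expected in reference_shares.items():
    observed = dataset_shares.get(group, 0.0)
    print(f"{group}: dataset {observed:.1%} vs reference population {expected:.1%}")

Any gaps this surfaces are prompts for human review and a data strategy, not automatic conclusions of bias.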