== '''Overview''' ==
=== Directive on Automated Decision-Making ===

Released in 2019, the [https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592 Directive on Automated Decision-Making] seeks to ensure transparency, accountability, and procedural fairness in the federal government's use of automated decision systems. The directive covers systems used to make or support administrative decisions affecting external clients (e.g., citizens and businesses). It applies to systems developed or procured as of April 1, 2020.

Federal institutions subject to the directive are required to complete an [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html Algorithmic Impact Assessment (AIA)] and publish it on the Open Government Portal. The AIA tool is a questionnaire that determines the impact level of an automated decision system based on factors related to the system's design, algorithm, decision, impact, and data.
=== Third review ===

Treasury Board of Canada Secretariat (TBS) is completing the third review of the directive. The review takes stock of the current state of the directive and identifies risks and challenges to the government’s commitment to responsible artificial intelligence (AI) in the federal public sector. These issues highlight critical gaps that limit the directive’s relevance and effectiveness in supporting transparency, accountability, and fairness in automated decision-making. They also identify problems with terminology, feasibility, and coherence with other federal policy instruments.
* What We Heard Report (phase 1 of stakeholder engagement) ''[link to be added]''
* Report on the third review of the Directive on Automated Decision-Making ''[link to be added]''
=== Contact ===
Please submit any questions to [mailto:Ai-ia@tbs-sct.gc.ca ai-ia@tbs-sct.gc.ca].