Third Review of the Directive on Automated Decision-Making

== '''About the Third Review''' ==
  
=== Background ===
Treasury Board of Canada Secretariat (TBS) is completing the third review of the [https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592 Directive on Automated Decision-Making]. The review takes stock of the current state of the directive and identifies risks and challenges to the government’s commitment to responsible artificial intelligence (AI) in the federal public service. It provides an analysis of gaps that may limit the directive’s relevance and effectiveness in supporting transparency, accountability, and fairness in automated decision-making. The review also highlights problems with terminology, feasibility, and coherence with other federal policy instruments.
  
Periodic reviews are not intended to be exhaustive. They seek to adapt the directive to trends in the regulation and use of AI technologies in Canada and globally. Reviews also allow TBS to gradually refine the text of the instrument to support interpretation and facilitate compliance across government. The first review (2020-21) sought to clarify and reinforce existing requirements, update policy references, and strengthen transparency and quality assurance measures. The second review (2021-22) informed the development of guidelines supporting the interpretation of the directive.
  
=== Policy recommendations ===
As part of the third review, TBS is proposing 12 policy recommendations and accompanying amendments to the directive:
  
* Expand the scope to cover internal services.
* Clarify that the scope includes systems that make assessments related to administrative decisions.
* Replace the 6-month review interval with a biennial review and enable the Chief Information Officer of Canada to request off-cycle reviews.
* Replace references to Canadians with more encompassing language such as clients and Canadian society.
* Introduce measures supporting the tracing, protection, and lawful handling of data used and generated by a system.
* Expand the bias testing requirement to cover models.
* Mandate the completion of Gender-based Analysis Plus during the development or modification of a system.
* Establish explanation criteria in support of the explanation requirement and integrate them into the [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html Algorithmic Impact Assessment (AIA)].
* Expand the AIA to include questions concerning an institution's reasons for pursuing automation and potential impacts on persons with disabilities.
* Mandate the publication of a complete or summarized peer review prior to system production.
* Align the contingency requirement with relevant terminology established in Treasury Board security policy.
* Mandate the release of an AIA prior to the production of a system and ensure it is reviewed on a scheduled basis.
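As an illustration of what the expanded bias testing requirement could involve once applied to models, the sketch below computes per-group selection rates for a model's binary decisions and the largest gap between them (a demographic parity difference). The sample data, group labels, and choice of metric are assumptions for illustration only; the directive does not prescribe a particular bias metric or threshold.

```python
# Minimal sketch of model-level bias testing (illustrative only; the
# metric, data, and groups below are assumptions, not directive text).

def selection_rates(predictions, groups):
    """Return the rate of positive (1) predictions for each group."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical binary decisions for applicants from two groups.
preds = [1, 1, 0, 1, 0, 1, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, grps)
print(f"demographic parity gap: {gap:.2f}")  # group a: 0.75, group b: 0.25
```

In practice, an institution might run checks like this on held-out test data during development and again after any modification of the system, alongside the other assessments the directive requires.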
  
=== Expected outcomes ===
The recommendations would reinforce transparency and accountability; strengthen protections against discrimination and harm; ensure that automated decisions impacting the rights or interests of individuals – including federal public servants – are fair and inclusive; clarify requirements; and support operational needs.
  
== '''Stakeholder Engagement''' ==
TBS has engaged with a broad range of stakeholders during the third review, including in federal institutions, academia, civil society, governments in other jurisdictions, and international organizations. The goal of stakeholder engagement is to validate the policy recommendations and provisional amendments proposed in the review and identify additional issues that merit consideration as part of this exercise or in future reviews.
 
TBS divided stakeholder engagement into two phases. The first phase ran from April to July 2022 and drew mainly on the expertise of federal partners operating automation projects, along with subject matter experts in academia and civil society and counterparts in other governments. The second phase ran from September to November 2022 and involved outreach to federal AI policy and data communities, agents of Parliament, bargaining agents, industry representatives, and international organizations. The What We Heard Reports linked below summarize engagement themes and outcomes.
=== Reference materials ===
 
* Winter 2023 - [[:en:images/3/33/3rd_review_of_the_Directive_on_Automated_Decision-Making_-_2nd_What_We_Heard_Report_(EN).pdf|What We Heard Report (phase 2 of stakeholder engagement)]]
* Fall 2022 - [[:en:images/archive/f/f4/20230125001443!DADM_3rd_Review_-_Phase_2_Consultation_Deck_(EN).pdf|Phase 2 consultation: key issues, policy recommendations, and provisional amendments]]
* Summer 2022 - [[:en:images/3/32/WWHR_-_Phase_1_of_Stakeholder_Engagement_on_the_3rd_Review_of_the_DADM_(EN).pdf|What We Heard Report (phase 1 of stakeholder engagement)]]
* Spring 2022 - [[:en:images/d/d6/DADM_3rd_review_-_Phase_1_consultation_-_Key_issues,_policy_recommendations,_and_amendments_(v2).pdf|Phase 1 consultation: key issues, policy recommendations, and provisional amendments]]
* Spring 2022 - [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4087546 Report on the 3rd review of the Directive on Automated Decision-Making (phase 1)]
 
=== Contact ===
Please submit any questions to [mailto:ai-ia@tbs-sct.gc.ca ai-ia@tbs-sct.gc.ca].
 
[[fr:Troisième examen de la Directive sur la prise de décisions automatisée]]

Latest revision as of 13:23, 25 April 2024
