Third Review of the Directive on Automated Decision-Making
== '''About the Third Review''' ==

=== Background ===
Treasury Board of Canada Secretariat (TBS) is completing the third review of the [https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592 Directive on Automated Decision-Making]. The review takes stock of the current state of the directive and identifies risks and challenges to the government’s commitment to responsible artificial intelligence (AI) in the federal public service. It provides an analysis of gaps that may limit the directive’s relevance and effectiveness in supporting transparency, accountability, and fairness in automated decision-making. The review also highlights problems with terminology, feasibility, and coherence with other federal policy instruments.
Periodic reviews are not intended to be exhaustive. They seek to adapt the directive to trends in the regulation and use of AI technologies in Canada and globally. Reviews also allow TBS to gradually refine the text of the instrument to support interpretation and facilitate compliance across government. The first review (2020-21) sought to clarify and reinforce existing requirements, update policy references, and strengthen transparency and quality assurance measures. The second review (2021-22) informed the development of guidelines supporting the interpretation of the directive.
  
 
=== Policy recommendations ===
As part of the third review, TBS is proposing 12 policy recommendations and accompanying amendments to the directive:

* Expand the scope to cover internal services.
* Clarify that the scope includes systems which make assessments related to administrative decisions.
* Replace the 6-month review interval with a biennial review and enable the Chief Information Officer of Canada to request off-cycle reviews.
* Replace references to Canadians with more encompassing language such as clients and Canadian society.
* Introduce measures supporting the tracing, protection, and lawful handling of data used and generated by a system.
* Expand the bias testing requirement to cover models.
* Mandate the completion of Gender Based Analysis Plus during the development or modification of a system.
* Establish explanation criteria in support of the explanation requirement and integrate them into the [https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html Algorithmic Impact Assessment (AIA)].
* Expand the AIA to include questions concerning an institution's reasons for pursuing automation and potential impacts on persons with disabilities.
* Mandate the publication of a complete or summarized peer review prior to system production.
* Align the contingency requirement with relevant terminology established in Treasury Board security policy.
* Mandate the release of an AIA prior to the production of a system and ensure it is reviewed on a scheduled basis.
  
 
=== Expected outcomes ===
The recommendations would reinforce transparency and accountability; strengthen protections against discrimination and harm; ensure that automated decisions impacting the rights or interests of individuals – including federal public servants – are fair and inclusive; clarify requirements; and support operational needs.
 
 
== '''Stakeholder Engagement''' ==

TBS has engaged with a broad range of stakeholders during the third review, including in federal institutions, academia, civil society, governments in other jurisdictions, and international organizations. The goal of stakeholder engagement is to validate the policy recommendations and provisional amendments proposed in the review and identify additional issues that merit consideration as part of this exercise or in future reviews.

TBS divided stakeholder engagement into two phases. The second phase ran between September and November 2022. It involved outreach to federal AI policy and data communities; agents of parliament; bargaining agents; industry representatives; and international organizations. The first phase ran between April and July 2022, and mainly drew on the expertise of federal partners operating automation projects, subject matter experts in academia and civil society, and counterparts in other governments. The What We Heard Reports linked below provide a summary of engagement themes and outcomes.

=== Reference materials ===
 
* Winter 2023 - [[:en:images/3/33/3rd_review_of_the_Directive_on_Automated_Decision-Making_-_2nd_What_We_Heard_Report_(EN).pdf|What We Heard Report (phase 2 of stakeholder engagement)]]
* Fall 2022 - [[:en:images/archive/f/f4/20230125001443!DADM_3rd_Review_-_Phase_2_Consultation_Deck_(EN).pdf|Phase 2 consultation: key issues, policy recommendations, and provisional amendments]]
* Summer 2022 - [[:en:images/3/32/WWHR_-_Phase_1_of_Stakeholder_Engagement_on_the_3rd_Review_of_the_DADM_(EN).pdf|What We Heard Report (phase 1 of stakeholder engagement)]]
* Spring 2022 - [[:en:images/d/d6/DADM_3rd_review_-_Phase_1_consultation_-_Key_issues,_policy_recommendations,_and_amendments_(v2).pdf|Phase 1 consultation: key issues, policy recommendations, and provisional amendments]]
* Spring 2022 - [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4087546 Report on the 3rd review of the Directive on Automated Decision-Making (phase 1)]
=== Contact ===

Please submit any questions to ai-ia@tbs-sct.gc.ca