== Introduction ==
 
[[File:5gp.png|thumb|426x426px|alt=|'''Figure 1: Five main guiding principles for Responsible AI''']]Unlike traditional automated decision systems, ML-based automated decision systems do not follow explicit rules authored by humans <ref>V. Fomins, "The Shift from Traditional Computing Systems to Artificial Intelligence and the Implications for Bias," ''Smart Technologies and Fundamental Rights,'' pp. 316-333, 2020.</ref>. ML models are not inherently objective: data scientists train models by feeding them a data set of training examples, and the human involvement in providing and curating this data can make a model's predictions susceptible to bias. As a result, ML-based automated decision systems have far-reaching implications for society, ranging from new questions about legal responsibility for mistakes committed by these systems to retraining for workers displaced by these technologies. There is a need for a framework to ensure that accountable and transparent decisions are made, supporting ethical practices.
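One way this susceptibility to bias can be made concrete is with a group-fairness metric. The sketch below computes the demographic parity difference — the gap in positive-decision rates between two groups of applicants. The function name, the groups, and the decision data are all hypothetical, chosen only to illustrate the idea; it is not part of any framework described in this article.

```python
# Illustrative sketch: demographic parity difference, a simple
# group-fairness metric. All names and data here are hypothetical.

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Gap in positive-prediction rates between two groups (0 = parity)."""
    def positive_rate(g):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(members) / len(members)

    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical binary decisions (1 = approved) for applicants in two groups.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is approved at 0.75, group B at 0.25, so the gap is 0.5.
print(demographic_parity_difference(preds, groups, "A", "B"))  # 0.5
```

A gap near zero suggests the two groups receive positive decisions at similar rates; a large gap is one signal that biased training data may have influenced the model.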
    
=== Responsible AI ===
 
Responsible AI is a governance framework that documents how a specific organization is addressing the challenges around AI from both an ethical and a legal point of view.
 
In an attempt to ensure Responsible AI practices, organizations have identified guiding principles for the development of AI applications and solutions. According to the research “The global landscape of AI ethics guidelines” <ref name=":2">A. Jobin, M. Ienca and E. Vayena, "The global landscape of AI ethics guidelines," ''Nature Machine Intelligence,'' p. 389–399, 2019</ref>, some principles are mentioned more often than others. However, Gartner has concluded that a global convergence is emerging around five ethical principles <ref>S. Sicular, E. Brethenoux, F. Buytendijk and J. Hare, "AI Ethics: Use 5 Common Guidelines as Your Starting Point," Gartner, 11 July 2019. [Online]. Available: <nowiki>https://www.gartner.com/en/documents/3947359/ai-ethics-use-5-common-guidelines-as-your-starting-point</nowiki>. [Accessed 23 August 2021].</ref>:
    
* Human-centric and socially beneficial
 
The definition of the various guiding principles is included in <ref name=":2" />.
 
=== The Treasury Board Directive on Automated Decision-Making ===
The Treasury Board Directive on Automated Decision-Making is a policy instrument that promotes the ethical and responsible use of AI. It outlines the responsibilities of federal institutions using AI-based automated decision systems.
==== The Algorithmic Impact Assessment Process ====
According to the Directive, an Algorithmic Impact Assessment (AIA) must be conducted before any automated decision system goes into production, to assess the risks of the system. The assessment must be updated at regular intervals, and when there is a change to the functionality or the scope of the automated decision system.
The AIA tool is a questionnaire that determines the impact level of an automated decision system. It is composed of 48 risk questions and 33 mitigation questions. Assessment scores are based on many factors, including system design, algorithm, decision type, impact, and data. The tool assigns an impact level between Level I (presenting the least risk) and Level IV (presenting the greatest risk). Based on the impact level, which is published, the Directive may impose additional requirements. The process is described in Figure 2.
[[File:TBS Aia.png|center|thumb|582x582px|'''Figure 2: The Algorithmic Impact Assessment Process''']]
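The scoring step of the process above can be sketched as a simple threshold function that maps an assessment score to one of the four impact levels. The 25/50/75 percentage cut-offs below are illustrative placeholders only — they are not the actual thresholds used by the AIA tool:

```python
# Illustrative sketch: mapping an AIA questionnaire score (expressed as a
# percentage of the maximum possible score) to an impact level.
# The 25/50/75 thresholds are hypothetical placeholders, not the
# Directive's actual cut-offs.

def impact_level(score: float, max_score: float) -> str:
    """Return impact level I (least risk) through IV (greatest risk)."""
    pct = 100.0 * score / max_score
    if pct <= 25:
        return "I"
    elif pct <= 50:
        return "II"
    elif pct <= 75:
        return "III"
    return "IV"

# A hypothetical system scoring 30 out of a possible 100 points.
print(impact_level(30, 100))  # II
```

The published impact level then determines which additional requirements, if any, the Directive imposes on the system.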
      
== Bias Detection and Mitigation ==
 