=== Responsible AI ===
 
Responsible AI is a governance framework that documents how a specific organization is addressing the challenges around AI from both an ethical and a legal point of view.
 
[[File:5gp.png|thumb|364x364px|alt=|'''Figure 1: Five main guiding principles for Responsible AI''']]In an attempt to ensure Responsible AI practices, organizations have identified guiding principles for the development of AI applications and solutions. According to the study “The global landscape of AI ethics guidelines”,<ref>A. Jobin, M. Ienca and E. Vayena, "The global landscape of AI ethics guidelines," ''Nature Machine Intelligence,'' pp. 389–399, 2019.</ref> some principles are mentioned more often than others. However, Gartner has concluded that a global convergence is emerging around five ethical principles:<ref>S. Sicular, E. Brethenoux, F. Buytendijk and J. Hare, "AI Ethics: Use 5 Common Guidelines as Your Starting Point," Gartner, 11 July 2019. [Online]. Available: <nowiki>https://www.gartner.com/en/documents/3947359/ai-ethics-use-5-common-guidelines-as-your-starting-point</nowiki>. [Accessed 23 August 2021].</ref>
    
* Human-centric and socially beneficial
 
==== The Algorithmic Impact Assessment Process ====
 
According to the Directive, an Algorithmic Impact Assessment (AIA) must be conducted before the production of any automated decision system to assess the risks of the system. The assessment must be updated at regular intervals and whenever there is a change to the functionality or the scope of the automated decision system.
[[File:TBS Aia.png|alt=The Algorithmic Impact Assessment Process|thumb|450x450px|'''Figure 2: The Algorithmic Impact Assessment Process''']]
 
The AIA tool is a questionnaire that determines the impact level of an automated decision system. It is composed of 48 risk and 33 mitigation questions. Assessment scores are based on many factors, including system design, algorithm, decision type, impact, and data. The tool produces an impact level between I (presenting the least risk) and IV (presenting the greatest risk). Based on the impact level (which will be published), the Directive may impose additional requirements. The process is described in Figure 2.
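As an illustration only, a questionnaire-based scoring scheme of this kind could be sketched as follows. The normalization, the mitigation adjustment, and the thresholds separating levels I through IV are hypothetical placeholders, not the values used by the actual TBS AIA tool.

<syntaxhighlight lang="python">
# Illustrative sketch of a questionnaire-based impact scoring routine.
# All thresholds and the mitigation adjustment are assumptions made for
# this example; they are not the values used by the official AIA tool.

def impact_level(risk_score: float, max_risk_score: float,
                 mitigation_score: float, max_mitigation_score: float) -> str:
    """Map raw risk/mitigation questionnaire scores to an impact level I-IV."""
    risk_pct = 100.0 * risk_score / max_risk_score

    # Assumption: strong mitigation answers lower the effective risk score.
    if mitigation_score >= 0.8 * max_mitigation_score:
        risk_pct *= 0.85  # illustrative reduction factor

    # Assumption: the normalized score is banded into the four levels.
    if risk_pct <= 25:
        return "I"    # least risk
    if risk_pct <= 50:
        return "II"
    if risk_pct <= 75:
        return "III"
    return "IV"       # greatest risk


print(impact_level(risk_score=62, max_risk_score=120,
                   mitigation_score=40, max_mitigation_score=45))  # -> II
</syntaxhighlight>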
       
== Bias Detection and Mitigation ==
 
To comply with the Treasury Board Directive on Automated Decision-Making, DFO is working on establishing an internal process, as part of the Results Fund project for 2021–2022, to detect and mitigate potential bias in ML-based automated decision systems.
    
=== Bias Detection and Mitigation Process ===
 
The process of bias detection and mitigation depends on identifying the context within which bias is to be assessed. Given the breadth of sources from which bias can originate, exhaustively identifying the sources relevant to a particular system and quantifying their impacts can be impractical. As such, it is recommended to instead '''view bias through the lens of harms that can be induced by the system'''.<ref>S. Bird, M. Dudík, R. Edgar, B. Horn, R. Lutz, V. Milan, M. Sameki, H. Wallach and K. Walker, "Fairlearn: A toolkit for assessing and improving fairness in AI," Microsoft, May 2020. [Online]. Available: <nowiki>https://www.microsoft.com/en-us/research/publication/fairlearn-a-toolkit-for-assessing-and-improving-fairness-in-ai/</nowiki>. [Accessed 30 November 2021].</ref> Common types of harm can be represented through formal mathematical definitions of fairness in decision systems. These definitions provide the foundation for the quantitative assessment of fairness as an indicator of bias in systems.
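To make the notion of a formal fairness definition concrete, the sketch below implements two commonly used criteria from first principles: the demographic parity difference (a representation-based measure) and an equalized-odds-style gap in error rates. The predictions and group labels are hypothetical examples.

<syntaxhighlight lang="python">
# Simplified, from-scratch versions of two common fairness metrics.
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate P(y_pred = 1 | group) across groups."""
    rates = [np.mean(y_pred[groups == g]) for g in np.unique(groups)]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, groups):
    """Largest gap in false positive or false negative rate across groups."""
    fprs, fnrs = [], []
    for g in np.unique(groups):
        m = groups == g
        fprs.append(np.mean(y_pred[m][y_true[m] == 0] == 1))
        fnrs.append(np.mean(y_pred[m][y_true[m] == 1] == 0))
    return max(max(fprs) - min(fprs), max(fnrs) - min(fnrs))

# Hypothetical predictions for two groups, for illustration only.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_difference(y_pred, groups))      # selection-rate gap
print(equalized_odds_difference(y_true, y_pred, groups))  # error-rate gap
</syntaxhighlight>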
 
Harms are typically considered in the context of error rates and/or rates of representation. Differences in error rates across different groups of people result in disproportionate system performance, leading to unjust consequences or reduced benefits for certain groups. Differences in rates of representation, even in the absence of differences in error rates, can lead to unfair distributions of benefits or penalties across groups of people. The types of harm applicable to a particular system depend heavily on its intended usage. A flow chart to assist in identifying the fairness metric most relevant to a system is provided in Figure 3.
[[File:FairnessTree.png|center|thumb|679x679px|'''Figure 3: Fairness Metric Selection Flow Chart.''' <ref>Aequitas - The Bias Report (dssg.io)</ref>]]
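Because the harms framing above comes from the Fairlearn toolkit, a disaggregated assessment along these lines might look like the following sketch, which reports error rates and selection rates per group together with the largest between-group gaps. The data and the <code>gender</code> column are hypothetical placeholders for a system's actual predictions and protected attribute.

<syntaxhighlight lang="python">
# Sketch of a disaggregated fairness assessment using Fairlearn.
import pandas as pd
from fairlearn.metrics import (MetricFrame, false_positive_rate,
                               false_negative_rate, selection_rate)

# Hypothetical labels, predictions and protected attribute.
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "y_true": [1, 0, 1, 0, 1, 1, 0, 0],
    "y_pred": [1, 0, 1, 1, 0, 1, 0, 1],
})

mf = MetricFrame(
    metrics={
        "false positive rate": false_positive_rate,  # error-rate harms
        "false negative rate": false_negative_rate,
        "selection rate": selection_rate,            # representation harms
    },
    y_true=df["y_true"],
    y_pred=df["y_pred"],
    sensitive_features=df["gender"],
)

print(mf.by_group)      # metric values for each gender category
print(mf.difference())  # largest between-group gap for each metric
</syntaxhighlight>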
 
Having laid out the potential harms of the system, the next step is to identify protected attributes. In the context of bias detection and mitigation, a protected attribute refers to a categorical attribute for which there are concerns of bias across the attribute categories. Common examples of protected attributes are race and gender. Relevant protected attributes for the application of the system must be identified in order to investigate disproportionate impacts across the attribute categories.
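As a minimal illustration of this step, candidate protected attributes can first be tabulated against system outcomes to surface differences in representation before any formal fairness metric is computed. The data frame and the <code>gender</code> and <code>decision</code> columns below are hypothetical.

<syntaxhighlight lang="python">
# Sketch: inspecting a candidate protected attribute for representation
# differences across its categories. Data and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M", "F", "M"],
    "decision": [1, 1, 0, 1, 0, 1, 1, 1, 0, 0],
})

# Number of records and rate of favourable decisions per attribute category.
summary = df.groupby("gender")["decision"].agg(records="size",
                                               favourable_rate="mean")
summary["share_of_records"] = summary["records"] / len(df)
print(summary)
</syntaxhighlight>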
  
