To support these goals, the Office of the Chief Data Steward (OCDS) is developing a data ethics framework that will provide guidance on the ethical handling of data and the responsible use of Artificial Intelligence (AI). The guidance material on the responsible use of AI addresses six major themes identified as pertinent to DFO projects using AI: Privacy and Security, Transparency, Accountability, Methodology and Data Quality, Fairness, and Explainability. While many of these themes overlap strongly with the domain of data ethics, the theme of Fairness covers many ethical concerns unique to AI because of the impacts that bias can have on AI models.

Supported by the 2021–2022 Results Fund, the OCDS and IMTS are prototyping automated decision systems based on the outcome of the AI pilot project. The effort includes defining an internal process to detect and mitigate bias as a potential risk of ML-based automated decision systems. A case study applies this process to assess and mitigate bias in a predictive model for detecting vessels’ fishing behavior. The process defined through this work and the results of the field study will contribute to the guidance material that will eventually form the responsible AI component of the data ethics framework.

== Introduction ==

=== Responsible AI ===
Responsible AI is a governance framework that documents how a specific organization addresses the challenges around AI from both an ethical and a legal point of view.

Through a study of established responsible AI frameworks from other organizations and an inspection of DFO use cases in which AI is employed, a set of responsible AI themes has been identified for a DFO framework:
* Privacy and Security
* Transparency
* Accountability
* Methodology and Data Quality
* Fairness
* Explainability

Each theme covers a set of high-level guidelines defining the goals set out by the framework. These guidelines are supported by concrete processes that provide the specific guidance required to achieve the goals in practice. To support the guidelines established under the responsible AI theme of Fairness, we have developed a process for the detection and mitigation of bias in machine learning models.

== Bias Detection and Mitigation Process ==
To comply with the Treasury Board Directive on Automated Decision-Making, DFO is establishing an internal process, as part of the 2021–2022 Results Fund project, to detect and mitigate potential bias in ML-based automated decision systems.

The process of bias detection and mitigation depends on identifying the context within which bias is to be assessed. Given the breadth of sources from which bias can originate, exhaustively identifying the sources relevant to a particular system and quantifying their impacts can be impractical. It is therefore recommended to instead '''view bias through the lens of harms that can be induced by the system''' <ref name=":0">S. Bird, M. Dudík, R. Edgar, B. Horn, R. Lutz, V. Milan, M. Sameki, H. Wallach and K. Walker, "Fairlearn: A toolkit for assessing and improving fairness in AI," Microsoft, May 2020. [Online]. Available: <nowiki>https://www.microsoft.com/en-us/research/publication/fairlearn-a-toolkit-for-assessing-and-improving-fairness-in-ai/</nowiki>. [Accessed 30 November 2021].</ref>. Common types of harm can be represented through formal mathematical definitions of fairness in decision systems. These definitions provide the foundation for the quantitative assessment of fairness as an indicator of bias in systems.
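
For illustration, two of the most commonly used formal definitions are demographic parity and equalized odds; these are standard definitions from the fairness literature rather than requirements specific to the DFO process. For a binary classifier with prediction <math>\hat{Y}</math>, true label <math>Y</math>, and protected attribute <math>A</math> taking values <math>a</math> and <math>b</math>, demographic parity requires equal rates of positive prediction across groups,

<math>P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b),</math>

while equalized odds requires equal error behavior conditional on the true label,

<math>P(\hat{Y} = 1 \mid Y = y, A = a) = P(\hat{Y} = 1 \mid Y = y, A = b), \quad y \in \{0, 1\}.</math>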
    
Harms are typically considered in the context of error rates and/or rates of representation. Differences in error rates across different groups of people produce disproportionate system performance, leading to unjust consequences or reduced benefits for certain groups. Differences in rates of representation, even in the absence of differences in error rates, can lead to unfair distributions of benefits or penalties across groups of people. The types of harm applicable to a particular system depend heavily on the intended usage of the system. A flow chart to assist in identifying the fairness metric most relevant to a system is provided in Figure 3.

[[File:FairnessTree.png|center|thumb|679x679px|'''Figure 3: Fairness Metric Selection Flow Chart.''' <ref>http://www.datasciencepublicpolicy.org/our-work/tools-guides/aequitas/</ref>]]

Having laid out the potential harms of the system, the next step is to identify protected attributes. In the context of bias detection and mitigation, a protected attribute is a categorical attribute for which there are concerns of bias across the attribute categories; common examples are race and gender. The protected attributes relevant to the application of the system must be identified in order to investigate disproportionate impacts across the attribute categories.
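
To make this assessment concrete, the sketch below shows one way to disaggregate rates of representation and rates of error across the categories of a protected attribute using a recent version of the open-source Fairlearn toolkit cited above. The labels, predictions, and group values are hypothetical placeholders rather than DFO data, and the three metrics are examples only; the metrics actually assessed should follow the selection logic of Figure 3.

<syntaxhighlight lang="python">
# Sketch: disaggregating representation and error rates by a protected attribute.
# All data below are hypothetical placeholders, not DFO data.
import numpy as np
from fairlearn.metrics import (
    MetricFrame,
    selection_rate,
    false_positive_rate,
    false_negative_rate,
)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                 # ground-truth labels
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])                 # model predictions
group = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])  # protected attribute

# Compute each metric separately for every category of the protected attribute.
mf = MetricFrame(
    metrics={
        "selection_rate": selection_rate,            # rate of representation
        "false_positive_rate": false_positive_rate,  # error rate on true negatives
        "false_negative_rate": false_negative_rate,  # error rate on true positives
    },
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)

print(mf.by_group)      # per-group metric values
print(mf.difference())  # largest between-group gap for each metric
</syntaxhighlight>

The per-group table exposes disproportionate performance directly, while the between-group differences provide single summary values that can be compared against a tolerance defined for the system.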

'''False positive (negative) rate''': The percentage of negative (positive) instances incorrectly classified as positive (negative) in a sample of instances to which binary classification is applied.
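
As a small worked illustration of this definition, the two rates can be computed directly from a set of binary labels and predictions (the values below are hypothetical):

<syntaxhighlight lang="python">
# Sketch: computing false positive and false negative rates from hypothetical binary data.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # ground-truth labels (4 positives, 4 negatives)
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])  # model predictions

fpr = np.mean(y_pred[y_true == 0] == 1)  # 2 of 4 negatives predicted positive -> 0.50
fnr = np.mean(y_pred[y_true == 1] == 0)  # 1 of 4 positives predicted negative -> 0.25

print(f"False positive rate = {fpr:.2f}, false negative rate = {fnr:.2f}")
</syntaxhighlight>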

'''Protected attribute''': An attribute that partitions a population into groups across which fairness is expected. Examples of common protected attributes are gender and race.
    
=== Results of Bias Assessment ===