There is a growing number of technical bias detection and mitigation tools that can supplement the AI practitioner’s capacity to avoid and mitigate AI bias. As part of the 2021-2022 Result fund project, DFO has explored the following tools, which AI practitioners can use to detect and mitigate bias.
==== Microsoft’s FairLearn ====
An open-source toolkit from Microsoft that allows AI practitioners to assess and improve the fairness of their AI systems. To optimize the trade-offs between fairness and model performance, the toolkit includes two components: an interactive visualization dashboard and bias mitigation algorithms [2].
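For illustration only, the sketch below shows how the assessment side of the toolkit might be used to break fairness metrics down by group. The data, labels, groups and classifier are synthetic placeholders rather than DFO code; only the <code>MetricFrame</code> and <code>false_positive_rate</code> calls come from the FairLearn API.

<syntaxhighlight lang="python">
# Minimal, illustrative sketch of FairLearn's assessment component.
# All data below is synthetic; the classifier is a placeholder model.
import numpy as np
from fairlearn.metrics import MetricFrame, false_positive_rate
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))              # placeholder feature matrix
y = rng.integers(0, 2, size=200)           # placeholder labels (1 = positive)
group = rng.choice(["A", "B"], size=200)   # placeholder protected attribute

clf = LogisticRegression().fit(X, y)

# Break accuracy and false positive rate down by group.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "fpr": false_positive_rate},
    y_true=y,
    y_pred=clf.predict(X),
    sensitive_features=group,
)
print(mf.by_group)                              # per-group metric values
print(mf.difference(method="between_groups"))   # largest between-group gap
</syntaxhighlight>

The companion <code>fairlearn.reductions</code> module provides the bias mitigation algorithms; their use is sketched under Bias Mitigation below.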
To comply with the Directive, and to ensure that this automated decision system is both effective and fair, an investigation has been conducted into the potential for bias in the fishing detection model, and bias mitigation techniques have been applied.
=== Terms and Definitions ===
'''Bias''': In the context of automated decision systems, the presence of systematic errors or misrepresentation in data and system outcomes. Bias may originate from a variety of sources, including societal bias (e.g., discrimination) embedded in data, unrepresentative data sampling, or preferential treatment imparted through algorithms.
'''Protected attribute''': An attribute that partitions a population into groups across which fairness is expected. Examples of common protected attributes are gender and race.
=== Results of Bias Assessment ===
The bias assessment was conducted following the four steps outlined in the Bias Detection and Mitigation Process. Details on the results of the investigation for each step are provided below.
==== Identification of Potential Harms ====
The investigation was initiated with an assessment of the potential for harms induced by bias in the system. For the use case of non-compliance detection, there always exists an objective ground truth. That is to say, with complete knowledge of the actions taken by a fishing vessel and the conditions specified in its fishing license, it can be objectively determined whether or not the vessel is in compliance with its license conditions. If a system were able to make this determination correctly with perfect consistency, there would be no potential for harm through biased decisions made by the system. However, due to the practical constraint of incomplete information (e.g., the fishing activity of the vessels), perfect accuracy is not realistically achievable. Thus, the potential for harm exists in the instances where the system makes errors in its decisions.
Although false positives have been identified as the primary source of potential harm through this investigation, it is important to note that some degree of less direct harm exists in false negatives as well. Missed detections of non-compliance may result in illegal fishing going uncaught. This has a cost to both DFO and the commercial fisheries. Should certain fisheries or geographic areas be more susceptible to false negatives than others, this may lead to a disproportionate distribution of these costs.
==== Bias Detection ====
To assess model fairness across different gear types, the data is first partitioned according to gear type, and the False Positive Rate (FPR) disparity [3] is then measured. The FPR is the percentage of negative instances (of fishing activity) that are mislabeled by the model as positive instances. The FPR disparity is measured as the greatest difference between the FPRs of the gear-type groups, i.e., the difference between the highest and lowest group FPR. The greater this difference, the greater the degree of bias and unfairness in the system.
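As a minimal sketch of this calculation (the DataFrame and its columns <code>gear_type</code>, <code>y_true</code> and <code>y_pred</code> are hypothetical placeholders, not the system's actual field names):

<syntaxhighlight lang="python">
# Illustrative sketch: per-gear-type FPR and the resulting FPR disparity.
# Assumes a DataFrame with hypothetical columns gear_type, y_true, y_pred,
# where y_true / y_pred are 1 for "fishing" and 0 for "not fishing".
import pandas as pd

def false_positive_rate(group_df: pd.DataFrame) -> float:
    """Percentage of true negatives that the model labelled as positive."""
    negatives = group_df[group_df["y_true"] == 0]
    if len(negatives) == 0:
        return float("nan")
    return 100.0 * (negatives["y_pred"] == 1).mean()

def fpr_disparity(df: pd.DataFrame) -> float:
    """Difference between the highest and lowest gear-type FPR."""
    fpr_by_gear = df.groupby("gear_type").apply(false_positive_rate)
    return fpr_by_gear.max() - fpr_by_gear.min()

# Tiny synthetic example: trawl FPR is 50.0, longline FPR is 0.0.
example = pd.DataFrame({
    "gear_type": ["trawl", "trawl", "longline", "longline"],
    "y_true":    [0, 0, 0, 1],
    "y_pred":    [1, 0, 0, 1],
})
print(fpr_disparity(example))  # 50.0
</syntaxhighlight>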
'''Table 1''': Results for FPR and detection accuracy are shown for each gear type. The FPR disparity is measured as the difference between the highest and lowest FPR, giving a value of 52.62.
==== Bias Mitigation ====
Bias mitigation algorithms implemented in FairLearn and other similar tools can be applied at various stages of the machine learning pipeline. In general, there is a trade-off between model performance and bias, such that mitigation algorithms induce a loss in model performance. Initial experimentation has demonstrated this trade-off, with a notable loss in performance incurred to reduce bias. This can be observed in the results shown in Table 2, where a mitigation algorithm has been applied to reduce the FPR disparity to 28.83 at the cost of a loss in fishing detection accuracy.
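The specific algorithm and pipeline stage behind the Table 2 results are not reproduced here. As one possible in-training approach, sketched under stated assumptions, FairLearn’s <code>ExponentiatedGradient</code> reduction can be trained with a false-positive-rate parity constraint and its cost in accuracy measured; the data, estimator and variable names below are synthetic placeholders.

<syntaxhighlight lang="python">
# Illustrative sketch of an in-training mitigation and its accuracy cost.
# All data is synthetic; the estimator is a placeholder, not the DFO model.
import numpy as np
from fairlearn.metrics import MetricFrame, false_positive_rate
from fairlearn.reductions import ExponentiatedGradient, FalsePositiveRateParity
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                                    # placeholder features
y = rng.integers(0, 2, size=300)                                 # placeholder labels
gear_type = rng.choice(["trawl", "longline", "trap"], size=300)  # placeholder groups

def fpr_disparity(y_true, y_pred, groups):
    """Difference between the highest and lowest per-group FPR."""
    mf = MetricFrame(metrics=false_positive_rate, y_true=y_true,
                     y_pred=y_pred, sensitive_features=groups)
    return mf.difference(method="between_groups")

# Unmitigated baseline.
baseline = LogisticRegression().fit(X, y)
y_base = baseline.predict(X)

# Mitigated model: retrained under a false-positive-rate parity constraint.
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=FalsePositiveRateParity())
mitigator.fit(X, y, sensitive_features=gear_type)
y_mit = mitigator.predict(X)

# Trade-off: the disparity should shrink, typically at some cost in accuracy.
print("baseline :", fpr_disparity(y, y_base, gear_type), accuracy_score(y, y_base))
print("mitigated:", fpr_disparity(y, y_mit, gear_type), accuracy_score(y, y_mit))
</syntaxhighlight>

Comparable trade-offs apply to pre-processing mitigations (e.g., re-weighting training data) and post-processing mitigations (e.g., adjusting decision thresholds per group) at other stages of the pipeline.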
DFO is in the process of defining guiding principles for the development of AI applications and solutions. Once defined, various tools will be considered and/or developed to operationalize these principles.
== Bibliography ==