== The Path Forward ==
Responsible AI practices are essential to mitigating AI risks, of which bias risks are a subset. As DFO moves toward adopting AI to support decision-making and improve service delivery, it must ensure that these decisions are not only bias-aware, but also accurate, human-centric, explainable, and privacy-aware.
The development of a process for bias identification and mitigation is a step towards a framework that supports the responsible use of AI. Fully developing this framework will require guidance for additional processes. In particular, Explainability is another theme with requirements unique to the use of AI. Next steps in this area will require the identification of tools and the development of guidance to support the use of interpretable models and of explainability algorithms for black-box models. Beyond this, a more general process is required to enable project teams to assess their compliance across all themes of responsible AI. The OCDS is currently developing these resources.
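To illustrate the kind of explainability tooling mentioned above, the sketch below applies permutation importance, a model-agnostic technique for black-box models, using scikit-learn. The model, dataset, and library choice here are illustrative assumptions, not tools selected or endorsed by DFO or the OCDS.

```python
# Illustrative sketch: model-agnostic explainability for a black-box model
# via permutation importance (synthetic data; not a DFO tool selection).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic classification data standing in for a real decision-support dataset.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# A "black-box" model: individual predictions are not directly interpretable.
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle each feature and measure the drop in score;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {score:.3f}")
```

Techniques like this give project teams a starting point for explaining model behaviour, though the forthcoming guidance would govern which tools are appropriate for a given application.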
 
DFO is in the process of defining principles to guide the development of AI applications and solutions. Once these are defined, various tools will be considered and/or developed to operationalize them.
      
== Bibliography ==
    
<references />