      <tr>
        <th>Latest version</th>
        <td>February 17, 2019</td>
      </tr>
      <tr>
     <p class="expand inline mw-collapsible-content">Major technology companies like Apple, Google, Samsung, Facebook, and Amazon have begun to realize the impact that facial recognition can have on their existing security infrastructure. </p><p class="inline">Apple has been attempting to add movement capabilities to the system. The subject of a scanned face can now be talking or moving during the scan, which allows facial recognition to combine with other biometric security measures like voice recognition. Since moving subjects can be scanned, individuals can be identified in a crowd without intrusion by using facial recognition systems.</p>
 
     <h2>Technology Brief</h2>
   <p class="inline">Facial recognition systems can use either a 2D/3D image or video feed to create a digital image, establish the faceprint, and identify a face by comparing its digital image with the faceprints in a database. Every face has several “landmarks” and the system will flag these as “nodal points”. A human face can have up to 80 of these points.  They represent areas of interest on the face that the system measures. </p><p class="expand inline mw-collapsible-content">Some examples of these measurements would be, distance between the eyes, width of the nose, depth of the eye socket, and more. These measurements will be stored in a database as a faceprint. </p><p class="inline">When the system scans a face, it will compare all these measurements to the records, faceprints, in the database. </p><p class="expand inline mw-collapsible-content">Facial recognition systems employ an algorithm, such as the Facial Recognition Vendor Test, that can predict whether there’s a match based on the “nodal points” on an individual’s face. </p><p class="inline">Usually, there is a 4-stage process involved in the operation of this technology <ref><i>[http://www.ex-sight.com/technology.html]</i></ref>: </p>
+
   <p class="inline">Facial recognition systems can use either a 2D/3D image or video feed to create a digital image, establish the faceprint, and identify a face by comparing its digital image with the faceprints in a database. Every face has several “landmarks” and the system will flag these as “nodal points”. A human face can have up to 80 of these points.  They represent areas of interest on the face that the system measures. </p><p class="expand inline mw-collapsible-content">Some examples of these measurements would be, distance between the eyes, width of the nose, depth of the eye socket, and more. These measurements will be stored in a database as a faceprint. </p><p class="inline">When the system scans a face, it will compare all these measurements to the records, faceprints, in the database. </p><p class="expand inline mw-collapsible-content">Facial recognition systems employ an algorithm, such as the Facial Recognition Vendor Test, that can predict whether there’s a match based on the “nodal points” on an individual’s face. </p><p class="inline">Usually, there is a 4-stage process involved in the operation of this technology:<ref><i>[http://www.ex-sight.com/technology.html]</i></ref> </p>
 
   <ul>
 
     <li><b>Capture</b> – a physical or behavioral sample is captured by the system during enrolment</li>
   </ul>
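   <p>As a rough illustration of the measurement-and-comparison idea described above, the following Python sketch builds a faceprint from a handful of landmark distances and searches a small database for the closest match. The landmark names, coordinates, and matching threshold are hypothetical simplifications, not the method of any particular vendor.</p>
   <pre>
# Simplified illustration of building a "faceprint" from facial landmarks
# and matching it against a database. Real systems use far more landmarks
# and learned models; the values and threshold here are hypothetical.
import math

def faceprint(landmarks):
    """Derive a few distance measurements from (x, y) landmark positions."""
    def dist(a, b):
        return math.hypot(landmarks[a][0] - landmarks[b][0],
                          landmarks[a][1] - landmarks[b][1])
    return [
        dist("left_eye", "right_eye"),    # distance between the eyes
        dist("nose_left", "nose_right"),  # width of the nose
        dist("chin", "nose_tip"),         # lower-face length
    ]

def difference(fp1, fp2):
    """Overall difference between two faceprints (Euclidean distance)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fp1, fp2)))

# A database of enrolled faceprints keyed by identity.
database = {
    "alice": [63.0, 31.5, 88.0],
    "bob":   [58.5, 35.0, 92.5],
}

def identify(landmarks, threshold=5.0):
    """Return the closest enrolled identity, or None if nothing is close enough."""
    probe = faceprint(landmarks)
    best_id, best_diff = None, float("inf")
    for identity, enrolled in database.items():
        d = difference(probe, enrolled)
        if d < best_diff:
            best_id, best_diff = identity, d
    return best_id if best_diff <= threshold else None

sample = {"left_eye": (100, 120), "right_eye": (163, 121),
          "nose_left": (120, 160), "nose_right": (151, 161),
          "chin": (132, 240), "nose_tip": (133, 152)}
print(identify(sample))  # "alice" when the measurements fall within the threshold
   </pre>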
   <p class="inline">Face ID is a technology developed by Apple and introduced in iPhone X. It provides intuitive and secure authentication enabled by the state-of-the-art TrueDepth camera system with advanced technologies to accurately map the geometry of your face. With a simple glance, Face ID securely unlocks your iPhone or iPad Pro. </p><p class="expand inline mw-collapsible-content">You can use it to authorize purchases from the iTunes Store, App Store, and Apple Books, and make payments with Apple Pay.  The iPhone XR, XS, and XS Max are all packing the second-generation of Face ID, which is an updated version of the biometric authentication system that is supposed to be faster than the version introduced with the iPhone X.</p>
 
   <p class="inline">Face ID is a technology developed by Apple and introduced in iPhone X. It provides intuitive and secure authentication enabled by the state-of-the-art TrueDepth camera system with advanced technologies to accurately map the geometry of your face. With a simple glance, Face ID securely unlocks your iPhone or iPad Pro. </p><p class="expand inline mw-collapsible-content">You can use it to authorize purchases from the iTunes Store, App Store, and Apple Books, and make payments with Apple Pay.  The iPhone XR, XS, and XS Max are all packing the second-generation of Face ID, which is an updated version of the biometric authentication system that is supposed to be faster than the version introduced with the iPhone X.</p>
 
   <h2>Canadian Government Use</h2>
   <p class="expand mw-collapsible-content">Unlike the private sector, Government use cases for facial recognition applications are primarily related to security, specifically for identity verification and fraud prevention. For example, the Canada Border Services Agency (CBSA) has recently launched the Primary Inspection Kiosk (PIK) program where passengers entering the country from airports must check-in using self-serve kiosks<ref>Braga, Matthew. (March 2nd, 2017). Facial Recognition Technology is coming to Canadian Airports this spring. Canadian Broadcasting Corporation. Retrieved 17-05-2019 from: <i>[https://www.cbc.ca/news/technology/cbsa-canada-airports-facial-recognition-kiosk-biometrics-1.4007344]</i></ref>These kiosks use facial recognition in order to clear passengers. The overall shift to un-maned kiosks has bolstered security while reducing congestion at airports and has been in development since 2015. Portuguese company Vision-Box has installed 130 Kiosks in the Toronto’s Pearson International Airport. The Kiosks are designed to take biometric data in two phases - facial recognition and fingerprint biometrics. The kiosks will also be able to obtain iris data, a feature reserved for people travelling under the NEXUS program.</p>
+
   <p class="expand mw-collapsible-content">Unlike the private sector, Government use cases for facial recognition applications are primarily related to security, specifically for identity verification and fraud prevention. For example, the Canada Border Services Agency (CBSA) has recently launched the Primary Inspection Kiosk (PIK) program where passengers entering the country from airports must check-in using self-serve kiosks.<ref>Braga, Matthew. (March 2nd, 2017). Facial Recognition Technology is coming to Canadian Airports this spring. Canadian Broadcasting Corporation. Retrieved 17-05-2019 from: <i>[https://www.cbc.ca/news/technology/cbsa-canada-airports-facial-recognition-kiosk-biometrics-1.4007344]</i></ref> These kiosks use facial recognition in order to clear passengers. The overall shift to un-maned kiosks has bolstered security while reducing congestion at airports and has been in development since 2015. Portuguese company Vision-Box has installed 130 Kiosks in the Toronto’s Pearson International Airport. The Kiosks are designed to take biometric data in two phases - facial recognition and fingerprint biometrics. The kiosks will also be able to obtain iris data, a feature reserved for people travelling under the NEXUS program.</p>
   <p class="inline">Facial recognition systems are also used in provincial casinos for identifying and locking out visitors with gambling addictions who have voluntarily entered themselves into self-exclusion lists<ref>Elash, Anita, and Luk, Vivian. (July 25th, 2011). Canadian Casinos, Banks, Police use Facial-Recognition Technology. The Globe and Mail. Toronto, Ontario. Retrieved 21-05-2019 from:  <i>[https://www.theglobeandmail.com/news/national/time-to-lead/canadian-casinos-banks-police-use-facial-recognition-technology/article590998/ ]</i></ref>It is worth noting that the system was developed jointly with the Ontario Privacy Commissioner to ensure a privacy-by-default design. </p><p class="expand inline mw-collapsible-content">In real time, the system scans customers entering the casino and compares their images with gamblers on the self-exclusion list. If there is a match, the system notifies security and if not, the system deletes the image automatically. Access to the database is restricted and information about an individual is only accessible if the person in the picture is physically present.</p>
+
   <p class="inline">Facial recognition systems are also used in provincial casinos for identifying and locking out visitors with gambling addictions who have voluntarily entered themselves into self-exclusion lists.<ref>Elash, Anita, and Luk, Vivian. (July 25th, 2011). Canadian Casinos, Banks, Police use Facial-Recognition Technology. The Globe and Mail. Toronto, Ontario. Retrieved 21-05-2019 from:  <i>[https://www.theglobeandmail.com/news/national/time-to-lead/canadian-casinos-banks-police-use-facial-recognition-technology/article590998/ ]</i></ref> It is worth noting that the system was developed jointly with the Ontario Privacy Commissioner to ensure a privacy-by-default design. </p><p class="expand inline mw-collapsible-content">In real time, the system scans customers entering the casino and compares their images with gamblers on the self-exclusion list. If there is a match, the system notifies security and if not, the system deletes the image automatically. Access to the database is restricted and information about an individual is only accessible if the person in the picture is physically present.</p>
   <p class="inline">Passport Canada has been using facial recognition software for the past decade to compare new passport photos against its database to prevent passport fraud. One to one (1:1) comparisons are done to confirm a person’s identity, meaning that a recently taken image is compared to one already in the database that is associated with that person’s identity. One to many (1:N) comparisons are done to compare an image against the entire database of passport photos to make sure there are no duplicate applicants or individuals with multiple identities<ref>Mackrael, Kim, and Ha, Tu Thanh. (May 15th, 2014) Facial Recognition Program Allows RCMP to Identify Alleged Passport Fraud. The Globe and Mail. Toronto, Ontario. Retrieved 27-05-2019 from: <i>[https://www.theglobeandmail.com/news/national/facial-recognition-program-allows-rcmp-to-nab-alleged-passport-fraudster/article18703608/]</i></ref></p><p class="expand inline mw-collapsible-content">This initiative has been successfully used to catch individuals attempting to obtain multiple passports. This same concept is also used for driver’s licences being issued at the provincial level<ref>Office of the Privacy Commissioner of Canada. (March 2013). Automated Facial Recognition in the Public and Private Sectors. Government of Canada. Retrieved 23-05-2019 from: <i>[https://www.priv.gc.ca/en/opc-actions-and-decisions/research/explore-privacy-research/2013/fr_201303/]</i></ref>. </p>
+
   <p class="inline">Passport Canada has been using facial recognition software for the past decade to compare new passport photos against its database to prevent passport fraud. One to one (1:1) comparisons are done to confirm a person’s identity, meaning that a recently taken image is compared to one already in the database that is associated with that person’s identity. One to many (1:N) comparisons are done to compare an image against the entire database of passport photos to make sure there are no duplicate applicants or individuals with multiple identities.<ref>Mackrael, Kim, and Ha, Tu Thanh. (May 15th, 2014) Facial Recognition Program Allows RCMP to Identify Alleged Passport Fraud. The Globe and Mail. Toronto, Ontario. Retrieved 27-05-2019 from: <i>[https://www.theglobeandmail.com/news/national/facial-recognition-program-allows-rcmp-to-nab-alleged-passport-fraudster/article18703608/]</i></ref> </p><p class="expand inline mw-collapsible-content">This initiative has been successfully used to catch individuals attempting to obtain multiple passports. This same concept is also used for driver’s licences being issued at the provincial level.<ref>Office of the Privacy Commissioner of Canada. (March 2013). Automated Facial Recognition in the Public and Private Sectors. Government of Canada. Retrieved 23-05-2019 from: <i>[https://www.priv.gc.ca/en/opc-actions-and-decisions/research/explore-privacy-research/2013/fr_201303/]</i></ref> </p>
   <p>Bill C-309, An Act to Amend the Criminal Code, has made the concealment of identity (using masks or disguises) unlawful while participating in riots or unlawful assemblies.<ref> Parliament of Canada. (June 19th, 2013). Bill C-309 An Act to Amend the Criminal Code (concealment of identity). Government of Canada. Retrieved 03-06-2019 from: <i>[https://www.parl.ca/LegisInfo/BillDetails.aspx?Bill=C309&Language=E&Mode=1&Parl=41&Ses=1 ]</i></ref> Although the Privacy Act and PIPEDA (Personal Information Protection and Electronic Documents Act) state that consent must be obtained before private information is collected, Bill C-309 paves the way for law enforcement to scan large crowds using facial recognition software and uncover the identities of participants.</p>
    
   <h2>Implications for Government Agencies</h2>
   <p class="inline">SSC could leverage this technology by offering facial recognition as a service. SSC could utilise this technology to replace the current government employee security ID. A smart camera will instantly capture the biometric data of the individuals for local analysis and then open the gate to access the building. This service could reduce ongoing security costs associated with having a security team on site, but there probably will not be any savings in the short term due to the cost of developing the applications and installing the related equipment. </p><p class="expand inline mw-collapsible-content">Facial recognition technology is a non-intrusive form of identity verification that cannot be lost by the individual. Within the SSC context, it would eliminate the need for employees to carry security passes. Additionally, this would help prevent unauthorized individuals from gaining access to secure facilities. </p><p class="inline">Two-factor authentication with a user’s face could also be used for accessing secure files with higher security classifications (such as secret documents).</p>
 
   <p class="inline">SSC could leverage this technology by offering facial recognition as a service. SSC could utilise this technology to replace the current government employee security ID. A smart camera will instantly capture the biometric data of the individuals for local analysis and then open the gate to access the building. This service could reduce ongoing security costs associated with having a security team on site, but there probably will not be any savings in the short term due to the cost of developing the applications and installing the related equipment. </p><p class="expand inline mw-collapsible-content">Facial recognition technology is a non-intrusive form of identity verification that cannot be lost by the individual. Within the SSC context, it would eliminate the need for employees to carry security passes. Additionally, this would help prevent unauthorized individuals from gaining access to secure facilities. </p><p class="inline">Two-factor authentication with a user’s face could also be used for accessing secure files with higher security classifications (such as secret documents).</p>
 
   <p class="inline-spacer"></p>
 
   <p class="inline-spacer"></p>
   <p class="expand inline mw-collapsible-content">There is a growing trend among smartphone manufacturers to create devices that can be unlocked with facial recognition technology. </p><p class="inline">A market research firm based in Hong Kong estimates that nearly 64% (or 1 billion) of all smartphones shipped worldwide will have facial recognition capabilities in 2020<ref>Naiya, Pavel. (February 7th, 2018) More than one billion smartphones to feature facial recognition in 2020. Counterpoint technology Market Research. Hong Kong, China. Retrieved 27-05-2019 from:  <i>[https://www.counterpointresearch.com/one-billion-smartphones-feature-face-recognition-2020/]</i></ref>SSC can take advantage of this research and only issue phones with facial recognition capabilities to employees. This biometric information can be paired with any other type of authentication method to create a two-step verification process for all smartphones. </p><p class="expand inline mw-collapsible-content">SSC would not need to acquire any additional software licences, as these phones would already have the capacity for facial recognition. Since verification is done locally, (reference images are stored on the device outside of a cloud environment) this minimizes the security risks associated with facial recognition technology.</p>
+
   <p class="expand inline mw-collapsible-content">There is a growing trend among smartphone manufacturers to create devices that can be unlocked with facial recognition technology. </p><p class="inline">A market research firm based in Hong Kong estimates that nearly 64% (or 1 billion) of all smartphones shipped worldwide will have facial recognition capabilities in 2020.<ref>Naiya, Pavel. (February 7th, 2018) More than one billion smartphones to feature facial recognition in 2020. Counterpoint technology Market Research. Hong Kong, China. Retrieved 27-05-2019 from:  <i>[https://www.counterpointresearch.com/one-billion-smartphones-feature-face-recognition-2020/]</i></ref> SSC can take advantage of this research and only issue phones with facial recognition capabilities to employees. This biometric information can be paired with any other type of authentication method to create a two-step verification process for all smartphones. </p><p class="expand inline mw-collapsible-content">SSC would not need to acquire any additional software licences, as these phones would already have the capacity for facial recognition. Since verification is done locally, (reference images are stored on the device outside of a cloud environment) this minimizes the security risks associated with facial recognition technology.</p>
 
   <p class="inline-spacer"></p>
 
   <p class="inline-spacer"></p>
 
   <p class="inline">Although facial recognition requires a lot of computing power to process images in real time, Edge Computing can mitigate this concern. Image pre-processing tasks can be completed by the device that took the picture, or much closer to the device than the data center. The device would capture the image, scan it for faces, and then extract information like a faceprint from the image. Once the faceprint has been created, it is sent to the main server for authentication and the original image is discarded. </p><p class="expand inline mw-collapsible-content">Since pre-processing of the faceprint has been done outside of the server, the server can focus on verifying the recent faceprint against an internal match.</p>
 
   <p class="inline">Although facial recognition requires a lot of computing power to process images in real time, Edge Computing can mitigate this concern. Image pre-processing tasks can be completed by the device that took the picture, or much closer to the device than the data center. The device would capture the image, scan it for faces, and then extract information like a faceprint from the image. Once the faceprint has been created, it is sent to the main server for authentication and the original image is discarded. </p><p class="expand inline mw-collapsible-content">Since pre-processing of the faceprint has been done outside of the server, the server can focus on verifying the recent faceprint against an internal match.</p>
   <p>There are also certain factors that can limit the accuracy of facial recognition systems. If the photo was taken in profile or if the image quality is too low, there may not be enough information available for the system to extract and generate a match. Haircuts, skin color, makeup, glasses, and face coverings such as surgical masks can also lower recognition accuracy. Considering that these systems are based on Artificial Intelligence (AI), there is also the possibility of improperly “training” them.</p>
 
   <p>An AI for a facial recognition system should be manually supervised to “reward” correct matches, but if the training set only consists of a very specific demographic of people, it will have difficulty detecting other types of faces. The lack of a diverse training set creates recognition biases in the programs, which makes them better at detecting and correctly identifying individuals with specific attributes over others.</p>
   <p class="expand mw-collapsible">In a study conducted by Joy Buolamwini where three different facial recognition systems were tested for accuracy in determining genders, they had error rates between 21% and 35% for women with darker skin tones whereas the error rate for light-skinned males was less than 1%<ref>Lohr, Steve. (February 9th,2018). Facial Recognition is Accurate, if You’re a White Guy. New York Times. New York, USA. Retrieved 29-05-2019 from: <i>[https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html ]</i></ref>This brings into question the reliability of these systems. To avoid discrimination against specific minority groups, they must be developed and tested to make sure they don’t have any recognition biases.</p>
+
   <p class="expand mw-collapsible">In a study conducted by Joy Buolamwini where three different facial recognition systems were tested for accuracy in determining genders, they had error rates between 21% and 35% for women with darker skin tones whereas the error rate for light-skinned males was less than 1%.<ref>Lohr, Steve. (February 9th,2018). Facial Recognition is Accurate, if You’re a White Guy. New York Times. New York, USA. Retrieved 29-05-2019 from: <i>[https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html ]</i></ref> This brings into question the reliability of these systems. To avoid discrimination against specific minority groups, they must be developed and tested to make sure they don’t have any recognition biases.</p>
 
   <p class="inline">These systems also have fuzzy accuracy rates, meaning that matches are never 100% accurate when doing searches, for example, in a database of images. There is the real possibility of false positives, where matches are found but they aren’t for the right person, and of false negatives, where the real match exists within the database but the system is unable to create a match. </p><p class="expand inline mw-collapsible-content">This accuracy gap means that the systems should only be used by trained individuals who understand how the technology works and specific guidelines should be followed when matches are generated by the system.</p>
 
   <p class="inline">These systems also have fuzzy accuracy rates, meaning that matches are never 100% accurate when doing searches, for example, in a database of images. There is the real possibility of false positives, where matches are found but they aren’t for the right person, and of false negatives, where the real match exists within the database but the system is unable to create a match. </p><p class="expand inline mw-collapsible-content">This accuracy gap means that the systems should only be used by trained individuals who understand how the technology works and specific guidelines should be followed when matches are generated by the system.</p>
   <p class="expand mw-collapsible">A good example in the field is how the Toronto police force is using the system: only six FBI-trained Toronto Police officers can use the system and it can only generate a list of candidates. They do not use the system as a sole basis for arrests but in tandem with other traditional evidence gathering methods<ref>Burt, Chris. (May 28th, 2019). Toronto police using facial recognition as Canadian government ponders rules. Biometrics Research Group Inc. Retrieved 29-05-2019 from: <i>[https://www.biometricupdate.com/201905/toronto-police-using-facial-recognition-as-canadian-government-ponders-rules ]</i></ref>AI systems, if they are to help inform important decisions should never be solely trusted and should be used as tools to inform decisions, not guide them.</p>
+
   <p class="expand mw-collapsible">A good example in the field is how the Toronto police force is using the system: only six FBI-trained Toronto Police officers can use the system and it can only generate a list of candidates. They do not use the system as a sole basis for arrests but in tandem with other traditional evidence gathering methods.<ref>Burt, Chris. (May 28th, 2019). Toronto police using facial recognition as Canadian government ponders rules. Biometrics Research Group Inc. Retrieved 29-05-2019 from: <i>[https://www.biometricupdate.com/201905/toronto-police-using-facial-recognition-as-canadian-government-ponders-rules ]</i></ref> AI systems, if they are to help inform important decisions should never be solely trusted and should be used as tools to inform decisions, not guide them.</p>
   <p class="inline">To deal with the issue of bad lighting or faces at unrecognizable angles, some systems are altering images so that they are more “readable”. </p><p class="expand inline mw-collapsible-content">Panasonic has developed facial recognition software that analyses movement, speed and lighting present in videos to automatically correct still images that would otherwise be blurry<ref>Panasonic. (February 20th, 2018) Panasonic to Launch Face Recognition Server Software Using Deep Learning Technology. Panasonic Corporation. Kadoma, Japan. Retrieved 15-05-2019 from: <i>[https://security.panasonic.com/news/archives/686 ]</i></ref></p><p class="inline">Since the software adjusts the image before being analysed, it creates a new concern that additional false positives will be created. If an image has been touched up and edited before it was “plugged-in” to a facial recognition system, this can alter the faceprint being analysed and the search results may be biased or incorrect.</p>
+
   <p class="inline">To deal with the issue of bad lighting or faces at unrecognizable angles, some systems are altering images so that they are more “readable”. </p><p class="expand inline mw-collapsible-content">Panasonic has developed facial recognition software that analyses movement, speed and lighting present in videos to automatically correct still images that would otherwise be blurry.<ref>Panasonic. (February 20th, 2018) Panasonic to Launch Face Recognition Server Software Using Deep Learning Technology. Panasonic Corporation. Kadoma, Japan. Retrieved 15-05-2019 from: <i>[https://security.panasonic.com/news/archives/686 ]</i></ref> </p><p class="inline">Since the software adjusts the image before being analysed, it creates a new concern that additional false positives will be created. If an image has been touched up and edited before it was “plugged-in” to a facial recognition system, this can alter the faceprint being analysed and the search results may be biased or incorrect.</p>
 
   <p class="inline-spacer"></p>
 
   <p class="inline-spacer"></p>
 
   <p class="inline">Another limitation of these systems is that it can only recognize individuals whose images are already contained within its database. </p><p class="expand inline mw-collapsible-content">Systems must also be able to perform “liveness” testing, or in other words, they must be able to determine if the subject in question is actually there in person since faces are not secret, in the same sense that passwords are secret, and faces cannot be hidden. Facial recognition systems rely on the difficulty of impersonating a real person to keep the system secure. Given this fact, the system needs to be able to determine the “liveness” of the image it’s analysing and determine if the person it just photographed is real or if it’s a picture.</p>
 
   <p class="inline">Another limitation of these systems is that it can only recognize individuals whose images are already contained within its database. </p><p class="expand inline mw-collapsible-content">Systems must also be able to perform “liveness” testing, or in other words, they must be able to determine if the subject in question is actually there in person since faces are not secret, in the same sense that passwords are secret, and faces cannot be hidden. Facial recognition systems rely on the difficulty of impersonating a real person to keep the system secure. Given this fact, the system needs to be able to determine the “liveness” of the image it’s analysing and determine if the person it just photographed is real or if it’s a picture.</p>
 
   <p class="inline-spacer"></p>
 
   <p class="inline-spacer"></p>
   <p class="inline">Facial recognition system technology has not yet been regulated in Canada and organizations that currently use it must operate within a specific legal framework. Under the Canadian Privacy Act, federal government institutions can only use personal information for the specific purpose for which it was collected and consent of the individual must be obtained before that information can be used for another purpose. Under PIPEDA, an organization must inform individuals and receive consent to any use of their personal information<ref>Office of the Privacy Commissioner of Canada. (March 2013). Automated Facial Recognition in the Public and Private Sectors. Government of Canada. Retrieved 23-05-2019 from: <i>[https://www.priv.gc.ca/en/opc-actions-and-decisions/research/explore-privacy-research/2013/fr_201303]/</i></ref></p><p class="expand inline mw-collapsible-content">This is a potential legal barrier to any organization planning to conduct live analysis of public crowds since each individual would need to consent to the collection and use of their faces (private information). </p><p class="inline">These regulations ensure that databases containing personal information belonging to different GC departments cannot be shared between departments for purposes other than the specific use for which consent was obtained.</p>
+
   <p class="inline">Facial recognition system technology has not yet been regulated in Canada and organizations that currently use it must operate within a specific legal framework. Under the Canadian Privacy Act, federal government institutions can only use personal information for the specific purpose for which it was collected and consent of the individual must be obtained before that information can be used for another purpose. Under PIPEDA, an organization must inform individuals and receive consent to any use of their personal information.<ref>Office of the Privacy Commissioner of Canada. (March 2013). Automated Facial Recognition in the Public and Private Sectors. Government of Canada. Retrieved 23-05-2019 from: <i>[https://www.priv.gc.ca/en/opc-actions-and-decisions/research/explore-privacy-research/2013/fr_201303]/</i></ref> </p><p class="expand inline mw-collapsible-content">This is a potential legal barrier to any organization planning to conduct live analysis of public crowds since each individual would need to consent to the collection and use of their faces (private information). </p><p class="inline">These regulations ensure that databases containing personal information belonging to different GC departments cannot be shared between departments for purposes other than the specific use for which consent was obtained.</p>
    
   <h4>Considerations</h4>