Time to “face” the threat

March 4, 2025 | Ambreena Ladhani

1.1 Introduction

Imagine preparing to board a flight with your friend. A border security officer examines your passport and uses facial recognition software to quickly confirm your identity. You do not mind—you enjoy the convenience of boarding your flight early. As you settle in, you unlock your iPhone using FaceID, open your camera, and capture a photo of your friend. Eagerly, you share it on Facebook, but amid your excitement, you forget to tag her. Conveniently, Facebook’s “tag suggest” feature recognizes your friend’s face from other images you have posted, prompting a notification that asks if you would like to tag her in your photo. Do you view the “tag suggest” feature as a significant privacy violation? Maybe not. After all, you just surrendered your face to border security officers, and you unlock your phone with your face every day. What difference does it make?

The difference is not just significant—it is profound. Facebook has turned your photo into biometric information and can identify your friend from its extensive database of over two billion users. Your reliance on FaceID and your voluntary submission to a biometric scan at the airport may inadvertently influence your acceptance of the new feature. Yet the facial recognition (FR) feature used by Facebook can identify you, which is far more invasive than using FaceID1 or verifying your passport at the airport. While Facebook’s “tag suggest” feature is now defunct, the hypothetical remains relevant for future integrations of FR technologies within social media platforms. As the use of FR proliferates in the private sector, passive acceptance of increasingly privacy-invasive technologies remains a worrisome threat.

Part I of the following research explores the differences between two forms of FR technology—identification and verification FR. Drawing on Norval and Prasopoulou’s work, this section asserts that the acceptance of FR in one context may result in acceptance of FR in a new context. The section uses Facebook’s integration of identification FR to illustrate the threat of FR identification in social media, which differs in capabilities from other accepted uses of verification FR.

Part II of this research explores Helen Nissenbaum’s theory of contextual integrity, arguing that it is the preferable approach to conceptualizing privacy. Further, the section asserts that the integration of identification FR into social media and the data scraping of images from social media websites to create FR databases are both breaches of contextual integrity.

Part III of this research demonstrates the manner in which contextual integrity aligns with Canadian conceptions of privacy, illustrated through interpretations of privacy case law and legislation governing the private-sector collection, use, and disclosure of personal information. Attention is also drawn to where contextual integrity diverges from private-sector privacy regulation, particularly by shedding light on the consent-based model that grounds Canada’s private-sector legislation.

Part IV of this research criticizes the current “no-go” zones in private-sector privacy legislation for their limited protection against FR harms arising from the two violations of contextual integrity explored in Part II: the integration of identification FR derived from users’ images into social media, and data scraping from online social media platforms to create FR databases. Accordingly, this section argues that explicit bans should prevent the creation of identification biometric systems derived from existing photos or videos, and data scraping from social media websites or CCTV to create FR databases.

Ultimately, this research argues that when one form of FR is accepted in one context, it is possible that other forms will be accepted in other contexts. The meanings associated with a particular practice may be influenced by prior practices, and accordingly, acceptance of one form of FR may lead to passive consent to other, potentially more invasive technologies. This threat alone renders “consent” to FR technologies inadequate. Instead, explicit bans on intrusive forms of identification FR systems are necessary in the private sector, particularly in the age of social media. These bans are pertinent in light of the ready access to social media image databases. Creating FR systems from available online images, at minimum, runs afoul of privacy when privacy is viewed as contextual integrity.

1.2 Not all facial recognition is created equal

FR systems generally fall under two categories—those used for verification (one-to-one matching) and those used for identification (one-to-many matching),2 but the two terms are often used interchangeably.3 Drawing this distinction from the outset is important, as each practice differs with respect to how much information can be revealed about a person.

1.2.1 Distinguishing between verification and identification

The first form of FR is facial verification.4 Facial verification uses a one-to-one matching system, where the individual’s face is compared to a stored photo or facial template of the identity claimed.5 The system is designed to authenticate if the presented face corresponds to the individual’s claimed identity.6 Verification FR is less privacy-invasive and eliminates the risk of “false matches,” as the system has no alternative face to match the biometric information against.7 Moreover, if the storage device is lost or stolen, only the personal information of a single individual is susceptible to risk.8 Common examples of verification FR include unlocking an iPhone using FaceID,9 or verifying a passport when boarding an airplane.10

The other form of FR is facial identification. Identification involves receiving data from an unknown person and using a one-to-many matching system to compare the individual’s face against a larger database with the biometric information of several other people to identify a potential match.11 Identification FR raises privacy concerns because of the heightened risk of false matches and the threat of data breaches.12
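
To make the distinction concrete, the sketch below (a minimal Python illustration using toy feature vectors and a hypothetical similarity threshold, not any vendor’s actual implementation) contrasts the two matching modes: verification compares a probe against a single enrolled template, while identification searches an entire gallery.

    import numpy as np

    THRESHOLD = 0.80  # hypothetical similarity cut-off; real systems tune this value

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Compare two face "templates" (feature vectors produced by an embedding model).
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify(probe: np.ndarray, enrolled_template: np.ndarray) -> bool:
        # One-to-one: does the probe match the single template for the claimed identity?
        return cosine_similarity(probe, enrolled_template) >= THRESHOLD

    def identify(probe: np.ndarray, gallery: dict):
        # One-to-many: search an entire gallery of templates for the best match.
        best_name, best_score = None, THRESHOLD
        for name, template in gallery.items():
            score = cosine_similarity(probe, template)
            if score >= best_score:
                best_name, best_score = name, score
        return best_name  # None if nobody in the gallery clears the threshold

    # Hypothetical usage:
    # enrolled = {"Alice": np.array([0.1, 0.9]), "Bob": np.array([0.9, 0.1])}
    # identify(np.array([0.12, 0.88]), enrolled)  # -> "Alice"

The privacy stakes diverge accordingly: verification can only answer whether one claimed identity matches, whereas identification can put a name to anyone enrolled in the gallery.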

Both identification FR and verification FR have been used for public safety and security. The Canada Border Services Agency (CBSA) has tested various verification and identification FR methods over the past decade.13 Verification FR (one-to-one matching) is frequently used to verify the identity of travellers. This form of FR presents less of a threat, given the limited amount of information it reveals and the absence of the threat of “false matches.”14 However, the CBSA has also piloted more invasive identification FR practices, including the “Faces on the Move” project, which ran from August 2017 to March 2018.15 The project was the largest known government deployment of identification FR in Canada to date,16 and equipped the CBSA with tools to identify thousands of international travellers entering Canada.17 The project aimed to ensure that travellers with false identification records and criminal histories did not gain re-entry into Canada.18 During the project, FR was used on 15,000-20,000 travellers daily.19 While the project has ceased operation, the CBSA “continues to explore potential uses for facial recognition in establishing travellers’ identities,” suggesting the use of identification FR will continue to rise in the public sector.20

In 2020, the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC) released a report concerning the use of FR at the border.21 The report stated, “[f]acial recognition is being adopted at the border without consideration for the harm it would cause... Then, once it is adopted, it often gets repurposed very quickly for reasons beyond the narrow reasons of the context in which it was developed.”22 The repurposing of FR beyond its initial context was forecasted in 2005, in a technical report issued by the European Commission’s Joint Research Centre:

The objective here is to open up the scope of thinking on the future of biometrics beyond the current passport and visa application plans. One of the themes of this report is the so-called “diffusion effect,” i.e. as biometric technologies become better, cheaper, more reliable and are used more widely for government applications, they will also be implemented in everyday life, in businesses, at home, in schools, and other public sectors.23

Certainly, the forecast projected in the technical report rings true. The use of FR has proliferated in the private commercial sector in the past decade.24 It has been integrated into a variety of services and products, spanning security systems,25 smart TVs,26 and product advertising.27 Facial recognition security cameras can now be purchased for under $150 despite their increasing sophistication.28

The technical report correctly identifies that the diffusion of biometrics may create the perception of a surveillance society.29 In 2013, the Office of the Privacy Commissioner of Canada (OPC) echoed the same concern: “the availability of cheap facial recognition for the masses may have the effect of normalizing surveillance over time.”30 This concern is compounded by the fact that FR technologies are prone to being “repurposed very quickly for reasons beyond the narrow reasons of the context in which it was developed.”31 Perhaps it is unsurprising, then, that Instagram will soon integrate facial recognition to verify users’ age,32 and Facebook has patent-pending technology that will allow it to tailor ads based on the facial expressions of its users.33

Aletta Norval and Elpida Prasopoulou describe the adoption of different FR systems as “iterations” of FR, many of which exist within different contexts.34 An “iteration” refers to a particular practice that is repeated and altered.35 To harmonize this with the concept of “diffusion”: when one form of FR is accepted, it is more likely that other “iterations” will also be accepted, and in a new iteration “the meanings associated with a particular practice will bear the traces of earlier contexts.”36 In other words, one may be inclined to consent to a new form of identification FR if one already consents to another form of FR, such as unlocking an iPhone with Apple FaceID. Acceptance in one context may lead to acceptance in another. While both forms of FR may require only an image of one’s face, the systems’ capabilities can differ immensely. The lack of knowledge regarding the actual capacity of each FR use in its specific context may result in people acquiescing to increasingly intrusive technologies.

To illustrate the potential threat that may arise from passive acceptance of a new FR iteration, the following section explores Facebook’s (now defunct) “tag suggest” feature, which purported to be like other uses of FR, though it differed immensely in its capabilities.

1.2.2 Facebook’s identification system

In 2011, Facebook played a pivotal role in the shift of FR use from the public sector into the private sector.37 Facebook introduced a (now defunct) “tag suggest” feature, allowing individuals in new images to be identified based on the information extracted from previously tagged photos.38 When introducing the feature, Facebook posted the following message to its users:

You may have noticed a box on the right of your home page called “Photos are better with friends.” This is a new way of telling you about features we have added to Facebook such as our Photo Tag Suggest. We showed this to help people to learn about Photo Tag and how they can control it.39

In the same post, Facebook articulated some positive uses for the “tag suggest” feature.40 The post states that “tagging photos can be a chore,” and the new feature is intended to “make this process easier for you.”41 Most concerning, however, is the following statement:

When you or a friend upload new photos, we use face recognition software—similar to that found in many photo editing tools—to match your new photos to other photos you’re tagged in. We group similar photos together and, whenever possible, suggest the name of the friend in the photos.42

Facebook suggested the new feature was similar to those found in photo editing tools, implying it was an incremental shift from what users may have already agreed to in the past. However, the “tag suggest” feature had the “ability to combine facial biometric data with extensive information about users, including biographic data, location data and associations with friends.”43 The feature was capable of identifying who is in the image. Recall the difference between verification and identification—identification FR compares the individual’s face against a larger database. And Facebook’s image database is “the largest in the world,”44 with nearly two billion monthly users who upload 350 million photos every day, making “Facebook the holder of the most comprehensive profiles on a large segment of the world population.”45 To compound the threat further, Facebook’s Chief Privacy Officer suggested that the use of FR might expand beyond the tagging feature.46 Ultimately, had the FR tagging feature remained active and become widely accepted, uses beyond tagging would likely have been passively accepted as well.

One concern that emerged from the “tag suggest” feature was its “opt-out” structure. Facebook did provide instructions for opting out, stating that “[i]f for any reason you don’t want your name to be suggested, you will be able to disable suggested tags in your privacy settings.”47 The absence of user consent ignited a class action lawsuit in Illinois48 for violating the Illinois Biometric Information Privacy Act,49 one of the “toughest laws in the U.S. concerning the protection of biometric data.”50 However, while concerns were raised regarding Facebook’s inadequate consent, Aletta Norval and Elpida Prasopoulou (correctly) identify that the problem is not just “whether or not face recognition should be an opt-in service.”51 The other concern is that Facebook fundamentally altered users’ images, turning them into biometric identification information—which makes “attention to informational norms so important.”52

Norval and Prasopoulou suggest it is plausible that Facebook’s new iteration of FR was perceived as simply an “automation of practices of identification we have been doing all along.”53 This practice may be viewed as no different than using FR to verify one’s passport at the airport, when in reality, Facebook was actually “aggregating this information with data from the users’ activity.”54 Accordingly, increased disclosure of personal information may be seen as a “normal part of modern life.”55 Users may be inclined to provide consent to new FR iterations, even when the new technology violates the informational norms in a given context.

The importance of adhering to informational norms is precisely why privacy is better categorized as contextual integrity. While informed consent and compliance with Fair Information Practice Principles play an important role in regulating privacy, the theory of contextual integrity calls for further evaluation of informational norms to determine whether privacy has been breached.56 The following section argues that contextual integrity is the preferable approach to conceptualizing privacy, through an exploration of how Facebook’s FR technology violates privacy as contextual integrity. Following this, the practice of data scraping will be scrutinized as another practice that violates contextual integrity. Although Facebook’s tagging feature is no longer active, the risk of other social media companies or third parties leveraging large image databases to develop FR systems persists.


1.2.3 Theorizing privacy as contextual integrity

Helen Nissenbaum’s theory of contextual integrity posits that privacy is defined by how information flows.57 Under this theory, privacy is a fluid concept which cannot simply be divided into “secret” or “not secret” information.58 Instead, contextual integrity evaluates the flow of information—the transfer of knowledge from one person to the next.59 Contextual integrity is violated when the flow of information is disrupted, for example, when information intended for one situation is inserted into another.60

The theory of contextual integrity rejects the traditional dichotomies of “public” and “private” information and instead situates the violation in the context of its governing norms.61 Contextual integrity is the preferable approach because it provides a workable definition of privacy and a corresponding model to determine when privacy violations may occur.62 Contextual integrity can be distilled to three main ideas that build on one another: privacy is defined by how information flows; information flow is appropriate when it conforms with contextual privacy norms; and contextual norms can be described by a set of parameters including the type of information being shared and the parties to that information.63

First, contextual integrity contends that the flow of information will be appropriate where it conforms with informational “norms.”64 Nissenbaum posits that contextual integrity is violated when “information flows” breach legitimate contextual norms,65 regardless of whether the information is traditionally considered private or public.66 Contextual informational norms are derived from the “status quo” and are determined by the “contexts, domains, spheres, institutions, or fields” that are “firmly rooted in common experience.”67 Accordingly, the general and commonly held expectations about “what will happen with shared information” are central in determining whether informational norms have been breached.68 While conceptions of privacy remain highly contested, Nissenbaum provides a starting point—the status quo.69

To determine the “status quo,” or in other words, the norms applicable to determining a privacy breach, the following parameters must be identified: data type (what sort of information is being shared); data subject (who the information is about); sender (who is sharing the data); recipient (who is receiving the data); and transmission principle (the constraints imposed on the flow).70 Additionally, Nathan Malkin posits another parameter—the purpose (how the data will be used and to what end) is also relevant; contextual integrity is violated when information intended for one purpose is then used for another.71 While the data type, subject, sender and recipient are typically easy to discern, the transmission principle accounts for “the conditions or constraints that restrict information flow or limit it to specific circumstances.”72 These principles may include, but are not limited to, consent, any form of notice or disclosure, reciprocity, and legal requirements.73 These principles may, in turn, influence people’s expectations and norms in a given situation.74
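
As an illustrative sketch only (the field names mirror the parameters just described; the example flows, norm set, and helper function are hypothetical), these parameters can be modelled as the fields of a single information flow, which is then checked against the settled norms of a context:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class InformationFlow:
        data_type: str               # what sort of information is being shared
        data_subject: str            # who the information is about
        sender: str                  # who is sharing the data
        recipient: str               # who is receiving the data
        transmission_principle: str  # constraint on the flow, e.g. consent or notice
        purpose: str                 # Malkin's addition: what the data will be used for

    def conforms_to_norms(flow: InformationFlow, norms: set) -> bool:
        # Contextual integrity is preserved only if the flow matches an established norm.
        return flow in norms

    # Hypothetical norms for the "sharing photos with friends" context.
    sharing = InformationFlow("photo", "the user", "the user", "friends",
                              "user-controlled posting", "social sharing")
    tagging = InformationFlow("biometric template", "the user", "the platform",
                              "any platform user", "stored beyond the user's control",
                              "identification")
    norms = {sharing}
    print(conforms_to_norms(sharing, norms))  # True: matches the status quo
    print(conforms_to_norms(tagging, norms))  # False: a new, unvetted flow

On this framing, the question is never simply whether the data is public, but whether the complete flow (who sends what, about whom, to whom, under what constraint, and for what purpose) matches an established norm.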

The parameters mentioned above are non-exhaustive, but all of them must be determined to discern whether there has been a breach of privacy.75 Further, if any one of the parameters is unknown, privacy expectations cannot be determined: if we know what the information is and who it is about, but we “don’t know whom it is being shared with,” then we do not know whether a privacy violation is occurring.76

One drawback of privacy as contextual integrity is that it may entrench norms, which could impede the advancement of technology for beneficial purposes.77 Nathan Malkin points to the example of a doctor who seeks to use AI-powered software to assist patients—perhaps the flow of information is new, but it is not necessarily inappropriate.78 Nissenbaum posits that privacy, as contextual integrity, promotes a “presumption of the status quo,”79 but challenging this status quo is possible. New flows of information can be evaluated through their ethical legitimacy.80

To determine the ethical legitimacy of new flows, the interests of the affected parties, the ethical and political values and the contextual functions, purposes and values must be evaluated.81 For example, Malkin suggests that if the flow results in disparate outcomes for different demographics, it might be unjust.82 However, if the benefits are superior to the status quo of how information flows, contextual integrity may find that this outweighs an individual's privacy interests.83 The following section will analyze Facebook’s FR technology and how it violates contextual integrity.

1.2.4 Facebook FR violates contextual integrity

In her work on “Developing Face Recognition Privacy in Social Networks,” Yana Welinder uses contextual integrity to evaluate the new flow of information created through Facebook’s FR “tagging feature.”84 Relying on Nissenbaum’s theory of contextual integrity, she contends that the appropriateness of an online information flow can be assessed by evaluating an analogous offline context.85 Welinder analogizes posting photos on Facebook to the practice of showing a photo album to friends over coffee:

If we think of posting photos on Facebook primarily as social interaction, we could analogize this practice to someone showing a photo album from a recent trip to a few friends over coffee. Extraction of biometric data then would be analogous to one of my friends secretly snapping photos of my album with his camera and forwarding them to a third party, who in turn uses them to identify me in other contexts. When analogizing to an offline situation, we can see clearly that such behavior would violate the social norms governing my coffee session. 86

Based on her evaluation of the social norms offline, Welinder finds that the Facebook tag feature violates contextual integrity in three ways. First, it alters the nature of the information.87 By transforming a still photo posted on Facebook into biometric data, the photo suddenly permits a stranger to identify and access various degrees of information about the individual.88 Second, Facebook introduces new recipients beyond those the user initially would have expected when posting their image on Facebook.89 Since the photo tag feature could extract biometric data, other Facebook users may choose to identify the user in new photos, expanding the group of people with access to the information.90 Finally, Facebook shifted the transmission principle from “one where the user can delete all her photos and tags of her so that others cannot find her on Facebook to a situation where her personally identifiable information is stored in a biometric database beyond her control.”91 Accordingly, consent in this context would not render this action compliant with contextual integrity.

Welinder’s analysis of this violation of contextual integrity fits squarely with Aletta Norval and Elpida Prasopoulou’s finding that the primary concern was not the absence of consent or the opt-out model. The primary concern is that FR disrupts the normal flow of information by fundamentally altering a still image into biometric data used for identification.92 In the following section, a similar analysis is applied to evaluate the privacy harm arising from online “data scraping.”

1.2.5 Data scraping violates contextual integrity

As argued in the context of Facebook’s use of FR, using still images to create biometric information violates contextual integrity by changing the nature of the information, introducing new recipients to that information, and shifting the transmission principles in the context. A similar violation occurs when third parties engage in a practice known as data scraping.

Data scraping refers to the automated process of retrieving massive amounts of information from websites hosting publicly accessible data, including images posted online.93 The OPC identifies that “[t]he capacity of data scraping technologies to collect and process vast amounts of individuals’ personal information from the internet raises significant privacy concerns, even when the information being scraped is publicly accessible.”94

The issue of data scraping from online materials became particularly relevant when Clearview AI, a United States-based technology company, scraped billions of images of people posted on various platforms such as Facebook, Instagram, Venmo, and YouTube to create a FR database.95 The FR tool operated in four steps.96 First, it scraped and stored images of faces and associated data from online sources.97 Second, it created biometric identifiers through templates and numerical representations of each image. Third, it allowed users to upload an image (of themselves or anyone else) and match it against other images in its database.98 Finally, it provided the user with a list of results containing all the matching images and metadata.99 Clearview AI made its FR database available to the Royal Canadian Mounted Police (RCMP), along with several other policing agencies in Canada.100
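
As a conceptual sketch of the matching and results steps only (the field names, distance measure, and threshold below are hypothetical and are not drawn from Clearview’s actual system), a one-to-many search over a scraped gallery returns not just the matching faces but everything stored alongside them:

    from dataclasses import dataclass

    @dataclass
    class ScrapedImage:
        template: tuple       # numerical representation of the face (step two)
        source_url: str       # where the image was found online (step one, elided here)
        metadata: dict        # captions, usernames, or other data stored with the image

    def match_probe(probe, gallery, max_distance=0.5):
        # Steps three and four: compare an uploaded probe template against every stored
        # template and return all matching images together with their metadata.
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return [img for img in gallery if distance(probe, img.template) <= max_distance]

    # Hypothetical two-entry gallery; a real database of this kind holds billions of entries.
    gallery = [
        ScrapedImage((0.1, 0.9), "https://example.com/photo1", {"caption": "beach day"}),
        ScrapedImage((0.8, 0.2), "https://example.com/photo2", {"username": "jdoe"}),
    ]
    for result in match_probe((0.12, 0.88), gallery):
        print(result.source_url, result.metadata)

Returning source links and associated metadata in this way, across a gallery of billions of scraped entries, gives the identification capability a reach far beyond anything the original posters could have anticipated.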

Similar to Facebook’s use of FR, Clearview AI also altered the nature of the information by transforming still photos into biometric data. Clearview AI introduced new recipients beyond those the user initially expected by making the FR database available to several other entities, including policing agencies across Canada. The FR tool could extract biometric data and make the information available to a new group of people. Additionally, Clearview AI altered the transmission principle, shifting from a scenario where users may choose to delete their information from Facebook or the internet to one where their personally identifiable information is stored in a database beyond their control. This is a clear violation of contextual integrity.

While it is not the aim of this paper to delve into law enforcement’s use of FR, it is appropriate to address it when evaluating the ethical legitimacy of this new informational flow. If the Clearview AI information were made available to the police alone, it might be argued that this is an appropriate flow of information because the benefits of the database are superior to the status quo. Certainly, databases like Clearview AI have been said to assist with finding missing children,101 and could assist in other public safety and security matters. However, this flow of information is not ethically legitimate.

While facial recognition technology has been reported to work with a 99% accuracy rate for white male faces, the same cannot be said for visible minorities.102 In the case of racialized groups, particularly Black women, facial recognition technology has an error rate of approximately 35%.103 The United States National Institute of Standards and Technology (NIST) found disproportionately higher rates of false matches amongst Black, Indigenous and Asian people.104 The same study, which tested 18 million photos across 189 algorithms, also found that women, children, and elderly people were more likely to be misidentified through facial recognition technology.105 Women, children, elderly people, and visible minorities are among the most vulnerable populations in modern-day society. The technology’s poor performance on children also fundamentally undermines the argument that it could help find missing children.

The criminal justice system, in particular, has always demonstrated patterns of bias, racism and discrimination against minority groups. According to the 2020 General Social Survey (GSS) on Social Identity, one in five Black and Indigenous people reported having little or no confidence in the police, nearly double the rate of those who are not Indigenous or visible minorities.106 Consistent research demonstrates that Black and Indigenous people face disproportionate levels of contact with the police and are more likely to rate police performance poorly based on their interactions.107 As Malkin suggests, where the flow results in disparate outcomes for different demographics, as it does here, it is unjust.108 In this context, the new flow of information is not ethically legitimate. These considerations are raised, in part, to counter the argument that Clearview AI’s FR database is a new flow of information that is ethically legitimate for public safety and security. Despite the reliability concerns, as the law currently stands, there is no explicit ban on law enforcement use of identification FR. The lengthy debate on law enforcement’s use of FR is deferred, as law enforcement is not currently governed by the same legislation as private-sector organizations.

The following section explores portions of the existing legal framework for the collection, use and disclosure of personal information in the private sector, and discusses how Canadian private-sector legislation aligns with and diverges from contextual integrity.

1.3 The legal gaps

Canada has two federal privacy laws enforced by the Office of the Privacy Commissioner of Canada (OPC). The Privacy Act109 regulates how the federal government handles personal information. The Personal Information Protection and Electronic Documents Act (PIPEDA)110 regulates how private-sector organizations collect, use, and disclose personal information in the course of for-profit commercial activities in Canada.111 This research does not explore provincial privacy legislation or the Privacy Act and will focus only on federal regulation of private-sector information collection.

1.3.1 Canadian privacy is already contextual

Debates surrounding the definitions of privacy continue to evolve and shift over time. At the very least, current interpretations of privacy in Canada acknowledge its highly contextual nature. In R v. Jarvis,112 the Supreme Court of Canada confirmed that assessing a reasonable expectation of privacy “requires a contextual assessment.”113 The interpretation of privacy in PIPEDA case law demonstrates similar values to the theory of contextual integrity. In Eastmond v. Canadian Pacific Railway,114 it was confirmed that determining whether information has been collected for an appropriate purpose under PIPEDA involves considering the specific circumstances of the collection, use, and disclosure. This requires flexibility and acknowledgment that these factors can vary depending on the context.115 Accordingly, Canadian conceptions of privacy align, in part, with the theory of contextual integrity.

For example, in 2021, the OPC released the findings of a joint investigation into Clearview AI.116 As discussed in the previous section, Clearview AI compiled a facial recognition database by scraping more than three billion images publicly available on online websites without obtaining users’ consent.117 While Clearview AI admitted to not obtaining consent, the company gathered the information from “publicly available” online websites and thus argued that there was no reasonable expectation of privacy.118 However, the OPC held that individuals who posted their images online had no reasonable expectation that Clearview would collect, use, and disclose their images for identification purposes.119 Further, the OPC found that Clearview’s indiscriminate scraping of publicly accessible websites was an inappropriate manner of collection.120 Accordingly, Clearview AI was found to be in breach of PIPEDA.

The insights from the Clearview AI case demonstrate that images scraped from public websites, including social media, remain subject to privacy protection despite being posted online.121 This aligns with contextual integrity, where the issue is “not whether the information is private or public, gathered from public or private settings,”122 but rather whether the contextual norms of informational flow have been respected. Here, the OPC also appears to interpret the reasonable expectation of privacy as extending to the flow of information. The OPC determined that individuals who posted their images online had no reasonable expectation that the images would be used for identification purposes.123 When information intended for one purpose (posting an image to share with friends and family) is used for a different and inappropriate purpose (collecting the images to compile a FR database), contextual integrity is violated.

In 2020, the OPC released the findings of a joint investigation into Cadillac Fairview Corporation Limited, a large owner of shopping malls in Canada.124 The investigation was launched in response to concerns that Cadillac Fairview was using facial recognition technology in its in-mall directories to collect and use shoppers’ personal information without adequate consent.125 The technology took temporary images of any individual in view of a directory, used facial recognition software to convert the images into biometric templates, and used that information to assess age and gender.126 The OPC held that if Cadillac Fairview wanted to continue using the technology, it must obtain meaningful, express opt-in consent, explaining the privacy implications of the technology in a comprehensible manner.127

The OPC’s analysis in the Cadillac Fairview case echoes the Supreme Court of Canada’s articulation in Jarvis, where it was held that “privacy is not an all-or-nothing concept, and being in a public or semi-public space does not automatically negate all expectations of privacy.”128 These findings echo Professor Nissenbaum’s argument that the distinction between private and public information and spaces is no longer salient.129 The OPC agreed that although the shoppers were in a mall setting and subject to CCTV recording, the practice still gave rise to a privacy breach under PIPEDA. However, the difference in this case was that the OPC held that express opt-in consent was required, and thus that the information flow would be appropriate if consent were provided.130

With respect to the role of consent, the Canadian interpretation of privacy diverges from contextual integrity. While the OPC found that Cadillac Fairview’s use of the biometric data was not for identification purposes, it was still noted that the “possession of a facial recognition template can allow for identification of an individual through comparison against a vast array of images readily available on the internet or via surreptitious surveillance.”131 Based on the retention of FR templates and their potential use for identification, contextual integrity may have found this practice to breach the contextual norms, notwithstanding consent. In the following section, the shortcomings of PIPEDA’s consent-based model are explored further.

1.3.2 No legislation is immune from criticism

PIPEDA falls short of aligning with contextual integrity because it is ultimately grounded in a consent-based model.132 As articulated by Zeynep Tufekci, “[d]ata privacy is not like a consumer good, where you click ‘I accept’ and all is well. Data privacy is more like air quality or safe drinking water, a public good that cannot be effectively regulated by trusting in the wisdom of millions of individual choices.”133 Contextual integrity rejects the idea that compliance with formal procedural requirements, including informed consent, guarantees the absence of a privacy breach.134 Instead, the information flow must adhere to established contextual norms for the collection to be considered privacy-compliant. This requirement holds regardless of whether consent was provided. As Nissenbaum argues, the notice-and-consent model is inadequate for harms arising online.135

For clarity, contextual integrity, though not a perfect match, is still compatible with existing conceptions of privacy in Canada, because PIPEDA still requires the collection, use, or disclosure of information for purposes that a reasonable person would consider appropriate in the circumstances, in addition to any consent requirements.136

Bill C-27, also referred to as Canada’s Digital Charter Implementation Act,137 is set to replace PIPEDA.138 Bill C-27 will encompass the Consumer Privacy Protection Act (CPPA), the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act (AIDA). Under the CPPA in Bill C-27, the collection, use or disclosure of information must be done in a manner and for a purpose that a reasonable person would consider to be appropriate in the circumstances, whether or not consent is required under the Act.139 The CPPA explicitly clarifies that privacy can still be breached where consent was obtained or was not required if the purpose or manner of the collection, use, or disclosure is considered inappropriate to a reasonable person.140 The explicit recognition of the requirement of an appropriate purpose and manner, regardless of consent, indicates that private-sector privacy laws may shift away from the consent-based model and move to a model more aligned with contextual integrity.

Since the adoption of biometrics is in its relative infancy, norms are unsettled, which may make the appropriate purposes (and manner) requirement difficult to predict. Organizations may collect and use biometric information believing that the purpose and manner of collection, use or disclosure is appropriate. The only way to determine what is not appropriate is to wait for a breach to occur—or settle the norms. With Bill C-27 reforming the current privacy landscape, there is an optimal opportunity to provide explicit guidance. Setting these expectations would further contextual integrity by establishing clear, settled norms for a given context. As suggested by Nissenbaum, there exists “plenty of room... to maintain a robust role for informed consent” where there are clear social norms to govern the particular context.141

1.4 Filling the gap

It is time to face the threat of recognition. The OPC has released a list of “no-go” zones, outlining when collecting, using, or disclosing personal information will not be appropriate under PIPEDA.142 These zones establish clear contextual norms that organizations must follow to remain PIPEDA-compliant. Some of the no-go zones include collection, use, or disclosure that is contrary to the law, discriminatory, or causes significant harm.143 However, these zones are vaguely defined and do not account for the risk of harm that may arise when online images are turned into biometric identification data, or when third parties scrape images from online platforms and use them to create FR databases.

1.4.1 Establish the norms through clear “no-go” zones

It remains to be seen which “no-go” zones will emerge under the new Bill C-27. But this is an optimal time to follow in the footsteps of the EU Artificial Intelligence Act (AIA), currently regarded as the “global gold standard.”144 The AIA follows a risk-based approach, with provisional agreements in place to ban unacceptable, privacy-invasive practices.145 Among others, the following two practices will be banned under the AIA:

  1. “Post-remote” biometric identification systems, with an exception for law enforcement use “for the prosecution of serious crimes and only after judicial authorization.”146
  2. Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.147

“Post-remote” biometric identification systems will include “pictures or video footage generated by closed-circuit television cameras or private devices, which predates the system’s use concerning the individuals involved.”148 Prohibiting this practice in Canada is imperative. Facebook’s “tag suggest” feature permitted the platform to aggregate personal information and create a massive identification FR database back in 2011. Canada must ensure that no other social media company or private-sector entity attempts to do the same as the technology grows more sophisticated. If the same ban from the AIA were adopted in Canada, it would also narrow law enforcement’s use of FR. PIPEDA does not currently apply to law enforcement, but it remains to be seen whether this will differ under Bill C-27. Restricting law enforcement’s use of identification FR to cases with judicial authorization would significantly address the concerns around exploitative use of FR by law enforcement and the accuracy concerns briefly touched on in Part II.

The AIA also explicitly forbids data scraping for the purpose of creating FR databases. While the OPC previously determined that data scraping contravenes PIPEDA, implementing a specific ban is crucial.149 Detection of a violation may occur only after the harm has transpired and the breach has already occurred. There should be no room for ambiguity. Implementing these bans is a proactive measure, which would also ensure that private-sector companies do not attempt to seek consent for improper practices, addressing the concern that new iterations of FR may result in passive acceptance of extremely invasive identification practices.

1.5 Concluding thoughts

This research has explored the distinction between identification and verification FR, emphasizing the potential consequences of passive acceptance of increasingly privacy-invasive technologies. While the difference between identification and verification FR may seem subtle, the capabilities of identification FR databases in the private sector vary greatly and may present a grave threat to privacy in some cases.

The application of Helen Nissenbaum’s theory of contextual integrity demonstrates that turning images into identifiable biometric information, whether done through data-scraping practices or by social media companies, violates the norms of information flow and, thus, cannot be rendered appropriate with consent. The alignment of this theory with Canadian privacy conceptions underscores the inadequacy of the current consent-based model in addressing the challenges posed by evolving FR technologies, demonstrating the need to set explicit norms.

In response to these concerns, this research advocates for proactive measures, proposing two specific prohibitions in line with the EU Artificial Intelligence Act. The capabilities of new identification FR systems must be restrained through legislative action. A ban on identification biometric systems derived from existing photos or videos would prevent the transformation of personal images into biometric data. A prohibition on data scraping from social media and CCTV to create FR databases would resolve ambiguity, ensuring that images posted online are not used to create biometric data.

While these measures may not resolve all of the concerns arising from the capabilities of emerging technology, they are, at the very least, a step in the right direction. And in the current era of social media, where faces comprise extensive image databases, the threat of recognition is greater than ever.


Ambreena Ladhani recently completed her second year at Queen’s University, Faculty of Law. With a keen interest in intellectual property and technology, she is passionate about using her legal education to work with innovative emerging and high-growth companies. Ambreena’s passion for Privacy Law has been demonstrated through her academic coursework, and her co-authorship of a published article that examines the civil liability gap within existing privacy torts, specifically in the context of “deep fakes.” Additionally, Ambreena was selected as a Project Lead for a PBSC project entitled “OpenJustice,” where she led a team of 11 students in training and refining a generative AI platform aimed at enhancing access to justice. She was awarded the PBSC Chief Justice Richard Wagner Award in recognition of her contributions to the Queen’s Chapter. This summer, Ambreena will be working at a corporate law firm in New York, specializing in technology, artificial intelligence, and innovation. In her free time, Ambreena enjoys working out, collecting sneakers, and travelling.

Bibliography

Legislation

Bill C-27, Digital Charter Implementation Act, 1st Sess, 44th Parl, 2022 (second reading 24 April 2023)

Illinois Biometric Information Privacy Act 2008, 740 ILCS 14

Personal Information Protection and Electronic Documents Act S.C. 2000, c. 5.

Privacy Act R.S.C., 1985, c. P-21

Jurisprudence

Eastmond v. Canadian Pacific Railway, 2004 FC 852

Re Facebook Biometric Info. Privacy Litig., 185 F. Supp. 3d 1155, 1158 (N.D. Cal. 2016).

R. v. Jarvis, 2019 SCC 10

PIPEDA Findings No. 2020-004 (28 October 2020)

PIPEDA Findings No. 2021-001 (2 February 2021)

Government documents

Canada, Office of the Privacy Commissioner of Canada, Automated Facial Recognition in the Public and Private Sectors, (Canada: OPC, March 2013) online (PDF):<https://www.priv.gc.ca/en/opc-actions-and-decisions/research/explore-privacy-research/2013/fr_201303/>

Canada, Office of the Privacy Commissioner of Canada, Summary of Privacy Laws in Canada (Canada: OPC, January 2018)

Canada, Office of the Privacy Commissioner of Canada, Guidance on inappropriate data practices: Interpretation and application of subsection 5(3) (Canada: OPC, May 2018)

Canada, Office of the Privacy Commissioner of Canada, Data at Your Fingertips: Biometrics and the Challenges to Privacy (Canada: OPC, 1 March 2022).

European Parliament, Press Release “Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI” (9 December 2023) online: < https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai>

House of Commons, Facial Recognition Technology and the Growing Power of Artificial Intelligence: Report of the Standing Committee on Access to Information, Privacy and Ethics 44-1 (October 2022) (Chair: Pat Kelly).

Secondary material: Articles

Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC), “Facial Recognition at a Crossroads: Transformation at our Borders & Beyond” (September 2020).

Grother, Patrick et al., “Face recognition vendor test part 3: Demographic effects” (2019) NIST IR 8280. https://doi.org/10.6028/NIST.IR.8280

Maghiros, Ioannis et al. “Biometrics at the frontiers.” (2005) Technical report EUR 21585 EN. Seville: Institute for Prospective Technological Studies.

Malkin, Nathan, “Contextual Integrity, Explained: A More Usable Privacy Definition” IEEE Security & Privacy, (2023) 21:1 pp. 58-65 doi: 10.1109/MSEC.2022.3201585 at 64.

Nissenbaum, Helen, “Privacy as Contextual Integrity,” (2004) Symposium, 79:105 Wash. L. Rev. 119.

Nissenbaum, Helen “A Contextual Approach to Privacy Online,” (2011) Daedalus 2011 140(4):32-48.

Nissenbaum, Helen “Contextual integrity up and down the data food chain,” (2019) Theor. Inquiries Law, 20:1 doi: 10.1515/til-2019-0008

Norval, Aletta and Prasopoulou, Elpida, “Public faces? A critical exploration of the diffusion of face recognition technologies in online social networks” (2017) New media & Society 19:4, 637–654.

Welinder, Yana, “A FACE TELLS MORE THAN A THOUSAND POSTS: DEVELOPING FACE RECOGNITION PRIVACY IN SOCIAL NETWORKS” (2012) 26:1 Harvard LJ.

Secondary material: Online

Allegretti, Matt, “Facial Recognition Technology Is Turning Heads in Advertising” Medium, (3 October 2017), online: <medium.com/dumbstruck/facial-recognitiontechnology-is-turning-heads-in-advertising-3f932c64f21e>

Bennett, Jared, “Facebook, your face belongs to us” The Daily Beast (31 July 2017), online: <https://www.thedailybeast.com/how-facebook-fights-to-stop-laws-on-facial-recognition>

Canada Border Security Agency, “Faces on the move: Multi-Camera Screening” The Globe and Mail (14 January 2016) online (pdf): <https://www.theglobeandmail.com/files/editorial/News/0627-nw-na-facial-recognition/CBSA_FOTM_PIA.pdf>

Canada, Office of the Privacy Commissioner of Canada, PIPEDA Fair Information Principle 3 – Consent (August 2020) online: <https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/p_principle/principles/p_consent/>

Canada, Office of the Privacy Commissioner of Canada, “Joint Statement on data scraping and the protection of privacy” (24 August 2023) online: < https://www.priv.gc.ca/en/opc-news/speeches/2023/js-dc_20230824/>

Choi, Tyler, “Explainer: Verification vs. Identification Systems,” (19 January 2022) online: <https://www.biometricupdate.com/201206/explainer-verification-vs-identification-systems>

Cotter, Adam, Canadian Centre for Justice and Community Safety Statistics, “Perceptions of and experiences with police and the justice system among the Black and Indigenous populations in Canada” (16 February 2022).

Facebook, “You may have noticed a box appearing on the right of your home page called Photos are better with friends...” (30 June 2011), online: <https://www.facebook.com/facebook/posts/245406335484993> [https://perma.cc/W8QB-QV9M].

Forrest, Maura, “RCMP’s use of facial recognition extends well beyond Clearview AI” (4 October 2022) online: <https://www.politico.com/news/2022/09/30/rcmps-facial-recognition-clearview-ai-00059639>

Gillis, Wendy and Allen, Kate, “OPC confirms use of controversial facial recognition tool Clearview AI,” Toronto Star, (1 March 2020) online: <https://www.thestar.com/news/canada/2020/03/01/opp-confirms-use-of-controversial-facial-recognition-tool-clearview-ai.html>

Government of Canada, “Facial Verification at the Border” (5 May 2021) online: <https://www.publicsafety.gc.ca/cnt/trnsprnc/brfng-mtrls/prlmntry-bndrs/20210907/13-en.aspx>

Government of Canada, “Facial Recognition” Public Safety Canada (3 July 2020) online: <https://www.publicsafety.gc.ca/cnt/trnsprnc/brfng-mtrls/prlmntrybndrs/20200708/009/index-en.aspx>

Hassanin, Nada, “Law professor explores racial bias implications in facial recognition technology” UCalgary News (23 August 2023) online: < https://ucalgary.ca/news/law-professor-explores-racial-bias-implications-facial-recognition-technology>

Jiang, Kevin, “Instagram debuts age recognition AI in Canada. Some experts are skeptical” The Toronto Star, (3 March 2023), online: <https://www.thestar.com/business/instagram-debuts-age-recognition-ai-in-canada-some-experts-are-skeptical/article_aeb8774e-2e2e-56d6-a09b-352c328d0523.html>

Lazzarotti, Joseph and Abrahams, Nadine, “Illinois Biometric Information Privacy Act FAQs” online (PDF):<https://www.jacksonlewis.com/sites/default/files/docs/IllinoisBIPAFAQs.pdf>

McCrea, Ollie, “Facial Recognition vs. FaceID?” (6 September 2023) online: <https://skybiometry.com/facial-recognition-vs-face-id/>

Mayhew, Stephen, “Gadspot to Sell Security Cameras with Facial Recognition in North America” Biometric Update (10 October 2012), online:<www.biometricupdate.com/ 201210/gadspot-to-sell-security-cameras-with-facial-recognition-in-north-america>.

NEC New Zealand, “Face Detection vs Facial Recognition - what’s the difference?” Publications & Media (1 June 2022), online: <https://www.nec.co.nz/market-leadership/publications-media/face-detection-vs-facial-recognition-whats-the-difference/>

Schroder, Anna, “‘Real-time’ versus ‘post’ remote biometric identification systems under the AI Act” (14 October 2022) AI Forum, online: <https://alti.amsterdam/schroder-biometric/>

Sullivan, Heather, “Smart TV offers facial recognition” NBC (26 March 2012), online: <www.nbc12.com/story/17257510/smart-tvs-offer-facial-recognition/>

Tufekci, Zeynep, “The Latest Data Privacy Debacle” The New York Times (30 January 2018), online: <www.nytimes.com/2018/01/30/opinion/strava-privacy.html>.

Visser, Broderick, “Government facial recognition project tested on nearly 3 million unsuspecting travellers at Pearson airport in 2016” (20 July 2021) online: <https://divergemedia.ca/2021/07/21/government-facial-recognition-project-tested-on-nearly-3-million-unsuspecting-travellers-at-pearson-airport-in-2016/>

End Notes


1 Ollie McCrea, “Facial Recognition vs. FaceID?” (6 September 2023) online: <https://skybiometry.com/facial-recognition-vs-face-id/>

2 House of Commons, Facial Recognition Technology and the Growing Power of Artificial Intelligence: Report of the Standing Committee on Access to Information, Privacy and Ethics 44-1 (October 2022) (Chair: Pat Kelly).

3 NEC New Zealand, “Face Detection vs Facial Recognition - what’s the difference?” Publications & Media (1 June 2022), online: <https://www.nec.co.nz/market-leadership/publications-media/face-detection-vs-facial-recognition-whats-the-difference/>

4 Canada, Office of the Privacy Commissioner of Canada, Data at Your Fingertips: Biometrics and the Challenges to Privacy (Canada: OPC, 1 March 2022). [Data at Your Fingertips]

5 Tyler Choi, “Explainer: Verification vs. Identification Systems”, (19 January 2022) online: <https://www.biometricupdate.com/201206/explainer-verification-vs-identification-systems>

6 Ibid.

7 Data at Your Fingertips, supra note 4.

8 Ibid.

9 McCrea, supra note 1.

10 Government of Canada, “Facial Verification at the Border” TB/CBSA (5 May 2021) online: <https://www.publicsafety.gc.ca/cnt/trnsprnc/brfng-mtrls/prlmntry-bndrs/20210907/13-en.aspx> [CBSA Facial Verification]

11 Choi, supra note 5.

12 Data at Your Fingertips, supra note 4.

13 Ibid.

14 Canada, Office of the Privacy Commissioner of Canada, Automated Facial Recognition in the Public and Private Sectors, (Canada: OPC, March 2013) online (PDF): <https://www.priv.gc.ca/en/opc-actions-and-decisions/research/explore-privacy-research/2013/fr_201303/> [OPC 2013]

15 Government of Canada, “Facial Recognition” Public Safety Canada (3 July 2020) online:<https://www.publicsafety.gc.ca/cnt/trnsprnc/brfng-mtrls/prlmntry-bndrs/20200708/009/index-en.aspx>

16 Broderick Visser, “Government facial recognition project tested on nearly 3 million unsuspecting travellers at Pearson airport in 2016”, (20 July 2021) online: <https://divergemedia.ca/2021/07/21/government-facial-recognition-project-tested-on-nearly-3-million-unsuspecting-travellers-at-pearson-airport-in-2016/>

17 Canada Border Security Agency, “Faces on the move: Multi-Camera Screening" The Globe and Mail (14 January 2016)

18 Ibid.

19 Visser, supra note 16.

20 CBSA Facial Verification, supra note 10.

21 Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC), Facial Recognition at a Crossroads: Transformation at our Borders & Beyond, September 2020. [CIPPIC]

22 Ibid.

23 Ioannis Maghiros et al. “Biometrics at the frontiers.” (2005) Technical report EUR 21585 EN. Seville: Inst. for Prospective Tech Studies.

24 OPC 2013, supra note 14.

25 Stephen Mayhew, “Gadspot to Sell Security Cameras with Facial Recognition in North America” Biometric Update (10 October 2012), online:<www.biometricupdate.com/ 201210/gadspot-to-sell-security-cameras-with-facial-recognition-in-north-america>.

26 Heather Sullivan, “Smart TV offers facial recognition” NBC (26 March 2012), online: <www.nbc12.com/story/17257510/smart-tvs-offer-facial-recognition/>.

27 Matt Allegretti, “Facial Recognition Technology Is Turning Heads in Advertising” Medium (3 October 2017), online: <medium.com/dumbstruck/facial-recognitiontechnology-is-turning-heads-in-advertising-3f932c64f21e>

28 Mayhew, supra note 25.

29 Maghiros et al, supra note 23.

30 OPC 2013, supra note 14 at 12.

31 CIPPIC, supra note 21.

32 Kevin Jiang, “Instagram debuts age recognition AI in Canada. Some experts are skeptical” The Toronto Star, (3 March 2023), online: <https://www.thestar.com/business/instagram-debuts-age-recognition-ai-in-canada-some-experts-are-skeptical/article_aeb8774e-2e2e-56d6-a09b-352c328d0523.html>

33 Jared Bennett, “Facebook, your face belongs to us” The Daily Beast (31 July 2017), online: <https://www.thedailybeast.com/how-facebook-fights-to-stop-laws-on-facial-recognition>

34 Aletta Norval and Elpida Prasopoulou, “Public faces? A critical exploration of the diffusion of face recognition technologies in online social networks” (2017) New media & Society 19:4 at 648.

35 Ibid at 648.

36 Ibid at 639.

37 Ibid at 643.

38 Yana Welinder, “A FACE TELLS MORE THAN A THOUSAND POSTS: DEVELOPING FACE RECOGNITION PRIVACY IN SOCIAL NETWORKS” (2012) 26:1 Harvard LJ.

39 Facebook, “You may have noticed a box appearing on the right of your home page called Photos are better with friends...” (30 June 2011), online: <https://www.facebook.com/facebook/posts/245406335484993> [https://perma.cc/W8QB-QV9M]. [Facebook Post]

40 Ibid.

41 Ibid.

42 Ibid [emphasis added].

43 OPC 2013, supra note 14 at 8.

44 Ibid.

45 Bennett, supra note 33.

46 Ibid.

47 Facebook Post, supra note 39.

48 Re Facebook Biometric Info. Privacy Litig., 185 F. Supp. 3d 1155, 1158 (N.D. Cal. 2016).

49 Illinois Biometric Information Privacy Act 2008, 740 ILCS 14

50 Joseph Lazzarotti and Nadine Abrahams, “Illinois Biometric Information Privacy Act FAQs” online (PDF): <https://www.jacksonlewis.com/sites/default/files/docs/IllinoisBIPAFAQs.pdf>

51 Norval and Prasopoulou, supra note 34 at 645.

52 Ibid.

53 Ibid.

54 Ibid at 644.

55 Ibid at 645.

56 Nathan Malkin, “Contextual Integrity, Explained: A More Usable Privacy Definition” IEEE Security & Privacy, (2023) 21:1 at 64.

57 Helen Nissenbaum, "Privacy as Contextual Integrity" (2004) Symposium, 79:105 Wash. L. Rev. 119.

58 Malkin, supra note 56 at 3.

59 Ibid.

60 Nissenbaum 2004, supra note 57 at 140.

61 Ibid at 151.

62 Malkin, supra note 56 at 6.

63 Ibid at 5.

64 Nissenbaum 2004, supra note 57 at 138.

65 Helen Nissenbaum, “Contextual integrity up and down the data food chain” (2019) Theor. Inquiries Law, 20:1 doi: 10.1515/til-2019-0008 at 224.

66 Nissenbaum 2004, supra note 57 at 137.

67 Ibid.

68 Malkin, supra note 56 at 4.

69 Nissenbaum 2004, supra note 57 at 145.

70 Malkin, supra note 56 at 5.

71 Nissenbaum 2004, supra note 57 at 140.

72 Malkin, supra note 56 at 5.

73 Ibid at 5.

74 Ibid.

75 Ibid.

76 Ibid.

77 Nissenbaum 2004, supra note 57 at 145.

78 Malkin, supra note 56 at 6.

79 Nissenbaum 2004, supra note 57 at 145.

80 Malkin, supra note 56 at 6.

81 Ibid.

82 Ibid.

83 Ibid.

84 Welinder, supra note 38.

85 Ibid at 184.

86 Ibid at 186.

87 Ibid.

88 Ibid.

89 Ibid.

90 Ibid at 186.

91 Ibid at 187.

92 Norval and Prasopoulou, supra note 34 at 645.

93 Office of the Privacy Commissioner of Canada, “Joint Statement on data scraping and the protection of privacy” (24 August 2023) online: < https://www.priv.gc.ca/en/opc-news/speeches/2023/js-dc_20230824/>

94 Ibid.

95 PIPEDA Findings No. 2021-001 (2 February 2021) [Clearview AI]

96 Ibid.

97 Ibid.

98 Ibid.

99 Ibid.

100 Wendy Gillis and Kate Allen, “OPC confirms use of controversial facial recognition tool Clearview AI,” (1 March 2020) Toronto Star, online: <https://www.thestar.com/news/canada/2020/03/01/opp-confirms-use-of-controversial-facial-recognition-tool-clearview-ai.html>

101 Maura Forrest, “RCMP’s use of facial recognition extends well beyond Clearview AI” (4 October 2022)

102 Nada Hassanin, “Law professor explores racial bias implications in facial recognition technology” UCalgary News (23 August 2023) online: <https://ucalgary.ca/news/law-professor-explores-racial-bias-implications-facial-recognition-technology>

103 Ibid.

104 Patrick Grother et al., “Face recognition vendor test part 3: Demographic effects” (2019) NIST IR 8280. https://doi.org/10.6028/NIST.IR.8280 

105 Ibid at 2.

106 Adam Cotter, Canadian Centre for Justice and Community Safety Statistics, “Perceptions of and experiences with police and the justice system among the Black and Indigenous populations in Canada” (16 February 2022).

107 Ibid.

108 Malkin, supra note 56.

109 Privacy Act R.S.C., 1985, c. P-21

110 Personal Information Protection and Electronic Documents Act S.C. 2000, c. 5. [PIPEDA]

111 Office of the Privacy Commissioner of Canada, “Summary of Privacy Laws in Canada” (January 2018) online: <https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/02_05_d_15/#heading-0-0-2-2-2>

112 R. v. Jarvis, 2019 SCC 10 [Jarvis]

113 Ibid at para 60.

114 Eastmond v. Canadian Pacific Railway, 2004 FC 852 [Eastmond]

115 Ibid at 129.

116 Clearview AI, supra note 95 (“Biometric information is considered sensitive, in almost all circumstances, and facial recognition data is particularly sensitive...individuals who posted their images online...had no reasonable expectations that Clearview would collect, use and disclose their images for identification purposes.”)

117 Ibid.

118 Ibid.

119 Ibid.

120 Ibid.

121 Ibid.

122 Nissenbaum 2004, supra note 57 at 152.

123 Clearview AI, supra note 95.

124 PIPEDA Findings No. 2020-004 (28 October 2020) [Cadillac Fairview]

125 Ibid.

126 Cadillac Fairview, supra note 124.

127 Ibid.

128 Jarvis, supra note 112 at para 41.

129 Nissenbaum 2004, supra note 57 at 152.

130 Cadillac Fairview, supra note 124.

131 Cadillac Fairview, supra note 124.

132 Canada, Office of the Privacy Commissioner of Canada, “PIPEDA Fair Information Principle 3 – Consent” (August 2020) online: <https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/p_principle/principles/p_consent/>

133 Zeynep Tufekci, “The Latest Data Privacy Debacle” The New York Times (30 January 2018), online:<www.nytimes.com/2018/01/30/opinion/strava-privacy.html>.

134 Malkin, supra note 56 at 4.

135 Helen Nissenbaum, “A Contextual Approach to Privacy Online”, (2011) Daedalus 2011 140(4) at 34.

136 PIPEDA, supra note 110, s.5(3).

137 Bill C-27, Digital Charter Implementation Act, 1st Sess, 44th Parl, 2022 (second reading 24 April 2023) [Bill C-27]

138 PIPEDA, supra note 110.

139 Bill C-27, supra note 137, s.12(1).

140 Ibid.

141 Nissenbaum 2011, supra note 135 at 45.

142 Office of the Privacy Commissioner of Canada, “Guidance on inappropriate data practices: Interpretation and application of subsection 5(3)” (May 2018) online:< https://www.priv.gc.ca/en/privacy-topics/collecting-personal-information/consent/gd_53_201805/>

143 Ibid.

144 European Parliament, Press Release “Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI” (9 December 2023) online: < https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai>

145 Ibid.

146 Ibid.

147 Ibid.

148 Anna Schroder, “‘Real-time’ versus ‘post’ remote biometric identification systems under the AI Act” (14 October 2022) AI Forum, online: <https://alti.amsterdam/schroder-biometric/>

149 Clearview AI, supra note 95.