The Surveillance Architecture of Long Beach: A Decade of LBPD Facial Recognition Technology Use with Inadequate Policy, Oversight, and Transparency (Full Report)


This is the longer version of a report we have also published as an abridged article; that version is about one-third the length. Because this is a complex topic that has not been explored in Long Beach, this report is quite detailed and is about a 45-minute read. Those with a personal or professional interest may prefer the added detail in this longer version. Both versions capture the essential facts.

The Long Beach Police Department has used facial recognition technology since 2010, with the department having access to three different investigative facial recognition databases. The database used longest and most often is the government-run Los Angeles County Regional Identification System (LACRIS). The other two systems, Vigilant Solutions' FaceSearch and Clearview AI, were provided by the companies on a free-trial basis. The Vigilant Solutions free trial lasted two and a half years and gave 28 officers access. LACRIS requires each user to be trained by the Los Angeles Sheriff's Department, but the two free trials of the commercial systems involved little training or oversight, according to police documents obtained via a public records request.

Until September 2020, there was no departmental policy to guide the use of facial recognition. The new policy guidance, which was enacted through a watch report, appears to be a placeholder that was drafted in response to inquiries made for this investigation. The LBPD never took the LACRIS-recommended step of adopting a local use policy, despite LACRIS even providing a template.

The LBPD began using LACRIS on Jan. 13, 2010, and 38 users have conducted 3,999 searches on the system. The LBPD used Vigilant Solutions' FaceSearch from April 17, 2018 to Sept. 28, 2020, according to a FaceSearch inquiry log produced by the LBPD under the Public Records Act (PRA). An accompanying statement noted the program is "no longer in use and not authorized for use at this time." FaceSearch was used 290 times total by 26 users during the multi-year free trial of the system.

While the LBPD used Vigilant Solutions' FaceSearch for two and a half years before use was curtailed last month, the first training on the program appears to have occurred just days before the department stopped using it. While many previous PRA requests for facial recognition records failed to return any training documents or materials, a response received by CheckLBPD on Oct. 22, 2020 produced a Sept. 24, 2020 hour-long training presentation made to the LBPD by Vigilant Solutions' Customer Success Manager (and retired LBPD Lieutenant) Chris Morgan.

Vigilant Solutions' FaceSearch LBPD Training Session conducted by Chris Morgan, Vigilant Solutions Customer Success Manager and former LBPD Lieutenant on Sept. 24, 2020

It is unknown what triggered the FaceSearch training session, but it did follow a series of detailed PRA requests made for this investigation regarding FaceSearch training and record keeping. In the presentation, Morgan mentions the current lack of LBPD policy on facial recognition but states, "we are working with the folks to get that dialed-in." From the context, "we" would be Vigilant Solutions and "the folks" would be the LBPD top brass. Morgan said he expected the LBPD to issue a special order on facial recognition soon with guidance on how to use the technology.

RECENT LBPD WATCH REPORTS AND SPECIAL ORDERS

This is the first statement anyone has made publicly about a potential LBPD special order on facial recognition technology. As will be discussed at the end of this report, Long Beach has many possible avenues it can explore for regulating this technology. Many California cities and counties have addressed police use of facial recognition technology through the legislative process—with public input and debate allowed. This only seems fair, as the technology has the power to change privacy as we know it, transform the criminal justice system, and end the ability to exist anonymously in public.

Days after Morgan's presentation, the department issued a Sept. 29 Watch Report with the subject line "Use of Facial Recognition Programs," suspending the department's use of free trials of FaceSearch, Clearview AI, and any facial recognition system besides LACRIS. It is unknown if a special order has been issued or is being drafted; however, the collection of training documents and special orders posted by the LBPD to comply with S.B. 978 does not yet include a special order related to facial recognition as of Nov. 13, 2020.

The department has issued a special order on another topic that has also been the subject of recent CheckLBPD PRA requests. As part of our investigation into the Surveillance Architecture of Long Beach, we have been requesting documents on LBPD cellular communication interception technology (IMSI-catchers, commonly called Stingrays™). The LBPD began the purchase process for its IMSI-catcher in late 2013, and its ownership was reported on by the Electronic Frontier Foundation (EFF) and the L.A. Times in 2016 and 2017, respectively—but it has not been a topic of local press coverage.

Since CheckLBPD began issuing PRA requests on cell communications interception, the department has issued an Oct. 21, 2020 memorandum that distributes a Sept. 10, 2020 Special Order on Cellular Communications Interception Technology. Both documents are signed by Chief Robert Luna. The Oct. 21 memorandum states that the special order will stay in effect until the LBPD Manual is updated and that it supersedes all previous special orders on the topic.

Most LBPD special orders are available to the public on the department's website, in a collection of training and policy documents posted to comply with S.B. 978. However, you will not find the previous special order on cellular communication interception there, even though the department drafted such an order in 2016 to comply with the California Electronic Communications Privacy Act of 2015. CheckLBPD was only able to find the older order because of research done by the EFF in 2016 to gauge compliance with the act.

Neither the facial recognition watch report nor the cellular interception special order appears on the department's S.B. 978 page as of Nov. 13, 2020. It is currently unknown whether an LBPD special order on facial recognition is being drafted, as stated in Chris Morgan's FaceSearch training session for the LBPD.

In that presentation, Morgan spends about half of his time addressing what he terms "myths and falsehoods" promoted by "privacy advocates" such as the Electronic Frontier Foundation (EFF). Unfortunately for those attending, many of the arguments Morgan made were not true or were based on a misunderstanding of the point he was responding to. For instance, Morgan repeatedly claims there are no "good guys" in facial recognition databases and treats concerns about children being in these databases as if they were an absurdity.

Morgan's belief seems to come from his knowledge of his own company's practice of using only mug shots and criminal offender photos to populate its database. However, this article will show that at this point, most Americans (bad guys or not, including many children) are in facial recognition databases previously used by the LBPD—making it an issue that should not be dismissed and that should be addressed in any new LBPD policy. Morgan's response to the claims related to racial bias and potential constitutional violations was based on a misunderstanding of the claims and studies he purported to debunk, as will be discussed below in the context of the studies themselves.

Many of the claims Morgan made are incorrect because of companies like Clearview AI. The same LBPD PRA response that contained the use figures for LACRIS and FaceSearch states that the timeframe of use for Clearview AI is "unknown" and the number of inquiries is "not available and is not tracked." Regarding the number of users, the LBPD states the "program is not in use and is not a company we ever had a contract with so we do not have access to how many people have ever used the program."

However, a CheckLBPD review of emails exchanged with the company, produced under a separate PRA request, shows eighteen LBPD users created accounts—with most having access for two months ending in February 2020 and one user having an account for eight months.

Since our investigation into the LBPD's use of Clearview AI began (including unsuccessful PRA requests for user data or audit reports related to the LBPD's use of the program), the company announced on Oct. 21, 2020 that it has added training and compliance features to its product. One new feature requires police to enter a case number and type of crime for each search conducted. While a step in the right direction, it does indicate that use up to this point (including by the LBPD and 2,400 other law enforcement agencies) was done without such compliance. A follow-up PRA request is pending to see what use records can be obtained (or have been requested) from Clearview AI.

In future writings on the subject, CheckLBPD will take deep dives into each of the LBPD's three facial recognition systems (LACRIS, FaceSearch, and Clearview AI). By then, we hope to have obtained Clearview AI use figures, if not from the LBPD, then from the L.A. Sheriffs directly. There is also another possible path to obtaining Clearview AI data if the public records system does not pan out: hacktivists were able to obtain a list of Clearview AI clients and search totals—with many surprising private and foreign clients.

THE RAW NUMBERS ON LBPD FACIAL RECOGNITION USE

Facial recognition has been in use by the LBPD for over a decade, with its use dramatically increasing in the last two years. The total number of facial recognition searches never exceeded 200 per year through 2018 (even when two systems were in use). That number jumped to over 600 in 2019 and to over 2,800 searches in 2020. The vast majority of these searches were done using LACRIS, with FaceSearch being used less than one-tenth as often.

Currently, only aggregate data by year has been produced regarding LACRIS use. For FaceSearch, a more detailed log of the dates and times was produced. Both systems have much more extensive record-keeping and audit capabilities. LACRIS keeps particularly detailed records to comply with the FBI's Criminal Justice Information Services security policy, the California Attorney General Information Bulletin 13-04-CJIS, and sections of the California Penal Code regarding Criminal Offender Record Information (CORI).

LBPD Use of LACRIS Facial Recognition Program
LBPD Vigilant Solutions FaceSearch Use

As interesting as the above figures are, what might be revealed in future PRA requests is even more interesting. The 100-plus LBPD FaceSearch searches that occurred following the May 31 protests and looting seem likely to be related to the LBPD's Looter Task Force, but the 2,688 LACRIS searches in 2020 are not as easily explained. Knowing the dates of the searches would help clear this up, but so far the LBPD has only produced aggregate use figures by year for LACRIS.

That is over 2,000 more searches than the previous year, which itself was abnormally high. These searches cannot be easily explained away as searches of potential looters, since the highest estimate of the number of potential looters made by the LBPD was 300, with 2,000 closer to the total number of protestors. Indiscriminately running facial recognition searches on protestors would be a violation of the 1st Amendment and is forbidden by many police departments with facial recognition policies, including those that have adopted the local template promoted by LACRIS. The LBPD has not adopted that template or created a similar policy, meaning Long Beach has no departmental-level facial recognition use policy to protect the civil rights and liberties of the people of Long Beach. This flawed technology is banned in many jurisdictions, and yet the LBPD has used it for a decade without proper policy guidance for officers.

THE LBPD FACIAL RECOGNITION WATCH REPORT

Since CheckLBPD.org began making PRA and Media Relations requests on the LBPD's facial recognition programs, the department has issued a watch report creating a new policy on free trials of investigative software and databases. The Sept. 29 watch report, from Investigations Bureau Deputy Chief Erik Herzog, specifies that LACRIS is the only facial recognition database currently authorized for use in investigations and the other two systems "are not yet authorized and should only be used for demonstration / evaluation purposes."

Though the watch report prevents unauthorized use from happening in the future, it does not acknowledge that it may have occurred. It portrays the free trials as evaluations of the programs conducted in conjunction with research into "best practices related to policy and procedure." In that context, the watch report states, "prior to adopting any new technology, we must have clear guidelines and policies in place to govern the use. Otherwise, we risk losing access to the technology."

The critical part of the watch report states, "[t]o have a better understanding and to ensure we are maintaining records as required by law, effective immediately any employee initiating a trial of new software they intend to utilize to assist with an investigation must first obtain the permission of their division commander." [Bold in Original]

This is a significant policy change—as previously, individual officers were able to initiate free trials. The emails with Clearview AI show some confusion over who had started the free trial, when it began, and how any decision regarding purchase would be made. No one above the rank of sergeant was involved in the evaluation of the software, according to the documents produced so far.

The documents received from the department do not entirely support the characterization that the commercial facial recognition programs were used only for evaluation purposes—with investigators emailing the company about having success with Clearview AI. The programs had little of the oversight or use-tracking that would be expected if a department were evaluating a new technology. The length of time and pattern of use of Vigilant Solutions' FaceSearch suggests investigators saw it more as an investigative tool than as a new technology under evaluation.

LBPD Watch Report on Facial Recognition
LBPD Watch Report curtailing use of private facial recognition in programs Clearview AI and FaceSearch in favor of LACRIS

We showed the watch report to Mohammad Tajsar, a Senior Staff Attorney at the ACLU of Southern California, who said he thought it "suggests that members of the LBPD have been inappropriately using trial evaluations of facial recognition products to conduct searches of individuals in ongoing investigations, and not properly documenting the searches in internal records."

He added that while "it is not clear what 'maintain records as required by law' refers to," it may be the Brady requirement to disclose exculpatory evidence in criminal trials or "other internal rules regarding investigative reporting requirements, internal affairs investigations, or audits."

Tajsar concluded, "What we can safely assume is that the watch report is necessary to prevent future violations of these reporting requirements. Whether such violations can result in any kind of exposure for the Department, or can materially alter ongoing or past criminal prosecutions, is unclear."

OVERVIEW OF THE LBPD'S THREE FACIAL RECOGNITION SYSTEMS

Although facial recognition technology is not new, media coverage has increased recently as the technology becomes more integral to investigations at all levels of government. It was this recent media coverage that finally exposed the Los Angeles Police Department's facial recognition program, a program that had been under the public's radar for over a decade. In September, the L.A. Times ran a story about the LAPD's use of LACRIS in 30,000 cases since 2009; it was the first coverage of the program.

The LAPD has approximately 9,000 officers, meaning that the department conducted 3.3 searches per officer since 2009. The LBPD has over 800 sworn officers and conducted at least 4,289 searches since January 2010, or about 5.3 per officer over a similar time period. That means the LBPD used the system about 60% more often than the LAPD, adjusted for department size.

38 LBPD officers had access, compared to 300 LAPD officers, which means the LBPD had a higher percentage of officers using facial recognition. However, that does not explain all of the 60%: LBPD officers with LACRIS access were also running more searches each.
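For transparency, here is the arithmetic behind that comparison, worked as a short sketch. It is a back-of-the-envelope calculation using the approximate figures cited above, not official department statistics:

```python
# Back-of-the-envelope arithmetic using the approximate figures cited
# above (LAPD: ~30,000 searches, ~9,000 officers, ~300 users with access;
# LBPD: 4,289 searches, ~800 sworn officers, 38 users with access).
lapd_searches, lapd_officers, lapd_users = 30_000, 9_000, 300
lbpd_searches, lbpd_officers, lbpd_users = 4_289, 800, 38

lapd_rate = lapd_searches / lapd_officers  # ~3.3 searches per officer
lbpd_rate = lbpd_searches / lbpd_officers  # ~5.4 searches per officer
print(f"LBPD rate is {lbpd_rate / lapd_rate - 1:.0%} higher")  # ~60%

# The two contributing factors: the share of officers with access,
# and the number of searches run per user with access.
print(f"Access share: LAPD {lapd_users / lapd_officers:.1%}, "
      f"LBPD {lbpd_users / lbpd_officers:.1%}")
print(f"Searches per user: LAPD {lapd_searches / lapd_users:.0f}, "
      f"LBPD {lbpd_searches / lbpd_users:.0f}")
```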

An LBPD response to an Aug. 16 PRA request produced an estimate from detectives that the LBPD has obtained probable leads in 60 cases using facial recognition technology. The LAPD did not produce a similar estimate. The LBPD logs and records received thus far have not shed any further light on how useful the technology has been for the department, though future releases may illuminate the topic.

The L.A. County Sheriff's Department oversees LACRIS, which serves 64 local police departments in L.A. County. Its facial recognition database draws on booking photos from county and local jails and offers biometric identification for tattoos and fingerprints. LACRIS was created in 2009—with the LBPD using the system since 2010.

Still image from LBPD FaceSearch training presentation conducted by Vigilant Solutions

Almost half of all states share driver's license photos with a federal facial recognition database, but in California, the DMV does not share your photo with government facial recognition databases—according to the Electronic Frontier Foundation's "Who Has Your Face" tool. However, if you have a U.S. Passport, you are in three databases: the Department of Homeland Security's, FBI FACE Services, and a Department of Defense database. Using TSA PreCheck puts you in a Homeland Security database, and applying for an internship or job with the federal government will get you entered into the FBI database. This has been true since at least 2016, despite the repeated claim made in Vigilant Solutions' LBPD training session that only "bad guys" are in facial recognition databases.

FaceSearch™ is made by Vigilant Solutions Inc. (recently acquired by Motorola), which also runs the LBPD's Automated License Plate Reader (ALPR) system. That system scans 24.7 million license plates a year in Long Beach. FaceSearch is a 2014 addition to Vigilant Solutions' bundle of surveillance products. FaceSearch has been able to compile a nationwide gallery of millions of faces from mugshot databases uploaded to its servers by local police departments, as well as photos from CrimeStoppers and other websites related to criminal justice, like Megan's Law or sex offender registries. Morgan stated that the LBPD has not uploaded its mugshot database into FaceSearch, but was hopeful he could create a local gallery of photos for the department in the future.

Still image from LBPD FaceSearch training presentation conducted by Vigilant Solutions

Vigilant Solutions frequently touts its data-sharing and interoperability features on its website as the main benefits of its databases, but compiling databases of individuals' biometric data can have unintended consequences. While the accusations against Vigilant Solutions pale in comparison to those made against Clearview AI, it is still alleged that the company may have made legal missteps in its efforts to assist law enforcement.

In Illinois, Vigilant Solutions is currently being sued in a class-action lawsuit alleging violations of the state's privacy law for retaining in their mugshot database the images of people who were wrongly convicted—despite their convictions being vacated and expunged.

There are currently no California lawsuits over FaceSearch, but our privacy law has only been in effect for eleven months—with Proposition 24 making so-called improvements to the California Consumer Privacy Act (CCPA). The Electronic Frontier Foundation (EFF) and the ACLU, both of which assisted in this investigation, opposed Proposition 24 as weakening California privacy law and being full of loopholes for big business.

While Vigilant Solutions has so far escaped legal action in California, Clearview AI has not—with a class-action lawsuit alleging violations of the CCPA among its four counts.

Clearview AI is a different kind of facial recognition company—not because of its search algorithm, but because of the ethical gray area it inhabits and the invasive database it has created. Clearview AI has data-scraped 3 billion images from public social media posts, regular media, personal blog pages, and sites like meetup.com—sometimes in violation of those sites' terms of service. If you have any online presence, there is a good chance you are already in Clearview AI's database. You can find out if you are in the database and remove yourself from it using the California Consumer Privacy Act, a process to be discussed in more detail in our upcoming article focused on Clearview AI.

Clearview AI database size, from California Class Action Lawsuit, Burke v. Clearview AI

Clearview AI has announced 2,400 law enforcement agencies use their products. Despite claims in January of being focused on law enforcement in the U.S. and Canada, documents released by hacktivists in February show the company secretly had 200 corporations, 50 educational institutions, and 26 foreign governments as clients. The company has also been accused by a U.S. Senator of violating federal laws designed to protect privacy, including laws that specifically protect the privacy of children online.

In May, Clearview AI announced that it had canceled all contracts that were not associated with law enforcement or "some other federal, state, or local government department, office, or agency." In a thus-far-unsuccessful attempt to dismiss a state lawsuit filed by the ACLU of Illinois, the company has also stopped all operations in Illinois (including law enforcement) and has taken steps to remove Illinois residents from its system. An investigation of the company by Canadian privacy protection authorities led it to close down all of its Canadian operations (law enforcement clients included) as of July 2020. Individual California cities have also put in place Clearview AI-specific bans, as has the state of New Jersey.

Clearview AI has other controversies less directly related to its data collection practices; controversies that have caused many past users to try to downplay their relationship with the company—including Trump campaign officials and the White House Tech Office. Clearview AI's CEO and co-founder, and others involved with the company's creation and growth, have extensive, recently uncovered links to alt-right extremists and actual neo-Nazis. People associated with the company have made many racist, anti-semitic, and misogynistic public comments that were only uncovered in August. One described his job as "building algorithms to ID all the illegal immigrants for the deportation squads."

What may have just been a dream back when that statement was made in 2017 recently became a reality: Immigration and Customs Enforcement (ICE) signed a contract with Clearview AI, paying the company $224,000 on Aug. 12 for "Clearview licenses." Clearview AI has also demonstrated an unsettling focus on journalists who have investigated the company and has been credibly accused of using its tools for political opposition research on behalf of conservative candidates.

The LBPD was quoted a price of $2,000 per user per year for Clearview AI, with an option for unlimited use for a "Negotiated Flat Fee." LBPD emails show an officer first created an account in July 2019, though other officers did not start creating accounts until December. In Jan. 2020, Clearview AI told the LBPD the free trial had been ongoing for over a month and it was time to decide whether or not to purchase.

The LBPD negotiated another free month for more officers to try out the system. Fourteen officers, three civilian employees, and a private intelligence analyst from SRA International contracted by the department made accounts with Clearview AI. There was no training required to access the system, though one of the introductory emails contained an attachment of search tips and offered the chance to set up a webinar to "walk thru demos."

A Feb. 13, 2020 email regarding purchasing Clearview AI after the free trial period ended (between Jesse Macias, an LBPD detective assigned to computer crimes; a Clearview AI representative; and the officer in charge of the LBPD Information Technology section) shows the detective was in favor of keeping the service. The detective, who recently won an IMPACT award from the Long Beach City Prosecutor for complex high-tech investigations, stated, "I personally like it, and have heard of others having success……" [Ellipsis in Original]

February 2020 email between LBPD Detective and Clearview AI. Click image for entire LBPD Clearview email collection

From the emails and a statement made in response to a PRA request, it seems the Clearview AI free trial ended in February—though questions about how the program was used remain. The department did not maintain use records for Clearview AI, and it is unclear whether it can request them, or has requested them, from Clearview AI. A PRA request is pending on that matter and for internal communications related to the decision whether to purchase Clearview AI.

Apparently to demonstrate accuracy, Clearview AI's marketing material turned over by the LBPD specifically tells officers to "feel free to run wild with your searches," including testing out the system's abilities using pictures of friends, family, and celebrities. George Clooney and Joe Montana might be the most searched faces in the world thanks to their inclusion in Clearview AI's nationwide marketing campaign.

ISSUES WITH PAST LBPD FACIAL RECOGNITION PUBLIC RECORDS ACT REQUESTS

The Times reporting on the LAPD's facial recognition program noted inconsistent responses to past PRA requests and media inquiries, and that the department's "widespread use stood in contrast to repeated denials by the LAPD that it used facial recognition technology or had records pertaining to it."

After the Times' story was published, the LAPD sent corrections to past requesters and has a review of other past requests underway at the direction of LAPD Chief Moore "just to be assured that the department addressed each of those requests appropriately." This PRA request review is an L.A. City Attorney undertaking separate from the ongoing L.A. Police Commission review of facial recognition, which was also triggered by the L.A. Times reporting, and the previous assessment conducted by the LAPD's Office of Constitutional Policing and Policy mentioned in the L.A. Times article.

What CheckLBPD found in Long Beach is very similar to what the L.A. Times discovered with the LAPD.

The LBPD admitted to using facial recognition technology once, in a 2017 PRA response to a journalist with MuckRock, while also claiming to have no responsive documents regarding contracts, policy, or training. However, in 2019 the LBPD denied having any responsive documents in response to two separate requests. The first was a professionally drafted PRA request on facial recognition from the Aaron Swartz Day Police Surveillance Project, which was responded to in February 2019. The second was an equally well-drafted PRA request from Freddy Martinez of the Lucy Parsons Lab in July 2019.

Both of these groups study police technology issues and routinely submit PRA requests to police departments across the nation to track the spread of police surveillance technology. Their requests were for documents related to any machines with facial recognition software, outside facial recognition contractors, contracts, bids, communications, financial records, marketing material, training records, use records, accuracy reports, or any other documents related to facial recognition products in use or under consideration.

Both received the response, "the Long Beach Police Department is not in possession of records responsive to this request." It took over two months for the Aaron Swartz Project to receive a response; Martinez's request later in the year received a response after only three days.

Martinez sent in a response "appealing [his] records request" because he did not "believe that a reasonable search was conducted with regard to [his] request." He did not receive a response. By that point, the LBPD had been using facial recognition for nearly a decade.

The California Public Records Act envisions a 10-day response time, with a 14-day extension period if there are "unusual circumstances" and longer extensions requiring the consent of the requester. In practice, 150-day waits are not unusual—as can be confirmed by articles written by past requesters or posted on muckrock.com (where the record is 11 months before getting an LBPD response).
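To make the statutory clock concrete, here is a minimal sketch of the deadlines for a hypothetical request. The filing date is illustrative, and this treats both windows as calendar days, per the description above:

```python
from datetime import date, timedelta

# Sketch of the PRA clock described above, for a hypothetical
# request filed Aug. 16 (illustrative date, not a real request).
filed = date(2020, 8, 16)
initial_due = filed + timedelta(days=10)         # initial response due
extended_due = initial_due + timedelta(days=14)  # "unusual circumstances"
print(f"Filed {filed}: due {initial_due}, or {extended_due} with extension")
```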

Steve Downing of the Beachcomber also filed a comprehensive request for facial recognition documents this summer but, after a 70-day delay, received only the same six Clearview AI documents obtained by CheckLBPD.org. Those documents show only a Clearview AI free trial starting in December 2019, and nothing to indicate the LBPD had two other facial recognition programs still ongoing.

CURRENT MORATORIUM ON FACIAL RECOGNITION ON BODY CAMERA FOOTAGE

A 2018 ACLU study was one of the driving forces behind the passage of A.B. 1215—California's three-year moratorium, currently in effect, on facial recognition use with police body cameras and the data they collect. The study used California lawmakers' images and found 1 in 5 were wrongly linked to mugshots by Amazon's Rekognition™ facial recognition program. A similar ACLU study done on federal legislators did not result in a federal law being enacted—although the federal Facial Recognition and Biometric Technology Moratorium Act of 2020 is currently in committee.

Image produced by the ACLU during its lobbying effort on behalf of A.B. 1215, showing California lawmakers who were wrongly matched to mugshots using facial recognition

A.B. 1215 states that "facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights. This technology also allows people to be tracked without consent. It would also generate massive databases about law-abiding Californians, and may chill the exercise of free speech in public places."

Addressing bias, the law states facial recognition "has been repeatedly demonstrated to misidentify women, young people, and people of color and to create an elevated risk of harmful 'false positive' identifications." These concerns are why facial recognition use on police body camera data is banned until 2023: the assembly bill states that using the technology on police body cameras would "corrupt the core purpose of officer-worn body-worn cameras by transforming those devices from transparency and accountability tools into roving surveillance systems."

The desire to guard against the corruption of the accountability purpose of police body cameras resulted in a broad ban that prevents law enforcement "from installing, activating, or using any biometric surveillance system in connection with an officer camera or data collected by an officer camera."

A.B. 1215 is the only law governing facial recognition in Long Beach, although over a dozen other jurisdictions have adopted city- or county-level laws banning, limiting, or regulating the use of the technology—including San Francisco, Boston, Oakland, Berkeley, Santa Cruz, Jackson, MS, and both Portlands.

Other local police departments have adopted extensive department-level policies on facial recognition. For example, the Detroit Police Department adopted a new facial recognition policy last month. Detroit had a previous facial recognition policy in place (one designed to make sure facial recognition was not the sole justification for an arrest). Still, that policy failed to prevent two avoidable wrongful arrests.

In our upcoming Vigilant Solutions FaceSearch deep dive, CheckLBPD will examine how A.B. 1215 relates to the use of facial recognition technology in other police mobile devices, such as the mobile facial recognition devices from Vigilant Solutions called TACIDs. The use of TACIDs was discontinued in San Diego County to comply with A.B. 1215—which covers both body-worn cameras and cameras carried by officers.

However, Vigilant Solutions makes a mobile companion app that can turn any cell phone into a mobile facial recognition device. This technology was discussed in the Vigilant Solutions training done for the LBPD and would have been accessible to the LBPD, though the training failed to mention San Diego's experience or to suggest that officers check with a lawyer to make sure any use complies with California law.

Image from the Vigilant Solutions website; notice the map shows Long Beach, as well as how often one car's location can be recorded by the LBPD's Automated License Plate Readers

The Vigilant Solutions training presentation conducted for the LBPD was designed for a national audience, with no specific discussion of California law. It is entirely possible that this training from a private company, with its many misstatements of fact and failure to cover current California law, is the only training on facial recognition that some LBPD officers have received.

This is concerning, as the training does not address the Brady requirement or the California-specific rules related to it. The training also dismisses concerns related to racial bias and U.S. Constitutional violations by wrongly portraying those concerns and then addressing only the straw-man versions of legitimate concerns.

When addressing potential constitutional violations from facial recognition, Morgan discussed in detail only the Fourth Amendment right to privacy. He dismisses the claim that facial recognition can violate that right by stating that facial recognition is only used on images of people taken in public spaces. While sometimes true, that is not always the case. The argument also ignores the potential Fourth Amendment issue created by Clearview AI's data-scraping of 3 billion images off the web—often violating privacy laws and done without legal permission from individuals or from the sites whose terms of service the company violated.

Morgan glossed over claims related to violations of the First and Fifth Amendments, with his entire response to the EFF's argument that misuse of facial recognition technology can impact the First Amendment or disproportionately impact people of color being that it is "just not so."

Among the claims he dismisses is that setting up a facial recognition camera to create a database of all who attend a mosque would violate the free exercise of religion. That is not an outlandish suggestion given the NYPD's decades-long mosque surveillance program. Other potential 1st Amendment violations relate to using the technology to surveil protected political protest.

The use of facial recognition at protests was never discussed in the training presentation, despite its obvious relevance to Long Beach and the potential violations that have occurred in other cities. As will be discussed more in part 4, facial recognition can be misused by police in many ways. The potential violations pointed out by the EFF include indiscriminately scanning crowds at protests looking for protestors with warrants to arrest—a practice the Baltimore PD engaged in during protests following the death of Freddie Gray from a spinal injury sustained while handcuffed and in police custody.

Police or federal agents have used the technology to locate a Black Lives Matter protester in New York who was accused of overzealous use of a bullhorn, a Washington D.C. protester accused of fighting with police during the clearing of protestors from Lafayette Square for President Trump's Bible photo op, and "violent" protesters in South Carolina. Protesters in Portland turned the tables on the police, using the technology to identify officers who remove or cover up their name tags.

The Lucy Parsons Lab, a Chicago-based anti-surveillance organization, created OpenOversight, a "public searchable database of law enforcement officers," in response to the trend of police officers hiding their identities while confronting protestors. Activists in California have started databases for Oakland, Berkeley, and the University of California system—although Long Beach is as yet unrepresented.

Other constitutionally problematic uses discussed by the EFF include using the technology to identify non-violent protesters or protest leaders, and staging mass arrests of non-violent protesters for curfew violations or blocking streets in order to enter them into facial recognition and other biometric databases. These are potential violations that even pro-facial recognition police forces, such as the Detroit PD, have acknowledged and taken steps to prevent. In Long Beach, our police just received training that dismissed such concerns, and the department's new facial recognition watch report does not address 1st Amendment concerns related to facial recognition.

There is a right to peacefully protest in America, and police subjecting protestors to facial recognition scans as a special price for exercising that right is unconstitutional. If that is what caused any of the 2,000-plus increase in LBPD facial recognition searches in 2020, then the LBPD may have violated the 1st Amendment rights of any non-violent, non-looting protesters it ran through facial recognition databases without probable cause.

There are other, more nuts-and-bolts legal concerns regarding facial recognition as well. How to handle facial identification matches (or mismatches) in discovery at criminal trials is an issue that has not been decided by the courts, other than one lower-court decision in Florida that is still the subject of litigation.

This issue was not mentioned in the Vigilant Solutions training presentation made to the LBPD, which may be the most extensive training on facial recognition conducted by the department. Although that might be for the best, given Vigilant Solutions' history of recommending officers leave evidence related to Automated License Plate Readers out of police reports, even when that technology was what led to an arrest.

STUDIES SHOWING THE PROBLEMS, INACCURACIES, AND BIASES IN FACIAL RECOGNITION

Most facial recognition algorithms work by creating a map of a face using measurements like the distance between features or the curve of a chin. These measurements are then compared to a database of images that have been processed by the same algorithm. When the map of the input image matches the map of an image in the database, a match is returned.
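For readers who want to see the mechanics, below is a toy sketch of that compare-against-a-gallery step. The gallery names, vector size, and random "embeddings" are stand-ins invented for illustration; real systems derive the measurements from face images with trained models, but the search logic is the same:

```python
import numpy as np

# Toy sketch of the matching step described above. Each face is reduced
# to a numeric feature vector (an "embedding"); a probe image is then
# scored against every vector in the gallery. The names and random
# vectors here are illustrative stand-ins, not real data.
rng = np.random.default_rng(0)
gallery = {f"booking_{i:04d}": rng.normal(size=128) for i in range(1000)}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(probe, threshold=0.9):
    # Score every gallery entry against the probe, best match first.
    scores = sorted(
        ((name, cosine_similarity(probe, vec)) for name, vec in gallery.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    hits = [pair for pair in scores if pair[1] >= threshold]
    # Pitfall noted later in this report: if the true face is not in the
    # gallery, the closest (innocent) look-alikes are returned anyway.
    return hits or scores[:5]

print(search(rng.normal(size=128))[:3])  # probe NOT in gallery; still get "matches"
```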

As racially neutral as the above sounds, a significant number of studies conducted by governments, non-profit organizations, and universities have found accuracy issues along race and gender lines. These studies also show that improper use by police can compound past patterns of over-policing of communities of color. The accuracy issues stem from training the algorithms on images of mostly white males. In fact, the only study I have ever seen claiming facial recognition is 100% accurate is the unpublished, non-peer-reviewed internal study from Clearview AI cited in the marketing material sent to the LBPD.

Tajsar of the ACLU says, "facial recognition is inaccurate and should not be used given its well documented and researched accuracy concerns." These studies have built on each other and been the driving force behind much of the legislation on facial recognition that has been passed across the nation.

The most definitive study on the technology was conducted in December 2019 by the National Institute of Standards and Technology (NIST), a U.S. government physical science laboratory. NIST ran "18.27 million images of 8.49 million people through 189 mostly commercial algorithms from 99 developers." The study looked at "false positives" by demographic groups and found some systems were 10 to "beyond 100 times" more likely to misidentify a Black or East Asian face. Women were misidentified more often than men, with Black women the most misidentified group.

Clearview AI was not one of the companies that allowed NIST to test its algorithm, so the company's 100% accuracy claim should be taken with a healthy dose of skepticism.

Chris Morgan of Vigilant Solutions, in his training session for the LBPD, addressed the issue of racial bias and inaccuracy in facial recognition by correctly pointing out that the algorithms do not see race and, in fact, use a color-neutral map of a face. This ignores the actual finding that the bias comes from training an algorithm on mostly white male faces and then using it on faces of other races and genders. Morgan also ignores another form of racial bias raised not by flaws in the algorithm, but by a flawed justice system that has historically and disproportionately targeted racial minorities.

The NIST study was inspired by a 2016 report from the Georgetown Law School Center on Privacy and Technology called The Perpetual Line-Up: Unregulated Police Face Recognition in America, which found that half of all American adults were already in law enforcement facial recognition databases. Georgetown Law School found that the way police use facial recognition can replicate past biases. The report found that "due to disproportionately high arrest rates, systems that rely on mug shot databases likely include a disproportionate number of African-Americans." When future arrests are then made by searching those databases, the over-policing of communities of color continues—just with a digital facade.

Another complaint frequently made by experts is that police have increasingly used the technology to solve petty crimes—without any resulting improvement in public safety. The current trend is that people with criminal records who commit property offenses captured by surveillance cameras are low-hanging fruit that gets picked over and over, while crimes that are harder to solve and prosecute consequently get less attention.

The Georgetown Law School report found widespread adoption and frequent use of facial recognition technology by police. The report also found problems such as lax oversight, failure to safeguard constitutional rights, failure to audit for misuse, missing Brady disclosures to defense counsel, and police using the technology to investigate petty crimes. The report summarized the situation as "law enforcement face recognition is unregulated and in many instances out of control." In Long Beach, that is undoubtedly true. The LBPD cannot even say how many searches it ran on Clearview AI, let alone the reasons for the searches or what was done with the results.

Long Beach's lack of a policy to guide its facial recognition use is not unusual. Georgetown discovered that only 4 of the 52 departments found to be using facial recognition technology had policies governing its use, and only one of those included a prohibition on using facial recognition to track individuals engaging in political, religious, or other protected free speech. Nine of the 52 departments claimed to log and audit their officers' face recognition searches for improper use—though only one was willing to prove this with documentary evidence. The LAPD was surveyed for this study but supplied an incorrect response regarding the existence of its facial recognition program—one of many inaccurate LAPD public statements on facial recognition covered in the L.A. Times reporting.

Georgetown also found flaws with how police used the technology. Human review of matches is an essential part of using any facial recognition program. Previous studies have found that without training, human reviewer decisions on matches made by a facial recognition program are wrong half of the time. Despite this, Georgetown found that only 8 out of 52 departments had specialized personnel review potential matches. From what was said in the LBPD training session, it does not appear that specialized review of matches is a practice used by the LBPD—although Morgan and online training material from Vigilant Solutions recommend it.

Georgetown Law's "Perpetual Line-Up" study built on a 2012 FBI-authored study that found "female, Black, and younger cohorts are more difficult to recognize for all matchers used in this study." The Electronic Frontier Foundation (EFF) details a related FBI privacy impact assessment that found facial recognition "may not be sufficiently reliable to accurately locate other photos of the same identity, resulting in an increased percentage of misidentifications." Another source of misidentification described is when the database does not contain the face being searched for and the system produces its closest match anyway—which can put an innocent individual in the position of having to prove their innocence.

Research by M.I.T. and Stanford on facial recognition programs from major companies found that while white men never faced an error rate higher than 0.8%, error rates for dark-skinned women hit 20 and even 38 percent with some programs. Separate ACLU studies of state and federal legislators' photographs have found accuracy issues along racial and gender lines. It was these studies Clearview AI compared to its internal "rated 100% accurate" study, which is also described as 99.5% accurate in smaller-print portions of the presentation.

Clearview AI did not disclose the study's methodology to the police and has not published the study. It also did not disclose that a judge who signed off as an independent reviewer of the study had prior links to a founder of the company. However, as will be discussed in our Clearview AI-focused article, accuracy is not the foremost concern experts have with Clearview AI.

"Garbage In, Garbage Out" is the title of a second Georgetown Law School study on facial recognition done in 2019. It describes some of the questionable ways police have used facial recognition. Police have submitted celebrity photos to facial recognition programs because they did not have a suspect's picture, but did have a description that the suspect looked like a particular celebrity or athlete.

The phrase "garbage in, garbage out" is also frequently used by Chris Morgan in the training he conducted for the LBPD—although at least one of the image inputs he suggested falls into the study’s garbage category. At one point in the training, Morgan suggests you can use a drawing of someone, as long as "it is really good."

Celebrity doppelgängers are not the only questionable images submitted to find suspects; some departments have used sketch artists' drawings to run facial recognition searches. Sketches can be problematic, as they are based on both subjective witness memory and artistic interpretation. As you can see from the image below, the process can lead to innocent people being identified, investigated, and potentially arrested and put in the position of having to disprove the algorithm's match.

Image from Georgetown Law School Center on Privacy and Technology Study, "Garbage in, Garbage Out"

It is not yet known if Long Beach runs facial identification on artist sketches or uses celebrity doppelgängers, but there is no technological or policy limit that would stop it. As will be discussed in the Vigilant Solutions-specific article, the company once advertised the ability to run sketches through its facial recognition database—only removing the content from its website after it was mentioned by Georgetown Law in "Garbage In, Garbage Out."

The Georgetown Law "Garbage In, Garbage Out" report states that Vigilant Solutions' FaceSearch webpage marketed a tool for "creating a proxy image from a sketch artist or artist rendering" to be submitted to its face recognition system. That was based on how the page appeared in May 2019; the current page does not contain a description of the proxy image tool. The Wayback Machine (a non-profit digital library of Internet sites and other cultural artifacts in digital form) confirms that Vigilant Solutions removed that language from its site sometime between September 2019 and June 2020—after Georgetown Law called attention to it.
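Anyone can reproduce this kind of check: the Internet Archive exposes a simple availability API that returns the snapshot closest to a given date. Here is a minimal sketch; the target URL below is illustrative, not necessarily the exact FaceSearch page:

```python
import json
from urllib.request import urlopen

def closest_snapshot(url, timestamp):
    """Return the Wayback Machine snapshot URL closest to a YYYYMMDD date."""
    api = f"https://archive.org/wayback/available?url={url}&timestamp={timestamp}"
    with urlopen(api) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

# Compare snapshots from before and after the removal window described above.
target = "vigilantsolutions.com/facesearch"  # illustrative URL
for stamp in ("20190901", "20200601"):
    print(stamp, closest_snapshot(target, stamp))
```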

Material removed from Vigilant Solutions' website found using the Wayback Machine.

It is currently unknown in what types of cases the LBPD used Vigilant Solutions' FaceSearch and whether it had any "success" as it did with Clearview AI—before use of the programs was curtailed by the watch report. That information will hopefully be uncovered through pending PRA requests, as FaceSearch was set up to collect it both before a search is conducted and after the results are reviewed.

FALSE ARRESTS BASED ON FACIAL RECOGNITION MISMATCHES

2020 will leave its mark in the history books for a lot of reasons. Something that might get lost in the endless stream of environmental, political, and public health stories is that 2020 marks the first civil lawsuit over a wrongful arrest based on a mistaken facial recognition algorithm match.

The wrongful arrest that triggered the lawsuit was not the only wrongful arrest made based on a facial recognition mismatch—nor the only one made by the Detroit Police Department, which also wrongly arrested Robert Julian-Borchak Williams in January 2020.

It is no coincidence that the two wrongful arrests made after bad facial recognition matches were of Black men who had evidence of their innocence. One had an alibi, and the other had tattoos to prove he was not the suspect in the surveillance camera footage.

Mohammad Tajsar of the ACLU of Southern California, in an interview for this investigation, said the false arrests in Detroit are particularly relevant to Californians because they happened under the city's old facial recognition policy—a policy nearly identical to policies currently being proposed in many California cities, and promoted with the claim that they are sufficient to protect people from wrongful arrest based on facial recognition.

Williams was arrested even though he provided an alibi and pointed out that the surveillance video suspect looked nothing like him other than being Black—prompting one of the detectives to remark to his partner, "I guess the computer got it wrong." Unable to overcome that presumption, he spent 30 hours in custody before he made bail. Even when the prosecutor finally dismissed his case, they did it "without prejudice," meaning they could refile at any time, and they said they might if another witness came forward. The ACLU is working on getting him a full dismissal, an apology, and his information removed from criminal databases.

The facial recognition search that almost ruined Robert Julian-Borchak Williams' life.

The arrest of Michael Oliver was sloppy by any measure and is currently the subject of a $12 million lawsuit. It is particularly egregious because the department apparently learned nothing from its other mistaken arrest based on facial recognition technology. The suspect police were looking for was heavier and had no visible tattoos, while the man they arrested days later had many visible, faded tattoos. Even this was not enough to stop his arrest or prosecution. At his first hearing, the prosecutor successfully argued to a judge that he could have gotten the faded tattoos in the three months between the recording of the incident and the arrest. The same prosecutor would drop all charges the next month, before a second hearing.

A second Detroit PD mistake. Cell phone image of the suspect taken during the incident on the left; facial recognition "match" and wrongfully arrested man, Michael Oliver, on the right.

In these two cases, police were too dependent on a flawed technology. Both men allege police built their investigations around proving the computer matches correct, instead of looking for evidence that would have proved the matches wrong. Luckily for these two men, they had evidence of their innocence. Even so, their luck went only so far: both spent time in jail, and Oliver lost his job and had his car impounded. Williams was arrested at his home, at gunpoint, in front of his distraught wife and young children. He was so humiliated by what happened that he never told his mother of the incident or truthfully explained his two-day absence to his colleagues.

What made these two cases in Detroit particularly egregious is that the crimes were relatively minor property offenses: shoplifting some watches, and grabbing the phone of a man trying to record a fight and breaking its screen by throwing it to the ground. Detroit has made some attempts at fixing its broken facial recognition program, including limiting the use of facial recognition to violent crimes. The improvements were needed; if these men had not had proof of their innocence, things could have gone very differently for them.

Things may have gone that way for another Black man, Willie Lynch, who was arrested and convicted for a 2015 sale of $50 worth of crack cocaine to an undercover officer who snapped a grainy cell phone photo during the sale. Unable to identify him through other means, the police turned to the state's facial recognition system, whose database includes both mug shots and driver's license photos.

Officers ran the photo through a facial recognition database and got five hits, including a "one-star match" on the man they would later arrest. At his criminal trial, the analyst who ran the facial recognition search admitted she did not know how many total stars were possible, but pointed out that the other four matches returned were zero-star matches.

Police hid their use of facial recognition in that case, with the arrest report making no mention of it. When the fact came out during depositions, Lynch's attorney sought to examine the facial recognition results but was denied, and he was not even given the list of other possible matches produced by the system.

The trial judge ruled he had no right to this evidence. Lynch was convicted and sentenced to eight years in prison. A Florida appeals court denied his bid for a new trial. The case was appealed to the Florida Supreme Court, with the EFF, ACLU, Georgetown Law, and the Innocence Project filing a brief urging the court to hear the case—which has not happened.

STATE, LOCAL, FEDERAL, AND CORPORATE BANS

California was the third state, after Oregon and New Hampshire, to pass a limited ban (A.B. 1215) on facial recognition, with the ban only covering data from police body cameras and other devices carried by police.

San Francisco was the first U.S. city to ban facial recognition, in May 2019. Its Stop Secret Surveillance Ordinance goes far beyond facial recognition and creates rules regarding all government surveillance, stating "it is essential to have an informed public debate as early as possible about decisions related to surveillance technology." Consequently, the ordinance requires a public debate, the involvement of elected officials, and the adoption of policies that establish oversight rules and use parameters and include protections for civil rights and liberties for any surveillance technology adopted in the future.

San Francisco was soon followed by the Boston suburb of Somerville, MA, which banned facial recognition software on cameras that record public spaces. Six other Massachusetts communities would eventually adopt bans as well.

Oakland acted in July 2019, with a ban on "acquiring, obtaining, retaining, requesting, or accessing" facial recognition technology that was backed by the ACLU. The Oakland city council president stated, "face recognition technology runs the risk of making Oakland residents less safe as the misidentification of individuals could lead to the misuse of force, false incarceration, and minority-based persecution."

Their next-door neighbor Berkeley passed a similar ordinance soon after, making it the fourth ban in the nation.

Santa Cruz banned any facial recognition use by police in June (passed as part of the nation's first ban on predictive policing), with any future use requiring new legislation and evidence that proves the technology will not perpetuate bias.

Boston passed a broad ban on police use of facial recognition technology in June as well—making it the second-largest city with a ban in effect. Speaking before the city council, Boston Police Commissioner William Gross said he is not interested in facial recognition while the technology is still racially biased.

Portland, Maine passed a ban on any city employees using facial recognition in August.

Portland, Oregon, not to be outdone, passed the toughest facial recognition ban in the nation this September—banning all governmental and private use of facial recognition. The Portland, OR ban even covers airports, so Delta will not be able to use their new facial recognition boarding scan system in the city.

Jackson, Mississippi passed a ban on police use of facial recognition in August. That ban cites the fact that "[l]aw enforcement officers frequently search facial recognition databases without warrants and even reasonable suspicion, thus violating the fourth amendment and basic human rights."

Massachusetts is on its way to becoming the first state to have a comprehensive ban on facial recognition—unless New Hampshire or Oregon beats them to it.

Mohammad Tajsar, of the ACLU, describes this as "a wave sweeping across the country" and says, "Long Beach and other cities in Southern California should join the party if they are serious about protecting civil rights."

Federally, a group of Democratic U.S. Senators and Representatives has proposed the Facial Recognition and Biometric Technology Moratorium Act of 2020. The bill bans federal use of facial recognition without "explicit statutory authorization" and would withhold "certain federal public safety grants from state and local governments that engage in biometric surveillance." It also creates a right for individuals to sue if they are harmed by a violation of the act and would prohibit the use in court of any evidence obtained in violation of the bill.

Senator Ed Markey (D-Mass.) spearheaded the legislation. In a press release, he described his motivation: "I've spent years pushing back against the proliferation of facial recognition surveillance systems because the implications for our civil liberties are chilling and the disproportionate burden on communities of color is unacceptable. At this moment, the only responsible thing to do is to prohibit government and law enforcement from using these surveillance mechanisms."

A Republican-sponsored bill purports to be a federal privacy act. In actuality, it allows private companies to use facial recognition so long as they obtain affirmative consent. This is very similar to California's A.B. 2261, which would have allowed a company's terms of service to determine the limits of private facial recognition use—instead of developing an actual public policy. A.B. 2261 was drafted with so much involvement from Microsoft that the ACLU refers to it as Microsoft's bill, though even that deep-pocketed backing could not get it passed; the bill stalled in committee this summer.

It's not just cities that have exited the facial recognition space. In June 2020, Amazon issued a one-year moratorium on selling facial recognition technology to police, who had been using its Rekognition program.

Amazon's action was triggered by an earlier announcement from IBM that they were no longer offering "general purpose IBM facial recognition or analysis software." In a letter to Congress, their CEO stated they "firmly oppose and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms."

Microsoft became the third tech giant to announce a ban, stating they would not sell the technology to police until there was federal legislation on the issue. Notably, Microsoft's announcement came days after the facial recognition "privacy" bill they heavily supported in California (A.B. 2261) failed to move out of committee and became inactive.

The move by Microsoft was seen as a betrayal by President Trump, who retweeted a comment from Richard Grenell, his former Acting Director of National Intelligence, saying Microsoft should be banned from all federal contracts because "there should be consequences for not selling technology to police departments."

However, there is no shortage of smaller companies offering facial recognition who have announced they have no plans to suspend services to law enforcement. Many of these small companies compete for business by offering a free trial. Long Beach has taken up at least two companies on such offers, only putting limits on the practice in September.

According to Dave Maass of the EFF, who was interviewed for this investigation before the LBPD's policy change curtailing free trials, "free trials are often a backdoor for police to adopt new technologies without oversight. Surveillance shouldn't be treated like a subscription to a wine of the month club." While the LBPD has not banned free trials, the September LBPD watch report at least now requires a division commander's approval to start one.

The LBPD's facial recognition watch report is similar to an order issued after police in San Diego took an unauthorized free trial of Clearview AI: the San Diego County District Attorney informed police that "they are not to participate in free trials of any kind without authorization." It is not known whether the LBPD obtained authorization from prosecutors or the city attorney for its two free trials of facial recognition services. However, if the TigerText scandal is any guide, the department likely decided it is easier to ask forgiveness than permission.

WHAT UNREGULATED FACIAL RECOGNITION CAN BECOME

To see what unrestricted facial recognition use looks like, look to China, where facial recognition is used on all crimes, no matter how small. By combining video analytics and facial recognition, police automatically generate jaywalking tickets—much as red-light cameras generate tickets in the U.S. Digital displays on the street then show the images of these jaywalking scofflaws, even children, after they commit their crime.

Chinese anti-jaywalking facial recognition deployment.

Chinese use of facial recognition doesn't stop at petty crime. To prevent toilet paper theft, some public toilet paper dispensers scan your face before dispensing your serving and then lock you out for 9 minutes. When combined with social credit scores, facial recognition technology can be used to lock you out of much of society.

Chinese company Sesame Credit, part of Jack Ma's vast portfolio that includes Alibaba (the Chinese equivalent of Amazon), has been put in charge of the social credit score scheme. The government gave the program the official goal of "allow[ing] the trustworthy to roam everywhere under heaven, while making it hard for the discredited to take a single step."

According to reporting from Time magazine, if you call someone on a blacklist, you will hear a siren followed by the message "warning, this person is on the blacklist. Be careful and urge them to repay their debts." If a blacklisted person is spotted by a camera in the area, their face is put up on a nearby digital billboard with a warning. The millions with insufficient social credit scores cannot rent cars or buy plane or train tickets, while those with high enough scores get to skip lines—even at the hospital.

One company makes a camera designed specifically to identify Uyghur Chinese citizens. A bipartisan group of 17 senators sent a letter to Secretary of State Pompeo stating, "China uses facial recognition to profile Uyghur individuals, classify them on the basis of their ethnicity, and single them out for tracking, mistreatment, and detention."

The letter continued with a statement that could equally apply to some U.S. practices: "these technologies are deployed in service of a dystopian vision for technology governance, that harnesses the economic benefits of the internet in the absence of political freedom and sees technology companies as instruments of state power."

China currently has one CCTV camera for every 5.9 citizens, roughly 30 times the rate in Washington, D.C. By the time CheckLBPD's investigation into the Surveillance Architecture of Long Beach is complete, we should know Long Beach's number—after adding up figures for all city surveillance cameras, registered Ring doorbell cameras, traffic cameras, and business and apartment complex cameras registered with the Long Beach Common Operating Picture (LBCOP). Tips are welcome; cameras can be anywhere—even hidden in streetlights and traffic cones.

WHAT LONG BEACH CAN DO ABOUT FACIAL RECOGNITION

The Long Beach City Council could pass a ban on police use of facial recognition technology, or at least a moratorium until the technology's bugs are worked out and issues of racial and gender bias are addressed. CheckLBPD sees the wisdom in such a policy but also recognizes that it would face an uphill battle in Long Beach, given our politically powerful police union.

Dave Maass of the EFF prefers that cities have a public debate—with the decision being made by elected officials, not the police. Facial recognition policy falls within the City Council's purview; as I discussed at length above, many cities have adopted bans on some or all uses of facial recognition.

Maass says, "too often, police are making decisions about technology in private meetings with vendors, who tell them all the potential miracles the technology can generate without telling them about the potential risks. It's important that the community and elected officials have a say and that these decisions are made in the sunlight. Justice requires this kind of transparency." Maass said this in an interview before the LBPD turned over the recording of the Vigilant Solutions training session, but it describes that session accurately.

Mr. Tajsar of the ACLU agrees that the combination of profit-motive and lack of transparency is "a recipe for disaster when it comes to people's rights."

As for facial recognition in particular, Maass regards it as "a dangerous technology that can disproportionately impact communities of color and pull innocent people into the criminal justice system. It's irresponsible for police to use any surveillance technology without training or policy, but it is extremely reckless to do [so] with biometric technology."

If the city council continues not to address facial recognition, there are steps the LBPD could take on its own. Right now, the only facial recognition guidance the LBPD has is the watch report and the LACRIS policy they agreed to as a condition of access. LACRIS encourages police to go beyond that minimal policy and adopt an official local policy—even supplying a template.

Image from LACRIS website showing training session, including handouts not retained by the LBPD

LACRIS's local Facial Recognition Policy Template states that it "may be used by member agencies for creating policy and procedures that adhere to Federal and State laws pertaining to facial recognition use" as well as "current LACRIS policies and procedures." According to statements from the LBPD, a departmental level policy has not been adopted.

The unadopted local use policy template was drafted so local departments could comply with the body camera moratorium (A.B. 1215) while also respecting the First, Fourth, and Fourteenth Amendments of the U.S. Constitution. The template is designed to ensure the rights created by those amendments are not violated by use of LACRIS without proper justification, or by its use against protected political activities or non-criminal organizations. It also covers record-keeping, oversight, and departmental audits of system use. This is the absolute bare minimum that the LBPD should do, but the people of Long Beach should push for more.

If the LBPD wanted a policy with additional safeguards to reduce and prevent mistakes, it could look to the new facial recognition policy adopted on Sept. 12 by the Detroit Police Department. Detroit had a previous facial recognition policy in place, one designed to make sure facial recognition was not the sole justification for an arrest, but that policy failed to prevent two avoidable, wrongful arrests.

The policy the Detroit PD put in place after its original policy proved ineffective requires multi-level peer review, sets oversight requirements, limits use to violent crimes, bans use at protests and for immigration enforcement, and sets other rules for how the systems must be used—with violations of the policy considered "major misconduct" resulting in dismissal from the department.

If a compromise solution is the best Long Beach can hope for, the Detroit policy strikes a good balance between protecting people's rights and making sure police can still use the technology to solve serious crime—while also putting in place procedures to reduce the likelihood of a wrongful arrest.

Given the technology's shortcomings and the risks inherent in police making contact with suspects, multi-level review and verification by specially trained facial examiners should be required before any contact is made with a suspect. The value of these two safeguards was mentioned in Vigilant Solutions' training session for the LBPD. Vigilant Solutions' online training material recommends a second round of review using 3-5 reviewers, although it is unknown whether the LBPD took this step.

Training material from Vigilant Solutions' website

A group no one will ever confuse with the ACLU has also issued guidance on facial recognition. The International Association of Chiefs of Police (IACP) issued a report by a team of 18 criminal justice experts and high-ranking police officers. The report makes four recommendations, all of which would require changes to current LBPD transparency, practices, and policy.

The IACP recommendations are: "1. Fully Inform the Public, 2. Establish Use Parameters, 3. Publicize its Effectiveness, and 4. Create Best Practices and Policy."

The LBPD has hired the IACP for advice in the past—paying $96,000 for an in-depth study of the department, a study the LBPD later canceled and attempted to suppress. Stephen Downing of the Beachcomber doggedly worked for the release of the report, and the pressure paid off: the Beachcomber published the 124-page LBPD Operations and Management Study on July 24.

That study has sections devoted to technology and policy, but neither section makes any reference to facial recognition or to the fact that the LBPD is using it without informing the public, establishing how and when to use it, tracking its effectiveness, or having any departmental policy to guide its use. The draft report also does not mention Automated License Plate Readers, cellular interception technology, social media monitoring, or many other potentially problematic technologies the LBPD uses (often without policy guidance).

Mr. Tajsar of the ACLU says society needs to confront how much privacy we are willing to give up for the sake of police investigations. He used the example of police collecting and storing the DNA of every person in Long Beach and using it to solve any crime they could, no matter how big or small. He says, "would that be useful to the police? Absolutely, but would we want that? I think the answer is no."

He says, "facial recognition, like other forms of biometric data collection, is uniquely invasive and problematic." It has "the capacity to blow up any conception of individual autonomy, privacy, and associational integrity because it allows for the rapid easy almost costless surveillance, monitoring, and tracking of individuals without their consent regardless of how limited its deployment. It is a kind of technology that should be forbidden in police departments, whether it may or may not be useful in any particular use case."

Tajsar also pointed out that there is "not any real empirical data that suggests that facial recognition creates better public safety outcomes. In fact, we know the opposite. Increased surveillance and increased police encounters actually have an inverse effect on public safety." He says omnipresent surveillance "increases people's anxieties, fears, and creates a society that is more likely to actually engage in deviancy and criminal behavior because the forces that conduct the surveillance are forces that are themselves disruptive of the social fabric."

Whatever Long Beach does, experts on both sides of the debate agree the decision-making process should be transparent and include public involvement. These technologies can and will transform society as we know it, so the people should have a voice in any decision made. The decision should not be made behind closed doors by the police in consultation with the companies that make the technology.

WHAT YOU CAN DO ABOUT FACIAL RECOGNITION

Lobbying local governments has been an effective way of getting legislation passed on facial recognition in cities across the nation this year. Hopefully, our elected representatives in Long Beach will address the issue—now that the LBPD’s use of facial recognition technology is public knowledge.

However, if you are not the type of person to wait for the government to act, you can opt out of Clearview AI and have your photos removed from its 3-billion-photo database under the California Consumer Privacy Act (CCPA). This process, and what others have discovered through it, will be discussed in an upcoming segment focused solely on Clearview AI.

Vigilant Solutions does not share this interpretation of its responsibilities under the CCPA. I have tried to get copies of my automated license plate reader data and any facial recognition data that may be in the Vigilant Solutions databases, but the company will not acknowledge that geolocation data stored under my license plate number is my personal data. Their argument seems to rest on the claim that they have not linked the plate number to a name in their system, though that argument is both legally and factually questionable.

Vigilant Solutions' argument regarding license plate data seems to overlook that the CCPA covers "information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household," and that the law has a clear definition of "deidentified" data that they seem to fall short of meeting. Their FaceSearch database contains names and other personal information, as seen in Vigilant Solutions' training material. So far, it appears Vigilant Solutions has not expanded beyond mugshots and Crime Stoppers photos, but even people charged with or convicted of crimes have rights under the CCPA.

Perhaps these views on other people's data are why Vigilant Solutions is facing a class-action lawsuit in Illinois for violating the state's privacy laws by retaining the mugshots of wrongfully-arrested and exonerated people, even after their convictions were overturned and their records expunged.

Or if you are looking for a fashionable way to defeat facial recognition, there are options.

Anti-facial recognition hair and make-up styles created with CV Dazzle styling. Photograph: Adam Harvey, DIS Magazine

Reflectacles' Ghost glasses, anti-surveillance reflective eyewear.

Have you been affected by facial recognition in Long Beach? Contact us to help us further report on this issue.

Questions, comments, or tips can be directed to Greg@CheckLBPD.org (encrypted on our end with ProtonMail)

This article was written by Greg Buhl, a Long Beach resident and attorney.

This work is licensed under a Creative Commons Attribution 4.0 License. Feel free to distribute, remix, adapt, or build upon this work.
