Framework on Reconciliation December 9, 2020 Meeting
The Framework on Reconciliation has its next meeting on December 9th. Sign up for the meeting here. Questions can be submitted to the Framework planners here: adam.lara@longbeach.gov
The Framework has announced an almost overwhelming number of specific policy goals. One of those announced goals is to “explore the practice of facial recognition technology and other predictive policing models and their disproportionate impacts on Black people and people of color by reviewing evidence-based practices.” (Note: Facial recognition is not a type of predictive policing, despite what the language used by the Framework suggests. These are two separate technologies that contribute to our racially-imbalanced criminal justice system in their own unique ways.)
This goal was labeled as a medium-term goal, which means the Framework plans to address it in 1-2 years. Considering that the LBPD currently uses facial recognition at an extremely high rate (with an extraordinarily low amount of policy and supervision), Check LBPD is asking that this goal be reprioritized to an immediate goal. The problems created by this technology are immediate, and disproportionately felt by communities of color.
The issue of predictive policing is a separate issue the Framework and the LBPD need to address. The LBPD has denied using the technology to journalists and researchers, but the circumstantial evidence that the LBPD uses it is extremely convincing. Check LBPD has appealed the denial of our Public Records Act request filed in June. Our appeal detailed numerous reasons the denial was likely mistaken and suggested where the LBPD could look in its own records to find proof of the program. That request is still pending more than five months later.
However, the LBPD knows the truth about what technologies it has been using. If the LBPD is going to be a participant in the Framework on Reconciliation, it should at least be bringing the truth to the table. As reconciliation efforts around the world have shown, before you can have reconciliation you must have the truth.
The following has been submitted to the Framework on Reconciliation:
A Three-Part Question for the Framework on Reconciliation
Apologies for the length of this multi-part question. If you are familiar with the extent of the LBPD’s facial recognition, predictive policing, and social media monitoring programs, you can skip to the questions in bold. Since many are unfamiliar with these programs, I thought some context might be necessary to understand my questions.
Part 1:
The Framework on Reconciliation’s initial report lists the medium (1-2 year) goal of “explor[ing] the practice of facial recognition technology and other predictive policing models and their disproportionate impacts on Black people and people of color by reviewing evidence-based practices.”
Is the Framework’s 1-2 year delay in addressing facial recognition because the drafters thought these were future issues that could wait to be addressed? Or were they aware of the LBPD’s unregulated, sporadically-supervised, decade-long facial recognition program, but thought it should continue for another year or two before being addressed?
Through Public Records Act requests completed in September, I have learned that the LBPD has been using facial recognition since 2010, at 160% of the LAPD’s per-officer rate. The LBPD primarily used the LA Sheriff’s official LACRIS database, but also had dozens of officers using free trials of private databases (Clearview AI and Vigilant Solutions’ FaceSearch). These improperly documented free trials started in April 2018, and were only curtailed when CheckLBPD.org began filing public records requests and asking the department about the private databases.
(Read about the LBPD’s facial recognition program in Check LBPD’s report on the topic here: checklbpd.org/facial-recognition-part-one)
One of the free trials the LBPD used was from Clearview AI, which has scraped over 3 billion images from social media and the web. The LBPD had eighteen officers using Clearview AI without recording their use, other than notes about successes in emails with the company. This should be particularly concerning to the Framework, as Clearview AI has extensive ties to the alt-right and white supremacists. The company has made its software available to secret corporate clients, political campaigns, and repressive foreign governments. One of the company’s early visionaries described his job as “building algorithms to ID all the illegal immigrants for the deportation squads.”
Many cities and entire states have rightly banned police from using this company’s program. The company has stopped all operations in Illinois in an effort to dismiss a class-action lawsuit it is facing in the state. The LBPD’s use of Clearview AI was ongoing until at least February 2020. Although the free trial of Clearview AI has stopped, addressing how such a problematic program was adopted would help make sure such mistakes did not happen again in the future.
The LBPD’s decade of facial recognition use occurred without any departmental policy ever being enacted, or any review by the LBPD’s Office of Constitutional Policing. The LAPD’s counterpart office has addressed facial recognition in the past, and after the LA Times’ September 2020 report uncovered the LAPD’s use of the technology, the city’s Police Commission announced it would take up the matter. It has already passed a ban on private facial recognition databases much stricter than what the LBPD enacted in its September Watch Report; Long Beach should follow suit.
The LBPD seems to have put some effort into keeping the program secret. This information was obtained through a series of persistent California Public Records Act requests. For years, the LBPD had been giving untrue responses about facial recognition to nationally recognized non-profit organizations like the Aaron Swartz Day Police Surveillance Project and the Lucy Parsons Lab. The LBPD also issued a response to press questions this summer with the misleading statement, “the LBCOP does not utilize Facial Recognition.”
It is as indisputable that the LBPD uses facial recognition as it is that this technology has a disproportionate impact on Black people and people of color. Facial recognition has accuracy issues that disproportionately affect people of color. Even if it worked perfectly, its reliance on a data pool of past mugshots would perpetuate the over-policing of communities of color—just with a colorblind digital facade.
Given the extent of the LBPD’s facial recognition program, shouldn’t addressing the LBPD’s use of facial recognition be an immediate goal of the Framework on Reconciliation?
I am asking that this goal be reprioritized to an immediate goal. The problems created by this technology are immediate, and disproportionately felt by communities of color.
Part 2:
The LBPD has also given a series of untrue public records and press responses on other technology besides facial recognition. The LBPD has been using cell phone interception technology since 2014. After some long-delayed PRA requests, Check LBPD has proof, including purchase records, logs of use, and a secret LBPD policy. This policy has never been posted to the LBPD webpage created to comply with SB 978. When asked by the press about cell interception this summer the LBPD media representative replied, “I have no knowledge of that technology being used within the Department.”
There is also ample evidence that the LBPD used predictive policing, and yet the LBPD has denied it in responses to journalists and denied having responsive documents to Check LBPD’s Public Records Act requests.
Predictive policing is the most racially problematic of all police technologies. There is a mountain of evidence that the LBPD at least used a free trial of PredPol predictive policing technology. After an initial denial in June, the department has been delaying appeals and more specific PRA requests on the matter.
Predictive policing is so problematic that it has been banned in some jurisdictions, and even the LAPD has abandoned using PredPol. This is particularly noteworthy as it was the LAPD who originally commissioned the algorithm that led to PredPol’s creation.
This technology is linked to tools such as the CalGangs database and target lists from which even the innocent or reformed find it impossible to be removed. The LAPD currently has six officers facing charges for falsifying information in the CalGangs database, which led that department to abandon its use. The LBPD continues to use the CalGangs database, and has been less than forthcoming about whether it uses predictive policing to find its targets. I have PRA requests pending from June on the matter.
If a medium-term goal is going to be addressing predictive policing, shouldn’t a short-term goal be getting an honest accounting of the extent of the LBPD predictive policing program? How can the city move on to Reconciliation, when the Truth about what the department has been doing is still being withheld?
There is ample evidence of an LBPD predictive policing program—including the department’s $500,000 beta-testing/reference client contract with Palantir, the work done by the department’s three full-time intelligence analyst contractors from SRA Int’l, and the strong likelihood that the LBPD used PredPol starting in 2015 (after PredPol had presented at the CopWest Expo in 2014 at the Long Beach Convention Center, and created a log-in domain specifically for Long Beach).
Part 3:
Finally, if you are going to be addressing facial recognition and predictive policing, I would suggest you add the LBPD’s social media monitoring program to the Framework. The LBPD used the services of Media Sonar from January 2015 until the company was banned from Facebook, Twitter, and Instagram in 2017. The incident that cost the company its API access to social media data was a January 2015 marketing email it sent to police departments nationwide (uncovered by the ACLU in 2017). The marketing campaign offered to track the users of certain hashtags as threats to public safety.
Those hashtags: #BlackLivesMatter, #PoliceBrutality, #NoJusticeNoPeace, #WeOrganize, #WeWantJustice, #DontShoot, #ImUnarmed, #RIPMichaelBrown, and #ItsTimeForChange.
Hopefully, it is just a coincidence that the LBPD began using Media Sonar the same month that the ban-worthy marketing email was sent out. Either way, it seems an appropriate topic for the Framework on Reconciliation to address along with facial recognition and predictive policing. Many departments have found new ways to secretly access social media data; the details of any current LBPD program are unknown.
Could the Framework add social media monitoring to the discussion of technologies that contribute to a racially-imbalanced criminal justice system?
Thank you for considering my questions,
Greg Buhl
CheckLBPD.org
Greg@CheckLBPD.org (encrypted on our end with protonmail)