Rite Aid has been banned from using facial recognition software for five years after the Federal Trade Commission (FTC) found that the “reckless use of facial surveillance systems” by the US drugstore giant left customers humiliated and put their “confidential information at risk.”
The FTC order, which is subject to approval by the US Bankruptcy Court after Rite Aid filed for Chapter 11 bankruptcy in October, also instructs Rite Aid to delete all images it collected as part of its facial recognition rollout, as well as any products that were built from those images. The company must also implement a robust data security program to protect any personal data it collects.
A 2020 report detailed how the drugstore chain secretly introduced facial recognition systems in about 200 US stores over an eight-year period starting in 2012, with “largely low-income, non-white neighborhoods” serving as technology test beds.
With the FTC increasingly focused on the misuse of biometric surveillance, Rite Aid fell squarely under the agency’s scrutiny. Among the FTC’s allegations is that Rite Aid – in partnership with two contracted companies – built a “watch list database” containing images of customers alleged to have engaged in criminal activity at one of its stores. These images, often of low quality, were captured by CCTV cameras or employees’ cell phones.
When a customer who allegedly matched an existing image in the database entered a store, employees received an automatic alert instructing them to take action – most often to “approach and identify,” i.e., verify the customer’s identity and ask them to leave. Many of these “matches” were false positives that led employees to wrongly accuse customers of wrongdoing, creating “embarrassment, harassment, and other harm,” according to the FTC.
“Employees, acting on false positive alerts, followed consumers in the stores, searched them, ordered them to leave, called the police to confront or remove them, and publicly accused them, sometimes in front of friends or family, of shoplifting or other irregularities,” the complaint says.
Additionally, the FTC said that Rite Aid failed to inform customers that facial recognition technology was in use, while specifically instructing employees not to disclose this information to customers.
Facial recognition software has emerged as one of the most controversial facets of the AI-powered surveillance era. In recent years, cities have banned the technology outright, while politicians have struggled to regulate how police use it. Meanwhile, companies like Clearview AI have been hit with lawsuits and fines around the world for major data privacy violations involving facial recognition technology.
The FTC’s latest findings on Rite Aid also shed light on inherent biases in AI systems. For example, the FTC states that Rite Aid failed to mitigate risks to certain consumers due to their race – its technology was “more likely to generate false positives in stores located in Black and Asian plurality communities than in white plurality communities,” the report notes.
Additionally, the FTC said Rite Aid failed to test or measure the accuracy of its facial recognition system before or after deployment.
In a press release, Rite Aid said it was “glad to have reached a settlement with the FTC,” but it disagreed with the core of the allegations.
“The allegations relate to a pilot facial recognition technology program that the company deployed in a limited number of stores,” Rite Aid said in its statement. “Rite Aid stopped using the technology in this small group of stores over three years ago, before the start of the FTC’s investigation of the company’s use of the technology.”