
IBM vowed not to develop facial recognition systems but traded the principles for $70 million

In 2020, amid the rise of the Black Lives Matter movement, IBM announced it would stop selling its facial recognition technology. Yet, contrary to its own pledge, the company has since taken a US$69.8 million contract with the UK government to supply exactly such a system.

Image source: Carson Masterson / unsplash.com

In June 2020, IBM CEO Arvind Krishna sent a letter to the US Congress stating, in part: "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency." The company then went a step further and called for US export controls that would bar facial recognition systems from being used "to suppress dissent, to infringe on the rights of minorities, or to erase basic expectations of privacy" even abroad.

However, in August 2023, IBM signed a £54.7 million ($69.8 million) deal with the UK government to build a national biometric facial recognition platform for use by law enforcement and immigration officials, journalists at the American outlet The Verge and the British investigative journalism organization Liberty Investigates have found. The UK Home Office's Biometrics Matcher platform will initially offer fingerprint matching; later stages call for a facial recognition capability for immigration purposes, described in project documents as a "strategic facial matching" tool, and the final stages are to add a facial matching function for law enforcement purposes.

In other words, photographs of individuals are compared against images in a database, a so-called one-to-many matching system. Notably, in September 2020 IBM itself described one-to-many matching as "the type of facial recognition technology most likely to be used for mass surveillance, racial profiling, and other human rights violations."
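Conceptually, a one-to-many system scores a single probe image against every identity in a gallery, rather than verifying one claimed identity (one-to-one). A minimal, purely illustrative sketch of that idea follows; the function names, toy vectors, and threshold are hypothetical, and real systems compare high-dimensional embeddings produced by a trained face-recognition model, not hand-written numbers:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def one_to_many_match(probe, gallery, threshold=0.8):
    """Score one probe embedding against every gallery entry (one-to-many).

    Returns (identity, score) pairs at or above the threshold, best match first.
    """
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    return sorted((s for s in scored if s[1] >= threshold), key=lambda s: -s[1])

# Toy 3-dimensional "embeddings"; real face embeddings have hundreds of dimensions.
gallery = {
    "person_a": [0.9, 0.1, 0.0],
    "person_b": [0.0, 1.0, 0.0],
    "person_c": [0.7, 0.7, 0.1],
}
matches = one_to_many_match([0.88, 0.15, 0.02], gallery)
```

The point of the sketch is the search pattern itself: every person in the database is scored against the probe, which is why critics associate one-to-many matching with mass surveillance in a way that one-to-one verification is not.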

Image Source: Tumisu / pixabay.com

IBM does not appear to see any contradiction between its actions and its own statements. Company spokesperson Imtiaz Mufti said: "In line with our 2020 commitments, IBM no longer offers general-purpose facial recognition systems and does not support the use of facial recognition systems for mass surveillance, racial profiling, or other human rights violations. <...> The Home Office Biometrics Matcher Platform and associated services contract are not used for mass surveillance. It helps police and immigration officers identify suspects against a database of fingerprint and photo data. It is not capable of capturing video, which would typically be required to support biometric identification of 'faces in a crowd.'"

Human rights advocates have criticized IBM's position, arguing that one-to-many facial recognition systems are incompatible with human rights law and that the company should stop selling them. Police use of such systems has been linked to wrongful arrests in the United States, and its legality has been challenged in UK courts, The Verge and Liberty Investigates note. In August 2020, the UK Court of Appeal ruled that South Wales Police's use of facial recognition breached privacy and equality laws. The force suspended the system after that ruling, but later resumed using it.

After IBM, Amazon and Microsoft also announced moratoria on selling facial recognition systems to US law enforcement agencies. In June 2020, Amazon imposed a one-year moratorium on police use of its Rekognition system and later extended it indefinitely; an Amazon spokesperson confirmed it remains in effect. Also in June 2020, Microsoft said it would not sell facial recognition software to US law enforcement agencies until a federal law governing the use of the technology is passed. Asked by The Verge and Liberty Investigates, a Microsoft representative pointed to the company's website, which states that use of the Azure AI Face service by or for US state or local police departments is prohibited under Microsoft policy. The UK Home Office did not respond to a request for comment.

About the author

Robbie Elmers

Robbie Elmers is a staff writer for Tech News Space, covering software, applications and services.
