Civil Rights, the Minneapolis Council, and Clearview AI

Over the past few years, law enforcement use of facial recognition software from Clearview AI has drawn intense scrutiny in Minneapolis. The ACLU has urged the city to ban the company’s technology outright, citing concerns about Clearview’s privacy practices, security, and software accuracy, and data protection regulators have called on the company to purge improperly collected data from its systems.

Table of Contents:

  1. CNIL calls on Clearview to purge the data from its systems
  2. ACLU advocates for full ban of facial recognition technology in Minneapolis
  3. Privacy practices, security and software accuracy of Clearview AI

  • CNIL calls on Clearview to purge the data from its systems

CNIL, the French data protection authority, has ruled that Clearview AI unlawfully processed the personal data of French residents. In its decision, the regulator ordered Clearview AI to purge that data from its systems and to stop collecting personal data in France. After the firm failed to comply with an earlier formal notice, CNIL imposed the maximum fine of EUR 20 million.

  • CNIL’s grounds for the decision:
  • CNIL based its decision on several breaches, including the firm’s failure to respond to its questionnaire, its failure to comply with requests to erase data, and its failure to facilitate the exercise of data subjects’ rights.
  • The regulator also found that the firm had not conducted a data protection impact assessment.
  • In addition, Clearview AI did not submit observations in its defense.
  • Nor did it respond to CNIL’s formal notice.
  • Clearview’s business model:
  • CNIL also found Clearview AI’s business model unlawful, because the firm processes personal data for purposes other than those for which data subjects originally posted it online.
  • Repackaging that data for resale, the regulator found, violates the GDPR’s restrictions on processing.
  • CNIL therefore ordered Clearview AI to stop collecting data in France.
  • Enforcement in other countries:
  • Data protection authorities elsewhere in Europe, including Greece and Belgium, have also moved against the company’s data collection.
  • In the US, under a 2022 settlement with the ACLU, the company agreed to stop selling its faceprint database to most private businesses, and to Illinois law enforcement for five years.
  • France is one of several jurisdictions to penalize Clearview; the company has also been fined in Italy, Greece and Britain.
  • Information Commissioner’s Office:
  • Clearview has been under scrutiny from data protection authorities for the past two years. In the UK, the Information Commissioner’s Office initially proposed fining Clearview AI just over £17 million (about EUR 20 million) for data privacy violations.
  • The ICO also ordered Clearview AI to stop further processing of UK residents’ personal data and to delete the data it held.
  • The firm has also been fined in Greece and ordered to delete data in Australia.
  • In May 2022, the ICO issued its final penalty of £7.5 million.
  • CNIL’s formal notice and fine:
  • CNIL issued a formal notice to Clearview AI Inc. in November 2021, informing the firm that it had violated the GDPR.
  • The formal notice ordered the firm to stop collecting data without a legal basis and to delete the data it had collected on people in France within two months.
  • It also ordered the company to facilitate the exercise of data subjects’ rights.
  • Clearview AI did not comply, so CNIL’s subsequent decision imposed the EUR 20 million fine and renewed the injunctions issued in November 2021.
  • The company must now comply within two months or face an additional penalty of EUR 100,000 per day of delay.

In addition to the EUR 20 million fine, Clearview AI faces a penalty of EUR 100,000 for each day it fails to comply with the injunctions beyond the two-month deadline. The company also faces compliance deadlines under the separate orders issued in the UK, Italy, and Greece.

  • ACLU advocates for full ban of facial recognition technology in Minneapolis

Across the United States, more than two dozen cities and municipalities have banned facial recognition technology. The American Civil Liberties Union is leading a nationwide effort to defend civil and privacy rights against the growing threat of unregulated face recognition surveillance.

  • Minneapolis’s facial recognition ban:
  • In February 2021, the Minneapolis City Council voted to ban facial recognition technology.
  • The ordinance bars city government, including the police department, from using facial recognition data for any purpose.
  • It also prohibits knowingly using facial recognition data collected by others.
  • The ordinance cites the disproportionate burden that surveillance-driven policing places on communities of color.
  • Bias, accuracy and expanding use:
  • This is not the first time that facial recognition technology has been criticized by civil rights advocates.
  • Multiple studies have found that facial recognition systems perform worse for people of color, women and trans people.
  • The technology is used in contexts ranging from human trafficking and child sexual abuse investigations to commercial security systems.
  • It has been shown to be less accurate at identifying people with darker skin tones.
  • That inaccuracy increases the likelihood of misidentification and wrongful arrests (a minimal sketch of how such error-rate disparities are typically measured appears after this list).
  • Implications for civil rights:
  • A report from Georgetown Law examined the legality of face recognition technology and its implications for civil rights.
  • A number of Black men have been wrongfully arrested after being misidentified by facial recognition systems.
  • The technology has also proven problematic for other groups; in particular, it is less reliable for children, women and the elderly.
  • Government uses of the technology:
  • Facial recognition technology has been used by government agencies to scan body camera footage, to identify people held in jails, to search for criminal suspects, and to conduct surveillance on people in the U.S. It is also used by customs officials at travel checkpoints.
  • The technology performs best on middle-aged white men and is notably less accurate at identifying people of color, a disparity that invites misidentification and legal challenges.
  •  Civil rights groups:
  • The ACLU and other civil rights groups have warned that face recognition technology will be used to supercharge police abuses and expand the police’s ability to target communities of color.
  • The risk is compounded by the fact that facial recognition systems are less accurate on darker skin tones.
  • Facial recognition moratorium act:
  • The ACLU has also been pressing Congress to restrict the use of face recognition technology in the U.S.
  • A number of congressional Democrats have incorporated facial recognition provisions into police reform legislation.
  • Some Democrats have introduced standalone bills as well, such as the Facial Recognition and Biometric Technology Moratorium Act.
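
Since several of the bullets above turn on how accuracy disparities are quantified, here is a minimal, hypothetical sketch of the kind of per-group error-rate comparison that independent audits perform. The data and group labels are invented purely for illustration and do not describe any real system.

```python
# Hypothetical sketch: comparing false match rates across demographic groups.
# All records below are invented for illustration only.
from collections import defaultdict

# Each record: (group_label, system_said_match, ground_truth_match)
results = [
    ("group_a", True, False),   # false match
    ("group_a", False, False),  # correct rejection
    ("group_a", False, False),
    ("group_b", True, True),    # correct match
    ("group_b", True, False),   # false match
    ("group_b", True, False),   # false match
    ("group_b", False, False),
]

def false_match_rate(records):
    """False matches divided by all pairs that should NOT have matched."""
    impostor_pairs = [r for r in records if not r[2]]
    if not impostor_pairs:
        return 0.0
    false_matches = sum(1 for r in impostor_pairs if r[1])
    return false_matches / len(impostor_pairs)

by_group = defaultdict(list)
for record in results:
    by_group[record[0]].append(record)

# A large gap between groups is the kind of disparity the studies describe.
for group, records in sorted(by_group.items()):
    print(f"{group}: false match rate = {false_match_rate(records):.2f}")
```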

A number of advocacy groups have also worked to ban facial recognition at the local level, where the most effective measures have been adopted. Some cities, such as Boston and San Francisco, have banned government use of facial recognition technology; others, such as Portland, Oregon, have extended restrictions to private use as well.

  • Privacy practices, security and software accuracy of Clearview AI

Over the past several years, privacy regulators and activists have secured a series of fines and legal victories against facial recognition companies in the United Kingdom and Italy, and several California cities have banned facial recognition technology. During the same period, Clearview AI has faced mounting scrutiny, drawing criticism over its software accuracy and privacy practices.

  1. The company presents itself as an innovator in facial recognition technology and claims to offer highly accurate identification.
  2. However, the American Civil Liberties Union (ACLU) argues that Clearview AI’s accuracy claims are misleading.
  3. The ACLU has also alleged that the company violated the Illinois Biometric Information Privacy Act, which protects the faceprints of Illinois residents.
  4. Moreover, the company has been linked to far-right figures in the U.S. and criticized for scraping images from social media sites without users’ permission.
  • Company’s claims:
  • The ACLU argues that the company’s claims are overstated on several levels, including the assertion that Clearview AI is more accurate than rival tools such as Amazon’s Rekognition.
  • The ACLU also argues that the company’s sales pitch is misleading, and points out that Clearview AI’s identification is less accurate for certain groups.
  • The company claims that its system can match faces against photos taken as much as 20 years earlier, and that its database holds some 20 billion images.
  • However, one independent study found that many of the human reviewers evaluating its matches had no particular proficiency in facial recognition, even as the same study rated Clearview AI’s algorithm ahead of rival tools.
  • In response to such concerns, Clearview AI says it has put procedures in place to guard against misuse. For example, the company says it uses a system of peer review to catch mistakes.
  • It also requires its customers to comply with data protection laws, and it maintains strict internal access controls to prevent unauthorized access.
  • To further reduce the risk of mistakes, the company requires that each possible match be reviewed by a trained law enforcement agent.
  • Additionally, the company uses a hard-coded matching threshold to minimize false positives. Because it does not expose a percentage match score, however, outsiders cannot easily verify how well the system works (a minimal sketch of how such fixed-threshold matching typically operates follows this list).
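
Clearview does not publish the details of its matching pipeline, so the following is only a minimal sketch, under assumed parameters, of how a fixed-threshold matcher of the kind described above commonly works: faces are compared as embedding vectors, anything below a hard-coded similarity cutoff is discarded, and surviving candidates are handed to a human reviewer rather than reported with a percentage score. The threshold value, function names and random embeddings here are illustrative assumptions, not Clearview AI’s actual system.

```python
# Minimal sketch of fixed-threshold face matching (illustrative only;
# not Clearview AI's actual algorithm or parameters).
import numpy as np

MATCH_THRESHOLD = 0.85  # hypothetical hard-coded similarity cutoff

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def candidate_matches(probe: np.ndarray, gallery: dict) -> list:
    """Return gallery IDs whose embeddings clear the fixed threshold.

    No similarity score is returned to the end user, mirroring the
    'no percentage matching' behaviour described above; every surviving
    candidate would instead go to a trained human reviewer.
    """
    return [
        identity
        for identity, embedding in gallery.items()
        if cosine_similarity(probe, embedding) >= MATCH_THRESHOLD
    ]

# Toy usage with random vectors standing in for a real face encoder.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(5)}
probe = gallery["person_3"] + rng.normal(scale=0.05, size=128)  # near-duplicate
print(candidate_matches(probe, gallery))
```

In practice, the choice of cutoff trades false positives against missed matches, which is why the bullets above stress human review of every candidate.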

The company also attaches metadata to every search, which it says helps ensure the integrity of each query; the metadata records, for example, the identity of the user who performed the search and the stated nature of the search. Clearview also retains independent external organizations to perform annual security assessments of its system, and it claims these procedures fulfill its commitment to protecting human rights and fundamental freedoms.
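
The schema of that metadata is not described here, so as a rough illustration only, a per-search audit record of the sort just mentioned might look like the sketch below; every field name is an assumption made for illustration, not Clearview AI’s documented format.

```python
# Hypothetical per-search audit record; field names are illustrative
# assumptions, not Clearview AI's documented schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass(frozen=True)
class SearchAuditRecord:
    user_id: str               # identity of the agent who ran the search
    agency: str                # organisation the user belongs to
    case_reference: str        # the stated nature/purpose of the search
    searched_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    search_id: str = field(default_factory=lambda: str(uuid.uuid4()))

# Example: the record an auditor could later review.
record = SearchAuditRecord(
    user_id="officer_1234",
    agency="example_pd",
    case_reference="case-2023-0042: missing person inquiry",
)
print(record)
```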

Conclusion:

The Minneapolis City Council has passed a measure calling for the Minneapolis Police Department and Clearview AI to stop sharing biometric data. The council was concerned that Clearview had put residents’ biometric privacy at risk by selling access to its facial recognition system to law enforcement agencies, even though the tool had been pitched as a way to help police detect crimes such as child sexual abuse.
