Microsoft Corp, citing human rights concerns, recently rejected a request from a California law enforcement agency to install facial recognition technology in officers' cars and body cameras, company President Brad Smith said on Tuesday.
Microsoft made the decision after concluding that providing facial recognition technology to the agency would likely result in innocent women and minorities being disproportionately held for questioning, because the artificial intelligence has been trained mostly on images of white and male faces.
Research projects have repeatedly found that AI misidentifies women and minorities at far higher rates.
Smith, without naming the agency, said officers would have run a face scan on anyone they pulled over, checking it against a database of suspects. After weighing the disproportionate impacts, the company told the agency that the technology was not suited to its purpose.
Microsoft had also rejected a deal to install facial recognition technology on cameras covering the capital city of an unnamed country, one rated "not free" by the nonprofit Freedom House, Smith said while speaking on "human-centered artificial intelligence" at a Stanford University conference, adding that the deal would have suppressed freedom of assembly there.
Microsoft did, however, agree to provide facial recognition technology to a prison in the United States, after concluding that it would improve safety within the confined environment of that unnamed institution.
Smith said these decisions are part of a human rights commitment that has become increasingly critical as advancing technology empowers governments to conduct sweeping surveillance, deploy autonomous weapons, and take other steps that may be impossible to reverse.