Friday, February 7, 2025

Outrage as Google scraps its promise not to use AI for weapons or surveillance

Google has updated its AI ethical guidelines and removed a key pledge not to use the tech in a dangerous way.

On Tuesday, the company erased the 2018 pledge stating the tech giant ‘would not use AI for weapons or surveillance’.

The revised policy now shows that Google will only develop AI ‘responsibly’ and in line with ‘widely accepted principles of international law and human rights.’ 

Google’s change has sparked internal backlash, with employees calling the move ‘deeply concerning’ and saying the company should not be involved in ‘the business of war.’

Matt Mahmoudi, Amnesty adviser on AI and human rights, shamed Google for the move, saying the tech giant set a ‘dangerous precedent’.

‘AI-powered technologies could fuel surveillance and lethal killing systems at a vast scale, potentially leading to mass violations and infringing on the fundamental right to privacy,’ he added. 


The move comes nearly seven years after Google was involved in the US Department of Defense’s Project Maven, a military project that used AI to help the military detect objects in images and identify potential targets.

Google removed a section on four applications from its original AI principles, released in 2018, which stated it would not pursue weapons or surveillance.

Google released the 2018 AI principles after its employees protested its involvement with the US Department of Defense’s Project Maven. Project Maven used AI to help the military detect objects in images and identify potential targets.

DailyMail.com has reached out to Google for comment.

Project Maven used Google’s AI software to analyze aerial surveillance video to look for patterns that can help military intelligence analysts.

In April 2018, before the principles were published, more than 3,000 Google employees penned an open letter calling on CEO Sundar Pichai to end its involvement with the project.

The tech giant pulled out of Project Maven that June and published the AI principles one week later.

Google has worked with the military on other projects such as cloud computing, AI, and disaster response since 2018, but none on the scale of Project Maven.

The updated AI principles now focus on three core tenets, the first being ‘Bold Innovation.’

‘We develop AI to assist, empower, and inspire people in almost every field of human endeavor, drive economic progress and improve lives, enable scientific breakthroughs, and help address humanity’s biggest challenges,’ the post reads.

The second is ‘Responsible Development and Deployment.’

The revised policy states that Google pursues AI ‘responsibly’ and in line with ‘widely accepted principles of international law and human rights.’

‘Because we understand that AI, as a still-emerging transformative technology, poses new complexities and risks, we consider it imperative to pursue AI responsibly throughout the development and deployment lifecycle — from design to testing to deployment to iteration — learning as AI advances and uses evolve,’ shared the executives.

And the third is ‘Collaborative Progress, Together.’

‘We learn from others, and build technology that empowers others to harness AI positively,’ the blog states.

Parul Koul, a Google software engineer and president of the Alphabet Union Workers-CWA, told Wired: ‘It’s deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public.’

Documents published by The Washington Post in January claimed that Google employees gave Israel’s military access to the company’s latest artificial intelligence technology beginning in the early weeks of the Israel-Gaza war.

The report stated that a Google employee warned the company that if it did not allow Israel more access, the nation would seek the technology from Amazon.

The Post reported that some documents showed Google employees requested additional access to AI technology for the IDF in the spring and summer of 2024.

While the documents do not detail how the AI was used, it appears to have occurred months before Google updated its AI principles.

Michael Horowitz, a political science professor at the University of Pennsylvania, told the Post: ‘Google’s [2025] announcement is more evidence that the relationship between the U.S. technology sector and [Defense Department] continues to get closer, including leading AI companies.’
