SILICON VALLEY, ETHICS, AND AI — SIGNAL READERS RESPOND
A few weeks ago, I wrote about the backlash within Google over its participation in Project Maven, a Pentagon program that uses image recognition to analyze drone surveillance footage. I asked readers whether grassroots pressure could act as an effective brake on controversial uses of AI. One well-placed Signal reader in Silicon Valley was skeptical: “Money matters to people,” he replied. Most employees who have invested time and resources in building a career at Google, he argued, aren’t likely to leave if the company starts dabbling in military AI, particularly when military collaboration is only ever likely to be a small part of what Google does. Still, our reader wrote, “We should be careful, nonetheless.”
Google subsequently backed away from the Pentagon project, but a set of new AI principles published by CEO Sundar Pichai in the wake of the controversy made clear that the company would still work with governments and the military in areas that don’t involve weapons or human harm, like cybersecurity. Despite Google’s attempt to draw a clear line on the issue, the boundary between directly causing harm and merely supporting the military’s mission is blurry. Growing US-China competition in AI is also a factor here, according to another reader, who argued that if grassroots movements curtail AI development in the US, “we can rest assured China will extend its lead in this area.” That’s a concern shared by more than a few people in Washington. Google may have backed down in this case, but given these pressures, the debate over tech companies’ involvement in defense and law enforcement is far from over.