MADRID – Engineers, scientists and others working for various major U.S. tech platforms have been in a rebellious mood lately.
They do not want their firms to work for the Pentagon, or to be drawn into the fight against illegal immigration through the use of facial recognition technology. In recent weeks, they have even forced their CEOs to take a stand and establish red lines in various areas of artificial intelligence.
Many of the technologies that we carry around in our pockets today, and from which Silicon Valley companies derive a steady income stream, have a military origin. That is true for the internet, GPS and touch screens, as well as the overall impetus given to computing.
The Defense Advanced Research Projects Agency (DARPA), which reports to the Pentagon and had a budget of $3.18 billion in 2018, plays a major role in this regard. Have things changed?
Pressure groups such as the Tech Workers Coalition and Coworker.org have enabled a degree of mobilization that is unprecedented in terms of speed and scope. They have operated with an impressive level of self-organization that large corporations will have to contend with from now on.
At Google, thousands of workers signed a public letter calling on its CEO, Sundar Pichai, to end the company's participation in the Algorithmic Warfare Cross-Functional Team.
Known as Project Maven, it is based on a contract with the Pentagon to create a “customized AI surveillance engine” for military drones. The firm’s employees argued that Google “should not be in the business of war,” warning that the “Google brand” and its “ability to compete for talent” would otherwise be damaged.
At Amazon, in the wake of the separation of illegal immigrants' young children from their families ordered by President Donald Trump, thousands of employees asked its CEO, Jeff Bezos, to halt all sales of facial recognition software to the government, because its Rekognition tool could be used unjustly against immigrants.
Employees at Microsoft wrote their own letter to protest against its contract with Immigration and Customs Enforcement (ICE).
Still, all major corporations in the United States (as well as in Europe and China, where the most advanced surveillance state is being developed) are investing in these technologies, which have clear civilian applications.
The people who have possibly gone furthest in response to these employee rebellions are Google's Sundar Pichai and Microsoft's Satya Nadella. In a blog post last June, Pichai published a set of "principles" for the AI that Google is developing.
The first sentence of the Google Code of Conduct released in 2000 was "don't be evil." That phrase was withdrawn last May. However, according to the new principles, among the AI applications that Google will not design or deploy are:
“Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.”
“Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”
“Technologies that gather or use information for surveillance, violating internationally accepted norms.”
“Technologies whose purpose contravenes widely accepted principles of international law and human rights.”
Eventually, the employees succeeded in persuading Google to abandon Project Maven.
Two weeks ago, the president of Microsoft, Bradford L. Smith, became one of the first industry leaders to call for government regulation of an entire field, something unusual in itself: facial recognition technology.
In addition to the benefits they promise, Smith argued, these technologies have a sinister side: state surveillance, use without the explicit consent of those subjected to them, and even the potential to "perpetuate racial discrimination."
The EU has made far more progress on the issue of consent (although its actual impact in practice remains to be seen).
Facebook, which has been criticized for its role in fostering hatred in countries such as Sri Lanka and Myanmar, has launched a plan to block content that incites people to hatred, although not to the extent of censoring Holocaust denial on its platform.
This outbreak of ethics in Silicon Valley — a place where secrecy generally prevails — underscores a political awakening that is spreading virally.
The spark for the resistance movement may have been the anti-immigration policies of President Trump. It matters in this regard that more than half of the most valuable tech companies in Silicon Valley were founded and/or are run by first- and second-generation immigrants. Many of the engineers and other employees who work for major tech firms were not born in the U.S.
In the industry, talent counts above all else. The idea that working there should help to change the world for the better is also prevalent.
There is also a degree of pacifism in the air, especially amid the growing militarization of AI, above all when it is not defensive.
There is a certain parallel with what occurred in the aftermath of the Manhattan Project, which was dedicated to building the atom bomb during World War II. After the bomb was dropped on Hiroshima and Nagasaki, many of the scientists involved criticized its use.
Likewise, many in Silicon Valley now balk at taking part in battlefield programs or fostering military technology and excessive security measures. The movement against autonomous weapons, so-called killer robots, is growing apace, and not only in the U.S.
It has been argued that the opposition to Project Maven could be the “MeToo moment” for tech employees in the U.S.
Either way, ethics is becoming more and more prominent in the debate about the impact of new technologies. Better yet, that debate has only just begun.
Autonomous weapons, the ideological bias of algorithms and blanket surveillance, to stay within the field of security, are phenomena that have taken on great significance.
The companies and employees of the firms that are creating these enabling technologies are no longer on the sidelines. Robert McGinn put them on notice with his 2018 book "The Ethical Engineer," a must-read for such professionals.
Andres Ortega is senior research fellow at the Elcano Royal Institute, a major Spanish foreign affairs think tank.