The Invention of “Ethical AI”

How Big Tech Manipulates Academia to Avoid Regulation
Rodrigo Ochigame, The Intercept, December 20 2019
The irony of the ethical scandal enveloping Joichi Ito, the former director of the MIT Media Lab, is that he used to lead academic initiatives on ethics. After the revelation of his financial ties to Jeffrey Epstein, the financier charged with sex trafficking underage girls as young as 14, Ito resigned from multiple roles at MIT, a visiting professorship at Harvard Law School, and the boards of the John D. and Catherine T. MacArthur Foundation, the John S. and James L. Knight Foundation, and the New York Times Company.
Many spectators are puzzled by Ito’s influential role as an ethicist of artificial intelligence. Indeed, his initiatives were crucial in establishing the discourse of “ethical AI” that is now ubiquitous in academia and in the mainstream press. In 2016, then-President Barack Obama described him as an “expert” on AI and ethics. Since 2017, Ito financed many projects through the $27 million Ethics and Governance of AI Fund, an initiative anchored by the MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University. What was all the talk of “ethics” really about?
[…] Inspired by whistleblower Signe Swenson and others who have spoken out, I have decided to report what I came to learn regarding Ito’s role in shaping the field of AI ethics, since this is a matter of public concern. The emergence of this field is a recent phenomenon, as past AI researchers had been largely uninterested in the study of ethics. […]
At the Media Lab, I learned that the discourse of “ethical AI,” championed substantially by Ito, was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies. […]
I also watched MIT help the U.S. military brush aside the moral complexities of drone warfare, hosting a superficial talk on AI and ethics by Henry Kissinger, the former secretary of state and notorious war criminal, and giving input on the U.S. Department of Defense’s “AI Ethics Principles” for warfare, which embraced “permissibly biased” algorithms and which avoided using the word “fairness” because the Pentagon believes “that fights should not be fair.”
[…] It lent credibility to the idea that big tech could police its own use of artificial intelligence at a time when the industry faced increasing criticism and calls for legal regulation.
[…] Corporations have tried to shift the discussion to focus on voluntary “ethical principles,” “responsible practices,” and technical adjustments or “safeguards” framed in terms of “bias” and “fairness.” […]
To characterize the corporate agenda, it is helpful to distinguish between three kinds of regulatory possibilities for a given technology: (1) no legal regulation at all, leaving “ethical principles” and “responsible practices” as merely voluntary; (2) moderate legal regulation encouraging or requiring technical adjustments that do not conflict significantly with profits; or (3) restrictive legal regulation curbing or banning deployment of the technology. Unsurprisingly, the tech industry tends to support the first two and oppose the last. The corporate-sponsored discussion on “ethical AI” enables precisely this position. […]
Thus, Silicon Valley’s vigorous promotion of “ethical AI” has constituted a strategic lobbying effort, one that has enrolled academia to legitimize itself. Ito played a key role in this corporate-academic fraternizing, meeting regularly with tech executives. The MIT-Harvard fund’s initial director was the former “global public policy lead” for AI at Google. Through the fund, Ito and his associates sponsored many projects, including the creation of a prominent conference on “Fairness, Accountability, and Transparency” in computer science; other sponsors of the conference included Google, Facebook, and Microsoft.
[…] After the initial steps by MIT and Harvard, many other universities and new institutes received money from the tech industry to work on AI ethics. Most such organizations are also headed by current or former executives of tech firms. […]
Big tech money and direction proved incompatible with an honest exploration of ethics, at least judging from my experience with the “Partnership on AI to Benefit People and Society,” a group founded by Microsoft, Google/DeepMind, Facebook, IBM, and Amazon in 2016. PAI, of which the Media Lab is a member, defines itself as a “multistakeholder body” and claims it is “not a lobbying organization.” In an April 2018 hearing at the U.S. House Committee on Oversight and Government Reform, the Partnership’s executive director claimed that the organization is merely “a resource to policymakers — for instance, in conducting research that informs AI best practices and exploring the societal consequences of certain AI systems, as well as policies around the development and use of AI systems.”
[…] the partnership has certainly sought to influence legislation. […]
[…] The corporate-academic alliances were too robust and convenient. The Media Lab remained in the Partnership, and Ito continued to fraternize with Silicon Valley and Wall Street executives and investors. […]
Regardless of individual actors’ intentions, the corporate lobby’s effort to shape academic research was extremely successful. There is now an enormous amount of work under the rubric of “AI ethics.” To be fair, some of the research is useful and nuanced, especially in the humanities and social sciences. But the majority of well-funded work on “ethical AI” is aligned with the tech lobby’s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies. How did five corporations, using only a small fraction of their budgets, manage to influence and frame so much academic activity, in so many disciplines, so quickly?
[…] The field has also become relevant to the U.S. military, not only in official responses to moral concerns about technologies of targeted killing but also in disputes among Silicon Valley firms over lucrative military contracts. On November 1, the Department of Defense’s Innovation Board published its recommendations for “AI Ethics Principles.” The board is chaired by Eric Schmidt, who was the executive chair of Alphabet, Google’s parent company. […] The board includes multiple executives from Google, Microsoft, and Facebook, raising controversies regarding conflicts of interest. […]
The recommendations seek to compel the Pentagon to increase military investments in AI and to adopt “ethical AI” systems such as those developed and sold by Silicon Valley firms. […]
[…] “some applications will be permissibly and justifiably biased,” specifically “to target certain adversarial combatants more successfully.” The Pentagon’s conception of AI ethics forecloses many important possibilities for moral deliberation, such as the prohibition of drones for targeted killing.
The corporate, academic, and military proponents of “ethical AI” have collaborated closely for mutual benefit. For example, Ito told me that he informally advised Schmidt on which academic AI ethicists Schmidt’s private foundation should fund. Ito even asked me for second-order advice on whether Schmidt should fund a certain professor who, like Ito, later served as an “expert consultant” to the Pentagon’s innovation board. […] Kissinger declared the possibility of “a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.” […]
No defensible claim to “ethics” can sidestep the urgency of legally enforceable restrictions on the deployment of technologies of mass surveillance and systemic violence. Until such restrictions exist, moral and political deliberation about computing will remain subsidiary to the profit-making imperative expressed by the Media Lab’s motto, “Deploy or Die.” While some deploy, even if ostensibly “ethically,” others die.

https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/