If you are convinced that artificial intelligence (AI) consists of disembodied and unstoppable computer programs enslaving humanity, Possible Minds: 25 Ways of Looking at AI is not for you. However, if you’re, say, an accountant curious about your future work-life balance, and perhaps about the job prospects for your children should they follow your path, then the book, a series of essays from 25 scientists, is instructive and only mildly disconcerting.
It is true that one contributor, Venki Ramakrishnan, an Indian-born Nobel Prize-winning biologist, believes that accountants – along with “many legal and medical professionals, financial analysts and stockbrokers, travel agents – in fact, a large fraction of white-collar jobs – will disappear as a result of sophisticated machine-learning programs,” but he is in the minority. Steven Pinker, the Canadian experimental psychologist, for example, believes AI will be a boon to the profession, shouldering the weight of tasks that accountants don’t have the time or ability to do, such as sorting millions of transaction records.
But Possible Minds covers the corporate role of AI more from an ethical, rather than a practical, perspective. Daniel C. Dennett, Professor of Philosophy at Tufts University in the United States, would like to see AI operators licensed. “Once we recognize that people are starting to make life-or-death decisions largely on the basis of advice from AI systems… those who in any way encourage people to put more trust in these systems than they warrant should be held morally and legally accountable.”
Stuart Russell, Professor of Computer Science and Engineering at the University of California, Berkeley, believes in self-limiting robots. “A robot that’s uncertain about human preferences actually benefits from being switched off, because it understands that the human will press the off switch to prevent the robot from doing something counter to those preferences.” AI, Russell notes, is very much under the consumer spotlight. “If one poorly designed domestic robot cooks the cat for dinner, not realizing that its sentimental value outweighs its nutritional value, the domestic-robot industry will be out of business.”
However, the central concern of most of the authors is not that runaway AI will somehow exert its own power over humanity, but that malicious operators will pervert technology to achieve their own ends. W. Daniel Hillis, Professor of Engineering and Medicine at the University of Southern California, worries about the corporate control of AI.
“Our most powerful and rapidly improving [AI is] controlled by for-profit corporations,” he writes, citing Google, Amazon, Baidu, Microsoft, Facebook, Apple and IBM. Their machines, he adds, “will be designed to have goals aligned with those of the corporation… [and] they will become more powerful and autonomous than nation-states.”
Possible Minds views AI culture and science from an overwhelmingly American perspective. There is no mention of the AI research and development carried out in China at companies such as Tencent or Alibaba, or even of the latter’s Zoloz, a Kansas City-based AI start-up owned by Ant Financial whose identity-verification technology is used in Alipay. There is also no mention of Beijing’s New Generation AI Development Plan, a policy announced in 2017 that outlines China’s strategy to build a domestic AI industry worth nearly US$150 billion in the next few years and to become the leading AI power by 2030.
The book’s perspective is also overwhelmingly male: just three of the 25 essays are written by women. That is a shame, because those women offer some of the book’s most illuminating insights about AI. Caroline A. Jones, Professor of Art History at the Massachusetts Institute of Technology, presents a fascinating history of the role of AI in the arts, invoking the Xiamen-born sculptor Wen-Ying Tsai and his interactive “cybernetic art” that enthralled New Yorkers in the 1960s.
Anca Dragan, an Assistant Professor of Electrical Engineering and Computer Sciences at UC Berkeley, specializes in AI-human interaction, a field highlighted by such incidents as the self-driving Uber car fatally striking a pedestrian in Arizona in 2018. “The autonomous car shares the road with pedestrians, human-driven vehicles, and other autonomous cars,” she writes. “How to combine these people’s values when they might be in conflict is an important problem we need to solve.”
Moreover, developmental psychologist Alison Gopnik, a professor at UC Berkeley, warns that although AI and machine learning “sound scary,” humanity’s “natural stupidity can wreak far more havoc” than AI. “But there is not much basis for either the apocalyptic or the utopian vision of AIs replacing humans,” Gopnik adds. “Until we solve the basic paradox of learning, the best artificial intelligences will be unable to compete with the average human four-year-old.”
Editor interview: John Brockman
John Brockman has spent a life following the avant-garde, whether in literature, art, science and technology, or, since the 1990s, in the online world. In 1996, he started www.edge.org as an online manifestation of The Reality Club, an informal gathering of thinkers who had been meeting since 1981 to discuss pressing issues in science, technology and the humanities, from forecasting to behavioural psychology.
Possible Minds: 25 Ways of Looking at AI is one of a series of anthologies produced annually by Brockman’s Edge Foundation. “In terms of big issues, I think the next shoe to drop is this whole world of AI,” he says.
It is not the first time that Brockman and his intellectual acquaintances have studied the subject. “I met the original cyberneticists in 1965 and… it got pretty boring in the 1980s and I just walked away from it.”
He then expected that the Japanese, with their industrial-robotics know-how, would own the technology in that decade. “Everyone said ‘they’re coming, they’re coming’ but nothing happened – it petered out into another AI winter.”
Two decades later, it’s a different world, he acknowledges. “You wake up and there’s something called unsupervised self-fulfilling deep learning: the AlphaGo software, [British neuroscientist] Demis Hassabis and [AI company] DeepMind. So I put together a dinner in London and invited Demis, the idea being let’s have him talk to [U.K. physicist] David Deutsch and get a sense of what’s going on.”
Gathering people together to talk about “what’s going on” has been a Brockman hallmark since childhood. Born in 1941 into a family of Austrian immigrants to Boston, Brockman admired his father’s hard work as a wholesale flower-seller.
He studied business at Columbia University in New York, but found his true métier in the arts, working odd jobs in theatres and hanging out with figures such as novelist Ken Kesey and actor-playwright Sam Shepard.
After a brief career in sales, Brockman wrote an early business self-help book that was not well received by critics before launching a career as a literary agent, eventually signing up heavyweights such as popular science authors Richard Dawkins and Jared Diamond. In the 1980s, he created The Reality Club, reminiscent of 18th-century salons where thinkers, many unorthodox, gathered to express themselves about issues and ideas – often in fields that were not their core studies.
Brockman’s London AI dinner also featured the writer Ian McEwan and film director Terry Gilliam, formerly of the Monty Python comedy troupe. “There was a group of 10 or so and many were people that have nothing to do with computing,” Brockman says. “But they have a lot to say about reality.”