Lion's Roar | September 2019
most a simulation of a cognitive process but is not itself a cognitive process, the same way that a computer simulation of a hurricane is not itself a hurricane.

In strong AI, the idea is that machines could have the kinds of mental states we have. In principle, they could be programmed to actually have a mind—to be intelligent, to understand, to perceive, have beliefs, and exhibit other cognitive states normally ascribed to human beings. We can further distinguish between two versions of strong AI: the claim that machines can think, and the claim that machines can enjoy conscious states.

All of today's AI systems, such as virtual assistants, self-driving cars, facial recognition, recommendation engines, IBM Watson, Deep Blue, and AlphaGo, are instances of weak AI. We are far away from strong AI. Even the most optimistic proponents argue that strong AI is decades away, while others conjecture centuries. Many theorize that it is not possible at all, especially the claim that machines can have consciousness.

As an AI scientist and a Buddhist teacher, I am in the last camp, together with UC Berkeley philosophy professor John Searle, who writes that computers "have, literally... no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior... The machinery has no beliefs, desires, [or] motivations."

The "processes" that Searle talks about are lines of code, or algorithms, that primarily learn to recognize patterns in data (images, words, speech, etc.). There is no real intelligence, no motivation, no autonomy, and no agency.

The apocalyptic fears of super-intelligent machine overlords are primarily born out of the strong AI hypothesis. By allowing ourselves to be disabused of fears of strong AI in a far-off future, our energies are freed to turn to the challenge at hand: weak AI.
There are many aspects of weak AI to be concerned about, which should propel us toward right action.

Traditionally, weak AI needed a set of rules predefined by humans to make a decision. Today, machine learning algorithms (a technique in AI) allow systems to infer rules by themselves by finding patterns in data. This "learning" requires a large amount of data of the right kind. If the learning algorithm (designed by humans) or the data selection (chosen by humans) is biased, the AI system's decisions will be biased as well. There have been many examples in which weak AI systems were found to exhibit racial and gender bias. One example is COMPAS, where the system predicts that Black defendants pose a higher risk of recidivism than they actually do. Another example is mortgage algorithms that perpetuate racial bias in lending.

With weak AI, the humans are still behind the wheel, and that's where the real challenge of AI lies. We can design robotic doctors, nurses, and caretakers who never tire or become error-prone, with super-steady hands to perform operations any time of the day. Or AI-powered drones to help fight ocean plastic. On the other hand, we can build robotic killing machines, giving them the power over life and death without human oversight, or create deepfake videos to swing an election.

To err is to be human: our algorithms are not perfect, nor is our data. Therefore, neither are the AI systems we create. From where I sit, the challenge of ethical AI is as crucial as climate change. We need to remain vigilant about data fairness, privacy, and the ethics of AI systems. Given the potential for abuse, San Francisco has just banned the use of facial recognition software by police and other agencies, and many cities are following suit. IEEE, the world's largest technical professional organization for the advancement of technology, has more than a dozen working groups on ethical design standards for autonomous and intelligent systems.
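The mechanism described above, a system inferring its rules from patterns in historical data and so inheriting whatever bias that data carries, can be illustrated with a deliberately tiny sketch. The groups, labels, and "model" here are hypothetical stand-ins, not any real system; the toy learner simply adopts the most common historical outcome for each group:

```python
# A minimal sketch (hypothetical data) of how a learning algorithm
# reproduces bias present in its training data. The "model" learns the
# majority historical outcome per group, a crude stand-in for the
# pattern-finding that real machine learning performs.

from collections import Counter

def train(examples):
    """Learn, for each group, the most common historical label."""
    by_group = {}
    for group, label in examples:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

# Hypothetical lending history: group B was denied far more often,
# regardless of actual merit.
history = ([("A", "approve")] * 8 + [("A", "deny")] * 2
           + [("B", "approve")] * 3 + [("B", "deny")] * 7)

model = train(history)
print(model)  # {'A': 'approve', 'B': 'deny'} -- the rule mirrors the skew
```

Nothing in the algorithm is explicitly prejudiced; the skewed decisions come entirely from the skewed data it was given, which is the crux of the fairness problem described above.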
There are many more examples of organizations mobilizing to create guidelines and regulations to infuse ethics in AI.

There is another side to the story of fear in Buddhism: many teachings refer to fear as a valuable ally and instrument, a doorway to awakening if explored skillfully. Among these are fear of moral remorse and disesteem (hiri and ottappa), which are considered to be the ethical guardians of the world. According to Giuliano Giustarini in The Journal of Indian Philosophy, stirring fear (bhaya) and letting go of fear and fostering fearlessness (abhaya) "are two essential steps of the same process," the latter being a "quality of liberation as well as an attitude to be cultivated."

As Buddhists, we need to skillfully utilize fear and concern not to become frozen in a feeling of overwhelm or inaction, but to wisely and compassionately use our voice, votes, and energy to ensure humanity's AI systems are designed with kindness, fairness, honesty, and full ethical considerations. Our freedom depends on it. ♦