Artificial Intelligence and Universal Values
Title: Artificial Intelligence and Universal Values
Subject Classification: Artificial Intelligence, Morals, Society and Culture
BIC Classification: UYQ, HPQ, GPQ
BISAC Classification: PHI005000, COM004000, PSY008000
Binding: Hardback, eBook
Publication date: 11 Jul 2024
ISBN (Hardback): 978-1-80441-605-1
ISBN (eBook): 978-1-80441-606-8
E-books are available to libraries from ProQuest and EBSCO, with non-institutional availability from Google Play.
For larger orders, or orders where you require an invoice, contact us at admin@ethicspress.com.
Description
The field of value alignment, or more broadly machine ethics, is becoming increasingly important as artificial intelligence development accelerates. By ‘alignment’ we mean giving a generally intelligent software system the capability to act in ways that are beneficial, or at least minimally harmful, to humans. A large number of techniques are being experimented with, but this work often fails to specify exactly which values we should be aligning to. When making a decision, an agent is supposed to maximize the expected utility of its value function. Classically, that utility has been equated with happiness, but happiness is just one of many things that people value.
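As a purely illustrative sketch of that decision rule (standard expected-utility theory, not a formulation taken from the book; here A is the set of available actions, O the set of possible outcomes, P(o | a) the probability of outcome o given action a, and V the agent's value function), the agent chooses

a* = argmax_{a in A} Σ_{o in O} P(o | a) · V(o),

that is, the action whose probability-weighted value over outcomes is highest. The alignment problem, as framed above, is the question of what V should actually encode.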
To resolve this issue, we need to determine a set of human values that represents humanity's interests. Although this problem might seem intractable, research shows that people of various cultures and religions actually have more in common than they realize. In this book we review world religions, moral philosophy and evolutionary psychology to elucidate a common set of shared values. We then show how these values can be used to address the alignment problem and conclude with open problems and goals for future research.
The key audience for this book will be researchers in the fields of ethics and artificial intelligence who are interested in, or working on, this problem. These readers will come from various professions and include philosophers, computer programmers and psychologists, as the problem itself is multidisciplinary.
Biography
Author(s): Dr. Jay Friedenberg is Professor of Psychology, Department of Social and Behavioral Sciences, Manhattan College, New York, USA.
Reviews
"A strikingly different approach to the vital question of safe superintelligence"
- David Wood, Chair of London Futurists, and Principal of Delta Wisdom
"Artificial Intelligence and Universal Values" by Jay Friedenberg delves into the complex interplay between advancing AI technologies and the foundational values that guide human societies. This book is a comprehensive exploration of the ethical, psychological, and ecological implications of AI, aimed at ensuring that as AI systems become more integrated into our lives, they do so in alignment with universal human values. Friedenberg meticulously examines the trajectory of AI development, from the potential of AGI and ASI to the urgent need for robust AI safety measures. He argues for the necessity of value alignment, where AI systems are designed to adhere to human values to prevent risks associated with misaligned objectives. The book provides an interdisciplinary approach, incorporating insights from philosophy, psychology, ecology, and AI research, to propose a framework for aligning AI with values that promote the well-being of humanity and the planet."
- Dr. Roman Yampolskiy, Author of AI: Unexplainable, Unpredictable, Uncontrollable.