Olaf Groth--Hult International Business School professor--and Mark Nitzberg--executive director of U.C. Berkeley's Center for Human Compatible Artificial Intelligence--discuss their thought-provoking collaborative work, Solomon's Code: Humanity in a World of Thinking Machines.
"The Trolley Problem" has long been a favorite of college philosophy classes--do you flip a switch on a trolley line to save five people's lives at the expense of one? The solution to this conundrum has remained academic--until now.
As engineers write the code for driverless cars, they not only have to discuss all the ramifications and scenarios of the Trolley Problem; they also have to provide a black-and-white answer for every conceivable variation of it. This is just one example of how the continuing advancement of technology collides with age-old ethical and cultural mainstays.
As thinking machines make more decisions on our behalf, they can enrich, enable and influence our lives. Whether in medicine, money or love, technologies powered by forms of artificial intelligence are playing an increasingly prominent role in our lives. How do we ensure they make the most beneficial choices for us? And if one person benefits at another's expense, who gets to decide what's best? The increasing use of AI raises critical questions about our values, cultures, economies and power relationships. And the answers might depend on your race, gender, age, behavior, or nationality. In clear and accessible prose, Groth and Nitzberg explore the history of intelligent technology, revealing how close we are to designing machines that have some sort of consciousness. Now we must decide how to give these machines a conscience.
Simultaneously thrilling and provocative, Solomon's Code is alone in raising the difficult questions that must be considered given the speed of technological development. It is a book that will make you think deeply about what it means to be human as our technology becomes just as powerful as--or more powerful than--we are.