The New York Times, June 4, 2021: If ‘All Models Are Wrong,’ Why Do We Give Them So Much Power?

If you talk to many of the people working on the cutting edge of artificial intelligence research, you’ll hear that we are on the cusp of a technology that will be far more transformative than simply computers and the internet, one that could bring about a new industrial revolution and usher in a utopia — or perhaps pose the greatest threat in our species’s history.

Brian Christian’s recent book “The Alignment Problem” is the best book on the key technical and moral questions of A.I. that I’ve read. At its center is the term from which the book takes its name. “Alignment problem” originated in economics as a way of describing how the systems and incentives we create often fail to align with our goals. And that’s a central worry with A.I., too: that we will create something meant to help us that instead harms us, in part because we didn’t understand how it really worked or what we had actually asked it to do.
