Centre for Equitable Library Access
Public library service for Canadians with print disabilities

The Alignment Problem: Machine Learning and Human Values

By Brian Christian

Categories: Business and economics, Computers and internet, Science and technology

Formats: Synthetic audio, Automated braille

Summary

"If you’re going to read one book on artificial intelligence, this is the one." —Stephen Marche, New York Times A jaw-dropping exploration of everything that goes wrong when we build AI systems and the movement to fix them. Today’s "machine-learning"… systems, trained by data, are so effective that we’ve invited them to see and hear for us—and to make decisions on our behalf. But alarm bells are ringing. Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole—and appear to assess Black and White defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And as autonomous vehicles share our streets, we are increasingly putting our lives in their hands. The mathematical and computational models driving these changes range in complexity from something that can fit on a spreadsheet to a complex system that might credibly be called "artificial intelligence." They are steadily replacing both human judgment and explicitly programmed software. In best-selling author Brian Christian’s riveting account, we meet the alignment problem’s "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel. In a masterful blend of history and on-the ground reporting, Christian traces the explosive growth in the field of machine learning and surveys its current, sprawling frontier. Readers encounter a discipline finding its legs amid exhilarating and sometimes terrifying progress. Whether they—and we—succeed or fail in solving the alignment problem will be a defining human story. The Alignment Problem offers an unflinching reckoning with humanity’s biases and blind spots, our own unstated assumptions and often contradictory goals. A dazzlingly interdisciplinary work, it takes a hard look not only at our technology but at our culture—and finds a story by turns harrowing and hopeful.

Title Details

ISBN: 9780393635836
Publisher: W. W. Norton & Company
Copyright Date: 2020
Book number: 6892228

FAQ

Which devices can I use to read books and magazines from CELA?

Answer: CELA books and magazines work with many popular accessible reading devices and apps. Find out more on our Compatible devices and formats page.

Go to Frequently Asked Questions page

About us

The Centre for Equitable Library Access (CELA) is an accessible library service providing books and other materials to Canadians with print disabilities.

  • Learn more about CELA
  • Privacy
  • Terms of acceptable use
  • Member libraries

Follow us

Keep up with news from CELA!

  • Subscribe to our newsletters
  • Blog
  • Facebook
  • Bluesky
  • Twitter
  • YouTube

Suggestion Box

CELA welcomes all feedback and suggestions:

  • Join our Educator Advisory Group
  • Apply for our User Advisory Group
  • Suggest a title for the collection
  • Report a problem with a book

Contact Us

Email us at help@celalibrary.ca or call us at 1-855-655-2273 for support.

Go to contact page for full details

Copyright 2025 CELA. All rights reserved.