The Alignment Problem: Machine Learning and Human Values

Brian Christian

The Alignment Problem Part 1 Summary & Analysis

Part 1: “Prophecy”

Part 1, Chapter 1 Summary: “Representation”

Chapter 1 presents several precursor models on which current AI systems are built—Rosenblatt's perceptron (an early neural network), the AlexNet network, image-processing models, machine learning systems, and word-embedding models—along with the ethical concerns they raise.

In 1958, Frank Rosenblatt introduced the “perceptron” at a demonstration organized by the Office of Naval Research in Washington, D.C. The device could learn from its mistakes, adjusting itself after each error. A basic neural network, the perceptron determined the position of colored squares on flashcards using only the binary data received from its camera. Rosenblatt’s presentation showcased the perceptron’s capacity to learn and adapt through experience, described as a “self-induced change in the wiring diagram” (18). This idea highlighted the gap in understanding of neural networks at the time. Earlier theorists such as McCulloch and Pitts had envisioned “suitably connected” networks but never realized them in practical applications; Rosenblatt achieved this by supplying “a model architecture” (18) with adjustable parameters that could be modified by an “optimization algorithm or training algorithm” (18).

Rosenblatt’s demonstration presented the perceptron as a foundational model for future machine learning systems, emphasizing its ability to form connections and make deductions from basic binary inputs, just like the human brain does.
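The error-driven learning the chapter describes—adjusting the network's "wiring" after each mistake—can be sketched in a few lines. This is an illustrative reconstruction of the classic perceptron learning rule, not code from the book; the function names, the learning rate, and the toy flashcard data are all assumptions for demonstration.

```python
# Minimal sketch of the perceptron learning rule (illustrative, not from the book).
# Weights are the adjustable parameters; a misclassification triggers an update,
# i.e., a "self-induced change in the wiring diagram."

def train_perceptron(samples, labels, epochs=10, lr=1.0):
    """samples: list of binary input vectors; labels: +1 or -1."""
    n = len(samples[0])
    w = [0.0] * n  # adjustable weights, one per input "pixel"
    b = 0.0        # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # learn from the mistake: nudge weights toward the label
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

On linearly separable data—say, two-pixel "flashcards" where the lit pixel's position determines the class—repeated error corrections converge to weights that classify every example correctly, which is the behavior Rosenblatt demonstrated.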
