Christian completely changed the way I think about prompt engineering. I used to assume that good enough input would naturally produce better output. "Be specific," for example.
But no. The book argues that the Alignment Problem is inherently architectural, not instructional. And this entire "making the machines like humans" goal (if we can even call it that to begin with) is innately absurd. Humans are contradictory. We say one thing and do another. We hold values that oppose each other. In short, we are complex. And that complexity is exactly the problem to deal with when the task is creating what are supposed to be the most aligned thinking machines.
Amazing read. Christian manages to offer detailed, narrative-based insight into the inner architecture of how AI works.