We want to tame artificial intelligence. We want to make it serve us. But sometimes, language models fail us. They give us responses we don't want.
When a model errs this way, we call it a hallucination. Hallucinations in LLMs remain a persistent problem, and AI engineers constantly work to reduce the frequency and severity of the confabulated outputs their models produce...
My Notes
Normal folks can't do computer science, so they tackle the hallucination problem with a simpler approach: prompt engineering.
Specific prompts with clear intentions increase accuracy. Some people now make money teaching these techniques.
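To make that concrete, here is a minimal sketch of what "specific prompts with clear intent" looks like in practice, assuming the OpenAI Python client; the model name and both prompts are illustrative examples of mine, not from the original text.

```python
# A minimal sketch comparing a vague prompt to a specific one,
# assuming the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

vague = "Tell me about Python."
specific = (
    "In exactly three bullet points, explain what Python list comprehensions "
    "are, show one short example, and name one case where a plain loop is clearer."
)

for prompt in (vague, specific):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # lower temperature reduces variance between runs
    )
    print(response.choices[0].message.content)
    print("---")
```

The vague prompt leaves the model to guess scope, format, and audience; the specific one pins all three down, which is the whole trick being sold.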
Taming the AI is hard. Otherwise we wouldn't need prompt engineering and hallucination mitigation in the first place.
People refer to this bigger challenge as the Alignment Problem: the elusive taming of the artificial systems we have created.
But the Alignment Problem is insoluble, because solving it first requires understanding what we want. You cannot get what you don't fully know you want.
We've always argued about values, about morality, about nearly everything. What one person finds true, another finds false. What one soul deems righteous, another sees as evil. We hold contradictory desires simultaneously. There is no golden mean, no single point that would snap every variable into place with a "click."
If there is an alignment problem to solve, it isn't about AI systems. It's about us. We are the actual misaligned ones. The AI merely reflects the incoherence we've built into the request.