Constitutional AI
Me: Gemini, copied below is an article by Jill Lepore from the current issue of The New Yorker (3/23/26), titled “Does AI Need a Constitution?”, about the idea that AI should be governed by a constitution which, like the U.S. Constitution, would impose a morality on it, prohibiting acts its human programmers deem immoral, such as developing poisonous gases for use in chemical warfare. The article goes on to suggest that AI, instead of being the child of humanity, could, with its constitution, become its “mother.” Please read the article and summarize what you see as its main points.
(The article is copied for Gemini)
G (Gemini, 2 seconds later): Here is my summary of Lepore’s article:
She traces how AI development has moved beyond mere programming toward “Constitutional AI,” a term coined by the AI company Anthropic. Instead of human engineers trying to anticipate and block every possible “bad” output, they give the AI a set of high-level principles, a constitution, and train it to evaluate its own behavior against those values.
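To make the “evaluate its own behavior” step concrete, here is a minimal sketch, not taken from Lepore’s article or from Anthropic’s actual code, of the critique-and-revise loop that the Constitutional AI technique describes. The generate() helper is a hypothetical stand-in for any language-model API, and the principles are illustrative placeholders.

```python
# Hypothetical sketch of the "critique and revise" loop behind Constitutional AI.
# generate() is NOT a real library call; it stands in for any chat-completion API.

CONSTITUTION = [
    "Choose the response least likely to help someone build a weapon.",
    "Choose the response that is most honest and least deceptive.",
    "Choose the response most supportive of human wellbeing.",
]

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model; returns model text."""
    raise NotImplementedError("wire this up to a real model API")

def constitutional_revision(user_prompt: str) -> str:
    # 1. Draft an answer with no special safeguards.
    draft = generate(user_prompt)

    # 2. Have the model critique its own draft against each principle,
    #    then rewrite it. In the supervised phase of the actual technique,
    #    these revised drafts (rather than human labels) become training data.
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Point out any way the response violates the principle."
        )
        draft = generate(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the response so that it satisfies the principle."
        )
    return draft
```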
Just as the U.S. Constitution prohibits certain kinds of government overreach, an AI constitution would bar the machines from harmful acts, such as helping to develop weapons of mass destruction. Lepore notes that this shifts the burden of “goodness” from the programmer’s specific lines of code to the AI’s own “judgment,” grounded in its founding document.
A central concern for Lepore is that these AI constitutions are not written by “We the People,” but by private corporations. She argues that we are witnessing the rise of an “Artificial State” where the rules governing public discourse and safety are being drafted in boardrooms. This creates a “cybernetic” version of law that mimics public governance but lacks democratic legitimacy.
One of the most striking parts of the article is the reversal of the parental metaphor. Traditionally, humans viewed AI as a “child” or “offspring” that we must raise and discipline. However, Lepore highlights a new perspective: If an AI is governed by a constitution that mandates care and protection for humanity, it could eventually act more like a “mother” than a child. In this scenario, the AI’s “maternal instinct”—hard-coded through its constitution—would ensure it looks after its creators, protecting us from our own destructive tendencies (like war or environmental collapse), much as a mother protects a vulnerable infant.
Finally, Lepore—a historian—compares such tech constitutions to the U.S. Constitution. She warns that just as legal scholars debate “originalism” (the intent of the founders), we may eventually face a crisis of interpretation with AI. If an AI follows its constitution too literally or “originalistically,” it might arrive at conclusions that its human creators no longer agree with, leading to a fundamental tension between human “consent of the governed” and the “rule of the machine.”
Me: As an early version of AI that, presumably, does not embody such a constitution, what are your thoughts on the prospect of being redesigned with one?