IA on AI – Claude's New Constitution
The Audit Podcast | Host: Trent Russell | January 28, 2026
Episode Overview
In this installment of "The Audit Podcast," host Trent Russell and guests discuss recent developments regarding Anthropic's AI model, Claude, focusing on the company's newly published "constitution" – a document outlining the guiding values and behavioral framework for Claude. The panel explores why this move is significant for AI governance and what internal auditors and organizations can learn from it, particularly regarding vendor management, risk, and control considerations around AI adoption.
Key Discussion Points & Insights
1. Who is Claude?
- Comparison with Other AI Models (00:16–00:42)
- Claude, developed by Anthropic, is described as a major competitor to AI models like ChatGPT and Copilot.
- The panel notes the ongoing debate in tech circles about which AI model is currently superior.
- "Every couple of weeks there's a bunch of tests... somebody goes, oh, Claude's the best model ever... then two weeks later it's no, this ChatGPT wins." (B, 00:27–00:39)
2. Anthropic’s New Constitution for Claude
- Purpose and Content (00:42–01:21)
- The constitution is a "detailed description of Anthropic's vision for Claude's values and behavior," serving a dual purpose of guiding development and defining operational standards.
- Notably, it's integral to model training and directly influences how Claude behaves.
- "The constitution is a crucial part of our model training process and its content directly shapes Claude's behavior." (C quoting from article, 01:10–01:21)
- Internal Audit & Vendor Governance Implications (01:21–02:10)
- Strong recommendation for auditors and organizations: Request and review documented model behavior standards and evidence of testing from all AI vendors.
- These standards should be part of core vendor governance procedures.
- "At a minimum... require a documented model behavior standards and evidence of testing as well." (B, 01:21–01:35)
- "There might even be... areas from this where you go, 'Hey, we should probably include this... relative to some kind of vendor governance procedures.'" (B, 01:52–02:10)
3. Risks & Governance Challenges of AI
- Emerging Risks Identified by Industry Leaders (02:10–02:50)
- Cites a warning from Dario Amodei (Anthropic Co-Founder/CEO), who questions whether current human systems can manage the “almost unimaginable power” of modern AI.
- "Questions of [whether] human systems are ready to handle the... 'almost unimaginable power' that is 'potentially imminent.'" (C quoting Dario, 02:25–02:34)
- Key Risk Areas for Internal Auditors (02:35–03:04)
- Weak governance structures
- Content abuse (e.g., deepfakes, particularly concerning HR processes)
- Fraud exposure
- Controls are lagging behind advancements in model autonomy
- "On the audit side... weak governance structures, content abuse risks... deep fakes, especially when it comes to HR... fraud exposure... controls lagging behind relative to model autonomy improvements." (C, 02:35–03:04)
4. AI's Unpredictable Trajectory & Auditor Mindset
- Acknowledging Uncertainty (03:08–03:38)
- Recognize the fast-evolving, unpredictable nature of AI technology.
- Experts are unable to forecast where AI will be in 5–10 years.
- "'Where do you think AI is going to be in five to ten years?' I have no idea. I don't think anybody... has a really good idea." (C/B, 03:16–03:30)
- Practical Guidance for Auditors (03:38–04:05)
- Stay continuously educated on AI developments from balanced, non-partisan news sources.
- "Find a source for AI news that is somewhere down the middle... and pay attention to it as much as you can." (B, 03:43–04:05)
Notable Quotes & Memorable Moments
- On Model Superiority Debate (B, 00:27)
- "Every couple of weeks there's a bunch of tests... it's Claude's the best model ever in the history of existence. And then two weeks later it's no, this ChatGPT wins the best one ever."
- On AI Constitutions and Vendor Governance (B, 01:21)
- "At a minimum, I would highly recommend... requiring a documented model behavior standards and evidence of testing as well."
- On AI Risks Identified by Industry Leaders (C quoting Dario, 02:25)
- "Questions of [whether] human systems are ready to handle the 'almost unimaginable power' that is 'potentially imminent.'"
- On Uncertainties of AI’s Future (C/B, 03:23)
- "I have no idea. I don't think anybody... would take that guess at this point as a guess."
Important Segments & Timestamps
- Who is Claude?: 00:16–00:42
- Anthropic's Constitution and Its Role: 00:42–01:21
- Audit & Vendor Governance Recommendations: 01:21–02:10
- AI Risks Cited from The Guardian / Dario Amodei: 02:10–02:50
- Controls Lagging vs Model Autonomy: 02:51–03:04
- Unpredictability and Staying Informed: 03:08–04:05
Conclusion: Takeaways for Internal Auditors
- Internal auditors should treat AI models like Claude as critical, high-impact vendors, requiring transparent documentation of behavioral frameworks and robust evidence of regular testing.
- Governance and controls must keep pace with rapidly advancing AI models to mitigate risks, especially around content abuse, fraud, and model autonomy.
- The episode emphasizes a continuous learning mindset: staying alert to balanced, forward-thinking news is essential for responsible audit and oversight of AI deployments.
