A Weekend Assignment for CMOs
By Monday, you'll have the governance document you need!
It’s “Practical Friday” again! Here is a thought experiment worth running before you leave the office today.
Pick the most significant AI system your marketing team is currently operating - the one that touches the most customers, processes the most data, or runs with the least human review. Now answer these four questions about it:
Who is the named individual inside your organization who is accountable for what that system does? Not the vendor or IT. A specific person whose name is attached to the outcome if something goes wrong.
What is the system authorized to do, and what is it explicitly not authorized to do? In writing, not in someone’s head.
What does human oversight actually mean for this system, and how would you prove it happened if you needed to?
If the system produced a damaging output today, what would happen in the first hour, and who would make the decisions?
If you can answer all four cleanly, your AI governance for that system is in better shape than most organizations can claim. If any of them produces a pause or a vague answer, you have found the gap.
Why Just Two Pages?
In most organizations, AI governance conversations tend to sprawl. A working group gets formed. A consultant proposes a framework. A steering committee is established. Six months later, there is a governance document that nobody reads and a deployment environment that nobody has actually assessed.
The two-page approach is a deliberate counter to that tendency. It forces decisions that frameworks defer. It names a person rather than a process, draws a line around authorized scope rather than describing a philosophy of responsible use, and specifies an escalation path rather than gesturing at the importance of having one.
This series has repeatedly argued that the difference between genuine AI governance and compliance theater lies in whether the governance infrastructure produces documented decisions before something goes wrong, or scrambled accountability after it does. This two-pager is the minimum viable version of the former. It takes an afternoon per system. It produces exactly the four things that courts, regulators, and public scrutiny will ask for: who owned it, what it was authorized to do, what human oversight looked like, and what the incident response plan was.
The Template
The downloadable template below covers six sections:
System Identification: what the tool is, who built it, when it was deployed, what it does, and whether it is customer-facing.
Ownership: the named owner, their manager, the IT/security contact, and the legal/compliance contact. This is the accountability map that makes the incident response section functional.
Authorized Scope: specific authorized use cases written out, explicit exclusions written out, and the named person who can authorize any expansion. This section is the one most teams find uncomfortable, because writing down what a system is not authorized to do requires admitting that it could be asked to do those things.
Human Oversight: the review requirement (every output, spot-checks, or automated only), who is authorized to review, whether the reviewer has override authority, and how the review is documented. The last field is the one regulators and courts will focus on: “we reviewed it” without a documentation mechanism is not oversight.
Escalation and Incident Response: what constitutes a reviewable incident for this specific system, and a step-by-step first-hour protocol. This section should be written assuming the person filling it in will not be the person executing it under pressure. Generic answers (“notify legal”) are not useful. Specific answers (“named owner notified within 15 minutes, system paused, [specific person] looped in for customer communication decision”) are.
Review Schedule: when this document gets updated, and who signs off.
There is also a deployment readiness checklist at the bottom. Seven items. If any of them can’t be checked, the system should not be live without a plan to address the gap.
How to Use It This Weekend
The exercise is not to complete a two-page document for every AI system your team operates. Start with one - the highest-stakes deployment, or the one you would be least confident justifying publicly.
Fill it out honestly. Where you find yourself writing vague answers (“someone on the team,” “as needed,” “we’d figure it out”), those are the governance gaps. Write them down anyway, because naming the gap is the first step to closing it.
Bring it to Monday’s team meeting, or your next one-on-one with the person you’d name as the system owner. The conversation it starts is more valuable than any framework document.
This template will be updated and kept current on the Ethicore resources page alongside the AI Regulatory Tracker. Both are free.
Ethicore Advisors Author’s Note
Craig McDonogh is the founder of Ethicore Advisors and the author of the forthcoming book “Guardrails: How to Embrace AI Without Damaging Your Brand.” The template is free to use and share, but it’s not legal advice - if your organization’s AI governance questions have regulatory or legal dimensions, involve your counsel.


