How to Draft an AI Policy: Mighty Citizen’s Approach
On the untamed frontier of AI technology, organizations are rolling the dice on some towering ethical and legal unknowns. Are AI platform users unwittingly taking part in the largest case of copyright infringement in modern history? Is misinformation being senselessly spread by unchecked AI outputs? Are we sacrificing jobs—sometimes our own—for the razzle-dazzle of the latest toy?
Are these questions outlandish? Maybe. But so are the inflated claims of every tech CEO.
You could avoid the technology altogether and spare your organization from the pitfalls. But that could mean your organization gives the inside lane to competitors. And while the technology won’t dramatically transform your marketing output—no matter what the Sam Altmans of the world say—it can nudge your organization forward in meaningful ways.
So, if you should be using the technology but you’re concerned about ethical and legal quagmires, what do you do?
You create an AI policy for your organization. [Read: Does Your Organization Need an AI Policy?]
An AI policy serves as a clear statement of intent. It defines your stance and your use cases for the technology. Most importantly, it forces you to consider its potential negative impacts and create a framework to mitigate them.
Here is the process that we used here at Mighty Citizen to create our own AI policy.
Step 1: Assess the current state of AI in your organization.
Who is using AI-powered tools in your organization? Which tools are they using and for what tasks? What are the prevailing attitudes towards the technology?
The first two questions are important because someone in your organization might already be using AI in a way that violates your existing privacy and security protocols. They might not even know it. That’s because many free AI-powered tools—and even some paid tools—incorporate user inputs into their training data.
The third question—or a series of similar qualitative questions—helps you determine your organization’s appetite for the technology. Is your team mostly ambivalent but open to adopting AI? Is your team already on the AI hype train and in need of ground rules? Or is your team too concerned about the technology’s pitfalls to adopt it at all?
This all helps determine if AI-powered tools have a place in your organization at all.
Step 2: Form a committee.
If AI tools do have a place in your organization, then it’s time to assemble a committee. It should include people from different departments, represent the full range of the organizational chart, and include people with diverse attitudes about the technology.
This ensures your committee is a forum for discussion and debate, not an echo chamber. It also ensures that decision-makers share the table with people who will be directly impacted by the policy.
Step 3: Research, research, research.
There’s a lot of information circulating about AI technology. It’s hard to separate the hot takes from the measured and thoughtful analyses. Have your committee research all of it—hot takes, expert analyses, critical analyses, use cases, etc. Read existing AI policies (many AI platforms make theirs publicly available) and scrutinize them.
Have each committee member note key takeaways, whether they’re for or against the technology. Then compile those takeaways and discuss them within the committee. Where are you in alignment? Where do you need to work towards consensus?
Step 4: Break down your findings into distinct categories.
Broadly speaking, your committee’s findings will fall into the following categories, each of which grapples with its own central question:
Privacy - How will your organization address the risk of personal identifying information entering an AI tool?
Security - How will your organization protect its own intellectual property and that of its clients?
Transparency - How will your organization inform internal and external stakeholders about the use of AI tools, if at all?
Accountability - What processes can you create to ensure the quality and integrity of your work, AI-assisted or otherwise?
Ethics - How can you ensure your use of AI tools aligns with your organization’s stances on key ethical concerns like job security or biases against protected classes?
Legality - Does your organization understand how the use of AI tools intersects with copyright law, plagiarism, and other speech-related legal issues?
When answering these questions, the discussion should focus on how your AI policy can align with your organization’s values. You should also consider your existing policies, as well as how you want to approach the risks inherent in AI technology.
Step 5: Clearly define what’s allowed and what’s not.
After your committee has at least sketched a picture of how your organization approaches AI use, it’s time to get specific.
We recommend you look into each department and discipline in your organization to explore use cases. Copywriting currently has the most use cases because most AI tools are text-based. Graphic design, video and audio editing, coding and development, marketing automation: use cases in these areas aren’t as well documented.
Dig into the tools and use cases that are out there, whether your organization already applies them or not. You want to ask yourself which use cases can improve your organization’s productivity and which ones might go too far. Your final use case lists might start to look something like this for each discipline:
DO: Use an approved generative AI tool to ideate headlines, subject lines, and taglines.
DON’T: Use or publish any of those outputs verbatim, to avoid potential copyright infringement.
Step 6: Draft, refine, revise.
At this stage, we recommend you draft the policy and take it through one round of workshopping within the committee. Share the second draft with key internal stakeholders in each department. They should focus most on use cases relevant to their department. Are there any use cases missing? Are there guidelines that are too restrictive? Too permissive?
While there’s no prescribed length for an AI policy, a final draft might be anywhere from 5 to 10 or more pages depending on the size of your organization and how pervasively the technology will be adopted. It should also be mandatory reading for current and new employees, alongside other policy documents.
At Mighty Citizen, people come first.
We put people first. Any value AI delivers must not come at the expense of our organization, our employees, our clients, or the lives we seek to improve. Reach out if you have a question about AI use in your organization. We can tell you what we think, why we think that way, and how you might start to answer the question for your organization.