Does Your Organization Need an AI Policy?
Each day that passes without AI legislation is another day of risk-taking for AI users. It’s another day an AI user—maybe even someone in your organization—plays chicken with a technology that’s behind the wheel of a speeding car.
Do you know if (or how) your organization is using AI? Do you know how your organization should be using AI? Do you know which AI-powered tools your organization can trust?
Does your organization need an AI policy?
How AI-powered tools can augment your organization
While some of the AI hype is just bluster from CEOs seeking to inflate the profile (and share value) of their platforms, the technology’s value is real.
Marketing strategists are using it to quickly outline content strategies. Graphic designers are using it to easily edit stock photos. Developers are using it to assist with coding. Copywriters are using it to produce more work in less time.
And organizations like yours are using AI at enterprise scale to tie it all together.
The legal and ethical pitfalls
Like any good medicine, AI technology has its side effects.
First, many tools (especially their free versions) incorporate your inputs into their training data. That means intellectual property or sensitive personal information can make its way into a tool that nearly anyone in the world can access. Most paid versions offer some protection, but there's no consistency from one platform to the next.
Second, ownership of AI-generated content is unsettled. Some AI platforms are starting to pay licensing fees to publishers for use of their copyrighted material. Others are not. That opens the door to plagiarism and copyright infringement. Even when an AI output isn't a word-for-word reproduction of copyrighted material, neither the AI platform nor the user can claim ownership of it.
Lastly, AI outputs might be plain wrong: legally, ethically, or both. Training data might include inaccurate, outdated, or unverified statistics, and the algorithms can't distinguish fact from fiction, or prejudiced opinions from peer-reviewed journal articles.
Imagine your nonprofit unwittingly sharing the name of a major gift donor who asked to be kept anonymous. Imagine your university’s content team incorrectly citing source material. Imagine your association or government agency publishing offensive content. Blunders like these can ruin your organization’s reputation. At worst, they could invite lawsuits.
So, does your organization need an AI policy?
Even if your organization intends to transition back to typewriters and landlines, you probably still need an AI policy. Internally, it ensures your team knows where you stand on AI use, and why. Externally, it tells prospective members, students, clients, or constituents what they can expect from your organization and gives them a sense of your values.
The risks are simply too great.
Establishing an AI policy is your guardrail against those risks. It ensures your organization is using the most secure tools. It ensures you understand the technology’s shortcomings. And, depending on how AI legislation looks in the coming years, it ensures you’re on the right side of compliance (and history!).
Here at Mighty Citizen, we're working on our own agency-wide AI policy. We created a committee of team members from across the agency to keep up with the latest developments in AI. The committee is vetting available AI tools, surveying how we currently use them, and charting a path to consistent, responsible usage.
Stay tuned for a look at lessons learned during our AI policy project. In the meantime, feel free to reach out if you have any questions. We’d be happy to share some of our insights and help get your organization started on an AI policy.