
Apr 22, 2024 | By Andy Fabian, Marketing

What Generative AI Can and Can’t Do for Your Content


Unless you’ve been living off-grid, you’ve heard the term “generative AI.” The headlines assure us it will revolutionize content creation. But it’s also raising ethical and legal questions in writer’s rooms across the country.

So what is generative AI, exactly? How does it work? And what can it do for your organization’s content? Below, we’ll answer these questions, along with some you may not have considered.

What’s generative AI and how does it work?

Generative AI uses algorithms to recognize patterns in data. Then it “learns” to generate new outputs based on those patterns. But the dataset must be BIG. That’s especially true for language-focused generative AI tools like ChatGPT or JasperAI, which rely on large language models (LLMs).

LLMs learn and process natural language. Their dataset includes Wikipedia pages, newspaper articles, academic papers, blogs, Google reviews, social media comments, etc., most of which are “scraped” from the Web in huge quantities. Using this data, LLMs are “trained” to understand written requests and provide relevant outputs.
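That core loop—learn patterns from text, then generate new text from those patterns—can be illustrated with a toy sketch. To be clear, this is a drastically simplified stand-in: real LLMs use neural networks trained on billions of words, not the word-pair counts shown here. But the underlying idea is the same.

```python
import random

# Toy illustration of "learn patterns, then generate":
# count which word tends to follow each word in a tiny corpus,
# then generate new text by sampling those learned patterns.
corpus = "the cat sat on the mat and the cat ran".split()

# Record which words follow each word (the learned "pattern").
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length=6):
    """Generate new text by walking the learned word-to-word patterns."""
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break  # no pattern learned for this word; stop here
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

An LLM does the same thing at a vastly larger scale, which is why its outputs can read like fluent human writing: it has absorbed the patterns of an enormous amount of fluent human writing.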

What can generative AI actually do?

A lot.

In no particular order, it can come up with email subject lines, article headlines, ad campaign concepts, and even serviceable first drafts of long-form content. It can take a blog and generate social media posts for sharing it. It can do these things because it’s read hundreds of thousands of pieces of similar content.

When it comes to brand consistency, AI tools can give your organization a steady voice. They can suggest ways to simplify your sentences, adjust your tone, or create an outline. Some tools can even take recordings and generate a bulleted list of takeaways.

In essence, generative AI can help you start your content or just fine-tune it. That leaves you more time to focus on content strategy. It means more time curating and assessing your content. It means less time staring at a blank page, panicking about deadlines under the crushing weight of writer’s block.

Some tools also let you set up a style guide that catches off-brand words or phrases. That’s helpful when multiple teams or authors are creating content for your organization.
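Under the hood, that kind of style-guide check can be as simple as scanning a draft for flagged terms. Here’s a minimal sketch; the word list is purely hypothetical, and real tools add much more sophistication (context awareness, tone scoring, and so on).

```python
import re

# Hypothetical style guide: off-brand terms mapped to preferred ones.
STYLE_GUIDE = {
    "utilize": "use",
    "synergy": "collaboration",
    "cutting-edge": "modern",
}

def check_style(text):
    """Flag off-brand terms and suggest the on-brand alternative."""
    issues = []
    for term, preferred in STYLE_GUIDE.items():
        # Match whole terms only, ignoring capitalization.
        for match in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            issues.append((match.group(), preferred))
    return issues

draft = "We utilize cutting-edge tools to build synergy across teams."
for found, preferred in check_style(draft):
    print(f'Off-brand: "{found}" -> try "{preferred}"')
```

The point isn’t the code itself—it’s that a shared, machine-checkable word list lets every author’s draft get the same consistency pass before a human editor ever sees it.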

What can’t generative AI do?

It can’t write your final draft for you—not if you want it to sing and sound like you. AI outputs lack the full range of nuance that gives content a personality. If you want to engage your audience, you need to sound like your audience. For now, only a skilled human can make writing sound, well, human.

Generative AI can’t (and shouldn’t) replace your writers, and it’s just not very creative. It requires quality inputs to generate serviceable outputs. Only an experienced writer with a clear understanding of the assignment parameters can cook up the best prompts. But even with the best prompts and inputs, generative AI produces work that’s meh.

A skilled writer is still the best arbiter of content quality and your best bet for creativity.

It can’t guarantee accuracy, and its outputs shouldn’t be implicitly trusted, especially for organizations in highly technical fields. Platforms like ChatGPT have been known to provide information that’s flat-out false. Publishing false information can pose big (costly) problems for organizations in high-risk sectors like healthcare. A human with domain knowledge should check all claims and data in an output.

Is it ethical to use generative AI?

This is murky. Some AI platforms have been criticized (and sued) for using copyrighted material in their datasets without the authors’ permission. Comedians, novelists, journalists, and scientists have all voiced concerns about their hard-won content being part of LLM datasets. But not all platforms are created equal; some use datasets free of copyrighted material. It’s also worth noting that AI-generated outputs are themselves difficult to copyright because, in many cases, nobody legally owns the output.

More broadly, there’s the issue of systemic biases in AI. The Internet is full of harmful content. Some of it is based on negative stereotypes. Some of it is outright bigoted. AI platforms still need a lot of training to spot harmful content or understand the use of inclusive language.

Then there’s the issue of job security. Though generative AI can’t replace writers, that won’t stop many organizations from trying. Historically, policymakers have been slow to respond to groundbreaking technologies like AI. So, that leaves organizations to decide for themselves if they will adopt a humanistic approach to AI.


Should your organization use generative AI?

In short, yes.

Generative AI can boost your entire content workflow. A savvy creator can produce more high-quality content in less time. Content moderators can more easily ensure consistency when drafts are pouring in from far-flung departments. Strategists can spend even more time assessing the broad arc of campaigns.

The wins quickly pile up. But only if it’s used with the oversight of a skilled, experienced human.*

If you’d like to speak to a skilled, experienced human at Mighty Citizen about what generative AI can do for your organization, reach out—we’d love to chat!

*This blog was written with the assistance of a generative AI tool.

Copyright © 2024 Mighty Citizen. All rights reserved.