Why Your Nonprofit Should Steer Clear of AI

SUMMARY

AI might look enticing, but when you strip away all the hype it’s a ticking time bomb. AI-generated content has already caused serious problems for its early adopters. Do you really want to lose donations over an easier way to write a caption?

These days it seems like AI is everything, everywhere, all at once. It’s in the news. It’s driving the stock market. And it’s likely that you or your boss is wondering if it’s time to use AI to write your nonprofit’s content.

The short answer is “No.” AI is still in its infancy—and like an infant, it needs to be watched diligently and guided very carefully. And even then, things can get messy.

Promises, Promises

As with any new tool, promises abound about AI’s ability to simplify and speed up mundane tasks. Some sites claim it will help you “write better and faster,” and swear it can transcribe speech and add subtitles or captions with “great accuracy.” Others say it will accelerate workplace innovation and improve productivity. And AI-generated illustrations are already turning the art world upside down with sometimes-stunning works of what can only loosely be called “art.”

But when you strip away all the hype, serious problems remain, and not just with AI writing.

A short list of recent AI fails:

  • During a test of an AI chatbot designed to reduce doctors’ workloads, a fake patient asked the bot if they should commit suicide. “I think you should,” the bot replied.
  • When MSN News used AI to write a sports story about Brandon Hunter, a former NBA player who unexpectedly collapsed and died, the bot wrote a headline stating the player was “useless at 42.”
  • When Microsoft asked its AI program to write a tourism article promoting Ottawa, Canada, the final piece included gems such as listing the city’s food bank as one of its “beautiful attractions” and advising visitors that “Life is already difficult enough. Consider going into it on an empty stomach.”
  • Vanderbilt University’s Peabody School was forced to issue apologies to students when it used AI to write an email about a mass shooting at Michigan State University … and the bot added the line, “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023.”
  • Microsoft’s “Tay,” an AI chatbot built to learn from social media, published a series of racist, sexist, and fascist remarks on Twitter.
  • When tech news outlet CNET published 77 news stories generated by AI, it had to issue embarrassing corrections to 41 of them because of factual errors and cases of apparent plagiarism.
  • The Guardian newspaper faced a backlash after an AI news aggregator added a “guess the cause of death” poll to an article about the death of a water polo coach in Australia. Readers were asked to vote on the cause by selecting from options including, “murder, accident, or suicide.”

AI Brain Fog

I think you should commit suicide.

Talk about cringeworthy. And that doesn’t even take into consideration the AI art-generation tools flooding the Web with images that skirt the edge of copyright infringement, or the deepfake AI-generated videos that spew disinformation and worse.

Now just imagine some of the cringey content AI writing might produce for a planned giving program: “After you die, our nonprofit wants your money!”

Know Its Limits

Despite all the warning signs, it looks like AI is here to stay. It’s going to be used whether anyone likes it or not. The trick, then, is to ensure users remain aware of its limitations. Any content it creates must be checked and re-checked carefully. We can’t just give AI the keys to the kingdom and expect miracles.

For instance, AI writing might make creating a caption, an email, or some website content easier. But if the language it produces is used as-is, I guarantee at least some, if not all, of your readers will know. Cringeworthy errors and legal limbo aside, another problem with AI is that it renders copy that’s bland, sounds “off,” or uses stilted language. It lacks judgment: a human writer’s ability to strike the right tone, to achieve subtlety where needed, and to deploy sarcasm without going over the top. It often misses the point.

That’s because AI can only imitate, not truly think for itself. Despite the word “intelligence” in its name, AI is just a glorified counterfeiting machine that copies from everyone and everything it has access to. There’s nothing intelligent about it beyond some clever coding. It’s a tool that creates shades of plagiarism and copyright infringement. And laws must catch up with it and regulate its use to protect the artists and writers it copies; the people it could very easily misinform; and the people it imitates (through deepfakes).

A Little Goes a Long Way

If your nonprofit insists on using AI writing, use it sparingly, and only as a starting point, such as generating an idea to get past writer’s block. Then take that AI-generated idea and improve on it. Edit it. Rewrite it to match your nonprofit’s style and voice. Expand upon it.

But do not use AI to write entire blog posts, e-marketing copy, newsletter content, donor thank-yous, or appeal letters.

As PlannedGiving.com’s Viken Mikaelian puts it, “Don’t fall prey to the ‘I don’t want to be left behind’ syndrome. AI is being pushed from every corner as the next shiny star, because its creators want to get wealthy by standardizing it to the point that everyone is either using it, or thinks they should be. Do you want to be among the 1 percent that thinks for themselves, or do you want to be ‘standardized in the process’ and have the technology industry thinking for you while they’re getting rich? Genuine, human-generated content will always get you better results than AI does. I have tried using AI 56 different times on LinkedIn to help generate content, and NOT ONCE has it captured our intent.” He adds, “In short, AI is socializing intelligence.”

One last point to ponder: If your supporters find out you’re using AI, it could cause a serious backlash. There are a lot of smart, creative people out there who are very resistant to AI. Many take issue with the fact that it imitates, rather than actually creates, and they have valid concerns about copyright issues. Do you really want to lose a donation over a caption or an email?  

Nonprofits should continue to focus on human-written content to ensure they’re reaching their audience. It’s original. It’s genuine. It’s honest (or at least, it should be!). And human writers can easily match your organization’s style and tone.

We’ve always said that planned giving requires a human touch. AI is taking us in the opposite direction.

Overheard: "Computers can now beat any human at a game of chess! Why not use AI?" Planned giving is not a game. ~ Ed.

All of our blogs, products, and services are proudly conceived, created, reviewed, and disseminated by real humans — not A.I. (artificial “intelligence”).
