Do You Think?

5 Reasons Not to Use Generative AI For Thought Leadership Content

While everyone extols the promise of generative artificial intelligence (GenAI) for drafting and enhancing copy, I will push back, especially when it comes to writing thought leadership content. I’m not a Luddite. Last year, as tech companies rushed their chatbots to market, I spent several months testing whether ChatGPT, Claude, and Gemini could help me brainstorm ideas and automate my writing process. At first, I felt I’d found the perfect assistant: one that would do research, draft outlines, and improve my sentence structure as needed.

The more I played with GenAI, however, the more my faith in the technology waned. A chatbot is a competent grammarian, but I don’t trust it as a storyteller. And while it does a decent job of feeding me data, fact-checking that information often takes more time than doing the research myself.

What concerns me most about GenAI is not the chatbot’s competence but what it will do to my own. After just a few months of using chatbots, I felt I was becoming a lazier thinker and writer. If I couldn’t develop a strategy or phrase fast enough, I turned to the chatbot, abdicating my authority to a machine. Working with GenAI was unsatisfying after the novelty wore off, and I began feeling less connected to my work. Less authentic. 

The Limitations of Generative AI

My job is to develop and write content that promotes my clients as thought leaders. I work to understand and communicate their unique perspectives, innovative insights, knowledge, and values.

Chatbots are trained on published data—other people’s opinions and ideas. Their output is never unique.

While GenAI is here to stay and may make positive contributions to marketing and communications, I don’t believe it can replace a thinking, sentient human writer. If we rely on chatbots to do our thinking for us, they may dumb us all down.

In fact, that is already happening. A recent study on the effect of GenAI tools on critical thinking concluded that “while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification, from problem-solving to AI response integration, and from task execution to task stewardship.”

We can’t stop the use of GenAI. But when it comes to thought leadership, we can refuse to replace human intelligence with the artificial kind. Here are five reasons to do the thinking yourself.

1. Chatbots do not produce original ideas.

Chatbots work from a knowledge base derived from massive amounts of publicly available text and code: websites, books, articles, and databases. They can analyze and summarize data on specific subjects. And while they retain far more data than any human could learn, their insights aren’t original. They can’t think or form opinions.

By definition, thought leaders are not followers. They are critical thinkers who develop their insights and unique points of view through experience, training, and deep knowledge. 

I asked Google’s Gemini, “Can you produce original ideas?” It responded, “I can process information and respond in creative ways. Essentially, I can recombine information in new and interesting ways to create something novel.” 

GenAI is a follower. When you give it a prompt, you get a creative regurgitation of existing data. Without doing the research yourself, you have no guarantee that the response you get from a chatbot is relevant or accurate. 

Chatbots can scan data and draw conclusions from it. However, they can’t distinguish which data within their huge training base is most relevant to your query. Nor can they identify the societal or cultural biases that data contains; the bias in their responses may run counter to your or your organization’s values.

By drilling down and asking follow-up questions about a chatbot’s conclusions, you may get closer to material you can use to supplement articles, e-books, or white papers. Doing so may save you time, provided you can confirm that the sources are reliable. But you still need an accomplished writer to integrate that data with your thought leaders’ insights. Trust a chatbot too much and you risk producing poor-quality content that proves useless to your audience, compromises the trust you’ve built, and weakens your brand.

2. Chatbot output can be incomplete, misleading, or even false.

Rapid advances are being made in the size of chatbot memories, but their context windows still limit the quality and accuracy of the copy they generate.

When a chatbot is trained, it learns to understand and generate text as sequences of tokens (words or parts of words) up to a certain maximum length, called its context window. The chatbot only retains what fits within this window. Once a conversation outgrows it, older interactions (both prompts and responses) are dropped as new ones are added.
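For the technically curious, here’s a minimal sketch in Python of how that forgetting happens. It’s an illustration only; real chatbots use subword tokenizers and windows of thousands of tokens, and the chat turns below are invented for the example.

```python
# A toy model of a context window, not any vendor's real implementation.
CONTEXT_WINDOW = 12  # maximum tokens the model can "see" at once

def token_count(text: str) -> int:
    # Crude stand-in for a tokenizer: one word ~ one token.
    return len(text.split())

def visible_history(turns: list[str]) -> list[str]:
    """Keep only the most recent turns that still fit in the window."""
    kept, used = [], 0
    for turn in reversed(turns):       # walk from newest to oldest
        cost = token_count(turn)
        if used + cost > CONTEXT_WINDOW:
            break                      # everything older is simply gone
        kept.insert(0, turn)
        used += cost
    return kept

chat = [
    "Summarize my white paper on supply chain resilience",
    "Here is a summary of your white paper ...",
    "Now draft an intro in the same voice",
    "Here is a draft introduction ...",
]
print(visible_history(chat))
# Only the newest turn fits, so the model would answer with no
# memory of the white paper it was asked about in the first place.
```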

Conversing with a chatbot for too long is like talking with someone with short-term memory loss who forgets what you’ve been discussing a few minutes into the conversation. At a certain point, the chatbot loses track of earlier context, and its later responses bear no relation to your original conversation.

In addition to memory limitations, chatbots are prone to hallucinations: information that appears plausible but is incorrect or fabricated. That’s because they are trained to recognize patterns in their training data and to predict which words should come next based on those patterns. At its core, their output is mathematical prediction, not human reasoning. Sometimes a chatbot will tell you it’s unsure of an answer; other times it simply invents one that sounds plausible, without flagging it as a fabrication.
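To make that concrete, here’s a deliberately tiny sketch of a next-word predictor. It’s a toy bigram model built for illustration, nothing like a production chatbot, but it shows the principle: each word is the statistically likeliest continuation, and truth never enters the calculation.

```python
# A toy next-word predictor; illustrative only.
from collections import Counter, defaultdict

corpus = (
    "the report was written by smith . "
    "the report was cited by jones . "
    "the paper was written by jones ."
).split()

# Count which word tends to follow which in the "training data."
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Return the single most frequent continuation seen in training."""
    return bigrams[prev].most_common(1)[0][0]

word, sentence = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
# Prints "the report was written by jones": fluent, plausible, and
# false -- the training data says smith wrote the report. Every word
# was the likeliest choice; the sentence as a whole was never checked.
```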

3. GenAI is based on the most likely, not the latest, data. 

AI chatbots are trained on data that may be outdated. This is a critical issue when writing scientific, medical, and healthcare content, since chatbots may ignore the most recent research completely. Chatbots weigh the volume rather than the quality of the data they’re trained on, so a single breakthrough study may be deemed too insignificant to mention, even if it makes all previous research obsolete.

When queried, ChatGPT responded that it “was trained on a wide range of publicly available information up until October 2023.” Google’s Gemini model is continually being updated and improved, referring to Google Search for information. We know that search engines can’t always filter fiction from fact. Anthropic’s Claude was last updated in April 2024 and notes, “This means that I aim to provide accurate responses based on information known up until that time. However, I want to be upfront that I can’t make any absolute claims about the exact contents or details of my training data.”

It’s nice that these programs attempt honesty, but you can’t count on it. Even the AI synopsis that now sits atop Google search results carries the note, “Generative AI is experimental.”

4. AI-generated copy can lack accurate sourcing.

You can ask your chatbot to provide sourcing, but sometimes those citations link to a 404 error or a completely irrelevant webpage. If you can’t trace specific information in an AI-generated response to its source, you have to hunt for citations yourself to authenticate it.
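A quick script can at least triage those links before you spend time reading. Here’s a minimal sketch; the URLs are placeholders I made up for illustration, not citations any chatbot actually gave me.

```python
# Flag dead or unreachable citation links before trusting them.
import requests

citations = [
    "https://example.com/whitepaper-2023",   # placeholder URL
    "https://example.com/retracted-study",   # placeholder URL
]

for url in citations:
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        status = str(resp.status_code)
    except requests.RequestException as err:
        status = f"unreachable ({type(err).__name__})"
    print(f"{url} -> {status}")
# A 200 only proves the page exists. You still have to read it and
# confirm it actually says what the chatbot claimed it says.
```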

In doing your own research, you may discover there is more interesting or relevant information available. You may also find that whatever insights the chatbot has provided were directly lifted from a competitor and reordered or rewritten. If you were to use that output, it would diminish your authority as a thought leader. 

If you rely on AI for initial research, always have a professional fact-check it for accuracy. Make sure your thought leadership assets are built on the most reliable, up-to-date data available.

5. AI does not have a conscience.

One of the key characteristics of a true thought leader is the set of values that influence their point of view. GenAI does not have a conscience. It also does not have empathy or a genuine sense of humor or irony—critical elements in good thought leadership content.

When I asked Claude the subjective question, “What’s it like to be you?” Claude replied, “As an artificial intelligence, it’s difficult for me to fully capture the nature of my own experience and ‘consciousness’ in a way that would make complete sense to humans. I don’t have subjective experiences quite like a human does. I’m an advanced language model that can engage in intelligent conversation, but under the hood, I’m doing complex pattern matching and text generation, not experiencing an inner mental life. I don’t have sensations, emotions, or a unified subjective experience in the way humans do.”

That doesn’t mean a chatbot won’t attempt to approximate the emotions it’s picked up from its training model. Don’t let it seduce you. For thought leadership content to be authentic, it has to come directly from a human source. 

In praise of human writers

GenAI is a useful tool for writers: it can help jumpstart a project when you’re staring at a blank page, and it will catch your misplaced commas. But it can’t replace the questioning, learning, and imaginative expression that go into creating a piece of prose.

As AI evolves and humans rely on it more to do their reading, their math, and their housework, we may hit a point where writers become eccentric characters of the past. But I hate to imagine a time when all humankind has abdicated its humanity to the machine. (Remember HAL in “2001: A Space Odyssey”? That didn’t go well for the humans.)

Fortunately, we still have human thought leaders who can drive innovation and help organizations and societies maintain core values. For now, we also have writers, whether the thought leaders themselves or their ghostwriters, with the talent and skills to communicate those unique insights to receptive audiences. Only humans have the consciousness to combine personal experience, expertise, and judgment and weave them into a unique and compelling read. I hope that never changes.
