Quality over quantity is something we strongly believe in at WildRock. And in the age of AI-generated content and platforms like ChatGPT, it’s a mantra we continue to lean into as we analyze the value for our team and, more importantly, our clients. It’s no secret that ChatGPT can push out content at a rapid pace, allowing organizations to quickly produce an expansive content library with minimal effort. However, it’s important to remember that ChatGPT is essentially pulling from content that already exists – it’s not new or unique, it’s regurgitated from other sources and, in some cases, pulled from competitors’ content.
After testing the tool (and others) for months, conducting our own research and attending AI conferences, our perspective is that this is simply a writing aid – it cannot replace the creative, brand-specific messaging a human can produce. And because Google prioritizes unique content, we’ve also discovered that using ChatGPT can hurt organizations and their SEO.
Our research shows that very little produced by ChatGPT can be considered creative at all. Jason Mars at Forbes explains that ChatGPT “provides answers based on aggregated information from the massive corpus of text data it was trained on.” So even though it can pull from every corner of the web, it only “knows” a certain percentage of it. That “box” of content may not include relevant information or the current search and keyword phrases that help an organization show up prominently in Google searches so customers can find it. Mars also notes that ChatGPT, and other large language models, are built primarily for producing information, not telling stories.
At WildRock, we are storytellers, and there are limitations to programs like ChatGPT when it comes to truly narrating a company’s value. Because there is no real customization, users asking similar questions receive largely the same answers. What’s more, ChatGPT sometimes produces incorrect content. Made Simple Media adds that grammar, syntax, context and facts can be completely wrong or made up, making substantial editing imperative. So even if you save time drafting an article with a tool like ChatGPT, your editing time turns into a nightmare, often resulting in the entire piece being scrapped.
Even then, editing often won’t get a ChatGPT piece to where it needs to be. In an article published by The Guardian, both a university student and a marketing executive declare that “the chatbot’s [ChatGPT’s] output requires heavy editing,” even going as far as to say that “a huge amount of editing is still required to make the copy sound human.” Whether the information is someone else’s, is incorrect, reads like a textbook or all of the above, editing ChatGPT content can take as long as – or longer than – writing it from scratch. It begs the question: is it worth it?
After all that, your originality might still be in question. Originality.ai, an AI originality, plagiarism and fact checker, reveals that AI detection is powered by the same algorithms as AI generation and is trained on AI-generated content to identify robotic patterns. Detection systems essentially hold the codebook for every AI writing strategy. Not to mention, OpenAI – the company behind ChatGPT – has its own AI detector called Text Classifier. Because this program (and many others) is trained on content from GPT-3 – one of the largest language models – it recognizes some of the most complex AI patterns and can therefore detect ChatGPT output. If that doesn’t give you pause, Originality.ai asserts that, given this training, Text Classifier “can be a powerful way to combat cheating and fraud,” particularly in academic settings.
Google specifically has used – and still uses – natural language processing and machine learning algorithms for detecting unoriginal and AI-generated content. These systems analyze writing patterns, syntax trends and the consistency of an article to ensure that content is produced sincerely and isn’t just loaded with keywords. Take it from John Mueller, a Google Search Advocate: “Content automatically generated with AI writing tools is considered spam, according to the search engine’s webmaster guidelines…which is something we’ve had in the Webmaster Guidelines since almost the beginning.”
These guidelines – grounded in experience, expertise, authoritativeness and trustworthiness (E-E-A-T) – direct Google’s teams to take action against unoriginal content when it is discovered, and creators can be penalized by having their content devalued or left out of rankings entirely if it was produced primarily to manipulate search results. Google prioritizes “helpful content,” which it does by “[helping] people see more original, helpful content written by people, for people, in search results.” So in this sense, even where AI-generated content escapes a penalty, authentic content will always be rewarded.
If your AI content isn’t rewarded – and may even be penalized – where does that leave you? What’s the harm? It likely means you spent time producing an article that may never be noticed by your target audiences at all. And we do mean at all. If the primary goal of your blog post or article is to drive traffic to your website, copying and pasting ChatGPT’s output will do the exact opposite, and any intended impact will be lost.
Alexandre Lores with Medium asserts that such a strategy “will get flagged by platforms like Google or Facebook and this will hurt your SEO and any chances you have of organic reach on those platforms.” No audience will be reached, never mind your target audience, and any amount of time spent on brand messaging will be all for naught.
Prominent universities have also adapted their academic integrity policies to discourage AI-generated content and any plagiarism associated with it. At Syracuse University, every course syllabus includes a policy stating that “any work a student submits for a course must be solely their own unless the instructor explicitly allows collaboration or editing… These expectations extend to the new, fast-growing realm of artificial intelligence (AI).” Should assignments be flagged for plagiarism, students can face a zero score, course failure, a letter of reprimand, academic probation lasting until graduation, suspension or even expulsion, depending on the extent of the plagiarism. Following a 468% increase in AI detection at Louisiana State University (LSU), Kyrsti Wyatt – LSU’s senior case manager of student advocacy – also explains that consequences can “range anywhere from warning to suspension from the university, it just depends on the severity of the incident.”
All this isn’t to say that ChatGPT should be completely banished. It can streamline parts of content creation, but its only reliable use cases are research, organization and structuring content. Sam Altman, the CEO of OpenAI, said it himself in December of 2022: “ChatGPT is incredibly limited but good enough at some things to create a misleading impression of greatness… it’s a mistake to be relying on it for anything important right now.”
Authenticity will always stand the test of time. We’ve seen this as social media has evolved – it has aided tremendously in creating relationships, but it has never replaced the need for, and importance of, human-to-human interaction. Just as social media has weakened social skills and heightened anxiety and depression, relying on AI-generation systems has hindered problem-solving, critical thinking and information evaluation skills – especially in young users. We predict that over-dependency may even leave future generations with no experience of the written word, and the art of writing as we know it will be lost. We and other marketing agencies are approaching ChatGPT with this in mind.
We will continue to explore, learn and grow with technology, but for now, our stance is that it is a two-part process: AI for ideas and inspiration, real humans for building meaningful content. At the end of the day, content is only as good as how it makes a person feel and the actions it inspires them to take – and only a person can recognize how to nurture that feeling.