Beware of a tsunami of content

Automated content creation is becoming commonplace with the help of advanced artificial intelligence (AI). At New Story, given our focus, we are definitely excited about it: the creation of branded content is becoming more scalable and more affordable. But while the technology offers marketers many opportunities, it also raises important ethical concerns.


In this blog

What is AI-generated content? What does it mean for us as a society? How do we prevent AI-generated content from further increasing the problem of misinformation? And how should your organization deal with it?

AI-generated content: how does it work?

Much of today's AI-generated content traces back to OpenAI, an artificial intelligence research lab founded in 2015. Its goal is to democratize AI technology and make it accessible to everyone; with Artificial General Intelligence (AGI), it aims to help people address and solve a wide range of complex problems. Momentum is building, and many OpenAI-based solutions are entering the market, from robotics to natural language processing, which is where AI-generated content comes in. Based on this technology, tools such as ChatGPT have seen the light of day, allowing you to easily generate textual content. With the underlying technique, a multitude of sources is scanned and converted into a new, unique text. The output is therefore always unique; plagiarism in the traditional sense is never an issue. And through interactions with people, the system keeps learning and, so to speak, becomes wiser from the content it creates itself.

Say hi to Jasper!

Google worries

Google has its reservations about AI-generated content. This stems from the fact that such content is increasingly mistaken for the work of a real human being. An even bigger problem arises when AI-generated content contains false information and misleads users. The search giant is therefore taking several measures to safeguard the quality and accuracy of AI-generated content on its platform. Among other things, Google is developing technologies to recognize automated content. But as always, a cat-and-mouse game ensues, with technologies trying to outsmart one another.

Cat-and-mouse game

To avoid being penalized by Google, you can check whether your AI content can be detected by Google as AI content. Our experience: adding a little bit of yourself (10 to 20% can be enough) makes it difficult for current (Google) technology to recognize the content as automated. An important disclaimer right away: what holds true today is no guarantee for tomorrow. From that perspective alone, you should not put automated content on autopilot.

Democratized truth

For Google it may be a curse, but for inbound marketers it's a blessing: AI-generated content is already almost indistinguishable from human-written work. This is beneficial in terms of efficiency; AI platforms can generate high-quality content in a fraction of the time it would take to write everything manually. However, this ease of use also poses potential dangers when the technology is misused, intentionally or unintentionally. Misinformation is already commonplace; will the problem get worse?

Widely accessible OpenAI technology will generate a lot of good content. But inevitably it will also generate an endless stream of automated misinformation that, by its quality, seems legitimate. With the exponential growth of content on the horizon, it is more than predictable that this will increasingly lead to confusion, distrust and entrenched polarization. When an AI platform routinely feeds on automated misinformation, and on algorithms that keep confirming the reader's convenient truth, the platform will learn these inaccuracies and serve them up more often. Facts are voted out, so to speak. Truth is democratized, or worse: manipulated.

Regulation is needed! But is it timely?

To ensure that automated content is used ethically and does not become an amplifier of a parroting crowd, regulation is needed. This should focus on two key areas. It should ensure that AI-generated content meets standards of accuracy and legitimacy; it should prevent the automated dissemination of misinformation and manipulation of public opinion. But how do you get that done? Is the harm done before the solution is in place? Finding a solution to this challenge should go beyond regional borders. Our digital reality is transnational. It requires cooperation between countries, regulators, technology developers and the public.

Regulation: how to regulate it?

Will regulation come from the free market itself, through self-regulation, as with the Advertising Code Committee or STIVA (Foundation for Responsible Alcohol Consumption)? Or is it up to governments to define on what terms you can use AI-driven content, and what kind of material you are allowed to generate? What digital watchdogs will stand at the gates of AI platforms, and when can we expect them to arrive? How do we ensure that everyone respects the guidelines in a world where we are diametrically opposed on multiple dimensions? How do countries like the U.S. and Russia trust the same underlying infrastructure? How can both the environmentalist and the climate denier trust the same systems?

Many questions, one honest answer

It can't be done. Regulation becomes a ratatouille recipe that tastes different everywhere in the world. In a free society, you would expect self-regulation and self-cleansing. In an authoritarian state, the doctrine baked into the algorithms will steer and rein in public opinion. In either case, concerns remain high because uncertainty prevails.

Getting started with AI-generated content yourself

Be cautious. Let's illustrate that with an example. Say you sell gardening tools and you have used AI to generate untold amounts of inspiring content for anyone with a green thumb. Meanwhile, that inbound content (content people search for themselves, rather than information forced upon them) accounts for 40% of your sales. You don't want Google to one day, retroactively, reject all that content, and maybe even penalize it.

Our advice: don't be deterred. With pure intentions and an eye for the do's and don'ts, you should run with it, if only because your competitor is doing it too and inactivity would push your brand, product or service into the caverns of the Internet. But get the help of an expert who strategically aligns your content and the underlying digital ecosystem; a partner who closely monitors developments and minimizes the risks for you.

You guessed it: New Story is happy to help. We've even set up a bullet-proof service for it. Learn more about the AI Content Sprint.

Less than half of this blog was generated with AI. First we laid out the skeleton of the story ourselves, then we built and edited it paragraph by paragraph with AI assistance. Afterwards we changed the tone of voice with AIPRM/ChatGPT.

