Chores to AI, Thinking to Humans
Description
Let AI handle the chores, and humans do the thinking: such should be the future of content marketing. In this piece, I try to debunk a few myths. Firstly, generative AI can be creative, and often is. Secondly, AI doesn’t necessarily make us stupid; we don’t need it for that. And thirdly, becoming a prompting guru isn’t necessarily the key to producing great content. The question of AI’s role in content marketing is actually more strategic than technical: it’s about why and for whom we create content. This is the major issue at stake for today’s and tomorrow’s marketers. In this presentation, I urge readers not to outsource their thinking to AI, but rather to offload low-value chores to machines, even though, it must be said, machines aren’t always doing a good job with those.
Chores to AI, Ideas to Humans
<figure class="wp-caption aligncenter" id="attachment_82907" style="width: 1920px;"><figcaption class="wp-caption-text" id="caption-attachment-82907">Since the machines started thinking, we’ve had more time to do the dishes, wrote Joanna Maciejewska. Like her, I’d rather it were the other way round.</figcaption></figure>
TL;DR
- Ms Bernard is an SEO agency avatar who adds links to Visionary Marketing on “her” website. Her “work” raises some fundamental questions.
- Criticisms aimed at AI often miss the mark and overlook fundamental issues: why we write, for whom, for what purpose…
- We also dismiss a few myths such as ‘AI can’t be creative’, ‘AI makes us stupid’, and ‘mastering prompting is a silver bullet’.
- Hence, the question of AI’s role in content marketing is more about strategy than it is about tech.
- In this presentation, I urge content creators (and readers alike) not to outsource their reasoning and to leave the chores to AI.
<figure class="wp-caption aligncenter" id="attachment_82905" style="width: 1920px;"><figcaption class="wp-caption-text" id="caption-attachment-82905">This piece owes a lot to Ms Joanna Maciejewska</figcaption></figure>
AI and Marie Bernard, the e-commerce Queen
<figure class="wp-caption aligncenter" id="attachment_82908" style="width: 1024px;"><figcaption class="wp-caption-text" id="caption-attachment-82908">Ms Bernard is adding links to Visionary Marketing. She is very nice, but unfortunately she isn’t a real person.</figcaption></figure>
Let me introduce you to Ms Marie Bernard. This pretty young woman, somewhat artificial in appearance, exists only in Midjourney’s archives and on the website of “her” SEO agency. This supposed e-commerce expert found herself embroiled in a semantic mix-up that was both amusing and revealing.
Taking inspiration from one of my articles, this visionary author mixed up ‘snow globe’, an expression one of my expert interviewees used as a metaphor, with ‘snowball effect’. Thank God, she inserted a link to Visionary Marketing so that I could correct that fatal mistake. Far from trivial, this anecdote raises a few fundamental questions. Who is writing? For whom? How? And for what purpose? In fact, it even poses bigger questions, such as “what is the place of humans in society, and what sort of society do we want for our children and our children’s children?”
AI Information Overload
Content about generative AI is so ubiquitous that we have gone past information overload. AI content analysts are skirmishing via X (formerly Twitter) and LinkedIn posts, mainly on the technical front (this AI is better than that one), creativity (AI produces interesting ideas or, rather, is dull and inferior to humans), and usage (“download my ultimate prompting guide!”). Yet all these debates (and sadly others that are less prevalent, like the poorly documented issue of energy consumption) fail to address the key questions: who are we creating for, why, and for whom do we work? Or, more broadly, what kind of society do we want in the future?
Generative AI at the Heart of the World’s Issues
AI, and in particular generative AI, has generated most of the noise on social media, blogs, newsletters, and chats around the pub. The traditional economy seems to be ignoring the phenomenon or treating it as incidental, a recurring habit when it comes to digital innovations, while online debates live on unabated.
Whether and how we should use generative artificial intelligence is now a central question in our modern societies, and that’s understandable. Machines have been able to play around with text since the 1950s, but never before have we had such computing power, nor training at such scale on a dataset that is, despite the criticisms, so vast and reasonably good. In recent weeks, engineers in London have even shown how two AI bots can talk to one another. Even if it’s only a demo, we’ve known since the early 2000s that machines can buy and sell stock (algorithmic trading accounts for roughly 60-75% of total trading in the most developed markets, and this was already true back in 2006 when I worked in that field). So why shouldn’t a so-called “agentic” AI buy train tickets?
Hence these legitimate questions.
A machine capable of writing “like” humans?
The fact that a programme (literally a “machine”, in the sense of a computer) is capable of writing like humans, or nearly so, is disconcerting.
[Machine] A mechanically, electrically, or electronically operated device for performing a task
First entry, Merriam-Webster
What’s even more unsettling is that humans often write more poorly than machines. This is what Loubna Ben Allal, a researcher at Hugging Face and an expert in training generative AIs, describes in a video on the underscore channel, which is worth watching.
She explains how content is filtered during training sequences and, surprise, surprise, she says that good AI-generated content is often better than bad human content. Sadly, poor human content is everywhere.
Note that there are also 100% AI-generated texts aimed at proving that Loubna is right.
<figure class="wp-caption aligncenter" id="attachment_82818" style="width: 1680px;"><figcaption class="wp-caption-text" id="caption-attachment-82818">A text designed to show that separating the wheat from the chaff in content creation is a non-issue. Unfortunately, it was written by an LLM.</figcaption></figure>
Language, an operating system?!
If these mock texts are so disconcerting, it’s because language and the written word are indeed some of the fundamental characteristics of the human species.
In the beginning was the word. Language is the operating system of human culture.
Yuval Harari — NYT March 2023
Yuval Harari, with a kind of reverse anthropomorphic twist, even calls it the “operating system of human culture”. Despite this idiosyncrasy, Harari is zeroing in on the real issue.
The core problem isn’t technical but deeply philosophical, especially when the most famous generative AI tools are led by a maverick doing all he can to put us in a Spike Jonze film. Ultimately, philosophy could, or should, redefine how AI is trained, explain Michael Schrage and David Kiron in the MIT Sloan Management Review.