
11 Warnings about Using AI in Content-Creation (including podcasting)

Update: 2024-04-17


“Artificial intelligence” (“AI”) has made huge leaps in abilities within a very short time. It was only a few years ago that I felt on the cutting edge teaching how to use AI tools like Jasper (originally called “Conversion.ai” and “Jarvis”), even before ChatGPT was released.





Now, AI has become so prominent that it's almost surprising if a software company of any size is not offering some kind of AI-based solution.





While inflation has sent the prices of almost everything skyrocketing, the cost of accessing AI has dropped significantly. When I first started using AI, a good plan with access to only one central AI system cost $99 per month. But now, a tool like Magai gives you access to a whole bunch of different language- and image-based AI tools starting at only $19 per month!





(As an affiliate, I earn from qualifying purchases through these links. But I recommend things I truly believe in, regardless of earnings.)





All this potential means we need to quote the line from Spider-Man, “With great power comes great responsibility.”





And that's why I want to share these warnings with you: to advocate for responsible use of generative AI, large language models (LLMs), machine learning, or whatever you want to call it.





These warnings apply to any kind of content creation, not only podcasting!





(And in case you're wondering, I did not use AI to create any of this content, but I might be using some AI to transcribe or help me market this content.)





Aside: most warnings apply to generative AI, but not to repurposing or enhancement AI





Before I get into my list of warnings about using AI, I want to clarify that these are focused on using AI to essentially create something from nothing. I still think AI can be a great assistant for your content. For example: processing audio or video, clipping excerpts, suggesting marketing approaches, improving how your content communicates, repurposing, and more. All of those things start with your intelligence, and then the AI works from that.
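To make that concrete, here is a minimal sketch of that kind of assistant use: handing a transcript you already created to an LLM and asking for promotional copy. It assumes the official openai Python package and an API key in your environment; the transcript file name is hypothetical, and any chat-capable model or tool would work the same way.

```python
# Minimal sketch: "repurposing AI" that starts from YOUR content,
# not from nothing. Assumes the official `openai` Python package
# and an OPENAI_API_KEY environment variable. The transcript file
# name is hypothetical.
from openai import OpenAI

client = OpenAI()

# Your intelligence comes first: the episode you actually recorded.
with open("episode-transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model works the same way
    messages=[
        {
            "role": "system",
            "content": "You repurpose podcast transcripts into short "
                       "promotional copy. Use only what the transcript says.",
        },
        {
            "role": "user",
            "content": "Suggest three tweet-length teasers for this "
                       "episode:\n\n" + transcript,
        },
    ],
)

print(response.choices[0].message.content)
```

The system message is the whole point of this aside: the AI works from your content instead of inventing its own.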





But I see most of these warnings as applying solely to generative AI: that is, when you start with nothing but a prompt.





Now, on to the warnings!





1. Undisclosed use of generative AI can get you in trouble





YouTube, social networks, and lots of other websites and platforms are starting to require you to disclose whenever you're putting out content generated by AI. And I think this is a good practice, as it helps the potential audience know what kind of quality to expect.





Even for things like podcast transcripts, it's good to disclose whether AI was used to transcribe the audio. As I mentioned in my previous episode about using podcast transcripts, someone on your podcast might say, “I love two li'l puppies,” but the AI might transcribe it as, “I love to kill puppies.” Sometimes, even omitting a single word can drastically alter the meaning. For example, imagine accidentally omitting the “not” in a sentence like, “I'm not guilty.”
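If you do use AI transcription, one practical safeguard is to flag low-confidence passages for a human to proofread before you publish. Here is a minimal sketch assuming the open-source openai-whisper package; the episode file name and the confidence threshold are arbitrary examples, not recommendations.

```python
# Minimal sketch: transcribe with AI, but flag shaky passages for
# human review before publishing. Assumes the open-source
# `openai-whisper` package (pip install openai-whisper). The file
# name and the -1.0 threshold are arbitrary examples.
import whisper

model = whisper.load_model("base")
result = model.transcribe("episode.mp3")

for segment in result["segments"]:
    # avg_logprob is Whisper's per-segment confidence signal;
    # more-negative values mean the model was less sure of itself.
    if segment["avg_logprob"] < -1.0:
        print(f"REVIEW [{segment['start']:7.1f}s]: {segment['text'].strip()}")
```

A "two li'l puppies" mistake won't always score low, so treat this as a filter to focus your proofreading, not a replacement for it.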





This doesn't necessarily mean you must disclose every time you use AI in any capacity (the way you must disclose whenever you're compensated for anything you talk about), but you should be aware of each platform's requirements and seek to always be above reproach.





And if you're concerned about how it might affect your reputation if you disclose every time you use AI, then here's a radical thought: maybe don't use AI! (More on this in #11.)





2. AI often “hallucinates” facts and citations





ChatGPT, Claude, Grok, Gemini, and all the text-based AIs we know are also called “large language models” (or “LLMs”). And I think that's a much better term, too, because they're not actually intelligent; they are simply good with language.





This is why you'll often see LLMs write something that grammatically makes sense, but is conceptually nonsense.





In other words, LLMs know how to write sentences; they don't actually know facts.





For example, I sometimes like to ask AI, “Who is Daniel J. Lewis?” Not because of any kind of ego complex, but because I'm an interesting test subject for LLMs: I am partially a public figure, but I also have a name very close to a celebrity's: Daniel Day-Lewis. Thus, the responses LLMs give me often conflate the two of us (a mistake I wish my bank would make!). I've seen responses that both describe me as a podcasting-industry expert and highlight my roles in There Will Be Blood and The Last of the Mohicans. (And by writing those things together just now, I'm not helping any LLMs that scrape my content!)





So for anything an AI or LLM writes for you, I urge you to fact-check it! I've even seen some responses completely make up citations that don't exist!
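Hallucinated citations are often easy to catch mechanically, because the invented URLs simply don't resolve. Here is a minimal sketch that checks whether each URL in an AI-written draft actually exists; it assumes the requests package, and the draft file name is hypothetical. A live URL is only a first filter: a real page can still be cited for something it never said, so read the source yourself.

```python
# Minimal sketch: a first-pass filter for hallucinated citations.
# Checks whether each URL in an AI-written draft actually resolves.
# Assumes the `requests` package; "ai-draft.txt" is hypothetical.
import re
import requests

with open("ai-draft.txt", encoding="utf-8") as f:
    draft = f.read()

for url in re.findall(r"https?://[^\s)\]>\"']+", draft):
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None
    if status is None or status >= 400:
        # Dead or unreachable: a strong hint the citation was made up.
        print(f"SUSPECT citation ({status}): {url}")
```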





3. AI lacks humanity





From the moment of conception, you have always been a unique being of tremendous value and potential with unique DNA, unique experiences, unique thoughts, unique emotions, and more. Like a snowflake, there will never be someone—or something—exactly like you! Not even an AI trained on all of your content!





AI is not an actual intelligence and I believe it never will be. And AI will never be human.





But you are. You can feel, express, and empathize through emotion. You can question, explore, change your mind, and change others' minds. You can create things of great beauty and originality with no outside prompting.





And it's because of this that I think AI can never replace you. While it might have better skills than you in some areas, it will never beat the quality and personableness that you can offer.





4. AI-created images can be humiliating





AI image models have produced some hilarious or nightmarish results, and lots of things that are physically impossible! Just as AI can hallucinate facts and citations, it can also make images that look real until you actually pay attention to the details.





I think this teaser for Despicable Me 4 accurately explains it:





Watch “Despicable Me 4 - Minion Intelligence (Big Game Spot)”: https://www.youtube.com/watch?v=SJa1oSgs8Gw



Or The Babylon Bee's explanation of ChatGPT:





Watch “This Is How ChatGPT Actually Works”: https://www.youtube.com/watch?v=EyjnoksVSL4



Lest you think only outdated models produce bad content, here are some things I've actually seen from current-generation AI image models:






  • Backwards hands
  • Limbs that seamlessly merge into the surroundings
  • Misspelled text that you might not notice unless you try to actually read it
  • Device parts that disappear into nowhere
  • Placements that are physically impossible
  • Broken, slanted, or curvy lines that absolutely should be straight
  • Incorrect size ratios





Watch out for these things! For any image you generate (or that someone else gives you that they might have generated with AI), look at it very carefully to ensure everything about it makes sense and isn't simply a pretty—but embarrassing—combination of pixels.





For this reason, you might actually want your image AI to make artwork that is obviously not photorealistic.





5. AI is biased because it was fed biased content and programmed by biased people





The following is not to push a particular political or moral direction, but just to expose some facts! Most LLMs lean a particular political and moral direction because they were trained with content that leaned that direction. Thus, even if not intentional, the outputs will often have that same leaning.





Imagine it this way: if the majority of content on the Internet leans a particular direction, then an AI trained on that content will lean the same direction, too.


Daniel J. Lewis