EP254 Should You Build Custom GPTs or Just Prompt Better

Update: 2025-12-05

Description

Should you build custom GPTs, agents, digital interns, Gems, and artefacts… or just learn to prompt better? In this roundtable, host Susan Diaz, social media + AI power user Andrew Jenkins, and GTM + custom GPT builder Dr. Jim Kanichirayil unpack when you actually need a custom build, when a strong prompt is enough, and how to stop treating AI output like a finished product.

In this episode, Susan brings back two favourite guests who sit on different ends of the AI usage spectrum:

  • Andrew Jenkins - multi-tool explorer, author, and agency owner who "puts the chat in ChatGPT" and loves talking with his data.

  • Dr. Jim Kanichirayil - founder of Cascading Leadership, builder of thought leadership custom GPTs for go-to-market, content, and analysis.

Together they break down:

  • How Andrew uses conversation, prompt optimizers, projects, and tools like NotebookLM and Dojo AI to "talk to" his book, podcast, and data.

  • How Dr. Jim uses a simple Role-Task-Output framework to design custom GPTs, train them on his voice (and the voices of his clients), and keep them on track with root-cause analysis when they drift.

  • The messy reality of limits, context windows, and why AI is still terrible at telling you what it can't do.

  • Why using AI on autopilot (especially for outreach and content) is a brand risk, and how to use it as a drafting and analysis system instead.

Key takeaways

You don't have to choose only prompts or only custom GPTs.
Strong prompting is the starting point. Custom GPTs make sense when you see the same task, drift, or "bleed out" happening over and over again.

Start every workflow with three things: Role, Task, Output.
Who is the AI supposed to be?
What exact job is it doing?
What should the output include and exclude?
Then ask the model: "What else do you need to execute this well and in my voice?"
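
For readers who think in code, here is a minimal sketch of how the Role-Task-Output framing can become a reusable template. The function name and the example values are illustrative, not from the episode:

```python
# A minimal sketch of the Role-Task-Output framing as a reusable prompt
# template. The function name and example values are illustrative, not
# from the episode.

def build_prompt(role: str, task: str, output: str) -> str:
    """Assemble a Role-Task-Output prompt plus the voice follow-up."""
    return (
        f"Role: You are {role}.\n"
        f"Task: {task}\n"
        f"Output: {output}\n"
        "Before you start: what else do you need from me "
        "to execute this well and in my voice?"
    )

print(build_prompt(
    role="a B2B content strategist who writes in my voice",
    task="turn the attached podcast transcript into three LinkedIn posts",
    output="plain text, under 200 words each, no hashtags or emojis",
))
```

The point of the closing question is the same as in the episode: make the model tell you what's missing before it starts producing output.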

Knowledge bases are just your best examples and instructions in one place.
Transcripts, scripts, PDFs, posts, style packs, platform-specific examples - they're all training material. AI does best when you feed it gold-standard samples, not vibes.

Projects and talking to your data are the future of reading and research.
Andrew uses his entire book in Markdown as a project, then has conversations like "find me five governance examples" instead of scrolling a PDF. NotebookLM turns bullet points into decks, mind maps, and videos, then lets you interrogate them.

AI is a 60-70% draft, not a finished product.
If you post straight from the model, it will sound generic, over-written, and slightly robotic. The job is to take that draft and ask: "Does this sound like me? Would I actually say this?"

Automation is good. Autopilot is dangerous.
Using AI to analyze content performance, structure research, or standardise parts of a workflow = smart.
Letting AI write content and outreach you never review = reputation risk and audience fatigue.

More content is not the goal. Better feedback loops are.
Dr. Jim chains GPTs: one for drafting with his voice, one for performance analysis, one for insights. That loop makes the next round of content sharper instead of just… louder.
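
If it helps to see the shape of that chain, here is a rough sketch of the loop in Python. `ask_model` is a hypothetical stand-in for whichever chat API or custom GPT you actually use; only the loop structure reflects what's described in the episode:

```python
# A minimal sketch of the chained-GPT feedback loop: one call drafts in
# the author's voice, one analyzes performance, one turns the analysis
# into revision notes, and the notes feed the next draft. `ask_model` is
# a hypothetical placeholder; swap in your real chat API or custom GPT.

def ask_model(system: str, user: str) -> str:
    """Hypothetical placeholder for a real chat-completion call."""
    return f"[model reply to: {user[:60]!r}]"

def content_loop(topic: str, voice_pack: str, metrics: str) -> str:
    # 1. Draft in the author's voice.
    draft = ask_model(
        system=f"You draft posts in this voice:\n{voice_pack}",
        user=f"Draft a post about: {topic}",
    )
    # 2. Analyze how past content actually performed.
    analysis = ask_model(
        system="You analyze content performance data.",
        user=f"Given these metrics:\n{metrics}\nWhat worked and what didn't?",
    )
    # 3. Turn the analysis into concrete drafting guidance.
    insights = ask_model(
        system="You turn performance analysis into revision notes.",
        user=f"Analysis:\n{analysis}\nGive three concrete revision notes.",
    )
    # 4. Feed the insights back so the next round is sharper, not louder.
    return ask_model(
        system=f"You revise drafts in this voice:\n{voice_pack}",
        user=f"Draft:\n{draft}\nNotes:\n{insights}\nRevise the draft.",
    )
```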

Episode highlights

[00:13] The core question: build digital interns (agents/custom GPTs) or just prompt better?

[01:09] Andrew's origin story and why he "puts the chat in ChatGPT."

[03:39] How Andrew uses prompt optimizers, multiple models, and Dojo AI as an agentic interface.

[07:24] Dr. Jim's world: sticking to GPT, building tightly scoped custom GPTs for repetitive work.

[08:37] When "bleed out" in prompts tells you it's time to build a custom GPT.

[09:26] Using root-cause analysis inside the GPT configuration when outputs go off the rails.

[10:25] Projects, books in Markdown, and "talking to your own material" via AI.

[13:05] Case study: using AI to surface case examples from a 3.5-year-old book instead of scrolling PDFs.

[14:27] NotebookLM for founders and students: one email of bullet points → infographic, map, slide deck, video.

[19:03] The Role–Task–Output framework and the importance of explicitly designing for your voice.

[22:02] Platform-specific style packs and use cases (spicy vs informational vs editorial).

[26:29] The frustrating reality of token limits and why models rarely warn you before they hit a wall.

[36:54] What's happening "in the wild": early-stage founders treating AI output as final product.

[39:01] Why "more" isn't better, "better" is better: drafts, polish, and content analysis GPTs.

[42:03] Automation vs autopilot in B2B social, and why Andrew refuses to buy from a bot.

[43:29] Emerging tools: Google's Pommely, Nano Banana for image creation, and AI browsers like Atlas, Comet, and Neo.

If you've been stuck wondering whether to spend time on custom GPTs or just prompt better, this episode gives you the mental models to decide.

Share it with:

  • The teammate who keeps saying "we should build a GPT" but hasn't defined the workflow.

  • The founder treating AI drafts as finished copy.

  • The ops brain in your org who secretly wants to be a bridge builder.

Then ask as a team: "Where do we actually need great prompts, and where do we need a repeatable GPT or project with a real knowledge base?"

Connect with Susan Diaz on LinkedIn to get a conversation started.

Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.
