Mind Readings: How to Vibe Code Well, Part 2


Update: 2025-05-28

Description



In this episode, you will discover how to leverage artificial intelligence for efficient and accurate software requirements gathering. You will learn a step-by-step process for working with AI to create a comprehensive Product Requirements Document for any software project. You will understand how to ask the right questions and prompt AI to identify crucial details, ensuring your project begins with a strong foundation. You will see how getting your requirements correct upfront prevents costly rework and accelerates your development process. Watch this episode to master the art of AI-powered requirements gathering for your next build!


YouTube Video
Watch this video on YouTube.


Can’t see anything? Watch it on YouTube here.


Listen to the audio here:



Download the MP3 audio here.



Machine-Generated Transcript


What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.


Welcome back. This is Part Three—wait, Part Two—on requirements gathering for vibe coding. As we said in the first part, there is nothing different about vibe coding when it comes to software development. It’s not some magic thing where software just appears out of thin air. If it was, I wouldn’t use it because there are a whole bunch of problems with that.


In Part One, what we did was ask good questions. The results of those questions are something like this. This is the output from Google Gemini's deep research tool, describing what Data Slayer was: the package of software that I liked, which is gone. It's been gone for years now and is no longer usable because it no longer meets Chrome's extension manifest requirements. Here is what Data Diarist could incorporate. So this has a lot of useful thinking in it that we might want to build some requirements from.


Let’s go into Gemini, start a new chat here. We’re going to say—actually, I want to do this in AI Studio. The reason I want to do that in AI Studio is because I want to incorporate our overall best practices for Chrome extensions as system instructions that will ensure we are compliant. I’ve pasted those in. It’s a lot—it’s like 4,000 words. I’m going to turn on grounding, a Google search, to allow it to search the web. Then, I’ll say: “Today we’re going to do requirements gathering for the Data Diarist Chrome extension. I’ll provide the predecessor’s code, Data Slayer, which you’ll inspect. After that, I’ll provide—actually, no, I want to do this in the reverse order because stuff that happens earlier on in a model steers the conversation better.”
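For readers who would rather script this step than click through the AI Studio interface, here is a minimal sketch of the same setup using the google-genai Python SDK. The model name, file path, and prompt wording are assumptions for illustration; the two key pieces are the system instructions (the Chrome extension best practices) and the Google Search grounding tool.

```python
# Minimal sketch: reproducing the AI Studio setup programmatically.
# Assumptions: google-genai SDK installed, GEMINI_API_KEY set, and the best
# practices saved locally as chrome_extension_best_practices.txt (hypothetical path).
import os
from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# The ~4,000 words of Chrome extension best practices become the system instructions.
with open("chrome_extension_best_practices.txt", encoding="utf-8") as f:
    best_practices = f.read()

config = types.GenerateContentConfig(
    system_instruction=best_practices,
    # Grounding with Google Search, the equivalent of toggling it on in AI Studio.
    tools=[types.Tool(google_search=types.GoogleSearch())],
)

response = client.models.generate_content(
    model="gemini-2.5-pro",  # assumed model; use whichever Gemini model you prefer
    contents=(
        "Today we're going to do requirements gathering for the Data Diarist "
        "Chrome extension. First I'll provide the background research, then the "
        "predecessor's code (Data Slayer) for you to inspect."
    ),
    config=config,
)
print(response.text)
```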


So, I’ll provide the background research first, summarize it, and then add from my Google Drive my data layer inspection. We’re going to have it summarize all 16,000 or so words of the deep research because it was like 38 pages long. Anytime we’re doing any kind of data work like this in a Generative AI tool, we always ask for a summary. We always ask for a summary to make sure that it’s reading the right things, that it’s doing a good job.
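The summary check itself is just another prompt. Continuing the same hedged sketch, a chat session makes the back-and-forth easier; the file name and model are again assumptions:

```python
# Sketch of the "always ask for a summary" sanity check (same assumptions as above).
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
chat = client.chats.create(model="gemini-2.5-pro")  # assumed model

# Paste in the ~16,000-word deep research report (hypothetical local copy).
with open("data_diarist_deep_research.txt", encoding="utf-8") as f:
    research = f.read()

summary = chat.send_message(
    "Here is the background research for the Data Diarist Chrome extension. "
    "Summarize it so I can confirm you read it correctly. Do not write any code.\n\n"
    + research
)
print(summary.text)
```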


While it’s doing that, I need to get the old code. The old code, because it’s open source, is on GitHub. One of the fun things you can do is change the URL from GitHub to Gitingest. It will take the GitHub repository and digest the whole thing down for you into a single file that you can load in, which allows you to use it with Generative AI.
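In other words, swapping the domain is all it takes. A tiny illustrative helper is below; the repository URL is a made-up placeholder, not Data Slayer's actual address.

```python
def to_gitingest(github_url: str) -> str:
    """Point a GitHub repository URL at gitingest.com, which serves the repo as one digest file."""
    return github_url.replace("github.com", "gitingest.com", 1)

# Hypothetical example URL; substitute the real repository you want to ingest.
print(to_gitingest("https://github.com/example-org/data-slayer"))
# -> https://gitingest.com/example-org/data-slayer
```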


Great. Here is the old code, which I would like you to summarize. Don’t code; just summarize. Let’s find the old code. Great, there’s the old code.


With requirements gathering, the easiest way to do it is to have Generative AI do it for you. However, we still have to provide useful answers. What we’re going to say is this: “Based on the Data Diarist research plan and an understanding of what Data Slayer used to be, let’s build a full Product Requirements Document.”


Product Requirements Documents typically contain user stories, functional requirements, and domain requirements. We have a lot of information so far, but there’s probably additional information you need to fully build out the PRD. Ask me one question at a time until you have enough information to build a PRD.


This is the simplest way to do requirements gathering. We give the model all the background information we’ve gathered, and then we say, “Hey, I’ve given you a lot of information. Let’s have you talk through what you still don’t have, and I will tell you the answers.”
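If you were running this requirements interview through the API instead of AI Studio, the same pattern is just a loop: the model asks one question, you answer, and it keeps going until it signals it has enough to draft the PRD. A hedged sketch, reusing the chat approach above (prompt wording, model name, and the stop phrase are all assumptions):

```python
# Sketch of the one-question-at-a-time requirements interview (assumed model and prompts).
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
chat = client.chats.create(model="gemini-2.5-pro")  # assumed model

kickoff = (
    "Based on the Data Diarist research plan and an understanding of what Data Slayer "
    "used to be, let's build a full Product Requirements Document with user stories, "
    "functional requirements, and domain requirements. Ask me one question at a time "
    "until you have enough information, then reply with READY TO DRAFT."
)

reply = chat.send_message(kickoff)
while "READY TO DRAFT" not in reply.text:
    print(reply.text)                # the model's next question
    answer = input("Your answer: ")  # the human supplies the requirement detail
    reply = chat.send_message(answer)

prd = chat.send_message("Great. Now write the full PRD in Markdown.")
print(prd.text)
```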


It says: “To build a comprehensive PRD, especially focusing on user stories and such, here’s my first question. The research mentions that Data Slayer’s data layer analyzer was termed unreadable and that data layer columns appeared mismatched. Could you elaborate on what made them unreadable or mismatched?”


Let’s see. It was actually all of this. All of those were problems, but especially the layout, which didn’t make logical sense, the tiny font, and too much information not in any particular order. We would want Data Diarist’s outputs to be easily readable at a glance, prioritizing the information that’s most important to a marketing analyst trying to diagnose software like Tealium, Adobe Analytics, Adobe Launch, Google Analytics, Google Tag Manager, Salesforce, HubSpot, and many of the most popular MarTech packages. Clearly, it’s all of those issues, but this provides it some extra insight as to why we’re even doing this. We want to get Data Slayer working again.


Next question. “For a marketing analyst diagnosing tool issues, what are the top three most critical pieces of information or states they immediately look for in a data layer debugger?” They look for container names, tool names (such as a GTM container name), whether the container fired, and what data was collected. They then drill down from those critical pieces of information.


“My next question relates to the drill-down part. When an analyst needs to drill down into the specifics, what were the biggest frustrations?” Data Slayer crammed everything into multiple sidebars that were squished and very difficult to see. We would want something that involved hierarchy or nesting in some way so that we could unpack the information. For example, if a Google Tag Manager container fires, we could open up the container to see what tags fired in what order. Chronology, very important, and then dig into each tag to see the settings and the data collected. At every step, we’d want to be able to easily copy and paste what is visible in Data Diarist for external reports and configurations. We’d want to be able to export as JSON, CSV, Markdown, YAML, and plain text.
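To make the copy-and-export requirement concrete, here is a small illustrative sketch of what serializing one captured tag-firing event to a few of those formats could look like. The event fields and the GTM container ID are hypothetical, and the real extension would do this in JavaScript; this is only to pin down the requirement.

```python
# Illustrative only: one captured tag event exported to JSON, CSV, and Markdown.
import csv
import io
import json

event = {
    "container": "GTM-XXXXXXX",   # hypothetical container ID
    "tag": "GA4 - Page View",     # hypothetical tag name
    "fired": True,
    "data": {"page_path": "/pricing", "event": "page_view"},
}

# JSON: lossless, good for re-importing into other tools.
json_text = json.dumps(event, indent=2)

# CSV: one flat row, good for spreadsheets (nested data stringified for simplicity).
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(event.keys())
writer.writerow([json.dumps(v) if isinstance(v, dict) else v for v in event.values()])
csv_text = buf.getvalue()

# Markdown: readable at a glance for pasting into reports.
md_text = "\n".join(f"- **{key}**: {value}" for key, value in event.items())

print(json_text, csv_text, md_text, sep="\n\n")
```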


You can see here, we’re not just trying to blindly replicate Data Slayer. We’re trying to do it the way that I want it because, frankly, this is going to be my piece of software. So when you do the same process, you want to do it the way you want it. What are the things that you wish it had?


Let’s see how we’re doing here. Next question. “When an analyst is debugging Salesforce or HubSpot integrations, what are the most common specific client-side events?” For both, it’s looking for tracking pixels, events in the data layer, and data collection. “What data did the tools collect, s



Christopher S Penn