Content Operations

Scriptorium delivers industry-leading insights for global content operations.

Futureproof your content ops for the coming knowledge collapse

What happens when AI accelerates faster than your content can keep up? In this podcast, host Sarah O’Keefe and guest Michael Iantosca break down the current state of AI in content operations and what it means for documentation teams and executives. Together, they offer a forward-thinking look at how professionals can respond, adapt, and lead in a rapidly shifting landscape.

Sarah O’Keefe: How do you talk to executives about this? How do you find that balance between the promise of what these new tool sets can do for us, what automation looks like, and the risk that is introduced by the limitations of the technology? What’s the roadmap for somebody that’s trying to navigate this with people that are all-in on just getting the AI to do it?

Michael Iantosca: We need to remind them that the current state of AI still carries with it a probabilistic nature. And no matter what we do, unless we add more deterministic structural methods to guardrail it, things are going to be wrong even when all the input is right.

Related links:
Scriptorium: AI and content: Avoiding disaster
Scriptorium: The cost of knowledge graphs
Michael Iantosca: The coming collapse of corporate knowledge: How AI is eating its own brain
Michael Iantosca: The Wild West of AI Content Management and Metadata
MIT report: 95% of generative AI pilots at companies are failing
LinkedIn: Michael Iantosca
LinkedIn: Sarah O’Keefe

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.
Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.
SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.
Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hey everyone, I’m Sarah O’Keefe. In this episode, I’m delighted to welcome Michael Iantosca to the show. Michael is the Senior Director of Content Platforms and Content Engineering at Avalara and one of the leading voices both in content ops and in understanding the importance of AI in technical content. He’s had a longish career in this space. And so today we wanted to talk about AI and content. The context for this is that a few weeks ago, Michael published an article entitled The coming collapse of corporate knowledge: How AI is eating its own brain. So perhaps that gives us the theme for the show today. Michael, welcome. Michael Iantosca: Thank you. I’m very honored to be here. Thank you for the opportunity. SO: Well, I appreciate you being here. I would not describe you as anti-technology; you’ve built out a lot of complex systems, and you’re doing a lot of interesting stuff with AI components. But you have this article out here that’s basically kind of apocalyptic. So what are your concerns with AI? What’s keeping you up at night here? MI: That’s a loaded question, but we’ll do the best we can to address it. I’m a consummate information developer, as we used to call ourselves. I just started my 45th year in the profession. I’ve been fortunate that not only have I been mentored by some of the best people in the industry over the decades, but I was very fortunate to begin with AI in the early 90s, when it was called expert systems. And then through the evolution of Watson, and when generative AI really hit the mainstream, for those of us that had been involved for a long time there was no surprise; we were already pretty well-versed. What we didn’t expect was the acceleration of it at this speed.
So what I’d like to say sometimes is that the thing that is changing fastest is the rate at which the rate of change is changing. And that couldn’t be more true than today. But content and knowledge is not a snapshot in time. It is a living, moving organism, ever evolving. And if you think about it, the large language model vendors spent a fortune on chips and systems to train the big large language models on everything that they could possibly get their hands and fingers into. And they did that originally several years ago. And the assumption, especially for critical knowledge, is that that knowledge is static. Now they do rescan the sources on the web, but that’s no guarantee that those sources have been updated. Or, you know, the new content conflicts with or confuses the old content. How do they tell the difference between one version of IBM Db2 and its 13 different versions, and how you do different tasks across those 13 versions? And can you imagine, especially when it comes to software, where most of us work, the thousands and thousands of changes that are made to those programs, in the user interfaces and the functionality? MI: And unless that content is kept up to date and reconsumed, not only by the large language models but also by the local vector databases on which a lot of chatbots and agentic workflows are being based, you’re basically dealing with out-of-date and incorrect content. In many doc shops, the resources are just not there to keep up with that volume and frequency of change. So we have a pending crisis, in my opinion. And the last thing we need to do is reduce the number of knowledge workers who not only create new content but update it and deal with the technical debt, so that this house of cards, as I think of it, doesn’t collapse. SO: Yeah, it’s interesting. And as you’re saying that, I’m thinking we’ve talked a lot about content debt and issues of automation.
But for the first time, it occurs to me to think about this more in terms of pollution. It’s an ongoing battle to scrub the air, to take out all the gunk that is being introduced and that has to be taken out on an ongoing basis. Plus, you have this issue that information decays, right? In the sense that when I published it a month ago, it was up to date. And then a year later, it’s wrong. It evolved, entropy happened, the product changed. And now there’s this delta, this gap, between the way it was documented versus the way it is. And it seems like that’s what you’re talking about: that gap of not keeping up with the rate of change. MI: Mm-hmm. Yeah. I think it’s even more immediate than that. I think you’re right, but we need to remember that development cycles have greatly accelerated. When you bring AI for product development into the equation, we’re now looking at 30- and 60-day product cycles. When I started, a product cycle was five years. Now it’s a month or two. And suppose we start using AI to draft brand-new content, forget about updating the old content for a moment, and we’re using AI to do that in the prototyping phase, moving it further left, upfront. We know that between then and code freeze there are going to be numerous changes to the product, to the function, to the code, to the UI. It’s always been difficult to keep up with it in the first place, but now we’re compressed even more. So we now need to look at how AI helps us even do that piece of it, let alone a corpus that is years and years old and has never had enough technical writers to keep up with all the changes. So now we have a dual problem, including new content with this compressed development cycle.
SO: So the AI hype says we essentially don’t need people anymore, and the AI will do everything from coding the thing to documenting the thing to, I guess, buying the thing via some sort of an agentic workflow. But you’re deeper into this than nearly anybody else. What is the promise of the AI hype, and what’s the reality of what it can actually do? MI: That’s just the question of the day. Some of us are working in shops that have engineering resources. I have direct engineers that work for me and an extended engineering team, and so do the likes of Amazon and other sizable shops with resources. But we have a lot of shops that are smaller. They don’t have access to their own dedicated content systems engineers, or even to their IT team, to help them. First, I want to recognize that we’ve got a continuum out there, and the commercial providers are not providing anything to help us at this point. So you build it yourself today, and that’s happening: people are developing individual tools using AI, while the more advanced shops are looking at developing entire agentic workflows. And what we’re doing is looking at ways to accelerate that compressed timeframe for the content creators. And I want to use “content creators” a little more loosely, because as we move the process left, we involve our engineers, our programmers, earlier in the phase, like they used to be, by the way. They used to write big specifications in my day. Boy, I want to go into a Gregorian chant, “Oh, in my day!” you know, but they don’t do that anymore. And basically the role of the content professional today is that of an investigative journalist. And you know what we do, right? We scrape and we claw. We test, we use, we interview, we use all of the capabilities of learning, of association, assimilation, synthesis, and of course, communication.
And it turns out that writing is only roughly 15% of what the typical writer does in an information developer or technical documentation professional role, which is why we have a lot of different roles, by the way. If we’re going to replace or accelerate people with AI, it has to handle all the capabilities of all those roles. So where we are today is that some of the more leading-edge shops are going ahead, and we’re looking at ways to ingest knowledge, new k
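The staleness problem Iantosca describes, chatbots answering from vector databases that nobody has re-validated since the product changed, can be sketched as a simple freshness gate. This is an illustrative sketch, not a tool from the episode: the function names, record fields, and dates are invented, and a real pipeline would pull review metadata from the CCMS or knowledge graph rather than hard-coded values.

```python
from datetime import date

def is_stale(chunk_reviewed: date, product_releases: list[date]) -> bool:
    """A chunk is stale if the product shipped a release after the
    chunk was last reviewed."""
    return any(release > chunk_reviewed for release in product_releases)

def filter_fresh(chunks: list[dict], product_releases: list[date]) -> list[dict]:
    """Keep only chunks reviewed on or after the newest release;
    stale chunks go back to the writing team instead of to the bot."""
    return [c for c in chunks if not is_stale(c["reviewed"], product_releases)]

# Hypothetical release history and doc chunks for illustration.
releases = [date(2025, 1, 15), date(2025, 3, 1)]
chunks = [
    {"id": "install-guide", "reviewed": date(2025, 3, 10)},
    {"id": "old-ui-tour", "reviewed": date(2024, 11, 2)},
]
fresh = filter_fresh(chunks, releases)
```

Even a crude gate like this makes the decay visible: the stale chunk is routed to humans for rework instead of being served as an answer.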

11-17
32:49

The five stages of content debt

Your organization’s content debt costs more than you think. In this podcast, host Sarah O’Keefe and guest Dipo Ajose-Coker unpack the five stages of content debt, from denial to action. Sarah and Dipo share how to navigate each stage to position your content—and your AI—for accuracy, scalability, and global growth.

The blame stage: “It’s the tools. It’s the process. It’s the people.” Technical writers hear, “We’re going to put you into this department, and we’ll get this person to manage you with this new agile process,” or, “We’ll make you do things this way.” The finger-pointing begins. Tech teams blame the authors. Authors blame the CMS. Leadership questions the ROI of the entire content operations team. This is often where organizations say, “We’ve got to start making a change.” They’re either going to double down and continue building content debt, or they start looking for a scalable solution. — Dipo Ajose-Coker

Related links:
Scriptorium: Technical debt in content operations
Scriptorium: AI and content: Avoiding disaster
RWS: Secrets of Successful Enterprise AI Projects: What Market Leaders Know About Structured Content
RWS: Maximizing Your CCMS ROI: Why Data Beats Opinion
RWS: Accelerating Speed to Market: How Structured Content Drives Competitive Advantage (Medical Devices)
RWS: The all-in-one guide to structured content: benefits, technology, and AI readiness
LinkedIn: Dipo Ajose-Coker
LinkedIn: Sarah O’Keefe

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.
Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.
SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.
Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hey, everyone. I’m Sarah O’Keefe and I’m here today with Dipo Ajose-Coker. He is a Solutions Architect and Strategist at RWS, based in France. His strategy work is focused on content technology. Hey, Dipo. Dipo Ajose-Coker: Hey there, Sarah. Thanks for having me on. SO: Yeah, how are you doing? DA-C: Hanging in there. It’s a sunny, cold day, but the wind’s blowing. SO: So in this episode, we wanted to talk about moving forward with your content and how you can make improvements to it and address some of the gaps that you have in terms of development and delivery and all the rest of it. And Dipo’s come up with a way of looking at this, a framework, that I think is actually extremely helpful. So Dipo, tell us about how you look at content debt. DA-C: Okay, thanks. First of all, before I go into my little framework, what is content debt? I think it’d be great to talk about that. It’s kind of like technical debt. It refers to that future work that you keep storing up because you’ve been taking shortcuts to try and deliver on time. You’ve let quality slip. You’ve had consultants come in and out every three months, and they’ve just been putting… I mean writing consultants. SO: These consultants. DA-C: And they’ve been basically doing stuff in a rush to try and get your product out on time. And over time, those little errors, those shortcuts, build up, and you end up with missing metadata or inconsistent styles. The content is okay for now, but as you go forward, you find you’re building up a big debt of all these little fixes. And these little fixes will eventually add up and end up as a big debt to pay.
SO: And I saw an interesting post just a couple of days ago where somebody said that you could think of tech debt or content debt as having principal and interest, and the interest accumulates over time. So the less work you do to pay down your content debt, the bigger and bigger it gets, right? It just keeps snowballing, and eventually you find yourself with an enormous problem. So as you were looking at this idea of content debt, you came up with a framework that is at once shiny and new and also very familiar. So what was it? DA-C: Yeah, really familiar. I think everyone’s heard of the five stages of grief, and I thought, “Well, how about applying that to content debt?” And so I came up with the five stages of content debt. So let’s go into it. I’m not going to keep referring to the grief part of it; you can all look it up. The first stage is denial. “Our content is fine. We just need a better search engine. We can actually put it into this shiny new content delivery platform and it’s got this type of search,” and so on and so forth. Basically what you’re doing is ignoring the growing mess. You’re duplicating content. You’ve got outdated docs. You’re building silos, and then you’re ignoring that these silos are actually getting further and further apart. No one wants to admit that the CMS, or whatever bespoke system you’ve put into place, is just a patchwork of workarounds. This quietly builds your content debt, and the longer denial lasts, the more expensive the cleanup is. As we said in that first bit, you want to pay off the capital of your debt as quickly as possible. Anyone with a mortgage knows that: you come into a little bit of money, you pay off as much capital as you can so that you stop accruing the interest on the debt. SO: And that is where, when we talk about AI-based workflows, I feel like that is firmly situated in denial.
Basically, “Yeah, we’ve got some issues, but the AI will fix it. The AI will make it all better.” Now, we painfully know that that’s probably not true, so we move ourselves out of denial. And then what? DA-C: Then we go into anger. SO: Of course. DA-C: “Why can’t we find anything? Why does every update take two weeks?” And that was a question we used to get regularly where I used to work, at a global medical device manufacturer. We had to change one short sentence because of a spec change, and it took weeks to do that. Authors are wasting time looking for reusable content if they don’t have an efficient CCMS. Your review cycles drag on because all you’re doing is giving the entire 600-page PDF to the reviewer without highlighting what’s changed. Your translation costs balloon, and your project managers or leadership get angry because, “Well, we only changed one word. Can’t you just use Google Translate? It should only cost like five cents.” Compliance teams then start raising flags. And if you’re in a regulated industry, you don’t want the compliance teams on your back, and you especially don’t want to start having defects out in the field. So eventually, productivity drops and your teams feel like they’re stuck. The cracks are now starting to show across other departments, and you’re putting a bad name on your doc team. SO: Yeah. And a lot of what you’ve got here is anger that’s focused inward, to a certain extent. It’s the authors that are angry at everybody. I’ve also seen this play out as management saying, “Where are our docs? We have this team, we’re spending all this money, and updates take six months.” Or people submit update requests, tickets, something, the content doesn’t get into the docs, the docs don’t get updated. There’s a six-month lag.
Now the SOP, the standard operating procedure, is out of sync with what people are actually doing on the factory floor, which, it turns out, again, if you’re in medical devices, is extremely bad and will lead to your factory getting shut down, which is not what you want, generally. DA-C: Yeah, it’s not a good position to be in. SO: And then there’s anger. DA-C: Yeah. SO: “Why aren’t they doing their job?” And yet you’ve got this group that’s doing the best that they can within their constraints, which are, as you said, in a lot of cases, very inefficient workflows, the wrong tool sets, not a lot of support, etc. Okay, so everybody’s mad. And then what? DA-C: Everyone’s mad, and eventually, actually, this is a closed little loop, because all you then do is say, “Okay, well, we’re going to take a shortcut,” and you’ve just added to your content debt. So this stage is actually one of the most dangerous, because all you end up doing, without actually solving the problem, is adding to the debt. “Let’s take a shortcut here, let’s do this.” The next stage is the blame stage. “It’s the tools. It’s the process. It’s the people.” Then technical writers hear, “Well, we’re going to put you into this department and we’ll get this person to manage you with this new agile process,” or, “We’ll get you to be doing it in this way.” The finger-pointing begins. Tech teams will blame the authors. Authors will blame the CMS. Leadership questions the ROI of the entire content operations team. This is often where organizations see that they’ve got to start making a change. They’re either going to double down and continue building that content debt, or they start looking for a scalable solution. SO: Right. And this is the point at which people look at it and say, “Why can’t we just use AI to fix all of this?” DA-C: Yep, and we all know what happens when you point AI at garbage in.
We’ve got the saying, and this saying has been true from the beginning of computing, garbage in, garbage out, GIGO. SO: Time. DA-C: Yeah. I changed that to computing. SO: Yeah. It’s really interesting though because the blame that goes around, I’ve talked to a lot of executives who, and we’re right back to anger too, it is sort of like, “We’ve never had to invest in this before. Why are you telling us that this organization, this group, this tech writers, content ops,” whatever you w
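Sarah’s principal-and-interest framing from earlier in the conversation can be made concrete with a toy model. The numbers and the function below are purely illustrative assumptions, not figures from the episode; they only show how unserviced content debt snowballs while even modest per-cycle paydown changes the trajectory.

```python
def debt_after(cycles: int, principal: float, interest_rate: float, paydown: float) -> float:
    """Content debt remaining after a number of release cycles.

    Each cycle, unfixed issues ("principal") accrue follow-on cost
    ("interest"), then the team pays some debt down. Debt can't go negative.
    """
    debt = principal
    for _ in range(cycles):
        debt = max(debt * (1 + interest_rate) - paydown, 0.0)
    return debt

# Illustrative: 100 units of debt, 10% compounding per cycle, 10 cycles.
ignored = debt_after(10, principal=100.0, interest_rate=0.10, paydown=0.0)
serviced = debt_after(10, principal=100.0, interest_rate=0.10, paydown=15.0)
```

Under these made-up numbers, ignoring the debt more than doubles it over ten cycles, while a steady 15-unit paydown shrinks it to a fraction of the original, which is the "pay off the capital" point Dipo makes with the mortgage analogy.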

11-03
27:00

Balancing automation, accuracy, and authenticity: AI in localization

How can global brands use AI in localization without losing accuracy, cultural nuance, and brand integrity? In this podcast, host Bill Swallow and guest Steve Maule explore the opportunities, risks, and evolving roles that AI brings to the localization process.

The most common workflow shift in translation is to start with AI output, then have a human being review some or all of that output. It’s rare that enterprise-level companies want a fully human translation. However, one of the concerns that a lot of enterprises have about using AI is security and confidentiality. We have some customers where it’s written in our contract that we must not use AI as part of the translation process. Now, that could be for specific content types only, but they don’t want to risk personal data being leaked. In general, though, the default service now for what I’d call regular common translation is post-editing, or human review of AI content. The biggest change is that’s really become the norm. —Steve Maule, VP of Global Sales at Acclaro

Related links:
Scriptorium: AI in localization: What could possibly go wrong?
Scriptorium: Localization strategy: Your key to global markets
Acclaro: Checklist | Get Your Global Content Ready for Fast AI Scaling
Acclaro: How a modular approach to AI can help you scale faster and control localization costs
Acclaro: How, when, and why to use AI for global content
Acclaro: AI in localization for 2025
LinkedIn: Steve Maule
LinkedIn: Bill Swallow

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.
Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.
SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.
Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Bill Swallow: Hi, I’m Bill Swallow, and today I have with me Steve Maule from Acclaro. In this episode, we’ll talk about the benefits and pitfalls of AI in localization. Welcome, Steve. Steve Maule: Thanks, Bill. Pleasure to be here. Thanks for inviting me. BS: Absolutely. Can you tell us a little bit about yourself and your work with Acclaro? SM: Yeah, sure, sure. So I’m Steve Maule, currently the VP of Global Sales at Acclaro, and Acclaro is a fast-growing language services provider. I’m based in Manchester in the UK, in the northwest of England, and I’ve been in this industry, and I say this industry, the language industry, the localization industry, for about 16 years, always in various sales, business development, or leadership roles. So like I say, we’re a language services provider. And I suppose the way we talk about ourselves is that we try to be that trusted partner to some of the world’s biggest brands and the world’s fastest-growing global companies. And we see it, Bill, as our mission to harness that powerful combination of human expertise with cutting-edge technology, whether it be AI or other technology. And the mission is to put brands in the heads, hearts, and hands of people everywhere. BS: Actually, that’s a good lead-in, because my first question to you is going to be: where do you see AI in localization, especially with a focus on being that trusted partner for human-to-human communication? SM: My first answer to that would be that it’s no longer the future. AI is the now.
And I think whatever role people play in our industry, whether you’re like Acclaro, a language services provider offering services to those global brands, whether you are a technology provider, whether you run localization and localized content in an enterprise, or even if you’re what I’d call an individual contributor, maybe a linguist or a language professional, I think AI has already changed what you do and how you go about your business. And I think that’s only going to continue and develop. So I actually think we’re going to stop talking about AI at some stage relatively soon. It’s just going to be all-pervasive and all-invasive. BS: It’ll be the norm. Yeah. SM: Absolutely. We don’t talk anymore about the internet in many, many industries, and we won’t talk about AI. It’ll just become the norm. And localization, I don’t think, is unique in that respect. But I do think that if you think about the genesis of large language models and where they came from, localization is probably one of the primary and one of the first use cases for generative AI and for LLMs. BS: Right. The industry started out decades ago with machine translation, which was really born out of pattern matching, and it’s just grown over time. SM: Absolutely. And I remember when I joined the industry, what did I say? So 2009, it would’ve been, when I joined the industry. And I had friends asking me, what do you mean people pay you for translation and pay for language services? I’ve just got this new thing on my phone, it’s called Google Translate. Why are we paying any companies for translation? So you’re absolutely right, and obviously machine translation had been around for decades before I joined the industry. So yeah, I think that question has come into focus a lot more with every, I was going to say every year that passes; quite honestly, it’s every three months. BS: If that. SM: Exactly, yeah. Why do companies like Acclaro still exist?
And I think there are probably a lot of people in the industry who, if you think about the boom in gen AI over the last two, two and a half years, see it as a very real existential threat. But more and more, what I’m seeing amongst our client base, our competitors, and other actors in the industry, the tech companies, is that a lot more people are seeing it as an opportunity for the language industry and for the localization industry. BS: So about those opportunities, what are you seeing there? SM: I think one of the biggest things, and it doesn’t matter what role you play, whether you’re an individual linguist or a company like ours, is a shift in roles. Most of what I dealt with 16 years ago was a human being doing translation and another human being doing some editing. There were obviously computers and tools involved, but it was a very human-led process. I think we’re seeing a lot of those roles changing now. Translators are becoming language strategists; they’re becoming quality guardians. Project managers are becoming almost like solutions architects or data owners. So I think that there’s a real change. And personally, and I guess this is what this podcast is all about, I don’t see those roles going away, but I do see those roles changing and developing. And in some cases, I think it’s going to be for the better. And because there’s all this doubt and uncertainty and this sense of threat, people want to be shown the way, and they want companies like ours and others like it to lead the way in terms of how people who manage localized content can implement AI. BS: Yeah. We’re seeing something similar in the content space as well.
I know there was a big fear, certainly a couple of years ago, or even last year, that, oh, AI is going to take all the writing jobs, because everyone saw what ChatGPT could do, until they really started peeling back the layers and going, well, this is great. It spit out a bunch of words, it sounds great, but it really doesn’t say anything. It just kind of glosses over a lot of information and presents you with a summary. But what we’re seeing now is that a lot of people, at least on the writing side, are using AI as a tool to automate away a lot of the mechanical bits of the work so that the writers can focus on quality. SM: We’re seeing exactly the same thing. I had a customer say to me she wants AI to do the dishes while she concentrates on writing the poetry. So it is the mundane stuff, the stuff that has to be done, but it’s not that exciting. It’s mundane, it’s repetitive. Those have always been the tasks that have been first in line to be automated, first in line to be removed, first in line to be improved. And I think that’s what we’re seeing with AI. BS: So on the plus side, you have AI potentially doing the dishes for you while you’re writing poetry or learning to play the piano. What are some of the pitfalls that you’re seeing with regard to AI and translation? SM: I think there are a few, and I think it depends on whereabouts AI is used in the workflow, Bill. The very act of translation itself is a very, very common use of AI now. But there are also what I’m going to call translation-adjacent tasks, as we’ve mentioned with the entire workflow. So the answer would depend on that. But I think one of the biggest pitfalls of AI, and it was the same in 2009 when I joined the industry and friends of mine had this new thing in their pocket called Google Translate, is that it’s not always right. It’s not always accurate.
And even though the technology has come on leaps and bounds since then, and you had neural MT before large language models, it still isn’t always accurate. And as you mentioned before, it almost always sounds smooth and fluid and very polished; it sounds like it should be right. I’m in sales myself, so it could be a metaphor for a salesperson, couldn’t it? Not always right, but always sounds confident. But I
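Steve’s description of how enterprise translation work is routed, MT plus human post-editing as the default, with contractually AI-prohibited content going through a fully human workflow, can be sketched as a simple dispatch rule. The content types and tier names below are invented for illustration; a real localization pipeline would drive this from contract terms and content metadata, not hard-coded sets.

```python
# Hypothetical policy sets (illustrative only, not Acclaro's actual tiers).
AI_PROHIBITED = {"personal-data", "legal-contract"}   # e.g. forbidden by client contract
LIGHT_REVIEW = {"ui-strings", "support-macros"}       # low-risk content: sampled review only

def route(content_type: str) -> str:
    """Pick a translation workflow for a piece of content."""
    if content_type in AI_PROHIBITED:
        return "human-translation"          # no AI touches this content
    if content_type in LIGHT_REVIEW:
        return "mt-plus-sampled-review"     # machine translation, spot-checked
    return "mt-plus-full-post-edit"         # the default: AI output, human post-edit

workflow = route("personal-data")
```

The point of the sketch is that "use AI" is not one decision: it is a per-content-type routing policy, with the human-only path preserved wherever confidentiality or risk demands it.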

10-20
33:51

From classrooms to clicks: the future of training content

AI, self-paced courses, and shifting demand for instructor-led classes—what’s next for the future of training content? In this podcast, Sarah O’Keefe and Kevin Siegel unpack the challenges, opportunities, and what it takes to adapt.

There’s probably a training company out there that’d be happy to teach me how to use WordPress. I didn’t have the time, I didn’t have the resources, nothing. So I just did it on my own. That’s one example of how you can use AI to replace some training. And when I don’t know how to do something these days, I go right to YouTube and look for a video to teach me how to do it. But given that, there are some industries where you can’t get away with that. Healthcare is an example—you’re not going to learn how to do brain surgery that someone could rely on with AI or through a YouTube video. — Kevin Siegel

Related links:
Is live, instructor-led training dying? (Kevin’s LinkedIn post)
AI in the content lifecycle (white paper)
Overview of structured learning content
IconLogic
LinkedIn: Kevin Siegel
LinkedIn: Sarah O’Keefe

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.
Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.
SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.
Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

SO: Hi, everyone, I’m Sarah O’Keefe. I’m here today with Kevin Siegel. Hey, Kevin. KS: Hey, Sarah. Great to be here. Thanks for having me. SO: Yeah, it’s great to see you.
Kevin and I, for those of you that don’t know, go way back and have some epic stories about a conference in India that we went to together where we had some adventures in shopping and haggling and bartering in the middle of downtown Bangalore, as I recall. KS: I can only tell you that if you want to go shopping in Bangalore, take Sarah. She’s far better at negotiating than I am. I’m absolutely horrible at it. SO: And my advice is to take Alyssa Fox, who was the one that was really doing all the bartering. KS: Really good. Yes, yes. SO: So anyway, we are here today to talk about challenges in instructor-led training, and this came out of a LinkedIn post that Kevin put up a little while ago, which we’ll include in the show notes. So Kevin, tell us a little bit about yourself and IconLogic, your company and what you do over there. KS: So IconLogic, we’ve always considered ourselves to be a three-headed dragon, three-headed beast, where we do computer training, software training, so vendor-specific. We do e-learning development, and I write books for a living as well. So if you go to Amazon, you’ll find me well-represented there. Actually, I was one of the original micro-publishers on this new platform called Amazon, with my very first book posted there called, “All This PageMaker, the Essentials.” Yeah, did I date myself with that reference? Which led to a book on QuarkXPress, which led to Microsoft Office books. But my bread-and-butter books on Amazon even today are books on Adobe Captivate, Articulate Storyline, and TechSmith Camtasia. I still keep those books updated. So publishing, training, and development. And the post you’re talking about, which got a lot of feedback, I really loved it, was about training and specifically what I see as the demise of the training portion of our business. And it’s pretty terrifying. I thought it was just us, but I spoke with other organizations similar to mine in training, and we’re not talking about a small fall-off of training.
15, 20% could be manageable. You’re talking 90% training fall off, which led me to think originally, “Is it me?” Because I hadn’t talked to the other training companies. “Is it us? I mean, we’re dinosaurs at this point. Is it the consumer? Is it the industry?” But then I talked to a bunch of companies that are similar to mine and they’re all showing the same thing, 90% down. And just as an example of how horrifying that is, some of our classes, we’d expect a decent-sized class, 10, a large class, 15 to 18. Those were the glory days. Now we’re twos and threes, if anyone signs up at all. And what I saw as the demise of training for both training companies and trainers, if you’re a training company and you’re hiring a trainer, one or two people in the room isn’t going to pay the bills. Got to keep the lights on with your overhead running 50%, 60%, you know this as a business person, but you’ve got to have five or six minimum to pay those bills and pay your trainer any kind of a rate. SO: So we’re talking specifically about live instructor-led, in-person or online? KS: Both, but we went more virtual long before the pandemic. So we’ve been teaching more virtual than on-site for 30 years. Well, not virtual 30 years, virtual wasn’t really viable until about 20 years ago. So we’ve been teaching virtual for 20 years. The pandemic made it all the more important. But you would think that training would improve with the pandemic, it actually got even worse and it never recovered. So the pandemic was the genesis of that spiral down. AI has hastened the demise. But this is instructor-led training in both forms, virtual and on-site. I think even worse for on-site. SO: So let’s start with pandemic. You’re already doing virtual classes, along comes COVID and lockdowns and everything goes virtual. And you would think you’d be well-positioned for that, in that you’re good to go. What happened with training during the pandemic era when that first hit? 
KS: When that pandemic first hit, people panicked and went home and just hugged their families. They weren’t getting trained on anything. So it wasn’t a question of, were we well-positioned to offer training? Nobody wanted training, period. And this was, I think if you poll all training companies, well, there are certain markets where you need training no matter what. Healthcare as an example, they need training. Security needed training. But for the day-to-day operations of a business, people went home and they didn’t work for a long time. They were just like, “The world is ending.” And then, oh, the world didn’t end. So now they’ve got to go back to work, but they didn’t go back to work for a long time. Eventually people got back to work. Now, are you on-site back to work or are you at home? That’s a whole nother thing to think about. But just from a training perspective, when panic sets in, when the economy goes bad, training is one of the first things you get rid of. Go teach yourself. And the teaching-yourself part is what has led to the further demise of training, because you realize I can teach myself on YouTube. At least I think I can. And I think when you start teaching yourself on your own and you think you can, it becomes, the training was good enough. So if you said, “Let’s focus on the pandemic,” that’s what started it, the downward spiral. But we even saw the downward spiral before the pandemic, and it was the vendors themselves that started to offer the training that we were offering. SO: So instead of a third-party, certainly a third-party, mostly independent organization offering training on a specific software application, the vendors said, “We’re going to offer official training.” KS: Correct. And it started with some of these vendors rolling out their training at conferences. And I attended these conferences as a speaker.
I won’t name the software, I won’t name the vendor, but I would just tell you I would go there and I would say, “Well, what’s this certificate thing you’re running there?” It’s a certificate of participation. But as I saw people walking around, they would say, “I’m now certified.” And I go, “You’re not certified after a three-hour program. You now have some knowledge.” They thought they were certified and experts, but they wouldn’t know they weren’t qualified until told to do a job. And then they would find out, “I’m not qualified to do this job.” But that certificate course, which was just a couple of hours by this particular vendor, morphed into a full-day certificate. They were now charging a lot of money for it, which morphed into a multi-day thing, which now has destroyed any opportunity for training that we have. And that’s when I started noticing a downward spiral. It was like tracking your investments going down, down, down. It’s like a plane, nose down. SO: And we’ve seen something similar. I mean, back in the day, and I do actually… So for those of you listening at home that are not in this generation, PageMaker was the sort of grandparent of InDesign. I am also familiar with PageMaker and I think my first work in computer stuff was in that space. So now we’ve all dated ourselves. But back in the day we did a decent amount of in-person training. We had a training classroom in one of our offices at one point. Now, we were never as focused on it as you are and were, but we did a decent business of public-facing, scheduled two-day, three-day, “Come to our office and we’ll train you on the things.” And then over time, that kind of dropped off and we got away from doing training because it was so difficult. And this is longer ago than you’re talking about. So the pattern that you’re describing where instructor-led in-person training, a classroom training with everybody in the same room kind of got disrupted a while back.
We made a decent living doing that for a long time and there was- KS: Made a great living doing that. Oh, my God. That was the thing. SO: But we got away from it, because it got harder and harder to put the right people in the right classes and
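Kevin’s back-of-the-envelope class economics, with overhead running 50–60% of revenue and five or six students minimum needed to pay the trainer and keep the lights on, can be sketched in a few lines. Only the 50–60% overhead figure comes from the episode; the trainer fee and seat price below are hypothetical illustrations.

```python
import math

def break_even_students(trainer_cost, overhead_rate, price_per_seat):
    """Minimum enrollment for a class to cover its costs.

    trainer_cost   -- what the trainer is paid for the class
    overhead_rate  -- overhead as a fraction of revenue (Kevin cites 50-60%)
    price_per_seat -- tuition per student
    """
    # Each seat nets price_per_seat * (1 - overhead_rate) after overhead.
    net_per_seat = price_per_seat * (1 - overhead_rate)
    return math.ceil(trainer_cost / net_per_seat)

# Illustrative figures: $2,000 trainer fee, 55% overhead, $800 per seat.
print(break_even_students(2000, 0.55, 800))  # 6
```

With classes drawing twos and threes, as Kevin describes, the math simply no longer clears this bar.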


From PowerPoint to possibilities: Scaling with structured learning content

What if you could escape copy-and-paste and build dynamic learning experiences at scale? In this podcast, host Sarah O’Keefe and guest Mike Buoy explore the benefits of structured learning content. They share how organizations can break down silos between techcomm and learning content, deliver content across channels, and support personalized learning experiences at scale. The good thing about structured authoring is that you have a structure. If this is the concept that we need to talk about and discuss, here’s all the background information that goes with it. With that structure comes consistency, and with that consistency, you have more of your information and knowledge documented so that it can then be distributed and repackaged in different ways. If all you have is a PowerPoint, you can’t give somebody a PowerPoint in the middle of an oil change and say, “Here’s the bare minimum you need,” when I need to know, “Okay, what do I do if I’ve cross-threaded my oil drain bolt?” That’s probably not in the PowerPoint. That could be an instructor story that’s going to be told if you have a good instructor who’s been down that really rocky road, but again, a consistent structure is going to set you up so that you have robust base content. — Mike Buoy Related links: AEM Guides Overview of structured learning content CompTIA accelerates global content delivery with structured learning content (case study) Structured learning content that’s built to scale (webinar) LinkedIn: Mike Buoy Sarah O’Keefe Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. 
Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hi everyone, I’m Sarah O’Keefe. I’m here today with Mike Buoy. Hey, Mike. Mike Buoy: Good morning, Sarah. How are you? SO: I’m doing well, welcome. For those of you who don’t know, Mike Buoy is the Senior Solutions Consultant for AEM Guides at Adobe since the beginning of this year of 2025. And before that had a, we’ll say, long career in learning. MB: Long is accurate, long is accurate. There may have been some gray hair grown along the way, in the about 20-plus years. SO: There might have been. No video for us, no reason in particular. Mike, what else do we need to know about you before we get into today’s topic, which is the intersection of techcomm and learning? MB: Oh gosh, so if I think just quickly about my career, my background’s in instructional design, consulting, instructor, all the things related to what you would consider a corporate L&D, moving into the software side of things into the learning content management space. And so what we call now component content management, we, when I say we, those are all the different organizations I’ve worked for throughout my career, have been focused in on how do you take content that is usually file-based and sitting in a SharePoint drive somewhere, and how do you bring it in, get it organized so it’s actually an asset as opposed to a bunch of files? And how do you take care of that? How do you maintain it? How do you get it out to the right people at the right time and the right combination, all the rights, all the right nows, that’s really the background of where I come from. 
And that’s not just in learning content; at the end of the day, learning content is often the technical communication-type content with an experience wrapped around it. So it’s really a very fun retrospective when you look back on where both industries have been running in parallel and where they’re really starting to intersect now. SO: Yeah, and I think that’s really the key here. When we start talking about learning content, structured authoring, techcomm, why is it that these things are running in parallel and sitting in different silos? What’s your take on that? Why haven’t they intersected more until maybe now we’re seeing some rumblings of maybe we should consider this, but until now it’s been straight up, we’re learning and you’re techcomm, or vice versa, and never the twain shall meet, so why? MB: Yeah, and it’s interesting, when you look at most organizations, the two major silos that you’re seeing, one is going to be product. So whether it’s a software product, a hardware product, an insurance or financial product, whatever that product is, technical communication, what is it? How do you do it? What are all the standard operating procedures surrounding it? That all tends to fall under that product umbrella. And then you get to the other side, the other silo, and that’s the, hey, we have customers, whether those customers are our customers or the internal customers, our own employees that we need to train and bring up to speed on products and how to use them, or perhaps even partners that sit there. And so, typically, techcomm is living under the product umbrella, and L&D is either living under HR or customer success or customer service of some sort, depending on where they’re coming from.
Now in the learning space, over the last probably decade or so, you’re seeing a consolidation between internal and external L&D teams and having them get smarter about, what are we building, how are we building it, who are we delivering it to, and what are all those delivery channels? And then when I think about why are they running in parallel, well, they have different goals in mind, right? techcomm has to ship with the product, and service and training ideally are doing that, but there’s often a little bit of a lag behind: “Okay, we ship the thing, how long is it before we start having all the educational frameworks around it to support the thing that was shipped?” And so I think leadership-wise, very different philosophies, very different principles on that. techcomm, very much focused on the knowledge side of things. What is it? How do you do it? What are all the SOPs? And L&D leans more towards creating a learning experience around, “Okay, well here’s the knowledge, here’s the information, how do we create that arc going from I’m a complete novice to whatever the next level is?” Or even, I may be an expert and I need to learn how to apply this to get whatever new changes there are in my world and help me get knowledgeable and then skilled in that regard. So I think those are kind of the competing mindsets and philosophies, as well as, I won’t say competing, but parallel business organizations, of why we don’t usually see those two together.
And if we think about from a workflow perspective, you have engineering or whoever’s building the product, handing over documentation of what they’re building to techcomm and techcomm is taking all of that and then building out their documentation, and then that documentation then gets handed to L&D for them to then say, “Well, how do we contextualize this and build all the best practices around it and recommendations and learning experiences?” So there is a little bit of a waterfall effect for how a product moves through the organization. I think those are the things that really contribute to it being siloed and running in parallel. SO: Yeah. And I mean many, many organizations, the presence of engineering documentation or product design documentation is also a big question mark, but we’ll set that aside. And I think the key point here is that learning content, and you’ve said this twice already, learning content in general and delivery of learning content is about experience. What is the learning experience? How does the learner interact with this information and how do we bring them from, they don’t understand anything to they can capably do their job? The techcomm side of things is more of a point of need. You’re capable enough but you need some reference documentation or you need to know how to log into the system or various other things. But techcomm to your point, tends to be focused much less on experience and much more on efficiency. How do we get this out the door as fast as possible to ship it with the product? Because the product’s shipping and if you hold up the product because your documentation isn’t ready, very, very bad things will happen to you. MB: Bad, bad, very bad. SO: Not a good choice. MB: It’s not a good look. It’s not a good look. 
SO: Now, what’s interesting to me is, and this sort of ties into some of the conversations we have around pre-sales versus post-sales marketing versus techcomm kinds of things, as technical content has moved into a web experience, online environment, and all the rest of it, it has shifted more into pre-sales. People read technical documentation, they read that content to decide whether or not to buy, which means the experience matters more. And conversely, the learning content has fractured into classroom learning and online instructor-led and e-learning and a bunch of things I’m not even going to get into, and so it has fractured into multi-channel. So learning evolved from the classroom into lots of different channels, where techcomm evolved from print into lots of different channels, mostly online. And so the two are kind of converging, where techcomm needs to be more interested in experience and learning content needs to be more interested in efficiency, which brings us then to: can we meet in the middle, and what does it look like to apply some of the structured authoring principles to learning content? We’ve talked a lot about making techcomm better and improving the experience. So now let’s f
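The structured-reuse idea Sarah and Mike are circling, one pool of consistent, structured topics assembled into both techcomm deliverables and learning experiences, can be sketched as single-sourcing. The topic names and `assemble` function below are illustrative, not any particular CCMS’s API; the oil-change topics echo Mike’s example from earlier in the episode.

```python
# Minimal sketch of single-sourcing: one shared pool of structured topics,
# assembled into different deliverables (a techcomm manual and a learning
# module) instead of copy-and-pasted into separate PowerPoints and PDFs.
topics = {
    "oil-change-steps": "1. Drain the oil. 2. Replace the filter. 3. Refill.",
    "cross-threaded-bolt": "If the drain bolt is cross-threaded, stop and re-tap the threads.",
    "why-oil-matters": "Engine oil reduces friction and carries away heat.",
}

def assemble(deliverable, topic_ids):
    """Build one output channel from shared components."""
    body = "\n\n".join(topics[t] for t in topic_ids)
    return f"== {deliverable} ==\n{body}"

# The same base content serves both audiences; only the assembly differs.
manual = assemble("Service Manual", ["oil-change-steps", "cross-threaded-bolt"])
course = assemble("Training Module", ["why-oil-matters", "oil-change-steps"])
```

Because each component lives in exactly one place, a fix to the oil-change steps reaches every deliverable that reuses it.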


Every click counts: Uncovering the business value of your product content

Every time someone views your product content, it’s a purposeful engagement with direct business value. Are you making the most of that interaction? In this episode of the Content Operations podcast, special guest Patrick Bosek, co-founder and CEO of Heretto, and Sarah O’Keefe, founder and CEO of Scriptorium, explore how your techcomm traffic reduces support costs, improves customer retention, and creates a cohesive user experience. Patrick Bosek: Nobody reads a page in your documentation site for no reason. Everybody that is there has a purpose, and that purpose always has an economic impact on your business. People who are on the documentation site are not using your support, which means they’re saving you a ton of money. It means that they’re learning about your product, either because they’ve just purchased it and they want to utilize it, so they’re onboarding, and we all know that utilization turns into retention and retention is good because people who retain pay us more money, or they’re trying to figure out how to use other aspects of the system and get more value out of it. There’s nobody who goes to a doc site who’s like, “I’m bored. I’m just going to go and see what’s on the doc site today.” Every person, every session on your documentation site is there with a purpose, and it’s a purpose that matters to your business. Related links: Heretto Contact Heretto to walk through their support evaluation sheet with an expert! The business case for content operations (white paper) Curious about the value of structured content operations in your organization? Use our content ops ROI calculator. Get monthly insights on structured content, futureproof content operations, and more with our Illuminations newsletter LinkedIn: Patrick Bosek Sarah O’Keefe Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. 
Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hi, everyone, I’m Sarah O’Keefe and I’m here today with our guest, Patrick Bosek, who is one of the founders and the CEO of Heretto. Welcome. Patrick Bosek: Thanks, Sarah. It’s lovely to be here. I think this may be my third or fourth time getting to chat with you on the Scriptorium podcast. SO: Well, we talk all the time. This is talking and then we’re going to publi- no, let’s not go down that road. Of all the things that happen when we’re not being recorded. Okay. Well we’re glad to have you again and looking forward to a productive discussion here. The theme that we had for today was actually traffic and I think web traffic and why you want traffic and where this is going to go with your business case for technical documentation. So, Patrick, for those of you that have not heard from you before, give us a little bit of background on who you are and what Heretto is and then just jump right in and tell us about web traffic. PB: No small requests from you, Sarah. SO: Nope. PB: So I’m Patrick Bosek. I am the CEO and one of the co-founders of Heretto. Heretto is a CCMS based on DITA. It’s a full stack that goes from the management and authoring layer all the way up to actually producing help sites. So as you’re moving around the internet and working with technology companies, primarily help.yourproduct.com or help.yourcompany.com, it might be powered by Heretto. That’s what we set out to do.
We set out to do it as efficiently as possible, and that gives me some insight into traffic, which is what we’re talking about today, and how that can become a really important and powerful point when teams are looking to make a case for better content operations, showing up more, producing more for their customers, and being able to get the funding that allows them to do all those great things that they set out to do every day. SO: So here we are as content ops, CCMS people, and we’re basically saying you should put your content on the internet, which is a fairly unsurprising kind of priority to have. But why specifically are you saying that web traffic and putting that content out there and getting people to use the content helps you with your sort of overall business and your overall business case for tech docs? PB: Yeah. So I want to answer that in a fairly roundabout way because I think it’s more fun to get there by beating around the bush. But I want to start with something that seems really obvious, but for some reason it isn’t in tech pubs. So first of all, if you went to an executive and you said, I can double the traffic to your website, and then you put a number in front of them, probably say a hundred thousand dollars, almost any executive at any major organization is like, a hundred thousand dollars? Of course, I’ll double my web traffic. That’s a no-brainer. Right? And when they’re thinking of website, they’re thinking of the marketing site and how important traffic is to it. So intrinsically, everybody pays quite a bit of money and by transference puts a lot of value on the traffic that goes to the website, as they should. It’s the primary way we interact with organizations asynchronously today. Digital experience is really important. But if you went to an executive and you said, I can double your traffic to your doc site, they would probably be like, wait a second. But that makes no sense because nobody reads the docs for no reason.
I want to repeat that because I think that’s a really important thing for us, as technical content creators, to not only understand, I think we understand it, but to internalize it and start to represent it more in the marketplace and to our businesses and to the other stakeholders. People might show up at your marketing site because they misclick an advertisement. They might show up at your marketing site because they Googled something and your marketing blog caught them and they looked at it. So there’s probably a lot of traffic where people are just curious. They’re just window shopping. Maybe they’re there by mistake. But nobody shows up at your documentation site, nobody reads a page in your documentation site, for no reason. Everybody that is there has a purpose and that purpose always has an economic impact on your business. People who are on the documentation site are either not utilizing your support, which means that they’re saving you a ton of money. It means that they’re learning about your product, either because they’ve just purchased it and they want to utilize it, so they’re onboarding, and we all know that utilization turns into retention and retention is good because people who retain pay us more money, or they’re trying to figure out how to use other aspects of the system and get more value out of it. There’s nobody who goes to a doc site who’s like, I’m bored. I’m just going to go and see what’s on the doc site today. So every person, every session on your documentation site is there with a purpose and it’s a purpose that matters to your business. So that’s where I want to start. That’s why it matters. That’s why I think traffic is important, but you look like you want to contribute here, so. SO: We talk about enabling content. Right? Tech docs are enabling content. They enable people to do a thing, and this is what you’re saying. People don’t read tech docs for fun. I know of, actually, I do know one person.
One person I have met in my life who thought it was fun to read tech docs. One. PB: Okay. So to be fair, I also know somebody who loves reading release notes. SO: Okay. So two in the world. PB: But hang on, hang on. Part of the thing is this person is an absolute, can I say fanboy? They’re a huge fan of this product and they talk about this product in the context of the release notes. So even though this person loves the release notes, the release notes are a way that they go and generate word-of-mouth and they’re promoting your product because of the thing they saw in the release notes. The release notes are a marketing piece that goes through this person. All the people who are your biggest fans are going to tell people about that little thing they found in your release notes. Sorry. Anyways. SO: So again, they’re trying to learn. Okay. But, so two people in the universe that we know of read docs for fun. Cool. Everybody else is reading them, as you said, for a purpose. They’re reading them because they are blocked on something or they need information, usually it’s they need information. And then you slid in that when they do this, this is producing, providing value to the organization or saving the organization money. So what’s that all about? PB: Well, I mean there’s a number of ways to look at this. You want to start with the hard numbers, the accounting stuff, the stuff you can take to the CFO. That stuff is actually, it’s pretty easy to do. You can do it in just a couple of lines. So every support ticket costs a certain amount of money. Somebody in your organization knows that number, if your organization is sufficiently large, and sufficiently large is like 20 people probably. Maybe that’s not that small, but if you’re a couple hundred people, everybody knows what that number is. So it’s very easy to figure out how much it costs when somebody actually goes to the support.
SO: Somewhere between $20 and $50 is kind of the industry average per call. You may have better numbers internally in your organization, but if you don’t or you don’t know where to
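The support-deflection math Patrick and Sarah are walking through, where every ticket has a known cost and doc-site sessions that replace tickets save that cost, really does take just a couple of lines, as Patrick says. The session count and deflection rate below are illustrative assumptions; only the $20–$50 per-ticket range comes from the episode.

```python
def support_deflection_savings(doc_sessions, deflection_rate, cost_per_ticket):
    """Estimate annual support savings from self-service documentation.

    doc_sessions    -- documentation site sessions per year
    deflection_rate -- fraction of sessions assumed to replace a support ticket
    cost_per_ticket -- fully loaded cost of one support contact
    """
    deflected_tickets = doc_sessions * deflection_rate
    return deflected_tickets * cost_per_ticket

# Illustrative: 100,000 sessions/year, 10% of which deflect a ticket,
# at $35 per ticket (midpoint of the $20-$50 industry range cited above).
savings = support_deflection_savings(100_000, 0.10, 35)
print(f"Estimated annual savings: ${savings:,.0f}")
```

Even a conservative deflection rate turns doc-site traffic into a hard number a CFO can act on, which is the point of the exercise.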


AI in localization: What could possibly go wrong? (podcast)

In this episode of the Content Operations podcast, Sarah O’Keefe and Bill Swallow unpack the promise, pitfalls, and disruptive impact of AI on multilingual content. From pivot languages to content hygiene, they explore what’s next for language service providers and global enterprises alike. Bill Swallow: I think it goes without saying that there’s going to be disruption again. Every single change, whether it’s in the localization industry or not, has resulted in some type of disruption. Something has changed. I’ll be blunt about it. In some cases, jobs were lost, jobs were replaced, new jobs were created. For LSPs, I think AI is going to, again, be another shift, the same that happened when machine translation came out. LSPs had to shift and pivot how they approach their bottom line with people. GenAI is going to take a lot of the heavy lifting off of the translators, for better or for worse, and it’s going to force a copy edit workflow. I think it’s really going to be a model where people are going to be training and cleaning up after AI. Related links: Going global: Getting started with content localization Lessons Japan taught me about content localization strategy Conquering content localization: strategies for success (podcast) The Scriptorium approach to localization strategy Get monthly insights on structured content, futureproof content operations, and more with our Illuminations newsletter LinkedIn: Sarah O’Keefe Bill Swallow Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. 
Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hey, everyone. I’m Sarah O’Keefe, and I’m here today with Bill Swallow. Bill Swallow: Hey there. SO: They have let us out of the basement. Mistakes were made. And we have been asked to talk to you on this podcast about AI in translation and localization. I have subtitled this podcast, What Could Possibly Go Wrong? As always, what could possibly go wrong, both in this topic and also with this particular group of people who have been given microphones. So Bill. BS: They’ll take them away eventually. SO: They will eventually. Bill, what’s your generalized take right now on AI in translation and localization? And I apologize in advance. We will almost certainly use those two terms interchangeably, even though we fully understand that they are not. What’s your thesis? BS: Let’s see. It’s still early. It is promising. It will likely go wrong for a little while, at least. Any new model that translation has taken has first gone wrong before it corrected and went right, but it might be good enough. I think that pretty much sums up where I’m at. SO: Okay. So when we look at this … Let’s start at the end. So generative AI, instead of machine translation. Let’s walk a little bit through the traditional translation process and compare that to what it looks like to employ GenAI or AI in translation. BS: All right. So regardless of how you’re going about traditional translation, there is usually a source language that is authored. It gets passed over to someone who, if they’re doing their job correctly, has tools available to parse that information, essentially stick it in a database, perhaps do some matching against what’s been translated before, fill in the gaps with the translation, and then output the translated product. 
On the GenAI side, it really does look like you have a bit of information that you’ve written. And it just goes out, and GenAI does its little thing and bingo, you got a translation. And I guess the real key is what’s in that magic little thing that it does. SO: Right. And so when we look at best practices for translation management up until this point, it’s been, as you said, accumulate assets, accumulate language segment pairs, right? This English has been previously translated into German, French, Italian, Spanish, Japanese, Korean, Chinese. I have those pairs, so I can match it up. And keeping track of those assets, which are your intellectual property, you as the company put all this time and money into getting those translations, where are those assets in your GenAI workflow? BS: They’re not there, and that’s the odd part about it. SO: Awesome. So we just throw them away? What? BS: I mean, they might be used to seed the AI at first, just to get an idea of how you’ve talked about things in the past. But generally, AI is going to consume its knowledge, it’s going to store that knowledge, and then it’s going to adapt it over time. When it’s asked for something, it’s going to produce it with the best way it knows how, based on what it was given. And it’s going to learn things along the way that will help it improve or not improve over time. And that part right there, the improve or not improve, is the real catch in why I say it might be good enough but it might go wrong as well, because GenAI tends to … I don’t want to say hallucinate because it’s not really doing that at this stage. It’s taking all the information it has, it’s learning things about that information, and it’s applying it going forward. And if it makes an assumption based on new information that it’s fed, it could go in the wrong direction. SO: Yeah. I think two things here. 
One is that what we’re describing applies whether you have an AI-driven workflow inside your organization where you’re only allowing the AI to access, for example, your prior translations. So a very limited corpus of knowledge, or if you’re sending it out like all of us are doing, where you’re just shoving it into a public-facing translation engine of some sort and just saying, “Hey, give me a translation.” In the second case, you have no control over the IP, no control over what’s put in there and how it’s used going forward, and no control over what anyone else has put in there, which could cause it to evolve in a direction that you do or do not want it to. So the public-facing engines are very, very powerful because they have so much volume, and at the same time, you’re giving up that control. Whereas if you have an internal system that you’ve set up … And when I say internal, I mean private. It doesn’t have to be internal to your organization, but it might be that your localization vendor has set up something for you. But anyway, gated from the generalized internet and all the other people out there. BS: We hope. SO: Or the other content. You hope. Right. Also, if you don’t know exactly how these large language models are being employed by your vendors, you should ask some questions, some very pointed questions. Okay, we’ll come back to that, but first I want to talk a little bit about pivot languages. So again, looking at traditional localization, you run into this thing of … Basically many, many, many organizations have a single-language authoring workflow and a multi-language translation workflow. So you write everything in English and then you translate. So all of the translations are target languages, they are downstream, they are derived from the English, et cetera. Now let’s talk a little bit about… First of all, what is a multilingual workflow? Let’s start there. What is that? BS: Okay. 
So yeah, the traditional model usually is author one language, which maybe 90% of the time is English, whether it’s being authored in an English-speaking country or not, and then it’s being pushed out to multiple different languages. In a multilingual environment, you have people authoring in their own native language, and it should be coming in and being translated out as it needs to be to all the other target languages. Traditionally, that has been done using pivot languages because infrastructures were built. It is just the way it is. It was built on English. English has been used as a pivot language more than any other language out there. There are some outliers that use a different pivot language for a very specific reason, but for the sake of this conversation, English is the predominant pivot language out there. SO: So I have a team of engineers in South Korea. They are writing in Korean. And in order to get from Korean to, let’s say, Italian, we translate from Korean to English and then from English to Italian, and English becomes the pivot language. And the generalized rationale for this is that there are more people collectively that speak Korean and English and then English and Italian than there are people that speak Korean and Italian. BS: With nothing in between, yeah. SO: With nothing in between. Right. Directly. So bilingual in those two languages is a pretty small set of people. And so instead of hiring the four people in the world that know how to do that, you pivot through English. And in a human-driven workflow, that makes an awful lot of sense because you’re looking at the question of where do I find English … Sorry, not English, but rather Italian and Korean speakers that can do translation work for my biotech firm. So I need a PhD in biochemistry that speaks these two languages. I think I’ve just identified a specific human in the universe. So that’s the old way. What is a multilingual workflow then? 
BS: So yeah, as we were discussing, the multilingual workflow is something where you have two, three, four different language sources that you’re authoring in. So you’re authoring in English, you have people authoring in German, you have people authoring in Korean and, let’s say, Italian. And they’re all workin
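The pivot-language routing described in this episode (Korean to English, then English to Italian, because direct Korean–Italian translators are scarce) is just composition of two translation steps. The sketch below uses tagging stand-ins rather than real translators, so the function names are illustrative only:

```python
# Pivot translation as function composition: when no direct ko->it path
# exists, route through English, as discussed in the episode.

def make_pivot_translator(src_to_pivot, pivot_to_tgt):
    """Compose two translation steps through a pivot language."""
    def translate(text):
        return pivot_to_tgt(src_to_pivot(text))
    return translate

# Stand-in translators that just tag the text with the language hop,
# so the routing is visible in the output.
ko_to_en = lambda t: f"en({t})"
en_to_it = lambda t: f"it({t})"

ko_to_it = make_pivot_translator(ko_to_en, en_to_it)
hop = ko_to_it("안녕하세요")   # passes through English on the way to Italian
```

The composed function makes the trade-off visible: every target language downstream of the pivot inherits whatever was lost or distorted in the first hop.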

08-04 | 29:19

Help or hype? AI in learning content

Is AI really ready to generate your training materials? In this episode, Sarah O’Keefe and Alan Pringle tackle the trends around AI in learning content. They explore where generative AI adds value—like creating assessments and streamlining translation—and where it falls short. If you’re exploring how AI can fit into your learning content strategy, this episode is for you. Sarah O’Keefe: But what’s actually being said is AI will generate your presentation for you. If your presentation is so not new, if the information in it is so basic that generative AI can successfully generate your presentation for you, that implies to me that you don’t have anything interesting to say. So then, we get to this question of how do we use AI in learning content to make good choices, to make better learning content? How do we advance the cause? Related links: Synthetic audio example: Strategies for AI in technical documentation (podcast, English version) LearningDITA: DITA-based structured learning content in action (podcast) How CompTIA rebuilt its content ecosystem for greater agility and efficiency (webinar) Transform L&D experiences at scale with structured learning content (podcast) Overview of structured learning content Get monthly insights on structured content, futureproof content operations, and more with our Illuminations newsletter LinkedIn: Sarah O’Keefe Alan Pringle Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. 
Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Alan Pringle: Hey everybody, I am Alan Pringle, and today I’m talking to Sarah O’Keefe. Sarah O’Keefe: Hey everybody, how’s it going? AP: And today, Sarah and I want to discuss artificial intelligence and learning content. How can you apply artificial intelligence to learning content? We’ve talked a whole lot, Sarah, about AI and technical communication and product content, let’s talk more about learning and development and how AI can help or maybe not help putting together learning content. So how is it being used right now? Let’s start with that. Do you know of cases? I know of one or two, and I’m sure you do too. SO: Yeah. So the big news, the big push, is AI in presentations. So how can I use AI to generate my presentation? How can it help me put together my slides? Now, the problem with that from our point of view, for those of you that have been listening to what we’re saying about AI, this will be no surprise whatsoever, I think this is all wrong. It’s the wrong strategy, it’s the wrong approach. If you want to take AI and generate an outline of your presentation and then fill in that outline with your knowledge, that’s great, I think that’s a great idea. Also, if you have existing really good content and you want to take that content and generate slides from it, I don’t have a problem with that. But what’s actually being said is AI will generate your presentation for you. If your presentation is so not new, if the information in it is so basic that generative AI can successfully generate your presentation for you, that implies to me that you don’t have anything interesting to say. 
AP: And you’re going to say it with very pretty generated images and a level of authority that makes it sound like there’s something that’s actually there when it’s not. SO: Oh, yeah. It’ll look very plausible and authoritative and it will be wrong, because that’s how this generative stuff- AP: Or not even wrong, surface-skimmy, just nothing of any real value there. SO: Yeah. So then, we go into this question of, how do we use AI in learning content to make good choices, to make better learning content, how do we advance the cause? AP: Well, there’s that one case where we have done it, because we have our own learning site, LearningDITA.com, and we were trying to think about ways to apply AI to our efforts to create courses, to tell people how to use the DITA standard for content. And I think you and I both agree, one of the strengths of artificial intelligence is its ability to summarize and synthesize things, I don’t think that’s controversial. So if you think about writing assessments from existing content in a way that’s summarizing, so one of us suggested to our team, why don’t y’all try that and see what these AI engines can do to generate questions from our existing lesson content. And then, of course, we suggested that they—the people who were creating the courses—review them. So our folks reviewed them, and I think some of the questions were actually quite usable, decent. SO: And some of them were not. AP: True, this is true. SO: But the net of it was they saved a bunch of time, because they said, “Generate a bunch of assessment questions,” they went through them, they fixed the ones that were wrong, they improved the ones that were maybe not the greatest, they got a couple that were actually pretty usable. 
And so, it took less time to write the assessments than it would’ve taken to do that process by hand, to slowly go through the entire corpus to say, “Okay, what are the key objectives and how do I map that to the assessments?” So that’s a pretty good example, I think, of using generative AI, as you said, to summarize down, to synthesize existing content. On the LMS side, so when we start looking at learning management systems and how the learning content goes into the LMS and then is given or delivered to the learner, there are some big opportunities there, because if you think about what it means for me as a learner, as a person taking the course, to work my way through course material, maybe the assumptions that the course developer made about my expertise were too optimistic. I’m really struggling with this content, it’s trying to teach me how to use Photoshop and I am just not good at Photoshop. There’s this idea of adaptive learning, this is not an AI concept, the idea behind adaptive learning is that if you’re doing really well, it goes faster. If you’re struggling, it goes deeper, or maybe you do better with videos than you do with text, or vice versa. It’s that adapt to the learner and to the learner’s needs in order to make the learning more effective. Now, if you think about that, that is a matter of uncovering patterns in how the learner learns and then delivering a better fit for those patterns. Well, that’s AI. AI and machine learning do a great job of saying, “Oh, you seem to be preferring video, so I’m going to feed you more video.” Now, we can do this by hand or we can build it in with personalization logic, but you can also do this at scale with AI and machine learning. So there are definitely some opportunities to improve adaptive learning with an AI backbone. 
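The adaptive-learning idea Sarah describes, detect which format a learner succeeds with and serve more of it, can be reduced to a toy preference counter. The event shape and the 0.8 score threshold below are invented for illustration; real systems use far richer signals than this.

```python
from collections import Counter

# A toy adaptive-learning loop: count which content format each completed
# activity used when the learner scored well, then prefer that format next.

def next_format(history, formats=("video", "text", "interactive")):
    """Pick the format with the best track record for this learner."""
    wins = Counter(
        event["format"] for event in history if event["score"] >= 0.8
    )
    # With no signal yet, max() falls back to the first listed format.
    return max(formats, key=lambda f: wins.get(f, 0))

history = [
    {"format": "video", "score": 0.9},
    {"format": "text", "score": 0.4},
    {"format": "video", "score": 0.85},
]
choice = next_format(history)   # this learner is doing better with video
```

As Alan notes next, none of this works unless the content itself carries metadata identifying format, topic, and difficulty, which is the structured-content prerequisite.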
AP: I think it’s worth noting at this point, when you’re talking about gathering the data to make, I hate to, I’m going to personalize AI, so it can make these decisions or do the synthesis, there’s got to be intelligence that’s built into your content, and that goes all the way back to the content creation, going back from the presentation layer, back to how you’re creating your content. And again, this loops back, in my mind, to the idea of building in that intelligence with structured content, that is your baseline. SO: Yeah. I know we’re just relentless on this drum of you need structured content for learning content, but it’s because of all these use cases, because as you try to scale this stuff, this is what you’re going to run into. I also see a huge opportunity for translation workflows specifically for learning content. So if you look at translation and multilingual delivery, there’s a lot of AI and machine learning going on in machine translation. So now, we think a little bit about what that means for learning content, and of course, all of the benefits that you get just in general from machine translation still apply, but the one that I’m looking at that I think would be really, really interesting to apply to learning is learning has a lot of audio in it, audio and video, but specifically audio, and audio typically is going to be bound to a language. You’re going to have a voiceover, you’re going to have a person saying, “Here’s what you need to know, and I’m going to show you this screenshot,” or, “I’m going to show you how to operate this machine.” And so, you’ve got audio and potentially captions that are giving you the text or the audio that goes with that video. Okay, well, we can translate the captions, that’s relatively easy, but what about the voiceover? And the answer could be that you do synthetic voiceovers. 
So you take your original, let’s say, English audio and you turn it into French, Italian, German, Spanish or whatever else you need, but you synthesize the voice instead of re-recording. Now, is it going to be as good as a human, an actual human person who has expression and emotion in their delivery? No. Is it better than the alternative where you don’t provide it in the target language at all? Probably, yes. And when we start talking about machines, “Here is how to safely operate this machine,” the pretty good synthetic voice in target language is probably better than, “Here it is in English, deal with it,” or, “Here it is in English with a translated caption in German, but no audio.” I think that’s what we’re looking at is, is the synthetic audio good enough that it will improve the learner experience, and I think the answer is yes. AP: I’m turning this ov

07-21 | 17:48

Tool or trap? Find the problem, then the platform

Tempted to jump straight to a new tool to solve your content problems? In this episode, Alan Pringle and Bill Swallow share real-world stories that show how premature solutioning without proper analysis can lead to costly misalignment, poor adoption, and missed opportunities for company-wide operational improvement. Bill Swallow: On paper, it looked like a perfect solution. But everyone, including the people who greenlit the project, hated it. Absolutely hated it. Why? It was difficult to use, very slow, and very buggy. Sometimes it would crash and leave processes running, so you couldn’t relaunch it. There was no easy way to use it. So everyone bypassed using it at every opportunity. Alan Pringle: It sounds to me like there was a bit of a fixation. This product checked all the boxes without actually doing any in-depth analysis of what was needed, much less actually thinking about what users needed and how that product could fill those needs. Related links: How humans drive content operations (recorded webinar & transcript) Brewing a better content strategy through single sourcing (podcast) The Scriptorium approach to content strategy Get monthly insights on structured content, futureproof content operations, and more with our Illuminations newsletter LinkedIn: Alan Pringle Bill Swallow Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. 
End of introduction Bill Swallow: Hi, I’m Bill Swallow. Alan Pringle: And I’m Alan Pringle. BS: And in this episode we’re going to talk about the pitfalls of putting solutioning before doing proper analysis. And Alan, I’m going to kick this right off to you. Why should you not put solutioning before doing proper analysis? AP: Well, it’s very shortsighted and oftentimes it means you’re not going to get the funding that you need to do the project to solve the problems that you have. And with that, we can wrap this podcast up because there’s not a whole lot more to talk about here, really. But no, seriously, we do need to dive into this. It is very easy to fall into the trap of taking a tools-first point of view. You’ve got a problem, it’s really weighing on you. So it’s not unusual for a mind to go, this tool will fix this problem, but it’s really not the way to go. You need to go back many steps, shut that part of your brain off and start doing analysis. And Bill, you’ve got an example, I believe, of how taking a tools-first point of view didn’t help back in a previous job you had. BS: I do, and I’m not going to bury the lede here, but they didn’t do their homework upfront to see how people would use the system. So I worked for a company many, many, many years ago that decided to roll out, and I will name the product: they rolled out Lotus Notes. AP: You’re killing me. That’s also very old, but we won’t discuss that angle. BS: But they did so because it checked every single box, every single box on the needs list, it did email, it had calendar entries, it did messaging, notes, documents, linking, sharing, robust permissions, and you even had the ability to create mini portals for different departments and projects. So on paper, it looked like a perfect solution. And everyone, including the people who greenlit the implementation of Lotus Notes, hated it. Absolutely hated it. Why did they hate it? It was difficult to use. It was very slow. It was very buggy. 
Sometimes it would crash and leave processes running, so you couldn’t relaunch it. There was no easy way to use it. Back at that point, we had PDAs, personal digital assistants, and very soon after that we had the birth of the smartphone. There was no easy way to use it in these mobile devices except for maybe hooking up to email. It didn’t fit how we were working at all. While it shouldn’t count, it really wasn’t very pretty to look at either. So everyone bypassed using it at every opportunity. They would set up a Wiki instead of using the Lotus Notes document or notes portal that they had. They would use other messaging services. This is back during Yahoo Messenger and ICQ. But yes, we had that going on and in the end it was discontinued after its initial three-year maintenance period ended because nobody liked it. AP: Yeah, so sounds to me like there was a bit of a fixation. This product checks all the boxes without actually doing any in-depth analysis of what you needed, much less actually thinking about what users needed and how that product could fill those needs. And I think it’s worth noting too, think about this from an IT department point of view, because they’re often a partner on any kind of technology project, especially if new software is going to be involved because they’re going to be the ones a lot of times that say yay or nay, this tool is a duplicate of what we already have. Or no, you have some special requirements and we do need to buy a new system. So if I as an IT person, the person who vets tools hears from someone, and let’s get back into the content world, I need a way to do content management and I need to have a single source of truth and I need to be able to take the content that is my single source of truth and then publish to a bunch of different formats. This is a very common use case. I would be more interested as an IT person in hearing that than hearing I have to have a component content management system. 
There’s a subtle difference there. And I think, and this is possibly unfair and grouchy of me, but that is me, grouchy and unfair. If I hear someone come to me, I need this tool instead of I have these issues and I have these requirements. It sounds selfish and half-baked. BS: It does. AP: And again, I am thinking about this from the receiving end of these queries, of these requests, but I also want to step back into the shoes of the person making a request. You can be so frustrated by your inefficiency and your problems, you latch onto the tools. So I completely understand why you want to do that, but you are basically punching yourself in the face when you go and make a request that is, I need this tool instead of I have these issues, these requirements, and I need to address these things. It’s subtle, but it’s different. BS: It’s very different. And also if you do take that approach of looking at your needs, you find that there’s more to uncover than just fixing the technological problem itself. AP: Yes. BS: There might be a workflow problem in your company that you may acknowledge, you may not know it’s quite there. Once you start looking at the requirements and looking at the flow of how you need to work, and how you need any type of new system to work, you start seeing where the holes are in your organization. Who does what? What does a handoff look like? Is it recorded? What does the review process look like? When does it go out for formal review? What does the translation workflow look like? And you start seeing that there may be a lot of ad hoc processes in place currently that could be fixed as well. AP: True. And I also think when you’re talking about solving problems and developing your requirements from that problem solving, you are potentially opening up the solution to more than just your department, your group. It can possibly be a wider situation there, too. 
And also by presenting it as a set of problems and requirements to address those problems, there may be already a tool in-house at your company that you don’t know about or there may be part of a suite of tools, and if you add another component to it will address your problem instead of just buying something completely outright. And we’ve seen this before, where it turned out there was an incumbent vendor that had some related tools already at the company, and that company also had a tool that could solve the problems that our client had or our prospect had. We’ve had both prospects and clients have this issue, so it doesn’t make sense, therefore, to go and say, I need this tool, which is essentially a competitor of what’s already in place. You’re going to have a very uphill battle trying to get that in place. It is also very easy, as someone who has already done a content ops improvement project, to understand this tool is good. It saves me at this company, but you’ve got to be careful of thinking just because it helped you over at company A. Now you’re at company B, it may not be a fit for company B culturally, there may be already something in-house. So you’ve got to let go of those preconceived notions. I am not saying that the tool you used before was bad. It may be the greatest thing ever, but there may be cultural issues, political issues, and even IT tech issues that mean you cannot pick that tool. So why are you pushing on it when you have got all of these things against you? Again, it is easy to fall into these traps. Don’t do it. BS: Yep. On the flip side of that, we had a situation where a customer of ours years ago was looking for a particular system, a CCMS, component content management system, and they had what they perceived to be a very hard requirement of being able to connect to another very specific system. AP: Yes, I remember this. It was about 10 or 11 years ago. 
BS: And it was such a hard requirement that it basically threw out all of their options except for one. And we got the system working the way they needed it to. It needed quite a bit of customization, especiall

06-02 | 13:25

Deliver content dynamically with a content delivery platform

Struggling to get the right content to the right people, exactly when and where they need it? In this podcast, Scriptorium CEO Sarah O’Keefe and Fluid Topics CEO Fabrice Lacroix explore dynamic content delivery—pushing content beyond static PDFs into flexible platforms that power search, personalization, and multi-channel distribution. When we deliver the content, whether it’s through the APIs or the portal that you’ve built that is served by the platform, we render the content in a way that we can dynamically remove or hide parts of the content that would not apply to the context, the profile of the user. That’s the magic of a CDP. It’s delivering that content dynamically. — Fabrice Lacroix Related links: Scriptorium: Personalized content: Steps to success (white paper) Scriptorium: AI in the content lifecycle (white paper) Fluid Topics, an AI-powered content delivery platform Fluid Topics: What is Content Operations and Why is it Important? Get monthly insights on structured content, futureproof content operations, and more with our Illuminations newsletter LinkedIn: Sarah O’Keefe Fabrice Lacroix Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hi everyone, I’m Sarah O’Keefe and I’m here today with the CEO of Fluid Topics, Fabrice Lacroix. Fabrice, welcome. Fabrice Lacroix: Hey. Hi Sarah. 
Nice being with you today. Thanks for welcoming me. SO: It’s nice to see you. So as many of you probably know, Fluid Topics is a content delivery portal or possibly a content delivery platform. And we’re going to talk about the difference between those two things as we get into this. So Fabrice, tell us a little bit about Fluid Topics and what that content delivery portal, or maybe platform, is. Which one is it? What do you prefer? FL: For us, it’s platform definitely. But you’re right, it depends on where people are in this evolution process, on how they deliver content. And for many, many customers, the P stands for portal. You’re right, because that is the first need. That’s how they come to us, because they need a portal. SO: Okay, so in your view, the portal is a front end, an access point for content, and then what makes it a platform rather than a portal? FL: Probably because the goal that many companies have to achieve is delivering that content where it’s needed. It’s many places most of the time. So it’s not just the portal itself, and that’s where, to solve the problem of disseminating this content to many touch points, you need a platform. The portal is one touch point only, but when you start having multiple touch points like doing in-product help or you want to feed your helpdesk tool or field service application or whatever sort of chatbot somewhere else, whatever use case you have that is not just the portal itself, then that becomes a platform thing. SO: So looking at this from our point of view, so many of our projects start with component content management systems, CCMSs, which are the back end. This is where you’re authoring and managing and taking care of all your information, and then you have to deliver it. And one of the ways that you could solve your delivery front-end would be with a content delivery platform such as Fluid Topics. Okay. So then, what are the prerequisites, when you start thinking about this? 
So our hypothetical customer has content obviously, and they have, we’re going to say probably a back-end content management system of some sort, probably. FL: Most of the time. SO: Most of the time. FL: Depends where you go, depends on the maturity and the industry. If you go to some manufacturing somewhere, they mostly still are maybe on Word and FrameMaker or something like that, and InDesign, and then they generate PDFs. SO: So maybe we have a back-end authoring, well, we have an authoring environment of some sort on the back-end. Maybe it’s a CCMS, maybe it’s something not like that. And now we’re going to say, all right, we’re going to take all this content that we’ve created and we’re going to put it into the CDP, the content delivery platform. Now, what does success look like? What do you need from that content or from the project to make sure that your CDP can succeed in doing what it needs to do? FL: The first answer to that question that comes to my mind is no PDFs. I mean, if you look at it, don’t laugh at me. If you look at it from an evolutionary perspective, it’s like regardless of how people were writing before, it was not CCMS, mostly unstructured. And at the end of the day, people were pressing a button and generating PDFs and putting the PDF somewhere, CRM, USB key, website for download. But managing the content unstructured was painful. That’s where you start working with the CCMS, because you have multiple versions, variants, you want to work in parallel, you want to avoid copy paste, translation, so the whole story around that. So then companies start and they start moving their content into a CCMS. All of the content, part of the content, but they start investing in a modern way of managing, creating their content. But again, if you look at it once they have made that move, most of those companies 10, 15 years ago probably were still pressing a button and still generating PDFs. 
And then they realized that they had solved one problem for themselves, which is streamlining the production capability and managing the content in a better way. But from a consumption perspective, regardless of whether you work with Word or FrameMaker or in DITA with the most advanced CCMS on the market, if you still deliver PDF, you are not improving the life of your customers. And then people started realizing that, oh yeah, we should do better. So let’s try to output that content in another way than PDFs. And they say, “What else do we have besides PDF? HTML.” And it was like, okay, let’s output HTML. But that HTML is pretty much the same as the PDF. You see what I mean? It’s a static document. Each document was a set of HTML pages. And then they started realizing that they need to reassemble the set of HTML pages into a website. Reassembling zip files of HTML pages on the website is even more painful than just putting PDFs on the website, and then it’s still static HTML. And then you have to put a search on top, and you have to create consistency. And that’s why CDPs have emerged. They solve this need, which is: how do we transition from PDF and static HTML to something that is easier, that ingests all this content, comes with search capabilities, comes with configuration capabilities, and has APIs as well, so that, back to the platform thing, it’s not just a portal but can serve other touchpoints? And because we are in the DITA world, DITA being the Darwin Information Typing Architecture, it’s a very Darwinian process that led to the creation of the CDP, and the need for a CDP is the next step in the process.
And many companies really follow that process: I have to go from my old ways of writing, which are painful and not working, and move to a CCMS, but then realize that it doesn’t solve the real problem of the company, which is: how can I help my customer, my support agents, my field technicians better find the content, better use my content? And that’s where they go, oh, okay, that’s where we need a CDP. SO: Yeah, and I think, I mean, we’ve talked for 20 years about PDFs and all the issues around them, but it’s probably worth remembering that PDF in the beginning was a replacement for a shelf of books, paper books that went out the door. And the improvement was that, instead of shipping 10 pounds, or I’m sorry, what, four kilos, of books, you were shipping, as you said, a CD-ROM, or, this was before USB, a Zip drive. Remember those? FL: Zip drive. SO: A Zip drive. But you were shipping electronic copies of your books, and all you were really doing was shifting the process of printing from the creator, the software, hardware, or product company, to the consumer. So the consumer gets a PDF, they print it, and then that’s what they use. Then we evolved into, oh, we can use the PDF online, we can do full-text search, that’s kind of cool, that was a big step forward. But now, to your point, the way that we consume that information is, for the most part, not printed, and it’s not big PDFs but rather small chunks of information, like a website. So how do we evolve our content into those websites? So then, what does it look like, and I think here we’re talking about the portal specifically, to have a portal for the end user that allows them to get a really good experience in accessing and using and consuming the content that they need to use the product, whatever it may be? What are some of the key things that you need to do, or that you can do? FL: Yeah.
I would say that the main thing a CDP achieves compared to static HTML, because now we have to compare with static HTML, not with PDFs, which are probably still needed if you want to print. I’m not saying that PDF is dead and we should get rid of all PDFs; I’m just saying that when you need to print, you can get the PDF version of a document. But if we compare static HTML with what a CDP brings, we’re trying to make content personalized and contextual. If you pre-generate static HTML pages, it’s one size fit

05-19
32:58

LearningDITA: DITA-based structured learning content in action

Are you considering a structured approach to creating your learning content? We built LearningDITA.com as an example of what DITA and structured learning content can do! In this episode, Sarah O’Keefe and Allison Beatty unpack the architecture of LearningDITA to provide a pattern for other learning content initiatives. Because we used DITA XML for the content instead of the actual authoring in Moodle, we actually saved a lot of pain for ourselves. With Moodle, the name of the game is low-code/no-code. They want you to manually build out these courses, but we wanted to automate that for obvious reasons. SCORM allowed us to do that by having a transform that would take our DITA XML, put it in SCORM, and then we just upload the SCORM package to Moodle and don’t have to do all the painful things of, you know, “Let’s put a heading two here with this little piece of content.” And the key thing is that allowed us to reuse content. — Allison Beatty Related links: Self-paced, online DITA training with LearningDITA.com Structured authoring and XML (white paper), which is also included in our book, Content Transformation Confronting the horror of modernizing content The benefits of structured content for learning & development content Get monthly insights on structured learning content, content operations, and more with our Illuminations newsletter LinkedIn: Sarah O’Keefe Allison Beatty Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. 
Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hi everyone, I’m Sarah O’Keefe. Allison Beatty: And I’m Allison Beatty. SO: And in this episode, we’re focusing in on the LearningDITA architecture and how it might provide a pattern for other learning content initiatives, including maybe the one that you, the listener, are working on. We have a couple of major components in the learningDITA.com site architecture. We have learner records for the users. We have e-commerce, the way we actually sell the courses and monetize them. That is my personal favorite. And then we have the content itself and assorted relationships and connectors amongst all those pieces. So I’m here with Allison Beatty today, and her job is to explain all those things to us because Allison did all the actual work. So Allison, talk us through these things. Let’s start with Moodle. What is Moodle and what’s it doing in the site architecture? AB: Okay. So Moodle is an open-source LMS that we- SO: What’s an LMS? AB: Learning management system, Sarah. SO: Thank you. AB: And we installed Moodle, our own instance of Moodle and customized it as we saw fit for our needs. And that is the component that acts as the layer between the content and the learning experience. So without the Moodle part, it’s just a big chunk of content that you can’t really interact with. And Moodle gives that a place to live. SO: And then Moodle has the learner records, right? AB: Yes. SO: And what about groups? What does that look like? AB: In Moodle, there’s a cohort functionality which allows us to use groups so that a manager can buy multiple seats and assign them to individuals and keep track of their course progress through group registration rather than individual self-service signups. 
SO: So if I were a manager of a group that needs to learn DITA, instead of having to send five or 10 or 50 people individually to our site, I could just sign up once and buy five or 10 or 50 seats in a given course and then assign those via email addresses to all of my people, right? AB: Exactly. SO: Okay. So then, speaking of buying things, we had to build out this e-commerce layer, which, I was apparently traveling the entire time that this was going on, but I heard a lot of discussion about this in our Slack. So what does it look like? What does the commerce piece look like? AB: Yeah. So it is a site outside of the actual learningDITA.com Moodle site that has a connector into Moodle, so that you can buy a course or a group registration in the store, and then you get access to that content in Moodle. SO: So we have this site, this actually separate site, and if you’re in there, you can do things like buy a course or buy a collection of courses or a number of seats. And then what were some of the fun complications that we ran into there? AB: Oh yeah. So the fun complications there were figuring out how to set up an e-commerce site that, A, connected to Moodle so that we could sell the courses, and, B, was able to process taxes and payments and all of that fun stuff. So Moodle has PayPal as a feature just out of the box, in the base Moodle source code. But we wanted to accept credit cards directly, and so that meant some additional layers, which is how we ended up with the store.scriptorium.com site, which is built on WordPress and uses a connector, the aforementioned connector, to make those two sites talk to each other. So the LMS and the e-commerce piece are actually totally separate websites, but they exist within the same system environment.
SO: And most of you listening to this probably don’t care, but one of the things we learned was that digital training, downloadable training content, is sometimes subject to sales tax and sometimes not, depending on the particular state or the particular jurisdiction. So it’s not just, what is sales tax in North Carolina versus what is sales tax in Washington state versus what is it in Oregon? But additionally, in each jurisdiction, is this type of training subject to sales tax or not? So we spent a more than optimal amount of time on figuring out all of those things and making sure we got it right, because I’m extremely interested in making sure that those taxes are done correctly and keep us out of trouble. AB: And the basic PayPal-and-Moodle setup wasn’t going to give us that level of granular control and specification. SO: And typically our customers are looking to pay via credit card. So we’ve got the LMS piece with the learner experience, the actual learning platform. We’ve got the e-commerce piece, the let’s-take-money piece. And then finally we have the content piece. So what does it look like to actually create these courses and create and manage the content that then eventually goes into Moodle? AB: Yeah. So the content does have a single source of truth. It is all authored in DITA XML and stored in a central repository. You can see that content in GitHub. It’s open source. We took the DITA XML and we developed a SCORM transform that we could use to hook the content up into Moodle and be able to use all of the grading and progress and prerequisite type things that we needed to flesh out the actual learning platform. We learned a fun lesson along the way that Moodle does not support SCORM 2004. So that required a little bit of backtracking to make sure that we were getting the data into the correct SCORM version to get into Moodle.
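Because Moodle doesn’t support SCORM 2004, a package generated for it has to declare itself as SCORM 1.2 in its imsmanifest.xml. A minimal sketch of such a manifest follows; the identifiers, titles, and file names here are invented for illustration, while the namespaces and the schemaversion element come from the SCORM 1.2 specification:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="learningdita-course" version="1"
          xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <metadata>
    <schema>ADL SCORM</schema>
    <!-- Declaring 1.2 (not 2004) is what keeps the package Moodle-compatible -->
    <schemaversion>1.2</schemaversion>
  </metadata>
  <organizations default="org1">
    <organization identifier="org1">
      <title>Introduction to DITA</title>
      <item identifier="lesson1" identifierref="res1">
        <title>Lesson 1</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="res1" type="webcontent"
              adlcp:scormtype="sco" href="lesson1/index.html">
      <file href="lesson1/index.html"/>
    </resource>
  </resources>
</manifest>
```

A DITA-to-SCORM transform like the one described here would generate this manifest, plus the HTML content files it points to, and zip them into the package that gets uploaded to Moodle.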
And so because we used DITA XML for the content instead of doing the actual authoring in Moodle, we actually saved a lot of pain for ourselves with Moodle. The name of the game with Moodle is low-code/no-code, and they want you to manually build out these courses. But we wanted to automate that for obvious reasons, and SCORM allowed us to do that by having a transform that would take our DITA XML, put it in SCORM, and then we just upload the SCORM package to Moodle and don’t have to do all the painful things of let’s put a heading two here with this little piece of content. And the key thing is that allowed us to reuse content as well. And then if we need to update the content, all we have to do is replace the SCORM package in Moodle. SO: So currently we have DITA 1.3 content out there. The DITA 2.0 content is under development, and I would say mostly done. We’re mainly waiting for the actual release of those two chunks of content, although those courses are going to be in GitHub in the DITA training, or I think it’s called Learning DITA now, the Learning DITA project. AB: Yep. SO: Separately from that, we’re working on some new courses which are not going to be open sourced, but will be available on Moodle or… Sorry, on learningDITA.com. And so for those of you that are wondering, we’ve got a number of things on our roadmap. I’d love to hear more from people listening to this about what they need out of this. What more advanced courses are you looking for? One thing that we’ve heard a lot of requests for is a DITA Open Toolkit plugins 101. How do I build a plugin? How do I use best practices? How do I make this all happen? So we have this, I don’t know, DITA inception thing happening because we’re training people on how to do DITA using DITA inside DITA, building out the stuff. AB: It’s all very meta. SO: It’s extremely meta. Hypothetically, what would it look like to localize this?
So what we’ve delivered right now is in English, and in the past we have had people put together both, let’s see, German, Chinese, and I think French versions of the Learning DITA content. But what does it look like in this new architecture to localize? AB: Yeah. So much like the tool chain for this new architecture, there are a couple of different components, and if you would like to localize the Learning DITA content, what you’ll want to look at is the content itself, translating and localizing the source content, but you’ll also need to localize Moodle some. So what you would do is make a, basically clone

04-21
14:15

The benefits of structured content for learning & development content

In this episode, Alan Pringle, Bill Swallow, and Christine Cuellar explore how structured learning content supports the learning experience. They also discuss the similarities and differences between structured content for learning content and technical (techcomm) content. Even if you are significantly reusing your learning content, you’re not just putting the same text everywhere. You can add personalization layers to the content and tailor certain parts of the content that are specific to your audience’s needs. If you were in a copy-and-paste scenario, you’d have to manually update it every single time you want to make a change. That scenario also makes it a lot more difficult to update content as you modify it for specific audiences over time, because you may not find everywhere a piece of information has been used and modified when you need to update it. — Bill Swallow Related links: Structured authoring and XML (white paper), which is also included in our book, Content Transformation Confronting the horror of modernizing content The challenges of structured learning content (podcast) Self-paced, online DITA training with LearningDITA.com Get monthly insights on structured learning content, content operations, and more with our Illuminations newsletter LinkedIn: Alan Pringle Bill Swallow Christine Cuellar Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. 
Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Christine Cuellar: Hey, everybody, and welcome to today’s show. I’m Christine Cuellar, and with me today I have Alan Pringle and Bill Swallow. Alan and Bill, thanks for being here. Alan Pringle: Sure. Hello, everybody. Bill Swallow: Hey, there. CC: Today, Alan, Bill, and I are going to be talking about structured content for learning content. Before we get too far in the weeds, let’s kick it off with an intro question. Alan, what is structured content? AP: Structured content is a content workflow that lets you define and enforce consistent organization of your information. Let’s give a quick example in the learning space. For example, you could say that all learning overviews contain information about the audience for that content, the duration, prerequisites, and the learning objectives for that lesson or learning module. And by the way, that structure that I just mentioned … It actually comes from a structured content standard called the Darwin Information Typing Architecture, DITA for short. That is an open standard that has a set of elements that are expressly for learning content, including lessons and assessments. And I think it’s also worth noting, another big part of the whole idea of structured content is that you are creating content in a format-agnostic way. You are not formatting your content specifically for, let’s say, a study guide, a lesson that’s in a learning management system, or even a slide deck. Instead, what a content creator or instructional designer does … They are going to develop content that follows the predefined structure, and then an automated publishing process is going to apply the correct kind of formatting depending on how you’re delivering the content.
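The learning overview Alan describes, with audience, duration, prerequisites, and objectives, maps directly onto elements in DITA’s learning and training specialization. A minimal sketch follows; the element names come from the DITA L&T domain, while the course details are invented for illustration:

```xml
<!DOCTYPE learningOverview PUBLIC "-//OASIS//DTD DITA Learning Overview//EN" "learningOverview.dtd">
<learningOverview id="dita-intro-overview">
  <title>Introduction to DITA</title>
  <learningOverviewbody>
    <!-- Who the lesson is for -->
    <lcAudience>Technical writers new to structured authoring.</lcAudience>
    <!-- How long it takes -->
    <lcDuration><lcTime value="30min">30 minutes</lcTime></lcDuration>
    <!-- What learners need first -->
    <lcPrereqs>
      <p>Basic familiarity with XML markup.</p>
    </lcPrereqs>
    <!-- What learners will be able to do afterward -->
    <lcObjectives>
      <lcObjectivesGroup>
        <lcObjective>Explain what a DITA topic is.</lcObjective>
        <lcObjective>Identify the core DITA topic types.</lcObjective>
      </lcObjectivesGroup>
    </lcObjectives>
  </learningOverviewbody>
</learningOverview>
```

Because the structure is enforced by the DTD, every learning overview in a project carries the same pieces in the same order, which is what makes the automated, multi-format publishing described here possible.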
That way, as a content creator and instructional designer, you’re not having to copy and paste your learning content into a bunch of different tools. And I know for a fact a lot of instructional designers are doing that right now. Instead of doing all that copying and pasting, you write it one time, and then you say, “I want to deliver it for these different delivery targets, whether it’s for online purposes, whether it’s for in-person training or maybe a combination of both.” You set up publishing processes to apply the formatting for whatever your delivery targets are so you, as a human being, don’t have to mess with that. CC: Which is awesome. Part of the reason that we’re talking about this today is that structured content has been a part of the techcomm world for over 30 years, for a really long time, and now we’re starting to see it make inroads in the learning and development space. We’ve been doing a lot of work with structured content in the learning space, but how is it different from the techcomm space? And Bill, I’m going to kick this over to you for that. BS: I think I’m going to take a higher-level view on this because there is a lot of overlap between techcomm and learning content. Where they really start to diverge is in delivery. Techcomm is pretty uniform in how it delivers content to people. There’s personalization involved and so forth, but essentially everyone’s getting the same thing. The experience is going to be the same. Everyone’s going to get a manual. Everyone’s going to get online help. Everyone’s going to get a web resource, what have you. It might be tailored to their specific needs, but it’s a pretty canned delivery experience. For training, the focus is on the learning experience itself, and it’s usually tailored to a very specific need, whether it’s a very specific type of audience that needs information, or it’s very specific information that needs to be delivered in a very specific way for those people.
Beyond that, we start looking at the content itself under the hood, and the information starts to, I would say, broaden with learning content because it can consume all the different types of information you have with technical content. And generally in a structured world, we think of that as conceptual information, how-to information, and reference information, for the most part. With learning content, now you have a completely new set of content in addition to that where you have learning objectives. You have assessments. You have overviews, reviews, all sorts of different content that essentially expands on the wealth of information you have from your technical resources. CC: That’s great. Typically, the arguments for structured content, and the reasons it’s really valuable for organizations, are that it introduces consistency in your content, consistency for your brand across wherever you’re delivering content. It also helps you build some scalable content processes, that kind of thing. What are some of the arguments for structured content for the learning environment specifically, if there are any other new ones? AP: Some of the reasons that you want to do structured content for learning content are really similar to other types of content. We’ve already talked about one of them. I touched on this earlier in regard to automated formatting. You are not having to do all of the work as a human being, applying formatting for however many delivery formats you have. That is a huge win, that you’re not having to do that. And especially in the training space, I have seen so many organizations copying content from one platform to another because the platforms don’t play well together, so you’ve got multiple versions of what should be the same exact content to maintain. That is another huge reason to consider structure.
You want a single source of truth for your content regardless of where that information is being delivered, because if you’re looking at the overall learning experience and the excellence and quality of that learning experience, if you were telling learners slightly different things in different places in your content, you are not providing an optimal learning experience. Therefore, having that single source of truth for a particular bit of information gives your learners a consistent piece of information regardless of what channel they consume it from. That’s a really important win for a solid, dependable learning experience. CC: Gotcha. No, that definitely makes sense. It sounds like it would take some of the effort off of the subject-matter experts who are creating these trainings so that they can … They, I’m assuming, would rather focus on the work of helping train people. Getting some of the manual formatting and copy and pasting off of their workload sounds pretty nice. What are the complications that it might introduce or the change management issues that might need to be tackled when you’re bringing structured content into a learning environment? AP: That’s true anytime you bring in structure. When people are used to working in an environment where you are doing manual formatting, and you’re seeing what things look like as you develop the content, the idea of developing content in a format-agnostic way, where you’re not thinking about what a slide looks like or how an assessment is going to work in the learning management system, is a big shift. It’s very easy to get focused on the delivery angle because you want it to be good, and you want it to be done in a way that makes that learning experience useful for the people who are trying to learn whatever it is they’re trying to learn.
You don’t want those impediments of bad formatting or a not great way that your assessments behave in your learning management system, but you kind of get to offload all of those concerns, which are very valid. I’m not saying they’re not valid. They are, but you want an automated process. Basically, you want computers to do that work for you. You want programming to apply that formatting so you can really focus on getting that information as solid as it can be, and you let technology handle the rest. You do set up the standards for how you deliver that content, whether it’s in print, online, in

04-07
23:22

LearningDITA: What’s new and how it enhances your learning experience

In this episode, Alan Pringle, Gretyl Kinsey, and Allison Beatty discuss LearningDITA, a hub for training on the Darwin Information Typing Architecture (DITA). They dive into the story behind LearningDITA, explore our course topics, and more. Gretyl Kinsey: Over time that user base grew and grew. And now it boggles my mind that it got all the way up to 16,000 users. I never expected it to grow to that size. Alan Pringle: Well, we didn’t really either, nor did our infrastructure. Because as of late 2024, things started to go a little sideways, and it became clear our tech stack was not going to be able to sustain more students. It was very creaky. The site wasn’t performing well. So we made a decision that we needed to take the site offline, and we did, to basically redo it on a new platform. Related links: Check out our self-paced online DITA 1.3 training. Open-source DITA training project GitHub files LinkedIn: Alan Pringle Gretyl Kinsey Allison Beatty Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Alan Pringle: Hey, everyone, I am Alan Pringle, and today I am here with Gretyl Kinsey and Allison Beatty. Say hello, you two. Gretyl Kinsey: Hello. Allison Beatty: Hello. 
AP: We are together here today because we want to talk about LearningDITA, our e-learning site for the DITA specification, because we have just moved it to a new platform. So we want to give you a little background on what went on with that decision. So first of all, Gretyl, you and I were at Scriptorium when we kicked off this site, and I just went back and looked at blog posts. We announced it via a blog post I wrote in July of 2015. So we have had this site up and running for 10 years, which absolutely blows my mind. GK: It blows my mind too. It’s hard to believe that it’s been that long because it does seem like it got launched pretty recently in my memory, but it has been through a lot of changes, and so has the entire landscape of content creation as well. So yeah, it’s really cool that now we can look back and say it has been 10 years of LearningDITA being on the web. AP: For those who may not be familiar with the site, give us a little summary of what it is. GK: Sure. So LearningDITA is a training resource on DITA XML, developed by Scriptorium, and it covers a lot of the main fundamentals of DITA. So we have some courses on basic authoring and publishing. We also have a couple of courses on reuse and one course on the DITA learning and training specialization. So you get a good overview of a lot of different areas of DITA XML. And all of the courses are self-guided e-learning. So you can go through and take them at your own pace. You can go back and take the courses again if you want a memory refresher. And they all come with a lot of examples and exercises. So you get a download of sample files that you can work your way through. Some of that practice is guided, and then there’s other practice that you do on your own. And then there are also assessments throughout each course that help you test your knowledge. So you get a really nice hands-on approach to learning DITA. So that’s why we called the site that in the first place.
And it really helps to get those basics, those fundamentals in place if you are coming at it as a beginner who is unfamiliar with DITA, or maybe you have some familiarity, but you want to just reinforce what you know. AP: So we went along with this site and kept adding courses over the years. I think we got to nine, is that right? I think it’s nine. GK: That’s right. So we really started this out, like I was mentioning earlier, because we needed something that was beginner-friendly, something for people who were unfamiliar with DITA, because we saw a gap in the information that was available at the time 10 years ago. A lot of the DITA resources, documentation, guides, and things like that out there assumed some prior knowledge or prior expertise, and there wasn’t really anything that filled that gap. So we came up with these courses. And of the nine courses that we have, the first one is just an introduction to DITA. So that was the first one that launched back in July of 2015. And then shortly after that, we added a few courses on topic authoring. So that covers the main topic types: concept, task, reference, and glossary entry. And then we just added more courses over time. So we’ve got one that covers the use of maps and bookmaps. We’ve got one that covers publishing basics. We have, like I mentioned, the two courses on reuse. So there’s a more introductory basic reuse course and then a more advanced reuse course, and then learning and training. So those are the nine courses that we have, and they’ve been up there pretty much the entire time. The earliest ones were the introduction and the authoring courses, and then we added the others as demand increased over time. AP: And that demand, I’m glad you mentioned that, really did increase, because as of late 2024, we had over 16,000 students in the database for LearningDITA, which also completely blows my mind.
GK: Yeah, it does for me too, because I think in the early days we saw a lot more individuals using it, and then over time we would see more large groups of users sign up. So an entire class whose professor might’ve recommended taking the LearningDITA courses or sometimes an organization, whether it was one of our clients or just another organization, would have a lot of employees sign up all at once. And so yeah, over time that user base grew and grew. And now it does boggle my mind as well that it got all the way up to 16,000 users. I never expected it to grow to that size. AP: Well, we didn’t really either, nor did our infrastructure. Because as of late last year, things started to go a little sideways and it became clear our tech stack was not going to be able to sustain more students. It was very creaky. The site wasn’t performing well. So we made a decision that we needed to take the site offline and we did to basically redo it on a new platform. And Allison, this is where I want you to come in because you are one of the, shall we say, victims on the Scriptorium side who got to dive into what our requirements were, what we needed to do. Essentially, I mean, we really became consultants for ourselves and turned our consultant eye at our problem to figure out what it was. And Allison, if you don’t mind, tell us a little bit about that process and where we landed. AB: Yeah, so the platform was the first big choice that we knew we had to make, and things started out pretty fuzzy because we didn’t really know what we were doing and just had to figure out what was going to work to solve these pain points. And so as a starting place, we knew we needed a new LMS, learning management system. And so we did some research on what learning management systems were out there and thought about what we could use that would fit our needs. 
And we ended up choosing Moodle, which is an open-source LMS that is very widely used within colleges and universities and higher education settings. And we knew it could be very powerful and would probably suit our needs with some custom work. But the thing about Moodle is it’s known for having a high barrier to entry in terms of the installation, and that made us a little nervous. But the more we kept looking at LMS options, both open source and commercial, we realized that Moodle is so popular, almost an industry standard, for a reason, and that it was worth taking on that challenge. AP: And I even asked someone in the learning space for her advice: what LMS would you use? She pretty much said run away from Moodle, for a lot of the reasons that you just mentioned. But I think it’s worth noting, it does have… There are a lot of people using it, especially in educational settings, schools, universities. Also, the open source angle was appealing because that way it didn’t look like we were picking “favorites” by picking a particular proprietary LMS. AB: Yeah, definitely. And then the other piece of the puzzle there, as far as how we’re going to display and host the learning content, was the DITA transform for the content itself and how we were going to get the LearningDITA content into our LMS. And so we knew that Moodle is compatible with both SCORM and xAPI, and we ended up deciding that we wanted to develop a DITA-to-SCORM transform because SCORM is something that we have discussed and worked on with other clients as we’ve been seeing this trend in learning and training content pick up. I don’t know if Gretyl wants to talk a little bit about how she’s seen SCORM throughout various projects and why we decided it was something we wanted to pursue and learn more about ourselves. AP: And what is it, while you’re at it? That too. AB: That’s a good question. I’ll just go ahead and talk a little about what it is without getting too deep technically.
Basically it’s a standard for e-learning content, and it provides communication with your LMS that can do things like track grades. In LearningDITA, both the previous site and the current site, you had to pass assessments to get to the next lesson. And so SCORM can handle things like tracking assessment completion and scores. It’s pretty flexible and widely used. It’s more or less just a standard, but it requires a pretty s
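To make the tracking idea concrete, here is a minimal Python sketch of the gating behavior described above: scores are recorded and the next lesson unlocks only when the previous one is passed. The `cmi.*` keys mirror the SCORM 1.2 runtime data model, but the `LessonTracker` class, the 80-point threshold, and the lesson ids are hypothetical stand-ins for what a SCORM-conformant LMS does internally, not code from Moodle or LearningDITA.

```python
# Hypothetical in-memory stand-in for the tracking an LMS does via the
# SCORM runtime. The "cmi.*" keys mirror the SCORM 1.2 data model;
# everything else is illustrative only.

PASSING_SCORE = 80  # assumed mastery threshold


class LessonTracker:
    def __init__(self, lessons):
        self.lessons = lessons  # ordered lesson ids
        self.data = {
            lid: {"cmi.core.lesson_status": "not attempted",
                  "cmi.core.score.raw": None}
            for lid in lessons
        }

    def record_score(self, lesson_id, score):
        """Store a score and mark the lesson passed or failed, as a
        course module would report via the SCORM runtime."""
        status = "passed" if score >= PASSING_SCORE else "failed"
        self.data[lesson_id]["cmi.core.score.raw"] = score
        self.data[lesson_id]["cmi.core.lesson_status"] = status

    def can_start(self, lesson_id):
        """Gate each lesson on passing the previous one, like the
        sequencing rule described for LearningDITA."""
        idx = self.lessons.index(lesson_id)
        if idx == 0:
            return True
        prev = self.lessons[idx - 1]
        return self.data[prev]["cmi.core.lesson_status"] == "passed"


tracker = LessonTracker(["lesson1", "lesson2", "lesson3"])
tracker.record_score("lesson1", 92)
print(tracker.can_start("lesson2"))  # True: lesson1 was passed
print(tracker.can_start("lesson3"))  # False: lesson2 not yet attempted
```

In a real deployment this bookkeeping lives inside the LMS, and the course content only reports scores and status through the SCORM API.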

03-10
19:21

Building your futureproof taxonomy for learning content (podcast, part 2)

In our last episode, you learned how a taxonomy helps you simplify search, create consistency, and deliver personalized learning experiences at scale. In part two of this two-part series, Gretyl Kinsey and Allison Beatty discuss how to start developing your futureproof taxonomy from assessing your content needs to lessons learned from past projects. Gretyl Kinsey: The ultimate end goal of a taxonomy is to make information easier to find, particularly for your user base because that’s who you’re creating this content for. With learning material, the learner is who you’re creating your courses for. Make sure to keep that end goal in mind when you’re building your taxonomy. Related links: Taxonomy: Simplify search, create consistency, and more (podcast, part 1) The challenges of structured learning content (podcast) DITA and learning content Metadata and taxonomy in your spice rack Transform L&D experiences at scale with structured learning content LinkedIn: Gretyl Kinsey Allison Beatty Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Allison Beatty: I am Allison Beatty. Gretyl Kinsey: I’m Gretyl Kinsey. AB: And in this episode, Gretyl and I continue our discussion about taxonomy. GK: This is part two of a two-part podcast. 
AB: So if you don’t have a taxonomy for your learning content, but you know you need one, what are some things to keep in mind about developing one? GK: Yeah, so there are all kinds of interesting lessons we’ve learned along the way from working with organizations who don’t have a taxonomy and need one. And I want to talk about some of the high-level things to keep in mind, and then we can dive in and think about some examples there. One thing I also want to just say upfront is that it is very common for learning content in particular to be developed in unstructured environments and tools like Microsoft Word or Excel. It’s also really common, if you are working within a learning management system or LMS, for there to be a lack of overall consistency, because the trade-off there is you want flexibility, right? You want to be able to design your courses in whatever way is best suited for that specific subject or that set of material. But that’s where you do have that trade-off between how consistent the information and the way it’s organized is versus how flexible it is to give your instructional designers that maximum creativity. And so when you’ve got those kinds of considerations, that can make the information harder for your students to find or to use, and even for your content creators. So we’ve seen organizations where they’ve said, “We’ve got all of our learning materials stuck in hundreds of different Word files or spreadsheets, or sometimes in different LMSs, or sometimes in different areas of the same LMS.” And when they have all of those contributors, like we talked about with multiple authors contributing, or sometimes lots and lots of subject matter experts contributing part-time, that really creates these siloed environments where you’ve got different little pieces of learning material all over the place and no one overarching organizational system. 
And so that’s typically the driving point we see, where that organization will say, “We don’t have a taxonomy. We know that we need one.” But I think that is the first consideration: if you don’t have one and you know you need one, the first question to ask is why? Because so often it is those pain points that I mentioned, that lack of one cohesive system, one cohesive organization for your content, and sometimes also one cohesive repository or storage mechanism. So that’s typically where you’ll have an organization saying, “We don’t have a good way to connect all of our content and have that interoperability that you were talking about earlier, and we need some kind of a taxonomy so that even if we do still have it created in a whole bunch of different ways by a bunch of different people, when it gets served to the students who are going to be taking these courses, it’s consistent, it’s well-organized, and it’s easy for people to find what they need.” So I think that’s the first consideration: if you’ve got that demand for a taxonomy developing, think about where that’s coming from and then use that as the starting point to actually create your taxonomy. And then I think one other thing that can help is to think about how your content is created. So if you do have those disparate environments or you’ve got a lot of unstructured material, then take that into account and think about building a taxonomy in a way that’s going to benefit rather than hinder your creation process. And that is especially important the more people you have contributing to your learning material. It’s really helpful to try to gather information and metrics from all of your authors and contributors, as well as from your learners. 
So any kind of a feedback form, if you’ve got some kind of an e-learning or training website where you can collect information that your learners tell you about what was good or bad about the experience, what was difficult, or what would make their lives easier, that’s really great information for you to have. But also from your contributors, your authors, your subject matter experts, your instructional designers: if they have a way to collect feedback or information on a regular basis that will help enhance the next round of course design, then all of that can contribute to taxonomy creation as well. When you start building a taxonomy from the ground up, you can look at all the metrics that you’ve been collecting and say, “Here’s what people are searching for. We should make sure that we have some categories that reflect that. Here are difficulties that our authors are encountering with being able to find certain information and keep it up to date or with being able to associate things with learning objectives. So let’s build out categories for that.” So really make sure that you use those metrics. And if you’re not collecting them already, it’s never too late to start. I think the biggest thing to keep in mind also is to plan ahead very carefully and to make sure that you’re thinking about the future, that you’re doing futureproofing before you actually build and implement your taxonomy. And I know we both can probably speak to examples of how that’s been done well versus not so well. AB: Yeah, maintenance is so important. GK: Yeah, and I think the more that you think about it upfront before you ever build or put a taxonomy in place, the easier that maintenance is going to be, right? Because we’ve seen a lot of situations where an organization will just start with a taxonomy, but maybe it’s not broad enough. So maybe it only starts in one department. Like they have it for just the technical docs, but they don’t have it for the learning material. 
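The metrics-driven approach described here can be sketched in a few lines: tally learner search terms and surface the frequent ones as candidate taxonomy categories. This is an illustrative sketch only; the log format, the `min_count` cutoff, and the function name are assumptions, not features of any tool mentioned in the episode.

```python
from collections import Counter


def candidate_categories(search_logs, min_count=2):
    """Tally learner search terms (case-insensitive) and return the
    ones frequent enough to justify a taxonomy category. The cutoff
    is an assumed threshold; a real project would tune it against
    actual search metrics."""
    counts = Counter(term.strip().lower() for term in search_logs)
    return [term for term, n in counts.most_common() if n >= min_count]


logs = ["DITA basics", "dita basics", "reuse", "SCORM", "Reuse", "reuse"]
print(candidate_categories(logs))  # ['reuse', 'dita basics']
```

The same tallying idea applies to author feedback: recurring pain points, not one-off flukes, are the ones worth building categories around.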
And then down the road it’s a lot more difficult to go in and have to rework that taxonomy for new information that came out of the learning department. That if they had had that upfront, it could have served both training and technical docs at the same time. So thinking about that and doing that planning is one of the best ways to avoid having to do rework on a taxonomy. AB: And I’m glad you brought up the gathering of feedback and insight from users before diving into building out a taxonomy. Because at the end of the day, you want it to be usable to the people who need that classification system. That is the most important part. GK: Yeah, that’s absolutely the end goal. AB: Usability. GK: Yeah, and I think a big part of that, like I’ve mentioned, planning ahead carefully and futureproofing, is looking at metrics that you’ve gathered over time because that can help you to see whether something in those metrics or in that feedback is a one-off fluke or whether it’s an ongoing persistent trend or something that you need to always take into consideration from your end users. If you’ve got a lot of people saying the same things, a lot of people using the same search terms over time, that can really help you with your planning. And yeah, like you said, I think the ultimate end goal of a taxonomy is to make information easier to find, and in particular for your user base because that’s who you’re creating this content for. And with learning material, that’s who you’re creating your courses for. So you want to make sure that when you’re building that taxonomy, that that end goal is something you always keep in mind. How can we make this content easier for people to find and to use? AB: Definitely. Something else that I am curious to get your take on is in this planning stage. So in my experience, I feel like there’s never nothing to start with. 
Even if there aren’t any formalized standards or anything around classification of content, there’s like a colloquial system, right? GK: Yes, very much so. AB: Of how content creators or users think about and organize content, even if they’re not necessarily using a taxonomy. GK: Yeah. A lot of times it’s very similar to what we just talked about with content structure itself. That if you’re in something like Microsoft Word or unstructured FrameMaker, even if there’s not an underlying structure, a set of tags under that content, there is still an implied structure. You can still look at

02-10
22:12

Taxonomy: Simplify search, create consistency, and more (podcast, part 1)

Can your learners find critical content when they need it? How do you deliver personalized learning experiences at scale? A learning content taxonomy might be your solution! In part one of this two-part series, Gretyl Kinsey and Allison Beatty share what a taxonomy is, the nuances of taxonomies for learning content, and how a taxonomy supports improved learner experiences in self-paced e-learning environments, instructor-led training, and more. Allison Beatty: I know we’ve made taxonomies through all sorts of different frames, whether it’s structuring learning content, or we’ve made product taxonomies. It’s really a very flexible and useful thing to be able to implement in your organization. Gretyl Kinsey: And it not only helps with that user experience for things like learning objectives, but it can also help your learners find the right courses to take. If you have some information in your taxonomy that’s designed to narrow it down to a learner saying, “I need to learn about this specific subject.” And that could have several layers of hierarchy to it. It could also help your learners understand what to go back and review based on the learning objectives. It can help them make some decisions around how they need to take a course. Related links: The challenges of structured learning content (podcast) DITA and learning content Metadata and taxonomy in your spice rack Transform L&D experiences at scale with structured learning content Rise of the learning content ecosystem with Phylise Banner (podcast) LinkedIn: Gretyl Kinsey Allison Beatty Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. 
Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Gretyl Kinsey: Hello and welcome. I’m Gretyl Kinsey. Allison Beatty: And I’m Allison Beatty. GK: And in this episode, we’re going to be talking about taxonomy, particularly for learning content. This is part one of a two-part podcast. AB: So first things first, Gretyl, what is a taxonomy? GK: Sure. A taxonomy is essentially just a system for putting things into categories, whether that is something concrete like physical objects or whether it’s just information. A taxonomy is going to help you collect all of that into specific categories that help people find what they’re looking for. And if you’ve ever been shopping before, you have encountered a taxonomy. So I like to think about online shopping, in particular, to explain this, because you’ve got categories for the type of item that you’re buying at a broad level that might look something like clothing, household goods, electronics, maybe food. And then within that you also have more specific categories. So if we start with clothing, you typically will have categories for things like the type of garment. So whether you are looking for shirts, pants, skirts, coats, shoes, whatever. And then you also might have categories for the size, for the color, for the material. There are typically categories for the intended audience. So whether it’s for adults or kids. And then within that, maybe for gender. So all these different ways that you can sort and filter through the massive number of clothing results that you would get if you just go to a store and look at clothing. 
You’ve got all of these different pieces of information, these categories that come from a taxonomy where you can narrow it down. And that typically looks like things on a website, like search boxes, checkboxes, drop-down menus, and those contain the assets or the pieces of information from that taxonomy that are used to categorize that clothing. So then you can go in and check off exactly what you’re looking for and narrow down those results to the specific garment that you were trying to find. So the ability to go on a website and do all of that is supported by an underlying taxonomy. AB: So that’s an example of online shopping. I’m sure a lot of people are familiar with taxonomies in the sense of biology, but how can taxonomies be applied to content? GK: Sure. So we talk about taxonomy in terms of content for how it can be used to find the information that you need. So when you think about that online shopping example, instead of looking for a physical product like clothing. When it comes to content, you’re just looking for specific information. So it’s kind of like the content itself is the product. So if you are an organization that produces any kind of content, you can put a taxonomy in place so that your users can search through that content. They can sort and filter the results that they get according to those categories and your taxonomy. And that way they can narrow it down to the exact piece of information that they’re looking for instead of having to skim through a long website with a lot of pages, or especially if you’re dealing with any kind of manuals or books or more publications that you’re delivering. Not forcing them to read through all of that instead of being able to search and find exactly what they’re looking for. So some of the ways that taxonomies can help you categorize your content would be things like what type of information it is. 
So whether it is more of a piece of technical documentation, something like a user manual or a quick start guide or a data sheet, or whether it is marketing material, training material. You could put that as one of the categories in your taxonomy. You could also put a lot of information about your intended audience. So that could be things like their experience level. It could be things like the regions they live in or the languages they speak. Anything about that audience that’s going to help you serve up the content that those particular people need. It can also be things like what platform your audience uses or what platform is relevant for the material that you’re producing. It can be things like the product or product line that your content is documenting. There are all kinds of different ways that you can categorize that information. And I know that both of us have a lot of experience with putting these kinds of things together. So I don’t know if you’ve got any examples that you can think of for how you’ve seen information get categorized. AB: So a lot of the way I think about taxonomies is as a library classification system or MARC records. So in the same way that, if you wanted to find a particular information resource, you went to your library’s online catalog and could filter down to something that fits your needs, you can think of treating your organization’s body of content like a corpus of information that you can further refine and assign metadata values to. Or in the case of a taxonomy hierarchy in the clothing example, choosing that you want a shirt would be a step above choosing that you want a tank top or a long sleeve shirt or a blouse. So a lot of my mindset around taxonomies for content is framed like libraries. The Library of Congress subject headings are generally a good starting point for a library. 
But sometimes your library has specific information needs. For example, the National Health Library has its own subject scheme that is further specialized than the broader categories that you get in Library of Congress subject headings, because they know that everything in that corpus is going to be health- or medicine-related information. And in the same way, you and I have developed taxonomies for clients that are particular to their needs. You’re never going to start off knowing nothing when you build a taxonomy, right? GK: Exactly. And with the example that you were talking about of looking at information in a library catalog, we see that with a lot of documentation. So if you’re thinking about technical content and things like product documentation, user guides, user manuals, we see that similar kind of functionality. If you have that content available through a website or an app or some other kind of digital online experience, then, back to the online shopping example, your user base can in all of those different cases go to those facets and filters, those checkboxes, drop-down menus, search boxes, and start narrowing down the information to exactly what they’re looking for. So that really helps to enhance the user experience, to have that taxonomy in place underlying the information and making it easier to narrow down. I’ve also seen it really helpful on the authoring side. So if you have a large body of content, maybe you have it in something like a content management system. And the more content you have, the harder it becomes to find the specific information that you’re looking for. In particular, we deal with a lot of DITA XML. And so there will be a component content management system that it’s typically housed in. And when you’ve got it in there, those systems typically have some kind of underlying taxonomy in place as well that can capture all kinds of information about how and when the content was created. So that can help you find it. 
And then of course, you could have your own taxonomy for the kinds of things I named earlier, what type of information it is, what the intended audience is in case that can help you as the author find and narrow down something in your system. And it can also help you as an author to put together collections of content for personalized delivery. So maybe you have a general version of your user guide, but then you’ve also got audience
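The facets and filters described throughout this episode boil down to matching each content item's taxonomy metadata against a user's selections. A minimal Python sketch, with invented records and field names (`type`, `audience`) standing in for whatever categories a real taxonomy would define:

```python
# Each record carries taxonomy metadata; the field names and records
# are invented for illustration, not from any particular system.
content = [
    {"title": "Quick start guide", "type": "techcomm", "audience": "beginner"},
    {"title": "API reference",     "type": "techcomm", "audience": "expert"},
    {"title": "Intro course",      "type": "learning", "audience": "beginner"},
]


def filter_content(items, **facets):
    """Keep only items whose metadata matches every selected facet,
    like checking boxes in a faceted search UI."""
    return [item for item in items
            if all(item.get(k) == v for k, v in facets.items())]


print([c["title"] for c in filter_content(content, audience="beginner")])
# ['Quick start guide', 'Intro course']
print([c["title"] for c in filter_content(content, type="learning",
                                          audience="beginner")])
# ['Intro course']
```

Each added facet narrows the result set, which is exactly the "check off what you're looking for" behavior of the shopping and library-catalog examples.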

02-03
22:56

Transform L&D experiences at scale with structured learning content

Ready to deliver consistent and personalized learning content at scale for your learners? In this episode of the Content Operations podcast, Alan Pringle and Bill Swallow share how structured content can transform your L&D content processes. They also address challenges and opportunities for creating structured learning content. There are other people in the content creation world who have had problems with content duplication, having to copy from one platform or tool to another. But I will tell you, from what I have seen, the people in the learning development space have it the worst in that regard—the worst. — Alan Pringle Related links: The challenges of structured learning content (podcast) DITA and learning content Rise of the learning content ecosystem with Phylise Banner (podcast) Flexible learning content with the DITA Learning and Training specialization Building an effective content strategy is no small task. The latest edition of our book, Content Transformation is your guidebook for getting started. LinkedIn: Alan Pringle Bill Swallow Transcript: Disclaimer: This is a machine-generated transcript with edits. Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction AP: Hey, everybody, I’m Alan Pringle. BS: I’m Bill Swallow. AP: And today, Bill and I want to talk about structured content in the learning and development space. 
I would say, over the past two years or so, we have seen significantly increased demand from organizations that want to apply structured content to their learning and development processes, and we want to share some of the things those organizations have been through and what we’ve learned over the past few months, because I suspect there are other people out there who could benefit from this information. BS: Oh, absolutely. AP: So let’s talk about, really, the drivers. What are the things that people, content creators in the learning development space, what’s driving them to it? One of them off the bat is so much content, so, so very much content, on so many different delivery platforms. That’s one that I know of immediately; what are some of the other ones? BS: Oh, yeah, you have just the core amount of content, the number of deliverables, and the duplication of content across all of them. AP: That is really the huge one, and I know there are other people in the content creation world who have had problems with content duplication, having to copy from one platform or tool to another. But I will tell you, from what I have seen, the people in the learning development space have it the worst in that regard—the worst. BS: Didn’t they applaud you when you showed up at a conference with a banner that said “end copy, paste”? AP: Pretty much, it’s true. That very succinct message raised a lot of eyebrows, because they are in the position, unfortunately, in learning and development, of having to do a lot of copying and pasting, and part of the reason for that copying and pasting is, a lot of times, the different platforms that we’ve mentioned, and also different audiences. I need to create this version for this region, or this particular type of student at this location, so they’re copying and pasting over and over again to create all these variants for different audiences, which becomes unmanageable very quickly. BS: Yeah, copy, pasting, and then, reworking. 
And then, of course, when they update it, they have to copy, paste, and rework again to all the other places it belongs, and then, they have to handle it in however many languages they’re delivering the training in. AP: So now, everything is just blown up. I mean, how many layers of crap, and I’m just going to say it, do these people have to put up with? And there are many, many, many. BS: Worst parfait ever. AP: Yeah, no, that is not a parfait I want to share, I agree with you on that. So let’s talk about the differences between, say, the techcomm world and the learning and development world and their expectations for content. Let’s talk about that, too, because it is a different focus, and we have to address that. BS: So techcomm really is about efficiency and production, so being able to amass quite a wide mass of content and put it out there as quickly as possible, or put it out there as efficiently as possible. Learning content kind of flips that on its head, and it wants to take quality content and build a quality experience around it, because it’s focused on enabling people to learn something directly. AP: And techcomm people, we’re not saying you’re putting out stuff that is wrong or half ass. That is not what we mean, I want to be real clear here. What we mean is, there is a tendency to focus on efficiency gains, and getting that help set, getting that PDF, getting that wiki, whatever thing that it is that you’re producing, getting that stood up as quickly as possible, whereas on the learning side, speed is not usually the thing that you’re trying to use to sell the idea of structured content. I don’t think that’s going to win a lot of converts in the learning space. 
I do think, however, you can make the argument that if you create this single source of truth so you can reuse content for different audiences, different locations, different delivery platforms, and you’re using the same consistent information across all of that, you are going to provide better learning outcomes, because everybody’s getting the same information. Regardless of what audience they’re in or what platform they’re learning on, whether it’s live instructor-led training, something online, whatever else, they’re still getting the same correct information, whereas if you were copying and pasting all that, you might’ve forgotten to update it in one place as a content creator, and then someone, a student, a learner, ends up getting the wrong information, and that’s when you’re not in the optimal learning experience situation. BS: Right, and it’s not to say that every single deliverable gets the exact same content, but they get a slice from the same shared centralized repository of content so that they’re not rewriting things over and over and over again. And they’re still able to do a lot of high-quality animations, build their interactives, put together their slide presentations, everything like that, but use the content that’s stored centrally rather than having to copy and paste it again and again and again. AP: Yeah, and let’s talk about, really, the primary goals for moving to structured content for learning and development folks. We’ve already talked about reuse quite a bit, that’s a big one. Write it one time, use it everywhere, and that also leads to profiling, creating content for different audiences. BS: Right, I mean, these goals really are no different than what you see in techcomm, and what techcomm has been using for the past 15, 20, 25 years. 
It is that reuse, that smart reuse, so write it once, use it everywhere, no copy paste, having those profiling attributes and capabilities built in so that you can produce those variants for beginner learners versus expert learners versus people in different regional areas where the procedure might be a little bit different, producing instructor guides as well as learner guides. All of these different ways of mixing and matching, but using the same content set to do that. AP: Yeah, it’s like one of our clients said, and I have to thank them forever for bringing this up. They were bogged down in a world of continuous copying and pasting over and over and over again, and maintaining multiple versions of what should’ve been the same content, and they said, quote, “We want to get off the hamster wheel.” And that is so true and so fitting, and we probably owe them royalties for saying this over and over again, because it’s such a good phrase. But it really did capture, I think, a big frustration that a lot of people in the learning and development space have creating content, because they do have to maintain so many versions of content. BS: And those versions likely are stored in a decentralized manner, so they could be on multiple different servers, they could be on multiple different laptops or PCs, they could be on thumb drives in some random drawer that are updated maybe once every two, three years. So being able to pull everything together into a central repository and structure it so that it can be intelligently reused and remixed, there’s so many benefits to that. AP: Yeah, and in regard to the remixing, the bottom line is, you want the ability to publish to all your different platforms. I believe the term people like to use is omnichannel publishing, so you basically can do push-button publishing to basically any delivery need that you have, whether it’s an instructor versus student guide for training you’re having live, e-learning, even scripts for video. 
Even when you’re dealing with a lot of multimedia content, there is still text involved, underpinnings of that content; with audio and video, there are still probably bits and pieces that can come from your single source of content, because at the core of it, it’s text-based, even if the delivery of it is video or audio. BS: Now, we’ve had structured content for a good couple decades, at least- AP: At least, yeah. BS: … but there really is a reason why the learning world really hasn’t latched onto it completely,
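The "write once, profile per audience" model Alan and Bill describe works like DITA's conditional processing: each content unit carries profiling attributes, and a build filters by the values a given deliverable needs. A simplified Python sketch of that filtering step; the topic list and attribute values are invented for illustration and loosely modeled on DITA's `audience` attribute:

```python
# Single-sourced content units with profiling attributes, loosely
# modeled on DITA's audience attribute. Values are invented.
topics = [
    {"id": "safety",    "audience": {"beginner", "expert"}},
    {"id": "deep-dive", "audience": {"expert"}},
    {"id": "warmup",    "audience": {"beginner"}},
]


def build_deliverable(topics, audience):
    """Return the topic ids included when publishing for one audience,
    the way a filtered build produces each variant from the same
    shared content set instead of a copy-pasted fork."""
    return [t["id"] for t in topics if audience in t["audience"]]


print(build_deliverable(topics, "beginner"))  # ['safety', 'warmup']
print(build_deliverable(topics, "expert"))    # ['safety', 'deep-dive']
```

The point of the sketch: both deliverables draw `safety` from the same single source, so updating it once updates every variant, which is the escape from the copy-paste hamster wheel.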

01-13
20:27

Creating content ops RFPs: Strategies for success

In episode 179 of the Content Strategy Experts podcast, Sarah O’Keefe and Alan Pringle share the inside scoop on how to write an effective request for a proposal (RFP) for content operations. They’ll discuss how RFPs are constructed and evaluated, strategies for aligning your proposal with organizational goals, how to get buy-in from procurement and legal teams, and more. When it comes time to write the RFP, rely on your procurement team, your legal team, and so on. They have that expertise. They know that process. It’s a matter of pairing what you know about your requirements and what you need with their processes to get the better result. — Alan Pringle Related links: Survive the descent: planning your content ops exit strategy (podcast) The business case for content operations (white paper) Content accounting: Calculating value of content in the enterprise (white paper) Building the business case for content operations (webinar) LinkedIn: Sarah O’Keefe Alan Pringle Transcript: Disclaimer: This is a machine-generated transcript with edits. Alan Pringle: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about writing effective RFPs. A request for a proposal, RFP, approach is common for enterprise software purchases, such as a component content management system, which can be expensive and perhaps risky. Hey everybody, I am Alan Pringle. Sarah O’Keefe: And I’m Sarah O’Keefe, hi. AP: So Sarah, we don’t sell software at Scriptorium, so why are we talking about buying software? SO: Well, we’re talking about you, the client buying software, which is not always, but in many cases, the prerequisite before we get involved on the services side to configure and integrate and stand up the system that you have just purchased to get you up and running. 
And so, because many of our customers, many, most, nearly all of our customers are very, very large, many of those organizations do have processes in place for enterprise software purchases that typically either strongly recommend or require an RFP, a request for proposal. AP: Which, let’s be very candid here, nobody likes. Nobody. SO: No, they’re horrible. AP: Vendors don’t like them. People who have to put them together don’t like them, but they’re a necessary evil. But there are things you can do to make that necessary evil work for you. And that’s what we want to talk about today. AP: So the first thing you need to do is do some homework. And part of that homework, I think, is talking with a bunch of stakeholders for this project or this purchase and teasing out requirements. So let’s start with that. And this is even before you get to the RFP itself. There’s some stuff you need to do in the background. And let’s talk about that a little bit right now. SO: Right, so I think, you know, what you’re looking to get to before you go to RFP is a short list of viable candidates, probably in the two to three range. I would prefer two, your procurement people probably prefer three to four. So, okay, two to three. And in order to get to that list of “these look like viable candidates,” as Alan’s saying, you have to do some homework. Step one: what are the hard requirements that IT, or your sort of IT structure, is going to impose? Does the software have to be on premises or does it have to be software as a service? Nearly always these days organizations are hell-bent on one or the other and it is not negotiable. Maybe you have a particular type of single sign-on and you have some requirements around that. Maybe you have a particular regulatory environment that requires a particular kind of software support. 
You can use those kinds of constraints to easily, relatively easily, rule out some of the systems that simply are not a fit for what your operating environment needs to look like. AP: And by doing that now, you are going to reduce the amount of work in the RFP itself. So you’re going to streamline things because you’ve already figured out, this candidate is not a good fit. So why bother them, and why make work for ourselves having to correspond with a vendor that ends up not being a good fit? SO: Right, and if we’re involved in a process like this, which we typically do on the client side, so we engage with our customers to help them figure out how to organize an RFP process, right, we’re going to be strongly encouraging you to narrow down the candidate list to something manageable because the process of evaluating the candidates is actually quite time consuming on the client side. And additionally, it’s quite time consuming for the candidates, the candidate software companies, to write RFP responses. So if you know for a fact that they’re not a viable candidate, you know, just do everybody a favor and leave them out. It’s not fair to make them do the work. AP: No, it’s not. And we’ve seen this happen before where an organization will keep a vendor in the process kind of as a straw man to strike down fairly quickly. It would be kinder and maybe more efficient to do that before you even get to the RFP response process, perhaps. SO: Yeah, and of course, again, the level of control that you have over this process may vary depending on where you work and what the procurement RFP process looks like. There are also some differences between public and private sector and some other things like that. But broadly, before you go to RFP, you want to get down to a couple of viable candidates, and that’s who should get your request for proposal. AP: Yeah, and when it does come time to write that RFP, do rely on your procurement team, your legal team. 
They have that expertise. They know that process. It’s a matter of pairing what you know about your requirements and what you need with that process to get the better result. And I think one of the key parts of this communication between you and your procurement team is about use case scenarios. So let’s talk about those a little bit because they’re fundamental here. SO: Yeah, so your legal team, your procurement team is going to write a document that gives you all the guardrails around what the requirements are and you have to be this kind of company and our contract needs to look a certain way and various things like that. We’re going to set all of that aside because A, we don’t have that expertise and B, you almost certainly as a content person don’t have any control over that. You’re just going to go along with what they are going to give you as the rules of the road in doing RFPs. However, somewhere inside that RFP it says, these are the criteria upon which we will evaluate the software that we are talking about here. And I think a lot of our examples here are focused on component content management systems, but this could apply to other systems whether it’s translation management, terminology, metadata, you know, all these things, all these content-related systems that we’re focused on. So, somewhere inside the RFP, it says, we need this translation management system to manage all of these languages, or we need this component content management system to work in these certain ways. And your goal as the content professional is to write scenarios that reflect your real world requirements that are unique to your organization. So if you are in heavy industry, then almost certainly you have some concerns around parts, about referencing parts and part IDs and maybe there’s a parts database somewhere and maybe there are 3D images and you have some concerns around how to put all of that into your content. 
That is a use case that is unique to you versus a software vendor who is going to have some sort of, we have 80 different variants of this one piece of software depending on which pieces and parts you license, and then that’s gonna change the screenshots and all sorts of things. So what you wanna do is write a small number of use cases. We’re talking about maybe a dozen. And those dozen use cases should explain, you know, as a user inside the system, I need to do these kinds of things. You might give them some sample content and say, here is a typical procedure and we have some weird requirements in our procedures and this is what they are. Show us how that will work in your system. Show us how authoring works. Show us how I would inject a part number and link it over to the parts database. Show us, you know, those kinds of things. So, the use case scenarios typically should not be, “I need the ability to author in XML,” right? AP: Or, “I need the ability to have file versioning,” things that every CCMS on the planet does, basically. SO: Right, somewhere there’s a really annoying and really long spreadsheet that has all those things in it, fine. But ultimately, that’s table stakes, right? They should not get to the short list unless you’ve already had this conversation about file versioning and the right class of system. The question now becomes, how do you provide a template for authors and what does it look like for authors to start from a template and do the authoring that they need to do? Is that a good match for how your authors need to or want to or like to work. So the key here from my point of view is don’t worry too much about the legalese and the process around the RFP, but worry a whole bunch about these use case scenarios and how you are going to evaluate all the different tools that you’re assessing against the use case scenarios. 
AP: Be sure you communicate those use case scenarios to your procurement team in a way they understand so they have a better handle on what you need. The more everybody is on the same page as far as those use cases go, the clearer it’s going to be to communicate those things to the candidate vendors when they do get their hands on the RFP. SO: And I think as we’re going in or talking about going into a p
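The hard-requirements screening described above can be sketched in a few lines of code: encode each non-negotiable constraint, then drop any candidate that misses even one before the RFP goes out. The vendor names, requirement keys, and values below are all invented for illustration; a real checklist would come from your IT and procurement teams.

```python
# Hypothetical hard requirements imposed by IT -- non-negotiable.
HARD_REQUIREMENTS = {
    "deployment": "saas",    # on-premises vs. software as a service
    "sso": "saml",           # required single sign-on protocol
    "regulated": True,       # regulatory environment support
}

# Invented candidate CCMS vendors with their capabilities.
CANDIDATES = [
    {"name": "Vendor A", "deployment": "saas", "sso": "saml", "regulated": True},
    {"name": "Vendor B", "deployment": "on-prem", "sso": "saml", "regulated": True},
    {"name": "Vendor C", "deployment": "saas", "sso": "oauth", "regulated": False},
]

def shortlist(candidates, requirements):
    """Return only the candidates that meet every hard requirement."""
    return [
        c["name"]
        for c in candidates
        if all(c.get(key) == value for key, value in requirements.items())
    ]

# Only Vendor A survives the screen; B and C never receive the RFP,
# which saves work for you and for them.
print(shortlist(CANDIDATES, HARD_REQUIREMENTS))
```

The point of the sketch is the ordering: this filter runs before anyone writes an RFP response, which is exactly the "do your homework first" advice in the episode.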

12-09
22:17

Pulse check on AI: December, 2024 (podcast)

In episode 178 of the Content Strategy Experts podcast, Sarah O’Keefe and Christine Cuellar perform a pulse check on the state of AI as of December 2024. They discuss unresolved complex content problems and share key considerations for entering 2025 and beyond. The truth that we’re finding our way towards appears to be that you can use AI as a tool and it is very, very good at patterns and synthesis and condensing content. And it is very, very bad at creating useful, accurate, net new content. That appears to be the bottom line as we exit 2024. — Sarah O’Keefe Related links: Pulse check on AI: May, 2024 (podcast) AI in the content lifecycle (white paper) The future of AI: structured content is key (webinar) Savor the season with Scriptorium: Our favorite holiday recipes LinkedIn: Sarah O’Keefe Christine Cuellar Transcript: Disclaimer: This is a machine-generated transcript with edits. Christine Cuellar: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, it’s time for another pulse check on AI. So our last check-in was in May, which in AI terms is ancient history, so today, Sarah O’Keefe and I are gonna be talking about what’s changed and how it can affect your content operations. Sarah, welcome to the show. Sarah O’Keefe: Hey Christine, thanks. CC: Yeah. So, as we’re currently recording this, 2024 is winding down. People are preparing for 2025. Throughout this year, we went to a lot of different conferences and events. Of course, everybody’s talking about AI. So Sarah, based on the events that you just recently got back from, now that you finally get to be in your own house, what are your thoughts about what’s going on with AI in the industry right now? SO: It’s still a huge topic of conversation. 
Lots of people are talking about AI; a huge percentage of presentations, you know, had AI in the title or referenced it or talked about it. With that said, it seems like we’re seeing a little more sort of real world, hey, here’s some things we tried, here’s what’s working, here’s what’s not working. CC: Mm-hmm. SO: And I’ll also say that we’re starting to see a really big split between the AI in regulatory environments, which would include the entire EU plus certain kinds of industries, and the sort of wild, wild west of we can do anything. CC: Yeah. So, you know, when AI first came onto the scene, there was mostly, you know, let’s just all adopt this right now, let’s go for it full steam ahead, especially marketers; as a marketer, I can say that because we’re definitely gung-ho about stuff like that. It sounds like the perspective has shifted to being more balanced overall. Is that what you would say? SO: Yeah, I mean, that’s the typical technology adoption curve, right? You know, you have your peak of inflated expectations, and then you have, I think it’s the valley, it’s not the valley of despair, but it’s something like that. But you know, you sort of go from this can do anything, this thing is so cool, go, go, go, go, go, to a more realistic, okay, what can it actually do? And this is true for AI or anything else: what can it do? What can’t it do? What does it do well? CC: Mm. SO: Where do we need to put some guardrails around it? What are some surprises in terms of things that are and are not working? CC: Yeah. And at some of the conferences we were at this year, our team had some things to say about AI as well, so we will link some of the recap blog posts we have in the show notes. Sarah, what are some of the things AI can’t do right now? What are some of the big concerns about AI that are still unanswered, unresolved? 
SO: So in the big picture, as we’re starting to see people roll out AI-based things in the real world, whether it’s tool sets or content ops or anything else, we’re starting to see some really interesting developments and some really interesting assessments. Number one is that when you look at those little AI snippets that you get now when you do a search and it returns a bunch of search results, well, actually it returns a page of ads. CC: Yes. SO: And then some real results under the ads. And then above that, it returns an AI overview snippet. So those are surprisingly bad. You do a search on something that you know a little bit of something about and see what you get. And you will see content in there that is just flat wrong. I’m not saying it’s not the best summary. I’m saying it is factually incorrect, right? CC: Yeah, I hate them right now. SO: So those are surprisingly bad. And talking about search for a minute, which ties into your question about marketing, there are some real problems now with SEO, with search engine optimization, because I’m optimizing my content to be included in an AI overview that is, A, wrong and, B, doesn’t actually give me credit. Pre-AI, those snippets that showed up would say, I sourced it from over here. CC: Mm-hmm. SO: And in many cases now, the AI overview is just like the sort of summary paragraph with no particular, there’s no citation. It doesn’t say where it came from. So what’s in it for me as a content creator? Why am I creating content that’s going to get taken over by the AI overview and then not lead to people going to my webpage, right? How does that help me? CC: Yeah. Yeah. SO: So there are some real issues there. There’s a move in the direction of thinking about levels of information. So thinking about very superficial information. How much does a cup of flour weigh? That type of thing. That’s just a fact and you can get it pretty much anywhere, we hope. And then there’s deeper information. 
Why is it better to weigh flour than to measure it by volume, if you’re a baker? CC: Yeah. SO: And what does it look like to use weights? And are there differences among different kinds of flours? And what are some of the things I should consider when I’m going in that direction? So one of those, you know, “a cup of all-purpose flour weighs 120 grams,” is a useful fact. And I don’t know if I really care if people peruse that further or come to my website for more about flour. The deeper information, the more detailed discussion of, you know, whole wheat versus all-purpose versus European flours versus American flours and all these other kinds of things, that requires more in-depth information and that is not so subject to being condensed into an AI summary. So there’s that distinction between, you know, quick and dirty information versus deeper information, information that goes into a topic. CC: Mm-hmm. SO: We have a huge problem with disinformation and misinformation, with information that is just flat out not correct, because, the way AI tools work, it is trivially easy to generate content at scale. Tons and tons and tons and tons and tons of content. And because it’s trivially easy, CC: Mm-hmm. SO: That means it’s also trivially easy for me to generate, for example, a couple thousand fake reviews for my new product or a couple thousand websites for my fake products. We can fractionalize down the generation of content. CC: Yeah. SO: And, you know, the interesting part of this is that it implies that you could potentially, you know, we talk about doing A/B testing in marketing, you could do A/B/C/D/E/F/G testing pretty easily because you can generate lots and lots of variants and kind of throw a bunch of stuff against the wall and see what works. 
But the bad side of this is that you can generate fake news, fake information, fake content that is going to be highly, highly problematic from a content consumer trust point of view. And so that I think is the third piece that we’re looking at now that is going to be critical going forward. And that is information trust, content reputation or the reputation of content creators, and credibility. CC: Mm-hmm. SO: So for those of you listening to this podcast, how do you know it’s really us? Do you know these are live humans actually recording this podcast? Because there’s now the ability to generate synthetic audio, and you can create a perfectly plausible podcast, which is really hard to say, unless probably you’re an AI, and then it can probably say it perfectly. But, perfectly plausible podcasts aside, how do you know that what you’re receiving in terms of content, digital content in particular, is actually trustworthy? And so I think ultimately there’s going to need to be some tooling around verification, around authenticity, around, you know, this was not edited. You know, in the same way that you want to be able to verify that a photo, for example, is an accurate record of what happened when that photo was taken. CC: Yeah. SO: And if I went in and photoshopped it and cleaned it up, then that’s something that should be acknowledged. By the way, for the record, we do record these things and we do edit them. We try to stay on the right side of just editing out dumb mistakes and not editing it in a misleading way. CC: Yeah, ums and ahs and yeah. SO: So it’s not like we record the whole thing from soup to nuts and never, you know, never break in and never edit things out, because believe me, I’ve said some stuff that needed to be taken away. If you ever get the raw files, they are full of, “I didn’t mean to say that,” and, “you might want to take that out.” CC: Me too, so many times. “Let me start over,” that’s me a lot all the time. SO: Yeah, sorry. 
Starting over. OK, but the point is that when we put out a podcast, we are saying this is our opinion, this is our content, and we’re gonna stand behind it. Whereas if it’s synthetic or AI generated or AI generated by these non-humans, you can do these weird, let’s make a podcast out of a blog post, well, okay, but what’s the value of that and why would I tr
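The verification tooling speculated about above does not exist as a single product, but one rough sketch of the idea is content that ships with a signature consumers can check. This toy example uses an HMAC over the text; the key and sample text are invented for illustration, and real provenance efforts (such as C2PA for media) use richer, asymmetric signing schemes rather than a shared secret.

```python
import hashlib
import hmac

# Invented signing key; a real publisher would use an asymmetric key pair
# so consumers could verify without being able to forge.
SECRET_KEY = b"publisher-signing-key"

def sign(content: str, key: bytes = SECRET_KEY) -> str:
    """Produce a hex signature the publisher ships alongside the content."""
    return hmac.new(key, content.encode("utf-8"), hashlib.sha256).hexdigest()

def verify(content: str, signature: str, key: bytes = SECRET_KEY) -> bool:
    """Check that the content matches its published signature."""
    return hmac.compare_digest(sign(content, key), signature)

episode = "This is our opinion, this is our content, and we stand behind it."
sig = sign(episode)

# Unmodified content verifies; any tampering breaks the signature.
assert verify(episode, sig)
assert not verify(episode + " [edited]", sig)
```

The design point is that trust attaches to a verifiable artifact, not to the channel the content arrived through.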

12-02
19:41

Do enterprise content operations exist?

Is it really possible to configure enterprise content—technical, support, learning & training, marketing, and more—to create a seamless experience for your end users? In episode 177 of the Content Strategy Experts podcast, Sarah O’Keefe and Bill Swallow discuss the reality of enterprise content operations: do they truly exist in the current content landscape? What obstacles hold the industry back? How can organizations move forward? Sarah: You’ve got to get your terminology and your taxonomy in alignment. Most of the industry, I am confident in saying, has gone with option D, which is give up. “We have silos. Our silos are great. We’re going to be in our silos, and I don’t like those people over in learning content anyway. I don’t like those people in techcomm anyway. They’re weird. They’re focused on the wrong things,” says everybody, and so they’re just not doing it. I think that does a great disservice to the end users, but that’s the reality of where most people are right now. Bill: Right, because the end user is left holding the bag trying to find information using terminology from one set of content and not finding it in another and just having a completely different experience. Related links: The business case for content operations (white paper) Replatforming an early DITA implementation (case study) Hear Sarah speak about The reality of enterprise customer content at tcworld 2024! Hear Bill speak about The challenges of replatforming, also at tcworld 2024. LinkedIn: Sarah O’Keefe Bill Swallow Transcript: Disclaimer: This is a machine-generated transcript with edits. Bill Swallow: Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about enterprise content operations. Does it actually exist? And if so, what does it look like? And if not, how can we get there? Hi, everyone. 
I’m Bill Swallow. Sarah O’Keefe: And I’m Sarah O’Keefe. BS: And Sarah, they let us do another podcast together. SO: Mistakes were made. BS: So today we’re talking a little bit about enterprise content operations. If it exists, what it looks like. If it doesn’t, why doesn’t it exist? What can people do to get there? SO: So enterprise content ops, I guess first we have to define our terms a little bit. Content operations, content ops, is the system that you use to manage your content. And by manage, I mean not the software, but how do you develop it, how do you author it, how do you control it, how do you deliver it, how do you retire it, all that stuff. So content ops is the overarching system that manages your content lifecycle. And when we look at content ops from that perspective, and of course we’re generally focused on technical content, but when we talk about enterprise content ops, it’s customer-facing content, which includes techcomm, but also learning content, support content, product data potentially, and some other things like that. And ultimately, when I look at this, again bringing the lens back or going back to the 10,000-foot view, we have some enterprise solutions but only on the delivery side. The authoring side of this is basically a wasteland. So I have the capability of creating technical content, learning content, support content, and putting them all into what appears to be some sort of a unified delivery system. But what I don’t really have is the ability to manage them on the back end in a unified way, and that’s what I want to talk about today. BS: So those who are delivering in that fashion, so being able to provide customer-facing information in a unified way, as far as their system for content ops goes, it’s more, I would say, human-based. So it’s a lot of workflow. It’s a lot of actual management of content and management of content processes outside of a unified system. 
SO: So almost certainly they don’t have a unified system for all the content, and we’ll talk about why that is I think in a minute. It’s not necessarily human-based, it’s more that it’s fragmented. So the techcomm group has their system, and the learning group has their system, and the support team has their system, et cetera. And then what we’re doing is we’re saying, okay, well once you’ve authored all this stuff in your snowflake system, then we’ll bring it over to the delivery side where we have some sort of a portal, a website portal, a content delivery platform (CDP), that puts it all together and makes it appear to the end user that those things are all in a unified presentation. But they’re not coming from the same place, and that causes some problems on the backend. BS: Right, and ultimately the user of that content doesn’t really care if it’s a unified presentation. They just want their stuff. They don’t want to have a disjointed experience, and they want to be able to find what they’re looking for regardless of what type of content it is. SO: Right, and the cliche is “don’t ship your org chart,” which is 100% what we’re doing. And so let’s talk a little bit about what does that mean, what are the pre-reqs? So in order to have something that appears to me as the content consumer to be unified, well for starters, you mentioned search. I have to have search that performs across all the different content types and returns the relevant information. And what that usually means is that I have to have unified terminology. I’m using the same words for the same things in all the different systems. And I need unified taxonomy, classification system metadata, so that when I do a search, and maybe I’m categorizing or classifying things down and filtering, that filtering works the same way across all the content that I’ve put into the magic portal. 
So taxonomy and terminology are the things that’ll make your search, relatively speaking, perform better. So we have this on the delivery side and that’s okay-ish, or it can be, but then let’s look at what we’re doing on the authoring side of things because that’s where these problems start. BS: So what do they start looking like? SO: Well, maybe let’s focus in on techcomm and learning content specifically. We’ll just take those two because if I try and talk about all of them, we’re going to be here for days and nobody wants that. All right, so I have technical content, user guides, online help, quick snippets, how-tos. And I have learning, training content, e-learning, which is enabling content: I’m going to try and teach you how to do the thing in the system so that you can get your job done. Now, let’s go all the way back to the world where we have an instructional designer or a learning content developer and a technical content developer. So for starters, almost always those are two different people, just right off the bat. And instructional designers tend to be more concerned with the learning experience, how am I going to deliver learning and performance support to the learner? And the technical writers, technical content people, tend to be more interested in how do I cover the universe of what’s in this tool set, or this product, and cover all the possible reasonable tasks that you might need to perform, the reference information you need, the concepts that you need? It’s a lot of the same information. There’s a slightly different lens on it. And in the big picture, we should be able to take a procedure out of the technical content, step one, step two, step three, step four, and pretty much use that in a learning context. In a learning context, it’s going to be, hey, when you arrive for your job at the bank every morning you need to do things with cash that I don’t understand. 
And here’s a procedure, and this is what you’re going to do, steps 1, 2, 3, 4, 5, and you need to do them this way and you need to write them down, and it tends to be a little more policy and governance focused, but broadly it’s the same procedure. So there should be the opportunity to reuse that content. And big picture, a high-level estimate is probably something like 50% content overlap. So 50% of the learning content can or should be sourced from the technical content. The technical content is probably a superset in the sense that the technical content covers, or should cover, all the things you can do, and training covers the most common things or the most important things that you need to do. It probably doesn’t cover a hundred percent of your use cases. Okay, so now let’s talk about tools. BS: Right, because I was going to say these two people, the technical writer and the training developer, they are using, at least historically, two very different sets of tools to get their job done. SO: Right. So unified content solutions, without getting into too many of the specifics, which will get me in big trouble, basically the vendors are working on it, but they’re not there yet. There’s a lot of point solutions. There’s a lot of, oh yes, we have a solution for techcomm and we have a solution for learning and we have a delivery solution, but there’s not a unified back end where you can do all this work. And some of the vendors have some of these tools in their stable, some of them don’t. But from my point of view, it doesn’t really make a whole lot of difference whether you buy two point solutions from separate vendors or from the same vendor, because right now they’re disconnected. BS: They’re two point solutions. SO: Yeah, they’re all point solutions. So it’s not good. And then that brings us to how can we unify this today? What can we do and what kind of solutions are our customers building or are we building with our customers? So a couple of things here. 
Option A is you take your structured content solution and you say, “Okay, learning content people, we’re going to put you in structured content. We’re going to move you into the component content management system. We’re going to topi
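The unified taxonomy and terminology requirement from this episode can be illustrated with a toy cross-silo search: if techcomm and learning content carry the same classification metadata, one filter query works across both. The silo names, facet keys, and titles here are invented for the example.

```python
# Invented content records from two authoring silos that share one taxonomy.
CONTENT = [
    {"silo": "techcomm", "title": "Configure single sign-on",
     "product": "widget-pro", "type": "task"},
    {"silo": "learning", "title": "SSO setup walkthrough",
     "product": "widget-pro", "type": "task"},
    {"silo": "techcomm", "title": "API reference",
     "product": "widget-lite", "type": "reference"},
]

def faceted_search(records, **facets):
    """Filter records on shared taxonomy facets, regardless of source silo."""
    return [
        r["title"]
        for r in records
        if all(r.get(facet) == value for facet, value in facets.items())
    ]

# One query spans both silos because the metadata is unified.
# If each silo used different facet names or values ("type" vs. "contentKind",
# "task" vs. "how-to"), the same query would silently miss half the content.
print(faceted_search(CONTENT, product="widget-pro", type="task"))
```

This is the delivery-portal view of the problem; the episode's point is that getting this metadata consistent is authoring-side work, not something the portal can bolt on afterward.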

10-21
24:34

Survive the descent: planning your content ops exit strategy

Whether you’re surviving a content operations project or a journey through treacherous caverns, it’s crucial to plan your way out before you begin. In episode 176 of the Content Strategy Experts podcast, Alan Pringle and Christine Cuellar unpack the parallels between navigating horror-filled caves and building a content ops exit strategy. Alan Pringle: When you’re choosing tools, if you end up with something that is super proprietary, has its own file formats, and so on, that means it’s probably gonna be harder to extract your content from that system. A good example of this is those of you with Samsung Android phones. You have got this proprietary layer where it may even insert things into your source code that is very particular to that product line. So look at how proprietary your tool or toolchain is and how hard it’s going to be to export. That should be an early question you ask during even the RFP process. How do people get out of your system? I realize that sounds absolutely bat-you-know-what to be telling people to be thinking about something like that when you’re just getting rolling– Christine Cuellar: Appropriate for a cave analogy, right? Alan Pringle: Yes, true. But you should be, you absolutely should be. Related links: Nightmare on ContentOps Street (podcast) Enterprise content operations in action at NetApp (podcast) Content creature feature LinkedIn: Alan Pringle Christine Cuellar Transcript: Disclaimer: This is a machine-generated transcript with edits. Christine Cuellar: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’re talking about setting your ContentOps project up for success by starting with the end in mind, or in other words, planning your exit strategy at the beginning of your project. So I’m Christine Cuellar, with me today is Alan Pringle. Hey, Alan. Alan Pringle: Hey there. 
CC: And I know it can probably sound a bit defeatist to start a project by thinking about the end of the project and getting out of a new process that maybe you’re building from the beginning. So let’s talk a little bit more about that. Why are we talking about exit strategy today? AP: Because everything comes to an end. Every technology, every tool, and we as human beings, we all come to an end. And at some point, you are going to have tools, you’re gonna have technology and process that no longer supports your needs. So if you think about that ahead of time, and you’re ready for that inevitable thing, which will happen, you’re gonna be much better off. CC: Yeah. So this conversation started around the news of the DocBook Technical Committee closing, and that’s kind of a big deal for a lot of people, and it kind of sparked this internal conversation about like, you know, what if that happened to you? How can people avoid getting caught by surprise? And of course, as Alan just mentioned, the answer to that is really to begin with the end in mind, to have an exit strategy because everything does end at some point. So this got me thinking about, you know, I don’t know, Alan, you’ve seen the horror movie The Descent, right? You’ve seen that movie? Yes, because it’s amazing and it’s a horror movie and it’s awesome. So it made me kind of think of that because, you know, this group, and I’m not going to spoil it, no spoilers for people who haven’t seen it yet, but, if you haven’t, go watch it. The first one’s my favorite. I haven’t seen the second one, so I’m biased. Anyways, that’s not the point. This group plans to go along one path, you know, down these caves which are definitely in North Carolina, right Alan? That’s definitely where they take place. AP: Well, they say it is in North Carolina, but it is quite clearly not filmed in North Carolina. 
As someone who is familiar with Western North Carolina, I had to laugh at this movie trying to pass off somewhere in the UK as, like, the Appalachian Mountains, but that’s just a quibble. So go ahead with your story.

CC: Anyways, yeah, they got a mountain in there, right? And then there’s a path into the mountain. Of course, they’re going to explore this deep, dark cave. So they’re descending, as the name implies. And so they’re planning to go along one path. I think someone maybe tricked someone else along the way. I can’t remember. But they’re planning on going down one path. And there’s a lot of things that begin to happen that they didn’t plan on. And in one scene in particular, there’s a cave that collapses, and of course that means they have to pivot, right?

AP: Yeah.

CC: So when you’re thinking about building an exit strategy and trying to plan for things that you can’t anticipate, how do you anticipate things you can’t anticipate?

AP: Well, first of all, let’s be clear. All the things that happened in that movie happened in a period of, like, two hours or an hour and a half. And part of the issue with any kind of process and operations is things can slowly start to go badly and you just kind of keep on trucking and really don’t pay attention to it. But…

CC: Yes.

AP: It’s not just about fine-tuning your operations. That’s a whole other conversation. Your process is going to require updating every once in a while. There are going to be new requirements, and you need to address them in your content ops by changing your process, updating your tools, maybe adding something new. What we’re talking about here is when those tools and that process are coming to an end, for example, because a particular piece of software is being deprecated. It is end of life. What are you going to do?

CC: Mm-hmm.

AP: What if there is a merger? You have a merger and there are two systems doing the same thing. One of those systems is going to lose and go away.
Why are you going to maintain two of the same systems? So you’re going to have to figure out how to pivot to get to that.

CC: Mm-hmm.

AP: So there are all of these things that can happen that mean you have got to exit whatever you were doing and move into something new, something different. And the reasons are many, like I just mentioned, but the end result is: are you ready for when that happens? In a lot of cases, frankly, people aren’t.

CC: Yeah. So if you could give listeners three pieces of advice on how to be less dependent on a particular system, if you had to narrow it down to three, what would you suggest to help them not be just dependent on one particular system, or maybe a set of systems?

AP: One thing is when you’re choosing tools, if you end up with something that is super proprietary, has its own file formats, et cetera, that means it’s probably gonna be harder to extract your content from that system because it is proprietary. Even if your content is in a standard, and in a lot of cases, of course, I’m talking about DITA, the Darwin Information Typing Architecture, an XML standard. Even with DITA, even though it’s open source and a standard, some of the systems that can manage DITA content put their own proprietary layer on top. A good example of this is, for example, those of you with Samsung Android phones. I’ve had one in the past.

CC: Yeah, that’s me.

AP: Samsung puts their own proprietary layer on top of the Android operating system, and a lot of that stuff, frankly, I hate, but that’s not the point of this conversation. But it’s the same issue. You have got this proprietary layer where it may even insert things into your source code that are very particular to that product line. So look at how proprietary your tool or toolchain is and how hard it’s going to be to export. That should be an early question you ask, even during the RFP process: how do people get out of your system?
And I realize that sounds absolutely bat-you-know-what, to be telling people to be thinking about something like that when you’re just getting rolling–

CC: Appropriate for a cave analogy, right?

AP: Yes, true. But you should be, you absolutely should be.

CC: And I know we’re going to get onto the other two things to think about in just a second, but a question there: what are some maybe green flags for how that question should be received, or how you want that question to be received, if it’s going to maybe be the right fit?

AP: I would hope some variation of the answer would be “you can export to this standard,” although that often is probably not the answer that you’re going to get.

CC: Okay, a standard. What are some other things people need to keep in mind in order to not be system-dependent?

AP: I don’t know if it’s so much system-dependent, but you need to think culturally about what this means. People become very attached to their tools because they become very adept. They become experts in how to manipulate and do whatever with a certain tool set. And they feel like, you know, I am in total control here. I know what I’m doing. Things are running well.

CC: Yeah.

AP: And when it turns out that tool is going to have to go away, their entire process and their focus on being an expert, it’s blown. It’s just blown away. And that can be very hard to deal with from a person level, a people level, having to tell people, yeah, this is a shock to your system. You’ve been using this tool forever. You’re really good at it. Unfortunately, that tool is being discontinued. We’re gonna have to move to something else. That can be very hard for people to swallow, and it’s understandable.

CC: Mm-hmm.

AP: It’s completely understandable.
One other thing that I will mention is, if you can, get your source content, not the actual delivery points I’m talking about here, but wherever you’re storing your source, into some kind of format-neutral file format. And again, I’m talking mostly about XML content, Extensible Markup Language, because when you create that content, you are not building in the formatting. You are creating it in a markup language. And the minute your content is in a markup language
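To illustrate the point about format-neutral source, here is a minimal sketch of a DITA concept topic. The element names come from the DITA standard; the id value and the text are invented for illustration. Note that nothing in the source says how the title or paragraph should look; fonts, colors, and layout are applied downstream by whatever publishing toolchain renders the content.

```xml
<!-- Minimal DITA concept topic: the source carries structure and meaning,
     not formatting. Styling is applied later by the publishing pipeline,
     which is what makes the content portable between systems. -->
<concept id="exit-strategy">
  <title>Planning your exit strategy</title>
  <conbody>
    <p>Ask vendors early, even during the RFP process:
       how do people get out of your system?</p>
  </conbody>
</concept>
```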
