Digital Asset Management - DAM
Description
Michael Wells, Founder and MD of Third Light, explains the ins and outs of what DAM (Digital Asset Management) actually is. We'll find out how it works, how it can benefit an organisation, how AI helps to classify digital assets, why digital assets are far more than just photographs, and what a company can do if it can't afford a "full-blown" system (at 25 mins).
Michael also talks about how he has grown his technology business over nearly 20 years and the challenges that brings.
TRANSCRIPT
Kiran Kapur (00:13):
My guest this week is Michael Wells, who is the Founder of Third Light, which is a Cambridge-based software company and they do digital asset management. Now, if your idea of digital assets is having a whole load of things piled onto the computer with no real structure to them, Michael will have the answers. So, Michael, welcome. Can we start with what digital assets actually are?
Michael Wells (00:36):
Hi, Kiran. It's nice to talk to you again as well. Thanks for inviting me. Digital assets, well, let's just look at what we all know about, which is the files that we need for our projects, particularly in marketing. So things like photography, videos, logos, brochures, promotional art: everything that you do in marketing creates that kind of content. And so you've got lots of files, but what you don't have in a single store, most of the time, is information about the files, and we call that metadata. So when you add information, metadata, to the files and you organise them in a particular way, maybe using that metadata, then you create digital assets. And the reason we call them digital assets is partly just to recognise that they're usually digital visual content, but also that they are assets. In other words, they do have some value, and that value comes from having organised them so that they can be reused.
Kiran Kapur (01:33):
Okay. So as soon as anyone uses the word meta, I start to panic. So what is metadata? Can you give me an example?
Michael Wells (01:40):
I can, yeah, sure. So let me describe one to you. Now, we're on a radio-style interview, so I can't show you the photo I have in front of me, but I've got a beautiful picture of a scene not far from here in the West Suffolk alps. And I need to put it into text for you, so I'll describe it as rolling fields with hedgerows. And as I continue to do that, you can see that really I'm describing it to you. So the file might be the JPEG of this photograph, and the metadata in this case is me captioning it for you. So I'm storing that text with the image, and that's the metadata about the image.
Kiran Kapur (02:22):
And how detailed would that be? Because you've just done a lovely description of it being a green area with a path that goes through it, et cetera. Would I capture all of that in my metadata?
Michael Wells (02:28):
Actually, I would say you can capture even more than that. That particular caption will be great for a photograph, but if you're working in a project environment around marketing assets, then your metadata could be something much more to do with the project. It could be who took the photograph, who owns the project, the dates when it goes live, maybe the website addresses where the content will be stored. So metadata you can think of as a really broad concept: it's any information at all that you'd like to store that relates to something you've put into a library as your digital asset.
Kiran Kapur (03:04):
Okay. So there's various things I want to unpick there, but let's carry on with the data. And then I'll come back to the concept of a library, which is also slightly scary. So I can put in lots and lots and lots of data. Doesn't that just get confusing?
Michael Wells (03:17):
Well, good point. I mean, yes, you could argue, aren't we just storing lots of stuff in a big heap? And of course that's the whole point of the software: to prevent that from happening and to make it feel organised and useful. So if I log into our system and I've got lots of photographs of scenery, then maybe I want to find things which are actually in a particular county in the UK. So I might click on a keyword for Cambridgeshire. And when I click it, all of the non-Cambridgeshire content is filtered out automatically. And I would go further and say, actually, I shouldn't have to type in the word Cambridgeshire. It should be presented to me as one of the possible ways of filtering the data. So putting a useful and friendly user interface in front of metadata is a great thing to do. And without it, yes, you run the risk of producing a glorified spreadsheet.
Michael Wells (04:12):
The other thing I would say, and something that lots of people find a bit daunting, is that there seem to be lots of things to type in. That's true, but lots of that metadata these days can be set by artificial intelligence, particularly for photographs. We can scan them and automatically tag them with what they contain, with pretty good accuracy.
Kiran Kapur (04:35):
Whoa, hang on. So I can literally sit it in front of some form of computer, the computer scans it and it tells me what's in the photo.
Michael Wells (04:42):
That's right. And there's a huge market for that, of course. It's a massive labour-saving tool. But yeah, if we upload a thousand new photos, in less than a minute we can have them all described in a rich way, particularly with keywords. That's a very useful feature of AI tagging these days.
Kiran Kapur (05:04):
So does the user set the keywords or does the AI work out what those are?
Michael Wells (05:09):
So the AI will probably do the keywords. It finds the dominant colours, and it may even recognise things like, if it's an animal, it could tell you the species, the genus, all sorts of really domain-specific knowledge. It's ever so good at this stuff. These systems have been trained with a huge amount of knowledge about a huge range of content types, so they can really do a great job. And if your content is too specific and the AI doesn't get it, you can actually train them as well, so that they know about something really unique to your business, or maybe the actual people who work at your company. You can train them to recognise their faces.
Kiran Kapur (05:48):
I love the way you're sounding very calm about this. I have to say my head's just exploded that the machines can do this to that extent. Okay. I can understand it on photos because it's an image and it's static. Does that also work on other things?
Michael Wells (06:03):
Yeah. I mean, there are tools that can do speech to text, and they're getting quite good. Actually, stepping out of the world of DAM for a moment: before we really understood artificial intelligence and machine learning, there were quite sophisticated tools that could do dictation, but you had to be very, very careful about how you spoke to them, and they wouldn't be able to keep up with a conversation like we're having. But with the advent of AI, all of that suddenly became incredibly smart and really much more adaptable. And we benefit from that in lots of ways. I mean, if you have a smart speaker and you mumble, "Hey Siri," this, that, and the other, even in an echoey room, across the room, they understand you now, and the reason they're good at this is because of AI.
Michael Wells (06:52):
So yep, we can extract dialogue from videos, from television broadcasts, for example. Lots of captioning is now possible using software instead of people. And yes, you can extract transcripts from audio files as well.
Kiran Kapur (07:08):
So having looked at the metadata, and possibly used AI to organise it, what then happens with it? You used the phrase "library".
Michael Wells (07:17):
So the product that we're suggesting is useful here is actually a web-based application, which is important because it means everybody can access it from anywhere. So you're obtaining an application that you can reach through your web browser. And when you log into it, obviously with your own credentials, you get a personalised space inside it where you can store, manage and organise things. And part of it may also be access to shared areas of content, which we call spaces. So you might have a team for marketing, a team for sales, a team for R&D, and they all have their own ideas about what they'll store and how they'll organise it, but they can share things between each other.
Michael Wells (08:05):
And things like the metadata, of course, might reflect their particular needs, to do with what they're trying to store. So whether or not something is top secret could be relevant to the R&D team, but whether or not something actually has model rights could be more relevant to the marketing team. So metadata is the supporting structure that makes the files valuable as content, because it tells you about the files that you've got. And then the digital asset management system also provides a nice, convenient way of getting in and using the files in a shared place.
Kiran Kapur (08:40):
And digital assets, we've talked about them being photos and logos and videos, but I noticed one of the case studies you have on your website was actually somebody that was dealing with... It was a university talking about the approval forms they'd had to get, and they had to hold all the GDPR forms. So it can be literally anything.
Michael Wells (08:58):
It can, yeah. In fact, a kind of metadata is when you relate files to each other. So if you've got a photograph, and universities and schools are a very good example, you've got pictures of, say, a sports day, but there are children in the photo and you need to have consent from the parents. So you obviously have some content, the photo, but you also have metadata about that photo, which in