Interview with Emily Kolvitz

Update: 2017-07-17

An audio interview with Emily Kolvitz on image recognition.

Listen and subscribe on Apple Podcasts, AudioBoom, CastBox, Google Play, RadioPublic, or TuneIn.


Keywording Now: Practical Advice on using Image Recognition and Keywording Services

Now available


Henrik de Gyor:  This is Henrik de Gyor. Today, I’m speaking with Emily Kolvitz.

Emily, how are you?

Emily Kolvitz:  I’m doing great. How are you, Henrik?

Henrik:  Good. Emily, who are you and what do you do?

Emily:  I’m a DAM consultant, marketer, and digital asset manager for Bynder. We’re an award-winning digital asset management software that allows brands to create, find, and use content such as documents, graphics, and videos. Before joining Bynder, I worked as a digital asset manager for JCPenney. I have an MLIS, a master’s in library and information studies, from the University of Oklahoma. I’ve worked with hundreds of different clients on their DAM implementations, providing best practices and consultation. Because I work with clients, I’m often able to see the very real-world implications of what AI tagging can actually be like with live collections of content. The successes and challenges are very real, very tangible, and that’s not always something you see when you’re watching a webinar or a product demo.

Henrik:  Emily, what are the biggest challenges and successes you’ve seen with image recognition?

Emily:  For challenges, of course, there are some challenges and opportunities for improvement when it comes to AI tagging. I think many of them have to do with the application and configuration of the AI, not necessarily the technology itself. One specific limitation in our own implementation of AI is that we only have US English tags at this time. We wanted to stake a claim in the AI space very quickly, so starting with English was part of our MVP for AI features. Obviously, there’s more to come in the future. Some other limitations include things like only certain file types being scanned, such as JPEG and IMG, so there’s an opportunity to extend this out to things like video, documents, etc. Many other companies are already doing this, companies like DocumentCloud, for example, which scans your documents through Thomson Reuters’ OpenCalais to extract entities, topic codes, events, relations, and social tags. In addition, there’s a full list of AWS limitations on the Rekognition site as well, which is what we use.

In terms of more general things I think need to be considered challenges, there are things like mistakenly tagging something in a way that’s hurtful or harmful in some manner. Those are things that don’t usually become apparent until after the fact. I think that AI tagging is very much in its infancy in terms of its application and that we’ll see it greatly grow and mature in the coming years, where we may start to see challenges like information and privacy concerns pertaining to facial recognition. Being able to opt out of these things will be a big need for clients.

As far as successes go, AI tagging detects objects and scenes, and can identify thousands of objects such as vehicles, pets, and furniture, and it provides a confidence score, which simply tells you how confident the AI is that a tag is relevant and accurate. It’ll detect scenes within an image, so things like a sunset or a beach. This has really big implications for search, filtering, and curating very large image libraries. From my perspective alone, the time-saving factor for DAM managers, digital asset librarians, content managers, and admins of the system is probably one of the biggest successes for AI tagging. They spend an enormous amount of time and resources on metadata application alone. It’s tedious, thankless work, but absolutely necessary so that people can find the assets they need.
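As a rough illustration of the confidence scores Emily describes, here is a minimal Python sketch of filtering auto-tags by confidence. The commented `detect_labels` call follows the AWS Rekognition API as exposed by boto3 (the service Emily mentions); the `filter_labels` helper, the sample response, and the 70-point threshold are illustrative assumptions, not Bynder’s actual implementation.

```python
# Sketch: keep only auto-tags whose confidence clears a threshold.
# A real call would look roughly like this (requires AWS credentials):
#   import boto3
#   client = boto3.client("rekognition")
#   response = client.detect_labels(
#       Image={"Bytes": image_bytes}, MaxLabels=10, MinConfidence=70
#   )
# The response's "Labels" list holds entries like
#   {"Name": "Beach", "Confidence": 96.1}.

def filter_labels(labels, min_confidence=70.0):
    """Return (name, confidence) pairs at or above the threshold."""
    return [
        (label["Name"], label["Confidence"])
        for label in labels
        if label["Confidence"] >= min_confidence
    ]

# Hypothetical response fragment, shaped like Rekognition's output:
sample = [
    {"Name": "Beach", "Confidence": 96.1},
    {"Name": "Sunset", "Confidence": 88.4},
    {"Name": "Furniture", "Confidence": 42.3},  # too uncertain to keep
]

print(filter_labels(sample))  # the low-confidence "Furniture" tag is dropped
```

Raising `min_confidence` trades recall for precision, which is exactly the knob a DAM admin would tune when deciding which auto-tags are trustworthy enough to surface in search.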

In terms of other things, I think it’s also helping to put minimum viable metadata on a very large digital asset collection that may otherwise remain untagged. For DAM, it means that uploaded images get auto-tagged, helping with categorization, identification, and searchability of assets that might otherwise be buried in the depths of your collection without metadata.

Henrik:  Emily, as of July 2017, how do you see image recognition changing?

Emily:  It’s becoming a de facto feature of digital asset management systems and less of a fun, nice-to-have novelty. It’s becoming something you have to have.

Henrik:  What advice would you like to share with people looking into image recognition?

Emily:  This is a good one. If you can, provide a sample of your assets to different vendors and ask for results. It’s very easy to see a webinar or a product video showing 100% accuracy and it’s really neat, but it’s also really important to try out a wide variety of image assets to see where the real limitations are for each image type and the associated algorithms.
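Emily’s advice about testing vendors against a sample of your own assets could be scripted along these lines. This is a hypothetical sketch: the sample filenames, the hand-curated expected tags, the vendor results, and the Jaccard-similarity scoring are all assumptions for illustration, not a standard benchmark.

```python
# Sketch: score a vendor's auto-tags against curated "expected" tags
# for each sample asset, using Jaccard similarity (intersection / union).

def jaccard(expected, returned):
    """Similarity between two tag sets: 1.0 = identical, 0.0 = disjoint."""
    expected, returned = set(expected), set(returned)
    if not expected and not returned:
        return 1.0
    return len(expected & returned) / len(expected | returned)

def score_vendor(samples, vendor_tags):
    """Average per-asset Jaccard score for one vendor over all samples."""
    scores = [
        jaccard(expected, vendor_tags.get(asset, []))
        for asset, expected in samples.items()
    ]
    return sum(scores) / len(scores)

# Hypothetical sample library with hand-curated expected tags:
samples = {
    "beach.jpg": ["beach", "sunset", "ocean"],
    "office.jpg": ["desk", "furniture", "laptop"],
}
# Hypothetical tags returned by one vendor for the same files:
vendor_a = {
    "beach.jpg": ["beach", "sunset", "sky"],
    "office.jpg": ["desk", "furniture"],
}

print(round(score_vendor(samples, vendor_a), 2))  # → 0.58
```

Running the same sample set through each candidate vendor and comparing the averages makes the “try a wide variety of image assets” advice concrete, and the per-asset scores show exactly which image types each algorithm struggles with.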

Henrik:  Where can we find more information?

Emily:  There are lots of places on the internet you can find more information about AI tagging. You can find information from us specifically on our blog, and Amazon’s Rekognition website has a great FAQ that you can check out. We also gave a presentation on image recognition and AI at the IPTC Photo Metadata Conference in Germany. There’s a PDF and a video available of this presentation on

Henrik:  Great. Well, thanks Emily.

Emily:  Thank you, Henrik.

Henrik:  For more on this, visit

Thanks again.

For a book about this, visit









Henrik de Gyor