Tim's Weblog
Tim Strehle’s links and thoughts on Web apps, software development and Digital Asset Management, since 2002.
2015-10-15

Digital Asset Management.com: DAM Champ: Tim Strehle, Part 2

This article was written by Laurel Norris and published on the Digital Asset Management.com site (owned by DAM vendor Widen) on October 15, 2015, under the URL http://digitalassetmanagement.com/blog/dam-champ-tim-strehle-part-2/. Because that site’s blog posts are not available anymore, I’m publishing a copy on my blog.


Welcome to part two of Tim Strehle’s DAM Champ interview. In the first part, he covered the basics of digital asset management (DAM) and shared his perspective as a developer, DAM media site manager, and degreed information professional. Yes, it is as impressive as it sounds.

In this second part of his profile, he gets into more detail on DAM, custom development, and common misunderstandings about metadata.

Do you remember when you first heard the term “digital asset management”? It’s a question I’ve been wanting to ask, since I know information science professionals are not always familiar with DAM systems.

I don’t, but it must have been long after 1997, the year I started working as a DAM software developer. It took me years to figure out which market we were operating in and who our competitors were. Mind you, I was a junior developer still learning to write code, with little insight into marketing and sales. And there was almost no information about DAM on the Web.

What is popular in DAM these days? Anywhere you see a lot of activity?

Custom development is a hot topic, and it is worth talking about. As a developer, I know that every single feature adds complexity that comes at a cost for users, administrators, maintenance and support.

As developers, we have made a habit of asking “do we really need to code this?” whenever we start implementing a feature. And sometimes it turns out that we don’t, because getting creative with an existing feature solves the problem just as well.

If you do need custom features, keep these two points in mind:

  1. Work closely with your DAM provider on custom features. Make sure both your staff and the vendor are prepared to go through multiple iterations: a working subset delivered early for you to evaluate, with your feedback shaping the next version. Keep feedback cycles short; maybe even ask the vendor to send a developer to work in your office for a while. Test thoroughly, ask all the questions, take nothing for granted.
  2. Verify that all custom development is well-documented. Someone on your end needs to fully understand what has been built, including the impact on support and on future upgrades of the core software. Things that you know will change from time to time should be configurable by you, not require code changes by the vendor.

Do you think there are any misunderstandings about digital asset management?

I think we have too simplistic a view of what metadata is. Our asset-centric perspective makes us treat it as “just some flat fields to help us find a file”.

That’s not always true: During my internship, I was tasked with adding metadata to newspaper articles about crime cases. Each case had a few properties – type of crime, weapon used, who, where, when – which I diligently added to every article. Of course, multiple articles were published about each case over time, so I was entering the same data over and over again. When important new facts about a case came to light, I didn’t have time to dig up previous articles and update their metadata, so the database contained incomplete or wrong information. It was also hard to search for cases; a search for murders in Hamburg returned thousands of articles, not the few dozen cases I was interested in.

The system should have let me create a small database of crime cases instead, link each article to its case, and have the articles inherit case metadata dynamically, so that updating a case would automatically reindex the linked articles’ metadata. Even better: I should have had the option to link the articles in the DAM system to the existing database of crime cases maintained by journalists working in the same building. Either change would have improved data quality, enabled new ways of searching, and sped up manual indexing.
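
To make that concrete, here is a rough sketch of the idea – the field names and structure are made up for illustration, not taken from any actual system: one shared case record, several articles that merely point to it, and the case metadata merged in at indexing time.

    # A made-up example: one shared "case" record, several articles pointing to it.
    case = {
        "case_id": "hamburg-1994-007",
        "crime": "murder",
        "location": "Hamburg",
        "year": 1994,
    }

    articles = [
        {"article_id": "A1", "headline": "Body found in warehouse", "case_id": "hamburg-1994-007"},
        {"article_id": "A2", "headline": "Suspect arrested", "case_id": "hamburg-1994-007"},
    ]

    def index_article(article, cases_by_id):
        # Merge the linked case's metadata into the article at indexing time,
        # instead of re-keying the same facts into every article by hand.
        doc = dict(article)
        doc.update(cases_by_id[article["case_id"]])
        return doc

    cases_by_id = {case["case_id"]: case}
    for article in articles:
        print(index_article(article, cases_by_id))

Update the case record once, re-run the indexing step, and every linked article’s metadata is current again.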

There were no Linked Data or semantic databases back in 1994, but now that we have them, let’s adopt Eric Barroca’s “deep content” moniker and stop duplicating and dumbing down important data.
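
If I were modelling this today, the article-to-case link could be expressed as Linked Data, with the case living at its own URI and the article simply referencing it. A purely hypothetical JSON-LD sketch – the example.org URIs and the choice of schema.org terms are my own assumptions, not anything from a real DAM system:

    # Hypothetical JSON-LD sketch: the article references the case by URI
    # instead of duplicating the case metadata into every article.
    import json

    article_jsonld = {
        "@context": {"schema": "http://schema.org/"},
        "@id": "https://example.org/articles/A1",
        "@type": "schema:NewsArticle",
        "schema:headline": "Body found in warehouse",
        "schema:about": {"@id": "https://example.org/cases/hamburg-1994-007"},
    }

    print(json.dumps(article_jsonld, indent=2))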

You must follow a lot of DAM news sources. Do you see any interesting trends in the industry?

I could just point to the DAM innovation debate from the beginning of this year: there are more and more DAM systems, all implementing the same features over and over again. To be honest, I’m getting a bit tired of this arms race. By now, I guess each of the 100+ DAM products does cloud and “social” one way or the other, has a redesigned, mobile-friendly HTML5 user interface, APIs and portal functionality, and has switched to a Lucene-powered search engine (Solr or Elasticsearch).

The interesting stuff is hardly trending… I’m seeing a tiny trend towards Semantic Web / Linked Data technology, which I believe can help connect DAM metadata to other information silos. I’m hoping for innovation in manual metadata entry – smart interfaces backed by machine learning could make this so much easier for the librarian. Picturepark’s adaptive metadata is only the first step. And you may have heard of Contentful, an “API-first, headless Web CMS”. I wonder whether there’s a market for a “headless DAM system”?