Ralph Windsor discusses my previous blog post on DAM News – Applying Linked Data Concepts To Derive A Global Image Search Protocol. He finds better words than I did, rephrasing my suggestion as “a universal protocol where images get described like web pages (HTML) so you can crawl them using search engine techniques”.
Ralph points out that large commercial image sellers might not want to participate in an open network: “Allowing their media out into the open for some third party to index – who they probably regard with wary suspicion (e.g. Google) is likely to be a step too far.” Maybe. Although they’ll go where the customers are – a Google Images search for “airport hamburg 92980935” turns up Getty Images image #92980935, so I assume that Getty Images wants Google to crawl their database. If an open image network emerges on the public Web, the commercial platforms will want to become a part of it once it reaches critical mass. What’s more, one of them could even embrace the change and start building the best image search engine that crawls the Web! (A bit like the Getty Images/Flickr cooperation, but without the need to copy the images over into their database.)
But “out in the open” is an important point: many images (and other content types) will always be restricted to limited groups of users. Still, this is no reason to invent a complicated API for accessing them. In intranets, lots of non-public documents are available as HTML, allowing users and internal search engines to access them easily. You can do the same for image metadata – restrict access to the local network, require a username and password (or an API key, authorization token, etc.) as you see fit, but serve it to authenticated search engines (and users) as HTML + RDFa anyway.
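To make that concrete, here’s a minimal sketch of what such an HTML + RDFa page for a single image might look like, using the schema.org vocabulary. All URLs, names and values below are invented for illustration – the point is only that ordinary HTML attributes (`vocab`, `typeof`, `property`) carry the machine-readable metadata:

```html
<!-- Hypothetical image detail page: human-readable HTML that a
     crawler can also parse as RDFa (schema.org ImageObject). -->
<div vocab="https://schema.org/" typeof="ImageObject">
  <h1 property="name">Hamburg Airport, Terminal 2</h1>
  <img property="contentUrl"
       src="https://example.com/images/92980935.jpg"
       alt="Hamburg Airport, Terminal 2" />
  <p property="description">Travellers at the check-in
     counters of Terminal 2.</p>
  <p>Photographer:
    <span property="creator" typeof="Person">
      <span property="name">Jane Doe</span>
    </span>
  </p>
  <p>License:
    <a property="license"
       href="https://example.com/licenses/rights-managed">
      Rights-managed</a>
  </p>
</div>
```

The same page works for both audiences: a person sees a normal image detail page, while a search engine extracts the RDF triples. Access control stays orthogonal – the server can demand credentials before returning this markup, exactly as with any other protected HTML page.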
A Web of images (to paraphrase Mike Eisenberg) with rich metadata that’s easy to read for machines and humans? I have no idea whether we’ll actually get there in the near future, but that’s what we should aim for!