Very interesting webinar recording by Demian Hess: Managing Digital Rights Metadata with Semantic Technologies. (You need to supply your e-mail address to view the recording, and playback requires Windows Media Player, but it’s worth it if you’re into Digital Asset Management and rights metadata.)
Demian explains how complex licenses are, and how people try to simplify them because the complexity and variability don’t fit into their DAM systems. He also explains why dumbing down doesn’t work well in the long term.
His approach is to losslessly store all the licensing terms in a separate RDF database, which is integrated with the DAM system so that terms can be displayed along with the DAM asset information in the user interface. Special licensing reports (using SPARQL in the backend) can list all the different terms for a set of assets.
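The kind of licensing report Demian describes could be sketched like this — plain Python tuples standing in for RDF triples, with invented asset IDs and property names (a real system would run a SPARQL SELECT against a triple store):

```python
# Toy illustration of a licensing report over RDF-style triples.
# Asset IDs and property names are made up for this sketch.
triples = [
    ("asset:123", "rights:territory", "Germany"),
    ("asset:123", "rights:channel", "print"),
    ("asset:123", "rights:channel", "online"),
    ("asset:456", "rights:territory", "worldwide"),
    ("asset:456", "rights:expires", "2016-12-31"),
]

def report(assets):
    """List all rights terms per asset, like a SPARQL SELECT would."""
    result = {}
    for s, p, o in triples:
        if s in assets:
            result.setdefault(s, []).append((p, o))
    return result

print(report({"asset:123"}))
```

Because the terms are stored losslessly as individual statements, nothing forces them into a fixed set of database columns — which is the whole point of keeping them in an RDF store next to the DAM.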
I only wonder why there’s no mention of RightsML. Hopefully, RightsML is going to become the standard for rights metadata, and it’s built on semantic technology. See my blog post Rights Management in the DC-X DAM – and RightsML.
Demian also wrote an article on Digital Rights and the Cost of "Lousy Record Keeping".
Tue, 27 Jan 2015 10:23:59 +0100
As a Semantic Web / Linked Data newbie, I’m struggling with finding the right URIs for properties and values.
Say I have a screenshot as a PNG image file.
If I were to describe it in the Atom feed format, I’d make an “entry” for it, write the file size into the “link/@length” attribute, the “image/png” MIME type into the “link/@type” attribute, and a short textual description into “content” (with “@xml:lang” set to “en”). Very easy for me to produce, and the semantics would be clear to everyone reading the Atom standard.
Now I want to take part in the “SemWeb” and describe my screenshot in RDFa instead. (In order to allow highly extensible data exchange between different vendors’ Digital Asset Management systems, for example.) But suddenly life is hard: For each property (“file size”, “MIME type”, “description”) and some values (“type: file”, “MIME type: image/png”, “language: English”) I’ve got to provide a URL (or URI).
I could make up URLs on my own domain – how about http://strehle.de/schema/fileSize ? But that would be missing the point and prevent interoperability. How to Publish Linked Data on the Web puts it like this: “A set of well-known vocabularies has evolved in the Semantic Web community. Please check whether your data can be represented using terms from these vocabularies before defining any new terms.”
The previous link lists about a dozen vocabularies. There’s a longer list in the State of the LOD Cloud report. And a W3C VocabularyMarket page. These all seem a bit dated and incomplete: None of them link to schema.org, one of the more important vocabularies in my opinion. (Browsing Semantic Web resources in general is no fun, you run into lots of outdated stuff and broken links.) And I haven’t found a good search engine that covers these vocabularies: I don’t want to browse twenty different sites to find out which one defines a “file size” term.
I’m pretty sure the Semantic Web pros know where to look, and how to do this best. Please drop me a line (e-mail or Twitter) if you can help :-)
For the record, here’s what I found so far for my screenshot example:
“file size”: https://schema.org/contentSize
“MIME type”: http://en.wikipedia.org/wiki/Internet_media_type or http://www.wikidata.org/wiki/Q1667978
“description”: http://purl.org/dc/terms/description or https://schema.org/text
“type: file”: http://en.wikipedia.org/wiki/Computer_file or http://www.wikidata.org/wiki/Q82753, or more specific: http://schema.org/MediaObject or http://schema.org/ImageObject or even http://schema.org/screenshot
“MIME type: image/png”: http://purl.org/NET/mediatypes/image/png or http://www.iana.org/assignments/media-types/image/png
“language: English”: http://en.wikipedia.org/wiki/English_language or http://www.lingvoj.org/languages/tag-en.html or https://www.wikidata.org/wiki/Q1860
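Putting the candidates together, a description of the screenshot might look like this — plain Python tuples standing in for RDF triples, the file URL and size made up for illustration (and which vocabulary terms are the “right” ones is exactly this post’s open question):

```python
# The screenshot example as subject/predicate/object triples,
# using the candidate URIs collected above.
screenshot = "http://strehle.de/files/screenshot.png"  # hypothetical URL

triples = [
    # "type: file" (or more specific)
    (screenshot, "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
     "http://schema.org/ImageObject"),
    # "file size" (value made up)
    (screenshot, "https://schema.org/contentSize", "48231"),
    # "MIME type: image/png"
    (screenshot, "http://en.wikipedia.org/wiki/Internet_media_type",
     "http://www.iana.org/assignments/media-types/image/png"),
    # "description" (language: English)
    (screenshot, "http://purl.org/dc/terms/description",
     "A screenshot of a DAM user interface"),
]

print(len(triples))  # 4
```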
Mon, 22 Dec 2014 11:38:13 +0100
All the content-based software I know (WCMS, DAM and editorial systems) is built the same way: It stashes its data (content, metadata, workflow definitions, permissions) in a private, jealously guarded database. Which is great for control, consistency, performance, and simpler development. But when you’re running multiple systems – each of which is an isolated data silo – what are the drawbacks of this approach?
First, you’ve got to copy data back and forth between systems all the time. We’re doing that for our DAM customers, and it’s painful: Copying newspaper articles from the editorial system into the DAM. Then copying them from the DAM into the WCMS, and WCMS data back into the DAM. Developers say “the truth is in the database”, but there are lots of databases that are slightly out of sync most of the time.
You’re also stuck with the user interfaces offered by each vendor. There’s no way you can use the nice WordPress editor to edit articles that are stored inside your DAM. You’d first have to copy the data over, then back again. User interface, application logic and the content store are tightly coupled.
And your precious content suffers from data lock-in: Want to switch to another product? Good luck migrating your data from one silo into the other without losing any of it (and spending too much time and money)! Few vendors care about your freedom to leave.
I don’t believe in a “central content repository” in the sense of one application which all other systems just read off and write to (that’s how I understand CaaS = Content as a Service). No single software is versatile enough to fulfill every other application’s needs. If we really want to share content (unstructured and structured) between applications without having to copy it, we need a layer that isn’t owned by any application, a shared content store. Think of it like a file system: The file system represents a layer that applications can build on top of, and (if they want to) share directories and files with other software.
Of course, content (media files and text) and metadata are an order of magnitude more complex than hierarchical folders and named files. I’m not sure a generally useful “content layer” can be built in such a way that software developers and vendors start adopting it. Maybe this is just a dream. But at least in part, that’s what the Semantic Web folks are trying to do with Linked Data: Sharing machine-readable data without having to copy it.
P.S.: You don’t want to boil the ocean? For fellow developers, maybe I can frame it differently: Why should the UI that displays search results care where the displayed content items are stored? (Google’s search engine certainly doesn’t.) The assumption that all your data lives in the same local (MySQL / Oracle / NoSQL) database is the enemy of a true service-oriented architecture. Split your code and data structures into self-contained, standalone services that can co-exist in a common database but can be moved out at the flip of a switch. Then open up these data structures to third party data, and try to get other software developers to make use of them. If you can replace one of your microservices with someone else’s better one (more mature, broadly adopted), do so. (We got rid of our USERS table and built on LDAP instead.) How about that?
Related posts: Web of information vs DAM, DM, CM, KM silos. Cloud software, local files: A hybrid DAM approach. Linked Data for better image search on the Web.
Wed, 10 Dec 2014 12:40:16 +0100
Deborah Fanslow – Who Needs a DAM Librarian? Part II: Information Professionals: A Field Guide:
“Information professional specimens often manifest the following dispositions: perpetual curiosity, creativity, technical fluency, a compulsive need to create order out of chaos, and an intense passion for connecting people with information.
[…] Originating around the turn of the 19th century (and known initially as the field of “documentation”), information science research was initially focused on scientific, technical, and medical information due to its base of practitioners within science and industry who were looking for ways to manage large amounts of data and resources.”
Wonderful in-depth article. Great to see the “documentation” roots included; my German university degree is “Diplom-Dokumentar (FH)” – and no-one understands what that means. Now I can point people to Deb’s explanation!
Thu, 04 Dec 2014 09:21:48 +0100
In software, the thing I’m most excited about at the moment is schema flexibility. (I first saw that term in a tweet by Emily Ann Kolvitz.) I think we’re losing a lot of valuable metadata, and business value, because the software we keep our structured data in makes it so hard to change the data model.
Example #1: Your system stores each customer’s e-mail address. Now you want to extend this to allow multiple addresses per customer, each with a label (“work e-mail”, “personal e-mail” etc.)
Example #2: Your archival system knows the publication date for each of your newspaper articles. Now you want to archive Web articles as well, but their publication date includes the time of day whereas print articles only have the day.
Example #3: Users can already add simple custom fields (say, “Photographer name”), but sometimes they really need to add custom structures and relations (i.e. a separate “Photographer” record with its own fields, and links to these records).
Sounds simple? Well, you’ll need a developer and database administrator for all of the above. And it might be a lot of work for them.
Most structured data still lives in relational (SQL) databases. They’re wonderful, but they make it especially hard to change your data model. Demian Hess illustrates this in the first part of his excellent DAM and the Need for Flexible Metadata Models series: “As new asset types are discovered, you need to restructure the database by adding new tables or new columns. Database restructuring requires expensive and disruptive changes in queries and application-layer logic. […] The fundamental flaw is that we are attempting to define all the attributes for every type of digital asset in our data model in advance. In other words, we are imposing an inflexible data model.”
This rigidity is one reason for the current wave of NoSQL databases. There’s document databases like MongoDB, way more flexible but they “tend to suffer in supporting relationships between documents” (Demian Hess – DAM and Flexible Data Models Using Document Databases). Graph databases or RDF triple stores like BrightstarDB also fall into the NoSQL category. I don’t like their data model, but they do give you schema flexibility.
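Example #1 from above shows why document databases feel so much more flexible: old and new record shapes can coexist without an ALTER TABLE. A sketch with plain Python dicts standing in for documents (field names invented):

```python
# Old single-address record and new multi-address record, side by side.
old_customer = {"name": "Alice", "email": "alice@example.com"}

new_customer = {
    "name": "Bob",
    "emails": [  # new labeled structure, no schema migration needed
        {"label": "work e-mail", "address": "bob@work.example"},
        {"label": "personal e-mail", "address": "bob@home.example"},
    ],
}

def all_addresses(customer):
    """Read both the old single-field and the new multi-address shape."""
    if "emails" in customer:
        return [e["address"] for e in customer["emails"]]
    return [customer["email"]] if "email" in customer else []

print(all_addresses(old_customer))   # ['alice@example.com']
print(all_addresses(new_customer))   # ['bob@work.example', 'bob@home.example']
```

The price, as quoted above, is that the application layer now has to cope with both shapes – the schema hasn’t disappeared, it has moved into the code.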
To be exact, these NoSQL products give your developers schema flexibility… In my opinion, the real game-changer is when power users can extend the data model. Of course this isn’t for everyone. But why can’t the librarian, a skilled user in marketing or sales, or your IT support staff enhance the database schema? And not just with a simplistic custom field, but any structure that makes sense? Having to wait for your developer (or worse, for a vendor) costs time and money, and kills many sensible ideas. Yes, developers may be needed to add polish or use the new data in integrations with other software. But power users should be able to model the data exactly as your business needs it.
This vision is why I’ve started to experiment with a user-friendly Topic Maps engine, TopicBank. It’s in a very early stage right now, but I’ll have something for you to play with sometime in 2015 :-)
P.S.: See what I mean in the Sourcefabric Superdesk description: “co-ordinated, managed and configured by journalists to suit their normal workflow — and for them to change that on the fly to cope with events needing a non-standard workflow.”
P.P.S.: Loosening your database schema has its disadvantages, of course. See Martin Fowler’s slide deck on Schemaless Data Structures. But I’m siding with one of his conclusions: “Custom fields and non-uniform types are both good reasons to use a schemaless approach.”
Tue, 02 Dec 2014 21:17:23 +0100
Laurence Hart – Chaos Reigns at Content Management Vendors:
“Cloud had been dismissed before, as clients hadn’t been asking for the cloud. Customers hadn’t asked because they determined that the legacy vendors were the wrong people to ask.
[…] It wasn’t until last year that they started to realize that a SaaS service was what was truly needed. […] Customers view these offerings as “me too” capabilities that validate the approach of the EFSS vendors.”
Mon, 24 Nov 2014 20:00:34 +0100
1988: As a teen, I wanted to learn to code but couldn’t afford to buy Turbo Pascal for my Atari ST. Most programming languages (interpreters / compilers) were distributed commercially. There was no Web to download from, someone had to ship floppy disks.
1994: As a student, I loved to have Pascal (commercial but cheap) and Microsoft Access 2.0 (expensive, paid for by my first client) on my Windows 3.1 PC. When I was stuck, I had to consult a book or search CompuServe (a commercial pre-Web community) for answers.
1998: At my job, a few years later, we had Web access (my first Windows 2000 PC was actually permanently connected to the Internet, with a public IP address and no firewall). We built Web-based software, running Apache, PHP, and a database with a search engine on Unix servers. Most database software was commercial, and extremely expensive (we used Oracle). Search engines were hard to come by (we went with Oracle’s ConText). I got my own development server, which means the company bought me a second PC – it took days to install Linux, Oracle and all the other stuff on it.
2014: Today we have virtual machines (easy to clone) we can run on our Mac or PC. Or we run servers “in the cloud”. Databases, search engines, programming languages, editors, you name it – there’s enterprise-grade open source software for almost everything. Plus a vast array of tools and libraries. And people are writing tutorials and posting the answers to almost all our questions on the Web, for free.
With no investment besides Mac or PC hardware and an Internet connection, and a lot of time, I have everything I need to build professional software. An amazing opportunity for learners and independent developers. (Or hobby projects: I’m currently working on a topic maps engine.) No wonder there’s so many Web CMS and DAM products/projects out there.
I keep being amazed by the huge opportunities and the low barriers to entry. (And I keep wondering why so few young people learn programming. You don’t know what you’re missing out on!)
Wed, 19 Nov 2014 08:20:54 +0100
A big question at the recent IPTC Machine Readable Rights Workshop was how to get people to adopt a standard for rights metadata. A “business case” would certainly help: There will often be no budget if machine readable rights don’t save money (or even earn it).
Below is a short list of possible business cases. I don’t have any numbers, but these cases are all coming from our customers – it’s what they considered when deciding whether to invest in rights management software:
Save time when selecting and using digital assets: no need to read the editor’s notes, users see at a glance which assets can be used – or they don’t see unusable assets in the first place.
Buy cheaper assets or reuse the ones you already bought: You can encourage users to choose a less costly, or free, or broader-licensed asset.
Avoid buying the same asset twice. It does happen; users sometimes don’t know that someone else already licensed it.
Save time when processing royalties. This is usually a time-consuming manual process that could be optimized with the help of metadata.
Allow for budgeting – knowing during production, in “real time”, how many royalties you’re currently paying can help avoid excessive spending.
Reduce legal costs. License or copyright infringement can be expensive, and damage your reputation.
Repurpose content more easily, hopefully earning you money via new publishing channels.
License out content to others and earn money. You really need to know an asset’s rights before you can resell it.
Note that it takes more than just “rights” to make this work: The digital assets have to be uniquely identified (duplicate copies with conflicting metadata are bad), their usage must be fully documented (“when did we publish this image?”), and legal issues and contracts must be encoded in metadata (permissions, restrictions, and costs).
Tue, 11 Nov 2014 23:08:04 +0100
I had the honor to attend the “Machine Readable Rights Workshop” at the IPTC Autumn 2014 meeting in Frankfurt, Germany today. And to give a short presentation [PDF] on “Rights Management in the DC-X DAM”. Here’s what I intended to say (the actual talk was a bit shorter):
“I’ve been following the IPTC’s work for many years and I think you’re doing a great job, and you keep changing the news industry for the better. Thanks for that! And I’m also excited to meet some of the people I follow on Twitter in real life. Thanks a lot for the invitation to this workshop!
Digital Collections is a rather small DAM system vendor, but has lots of experience in the publishing industry. 23 years ago, we were one of the first companies in the world to build digital newspaper archives, and to import digital text and photos from news agencies into a full-text searchable database.
Our product DC-X is a pretty normal DAM system: It provides a database and search engine, at which you can throw any kind of file or text. Our customers usually keep their editorial newspaper or magazine content in it, and input from news agencies and photographers: Images, videos, article text, PDF pages and so on. The largest installations store tens of millions of documents and receive tens of thousands images per day. We extract text and metadata, and make it searchable and editable. And then we’re integrating that with other software: editorial systems, Web CMS, syndication and so on. Our customers are calling DC-X their “content hub”.
I have to admit that I haven’t heard our customers ask for RightsML support so far. I’d love to play with RightsML, but with no customer demand, we haven’t started work on an implementation yet. I really hope this is going to change, and maybe you can provide me with some selling points today.
But what our customers are asking for is rights management inside our application. They need to know whether they’re allowed to use an image online – or in new output channels, like an app. Costs are important, too; expensive images need to be marked as such. And customers who open up their historical newspaper archives might have to set rights for old content.
We started working on that feature four years ago, and maybe a third or half of our customers have already started using it. We’re focusing on what we call “rights profiles”, which is a set of rights metadata that is identical for multiple digital assets. We’re trying to model the actual contract between a content provider and the content user in a rights profile.
For example, there’s a contract between a newspaper publisher and the German news agency dpa that permits the newspaper to use images online and in print without paying royalties per image (covered by a yearly fee). One contract means one rights profile in our software, which is linked to the thousands of images from the dpa within the DAM. We’re storing the rights metadata in a structured, machine readable way. You can see a textual description on the right hand, and two icons below the image that mean “Online usage OK” and “Print usage OK”.
Then there are rights that are valid only for a single image – we’re calling these “special agreements”. Special agreements take precedence over rights profiles. So when the news agency revokes an image, you don’t have to remove the rights profile for the general contract – you add a special agreement for that image, whose properties then selectively override the rights profile’s. In the screenshot, you can see me adding a special agreement with “Usage permitted: None”.
Look how the print and online usage are now grayed out, and a red warning sign is displayed.
The rights metadata form can be customized, of course. Here’s a complex real-life example.
Now how could RightsML help us? One of the biggest hurdles for our customers to adopt our rights management features is that they have to manually define all these rights profiles, and configure our software to link to the correct rights profile on import. That’s a lot of work. It would be great if content providers could do that work instead and provide the correct rights profile. But because we have built our own proprietary and simplistic rights engine, we’re stuck. Implementing RightsML would enable interoperability.
Two more things I’m currently thinking about with regards to RightsML:
First: The spec says that RightsML need not be embedded, it can be “communicated separately”. That seems to be an attractive option for a couple of use cases: A) When many content items share the same rights. We don’t want to store the same set of rights again and again for each item. Not just to save space, but to make it easier to edit rights in our system. And B) we often have images in our systems that must be bought before they can be used, and the exact rights are being negotiated on the phone or during an online purchasing process when the image file is already in the production process.
And the second thing: In my eyes, rights are pretty different from other kinds of metadata. And the data structures and algorithms for storing and evaluating RightsML or PLUS expressions are complex and hard to implement. It starts with having to create several database tables just to store the rules. I don’t see too many software vendors doing a proper implementation soon.
But that rights are so different, and essentially independent of the content, is also an opportunity: It allows us to handle rights outside of the existing applications. Ideally, there would be an application that specializes in displaying, editing and evaluating machine readable rights. A Web CMS or DAM could call its API to store or retrieve rights, and to evaluate whether a specific usage is allowed. Later you could add contract, usage and royalties information. (A part of that functionality seems to be covered by the PLUS Registry, by the way. But personally, I’d favor a local registry over a central one – I’m not sure about performance and security, and we’d want to use the registry for article text as well which isn’t the PLUS use case.) I’m convinced that open source, easily-integrated “machine readable rights hub” software would help drive RightsML adoption.
I’m looking forward to the discussion. Thanks for your time!”
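The precedence rule described in the talk – special agreements selectively overriding a rights profile – can be sketched as a simple selective override (field names invented; the real DC-X data model is richer):

```python
# Rights profile for a whole contract vs. a per-asset special agreement.
dpa_profile = {"online": "permitted", "print": "permitted"}

def effective_rights(profile, special_agreement=None):
    """Special agreement properties selectively override the profile."""
    rights = dict(profile)
    if special_agreement:
        rights.update(special_agreement)
    return rights

# The revoked image from the example: usage permitted none.
revoked = effective_rights(dpa_profile,
                           {"online": "forbidden", "print": "forbidden"})
print(revoked)  # {'online': 'forbidden', 'print': 'forbidden'}
```

Note that revoking the image doesn’t touch the profile itself – the general dpa contract stays linked to thousands of other images, which is why the override model keeps maintenance cheap.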
Update: See my follow-up post The business case for machine readable rights.
Wed, 22 Oct 2014 20:08:15 +0200
Deborah Fanslow – Who Needs a DAM Librarian? Part I: Come Out, Come Out, Wherever You Are:
“After reading David’s article, I was inspired to do a little informal research. I was curious…when did the topic of librarians enter the DAM conversation? Have any vendors other than Picturepark published any advocacy on behalf of librarians in DAM? What do DAM consultants and DAM practitioners have to say about information professionals? As it turns out, there has been quite a bit of advocacy published.”
She’s linking to lots of good articles. Great work, looking forward to the next parts!
Mon, 13 Oct 2014 22:50:55 +0200
Sun, 05 Oct 2014 00:07:33 +0200
Dave Winer has been blogging for almost 20 years now. His blog is on my reading list for programmers. I love the way he writes, making software development feel simple and personal. (Much appreciated especially when I’m building “enterprise software” that often seems to be the opposite.)
Two of his pieces that I can’t get out of my head (both from 2012):
A message from developers to users: “Users do more of some things than others. […] You make that way easy.” A wonderfully simple definition of the essence of software development, and a welcome change of perspective.
What you think matters: “I would send the message to all 15-year-olds, not just me.” I remember thinking during the first years in my job: “Why are they doing things this way? This seems wrong. I must be missing something.” I often wasn’t – they were making a lot of mistakes, but as the newbie I wasn’t supposed to know. Trust your instincts and keep asking questions…
Thanks Dave, and keep digging!
Fri, 26 Sep 2014 21:10:33 +0200
It must be terrible to shop for a Digital Asset Management system. While the Web empowers cheap smartphone, fashion or book buyers – with independent coverage from press and bloggers, and customer reviews on Amazon – it’s not very helpful when you’re planning to spend tens (or hundreds) of thousands on DAM software and need to compare products.
The DAM market is a highly fragmented assortment (148 items on my list) of complex products, most of which can be made even more complicated through customization. It’s a niche market with few household names and a very long tail.
Even mentions of DAM products by technology journalists are rare; in-depth reviews in the tech press don’t seem to exist at all. And there’s no Consumer Reports issue on DAM systems.
The only solid in-depth comparison seems to come from analysts: The Real Story Group sells a 570-page Digital & Media Asset Management Research Report covering 36 products, prices (for the report, not the products) "starting at $2,950". (The DAM vendor I work for is not included, by the way.)
There are no vendor-specific but independent user groups for DAM like the large and powerful ones for SAP or Oracle customers.
Reviews from customers are also hard to find. On Capterra’s DAM software list, most products have no reviews at all.
And this is not just because DAM customers are few and not too vocal. On the LinkedIn discussion “I've outgrown my DAM” (asking for honest feedback from DAM administrators), expert Ralph Windsor of Daydream comments:
“I know I can't say 'x provider is great, y are not' in a forum like this (even though I might think it) as that would generate all kinds of complex political problems when/if I have to deal with them elsewhere.
Even people at the sharp-end who use a given DAM system for their regular day job might not be keen to tell you it's not up to scratch on a public discussion group. It's not like buying some lower cost commodity item such as computer or even a car etc where there is limited comeback from the manufacturer.”
I understand all of this. But can’t we do better? Do the customers really benefit when all criticism happens behind closed doors, mostly off the record? This seems broken to me.
Update: Naresh Sarwan’s Review of Available Open Source DAM Software is short but quite nice. Being more open, better documented, and openly reviewed can be an important advantage of open source projects. I think there’s a real chance for well-run open source DAM software to eat proprietary DAM vendors’ lunch.
Update 2: TopTenReviews’ Digital Asset Management Software Review compares ten DAM products.
Update 3: G2 Crowd’s Digital Asset Management Software section currently has about 100 user reviews/ratings for 24 DAM products, a lot more than Capterra.
Sun, 03 Aug 2014 23:24:34 +0200
I've been working as a software developer since leaving university, but that hadn't been my plan: I had set out to be an information professional. My German university degree is called “Diplom-Dokumentar (FH)”, later renamed to Information Manager – equivalent to a bachelor in Library and Information Sciences.
As a student, I was hoping to get a job in the media industry. My three-month internship at a large and well-known magazine publisher had been terrific: They had a huge archives department with dozens of librarians feeding a database of press clippings (one of the world's largest), adding metadata using a highly sophisticated thesaurus and keywording system, and doing research for both their own journalists and external customers. It was an amazing combination of resources and experts, doing a great job of “organizing the world's information”. Which is the first half of Google's mission statement (I'll come back to the second half later).
But once I had my degree, it was clear that there would be no new jobs in German press archives. Even back in 1997, publishers feared declining revenues and had started to cut librarians' jobs. Digitization meant less manual work (I hope the software I wrote didn't make people lose their jobs), and the bosses figured that the value added by librarians wasn't entirely appreciated by their audience: A little less quality would save the publisher a reasonable amount of money, without losing them many readers. (DAM ROI is not a new topic.)
I sent my resumé to the publishing house I had interned at, asking for a librarian's job. Their response: “We won't hire librarians anytime soon, but we see that you can program – how about working for us as a software developer?” They wanted me to write software for their press archives. Which was an okay compromise; even if I wouldn't be one of the librarians, at least I'd help them do their work. And I hoped that once they started hiring librarians again, I could switch over.
Well, I moved to a DAM software company a year later, and have been a developer ever since. German press archival departments have continued to shrink year after year. The Hamburg university closed their “media librarian” program. Some of our DAM company's customers shut down their archival departments completely. The librarians who remain in press archives – and who once were the primary stakeholders, together with whom we designed our DAM installations – have lost influence. Today we're mostly talking to management, marketing, editors and IT. Like Erik Hartman writes, “most librarians are quite invisible […] and on the verge of being fired due to budget cuts”. (It seems to me that the librarian crisis is worse in Germany, with less appreciation for information professionals than in the English speaking world.)
Running an information centric business in the information age, yet getting rid of information specialists? That sounds like a bad joke to me. Digital data will keep growing, so taming and structuring information is more important than ever. “Information curation, in-depth research, digital preservation […] and coaching” (Rob Corrao) get thrown out with the bath water when librarians are fired. Real information retrieval is powered by librarians. I've met wonderful information professionals at our customers, and it breaks my heart when their knowledge and skills are ignored. What has gone wrong?
The second part of the Google mission statement is part of the answer, I think: “... and make it [the world's information] universally accessible and useful”. In many cases, librarians designed “their” DAM systems to help themselves offer the best professional services. Tons of search fields and features, complex metadata that led to great search results as long as you were able to formulate complex queries. But the future clearly was in self-service: Everyone else in the company wanted direct, easy to understand access to the archives. Librarians didn’t always embrace and encourage that – appalled at how ineffective these non-professionals would search, and knowing that self-service wasn’t good for the librarian’s job security.
Huge opportunities were being missed at that point, I’m afraid. “For the right librarian, this is the chance of a lifetime,” wrote Seth Godin. The new digital tools – search engines, automated alerts, semi-automatic categorization, visualization – can be learnt and then used by archivists and librarians to vastly improve their services. They can get proactive, create topic pages, deliver dossiers, own the intranet and Wikis. Bring in new stuff where it makes sense (geo tagging, social sharing, rights management, video archiving). Track and visualize metrics that show how much value they add. And optimize the search engine and metadata for self-service. (Did you know that Google’s huge “search quality” team constantly keeps tweaking its search engine? Tell that to the customer who wants it “to work just like Google search”.) Hard work with a simple goal: “Make it accessible and useful!”
I hope that the tide will be turning soon – that information professionals will start to own the information age, and get the appreciation they deserve. Especially from their employers. “Make them understand what you do and why it's important,” David Diamond tells librarians. I’m happy that the DAM community is already well aware; keep spreading the word!
Update: Make sure to read David Diamond’s Library Science, Not Library Silence.
Wed, 23 Jul 2014 23:05:41 +0200
Picturepark CEO Ramon Forster – Diving Head-first into a Suicide Sale:
“The other vendor in the bid is known for promising everything you ever want to hear, including substantial discounts, just to land on the short list.
I admit that for a fraction of time I questioned whether our principle to be "honest at all times" makes sense at these times too. Do some prospects just want to buy into the illusion that software will solve all of their problems? Do they buy into "everything is easy", "everything is integrated" and "everything is automated"?”
Wed, 23 Jul 2014 08:11:54 +0200
Ralph Windsor – Transforming DAM From A Product To Service-Oriented Delivery Model:
“[People] like to think in terms of products. Buying a software system confers an illusion of commitment to doing something about the problem, […], by contrast, strategy and planning implies having to give up more of everyone's precious time and taking some ownership of (and therefore responsibility for) the problem.
[…] When the vendors deliver the aforementioned demos, they get asked if they support feature X,Y or Z. If not, there are usually fear-induced long pauses followed by either lying, disingenuous re-interpretation of the question to suit the vendor's current capabilities (see previous point) or an assurance that 'it will be available in the next version' (see first point).”
Wed, 23 Jul 2014 08:18:10 +0200
In Reinventing Digital Asset Management, David Diamond writes about the miserable state of integration between DAM software and the places we want to use our digital assets – “our apps and anywhere else we happen to be — Google plus, a Disqus comments thread, Facebook, Twitter, etc.” Integration with native applications like Photoshop and InDesign is a special topic that I’m trying to stay away from as a developer (not fun at all). But most of the apps we use live on the Web. How well do Web apps interoperate?
A little story: As an “enterprise software” shop, we do a lot of custom development. Each feature to be developed is described in our bug tracker software, including customer and project name. When the developer starts working on a feature, he changes the bug tracker record status from “open” to “assigned” (so that the project manager gets notified of the progress). We also have separate time tracking software. The developer starts a timer there – and has to manually enter the customer and project name, and a short feature description.
Why doesn’t the bug tracker have a simple “start tracking time” link to the time tracker that prefills the required metadata? And a second link for a “show tracked time” report (even better: show it inline)? How about a link from the time tracker to the bug tracker for the full feature description?
In theory, this functionality should be trivial to implement. Both applications are Web based, they even have APIs – I just need to read some fields from their databases (hoping that the data structures are compatible) and inject an HTML snippet into their pages. Hey, I’m a Web developer. How hard can it be?
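To make the idea concrete, here is a minimal sketch of the “start tracking time” link described above. Everything is hypothetical – the time tracker URL, the parameter names, and the ticket fields are made up for illustration, since no real API is named in the post:

```python
from urllib.parse import urlencode

def start_timer_link(ticket, time_tracker_base="https://timetracker.example.com/start"):
    """Build a deep link into a (hypothetical) time tracker that prefills
    customer, project, and description from a bug tracker ticket."""
    params = {
        "customer": ticket["customer"],
        "project": ticket["project"],
        "description": ticket["summary"],
    }
    return time_tracker_base + "?" + urlencode(params)

ticket = {"customer": "ACME", "project": "DAM rollout", "summary": "Add CSV export"}
link = start_timer_link(ticket)
# e.g. https://timetracker.example.com/start?customer=ACME&project=DAM+rollout&...
```

The bug tracker would only need to render this link next to each ticket; the time tracker would need to accept the prefill parameters. The glue itself really is trivial – the hard part is that neither application offers such hooks.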
Actually, it’s so hard that I don’t bother doing it. Like most software, including DAM systems, these Web apps are not built for interoperability (or integration; here’s the difference). You’re not supposed to mess with their HTML output. Software vendors are control freaks, they don’t want you to add glue and features (in part because there are security and performance concerns). And they haven’t architected their software for it (think abstractions, services, components). For example, had the WordPress guys wrapped the Media Library in a well-documented API, all DAM vendors could have swapped in their software much more easily and in the same way. Now each one has to handcraft their own WordPress integration.
There are generic approaches to this problem, though. The Dropbox Chooser and Google Picker are nice “File Open” dialogs for the Web. Portlets and Google Gadgets let you embed “foreign” mini-apps (and for Sharepoint, there’s Web Parts). Web Components are a promising new technology that will probably replace all of the above. Webhooks and Web Intents allow for clever links between Web apps.
(For completeness, some historical background: Component based native software that allows relatively simple, Lego-like app construction was a hot topic twenty years ago. Remember Visual Basic, OLE, OpenDoc, or Interface Builder on NeXT? In Web technology, early Mashups were promising but they mostly remained one-off, short-lived demos.)
It’s wonderful that most information lives on the Web nowadays. The simple and open HTTP and HTML standards, and especially hyperlinks, allow us to build amazing connections. Let’s make use of that potential and move from DAM silos to a Web of information!
Fri, 18 Jul 2014 22:30:18 +0200
David Diamond – Reinventing Digital Asset Management:
“No matter how pretty the UI, a DAM remains a place people must go to get what they need. We place digital assets in a repository that is almost certainly not where we are when we actually need those assets.
[…] We want digital asset access, not from some random website location, but from within our apps and anywhere else we happen to be — Google plus, a Disqus comments thread, Facebook, Twitter, etc.
[…] Best-of-breed enables customers to choose which system components they prefer. Instead, we are now seeing vendors buy up and package complete suites for us to consume, lock stock and barrel.
[…] DAM became about marketing for some DAM vendors when they realized that marketing departments had more money than universities and museums.
[…] Digital asset management needs to replace the OS file system entirely.”
I'd love to think about this a bit longer, but I have to go repaint our DAM Lite UI now...
Thu, 17 Jul 2014 23:13:38 +0200
I keep coming back to Eric Steven Raymond’s 2003 Basics of the Unix Philosophy:
“Rule of Modularity: Write simple parts connected by clean interfaces.
Rule of Clarity: Clarity is better than cleverness.
Rule of Composition: Design programs to be connected to other programs.
Rule of Separation: Separate policy from mechanism; separate interfaces from engines.
Rule of Simplicity: Design for simplicity; add complexity only where you must.
Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.
Rule of Transparency: Design for visibility to make inspection and debugging easier.
Rule of Robustness: Robustness is the child of transparency and simplicity.
Rule of Representation: Fold knowledge into data so program logic can be stupid and robust.
Rule of Least Surprise: In interface design, always do the least surprising thing.
Rule of Silence: When a program has nothing surprising to say, it should say nothing.
Rule of Repair: When you must fail, fail noisily and as soon as possible.
Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.
Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.
Rule of Optimization: Prototype before polishing. Get it working before you optimize it.
Rule of Diversity: Distrust all claims for "one true way".
Rule of Extensibility: Design for the future, because it will be here sooner than you think.”
Wed, 18 Jun 2014 10:50:48 +0200
Jeremiah Karpowicz interviews David Diamond – Fighting DAM Ignorance with Education and Cooperation:
“Because the long-term benefits of DAM are so horribly obscured at the beginning, DAM always seems to have more downside than upside.
[…] DAM vendors like to spew best-practice advice that tells prospects to do their homework and carefully determine their needs.
[…] Vendors tend to introduce half-baked features that don't get the planning and UX considerations they deserve.
[…] I'm far more a DAM user than I am a marketing director. So rather than just deal with these situations, I become a screaming, maniacal customer-from-hell who expects it all to be fixed today, and I want a handwritten apology for my troubles too.
[…] What people hate about DAM is not ugly icons; people hate all the jumping around the UI they must do in order to get anything meaningful done. Nothing good will happen there until UX designers join R&D teams and DAM employees start actually using their own software.
[…] I'm never shy about referring people to a library sciences professional.
[…] I see [DAM] as a metadata-managed global file system that every program can use and every service can access. When I connect to my corporate network, what I see from my Open/Save dialog boxes is my organization's DAM”.
Must read (as usual).
Tue, 17 Jun 2014 15:40:01 +0200
Ben Horowitz back in 1996 – Good product manager, bad product manager:
“Good product managers know the market, the product, the product line and the competition extremely well and operate from a strong basis of knowledge and confidence.
[…] Bad product managers have lots of excuses. Not enough funding, the engineering manager is an idiot, Microsoft has 10 times as many engineers working on it, I'm overworked, I don't get enough direction.
[…] Good product managers create collateral, FAQs, presentations, and white papers that can be leveraged. Bad product managers complain that they spend all day answering questions for the sales force and are swamped. Good product managers anticipate the serious product flaws and build real solutions. Bad product managers put out fires all day.
[…] Good product managers define good products that can be executed with a strong effort. Bad product managers define good products that can't be executed or let engineering build whatever they want (i.e. solve the hardest problem).”
Apparently a classic, but new to me.
Mon, 26 May 2014 09:14:41 +0200
Edmund Jorgensen – Speeding Up Your Engineering Org, Part I: Beyond the Cost Center Mentality:
“You may have shifted your efforts from the impossible task of making the org go faster to the thankless but crucial job of jealously guarding how engineers spend their time—because as it takes longer and longer to get even simple features out the door, those engineering hours become increasingly precious.
[…] You've been around long enough to know that there won't be any "calm periods" when there's time for your engineers to scratch these other itches—after the Facebook for Cats integration goes out, you'll be right on to integrating with Twitter for Dogs, or LinkedIn for Ferrets. So on this fine morning someone has to make a real and uncomfortable decision: either tell Cindy and Scott to stop complaining and get back to feature work, or let product and the CEO know that you're going to spend some engineering hours on something other than features.
[…] Sometimes the "more money" you expect in return comes from features for which customers will pay, but often (as in our thought experiment) it comes in the form of valuable information, or—if you're doing it right—a reduction in (or prevention of) latency for future work, which, as we've just shown with our thought experiment, is worth actual money.”
Sun, 25 May 2014 22:50:28 +0200
Matt Ellis – Singing the Praises of Chorus:
“Everyone is singing the praises for Chorus, Vox Media's own CMS.
[…] Chorus is doing most of the duties of online journalists for them! It conducts automatic word scans, then finds and links it to other related texts. It also brings up relevant (and licensed) photos and videos available for use. That frees up the writers to focus more of their time on writing.”
Sun, 25 May 2014 23:34:28 +0200
Ralph Windsor – The Rise and Fall of the Imperial Enterprise DAM:
“The problems of each group of users are too diverse for one single solution to be able to answer them all. What invariably happens is either the software becomes bloated and buggy as conflicting needs clash with each other and the developers try to resolve them with numerous options and settings which in turn require skilled engineers to alter, or requests to make amendments are ignored as not being sufficiently important to justify the hassle and cost.
[…] I would still avoid the typical reductionist IT tendency to over-rationalize and generate numerous unplanned productivity problems just because it seems like a neat and tidy thing to do.”
Wed, 21 May 2014 08:29:07 +0200
I’m working in Digital Asset Management and love reading DAM news, and learning about technology, products, and trends. It took me a while to find all the sources for DAM information. If you’d like to dive into the world of DAM news too, you can save yourself most of that work by starting with my Planet DAM page:
A list of DAM products,
the latest articles (automatically gathered from RSS feeds),
and what’s currently happening on Twitter (I’m maintaining a DAM list there).
I hope this is useful to some of you. Please let me know if you think something’s missing!
(The name “Planet DAM” borrows from the Planet “river of news” feed reader that many topic-focused news-gathering sites are based on. I’m not using that feed reader, though; our DC-X DAM has an RSS / Atom feed importer which is easier to work with if you’re running a DAM anyway.)
Mon, 19 May 2014 08:58:50 +0200
Sun, 04 May 2014 00:24:09 +0200
Since July 2011, I’ve been archiving interesting Web pages in my personal instance of DC-X (the Digital Asset Management system our company is building). My archive contains 12,300 pages already and is growing daily.
I’m totally in love with this feature: It’s my “private file and library” (a quote from Vannevar Bush’s 1945 As We May Think) – a highly relevant, searchable pool of content I might want to revisit or read later. In an instant, I get back to that great or helpful article when I need it. It’s also a tool for curating the links I’m publishing here. And finally, a backup for the day when these articles vanish from the Web or the links to them break (sooner or later, this happens to most of them).
The alternatives don’t cut it for me: Browser bookmarks or Safari’s “reading list” don’t scale well to 10,000 pages, and have very limited search/browse functionality. Services like Delicious or Pinterest can’t be trusted with an archive (which I expect to last for decades). And software that does the archiving from a server process doesn’t see the page exactly as I’m seeing it, and fails at sites that require authentication.
I couldn’t build up this archive if the process wasn’t quick and easy (no metadata entry required). The workflow relies on a small Firefox add-on that I custom-built for myself (no customers are using this feature yet). The browser add-on takes a screenshot of the currently displayed page and posts it, along with the HTML source code, to the DAM in a new browser tab. The DC-X DAM asks me to log in (only once per day), creates an import job and waits for its completion. Then I’m redirected to the details page of the “archived Web page” document that was just created. Here’s a screencast:
How are you keeping track of important Web pages? What’s your personal digital archiving workflow?
Sun, 04 May 2014 22:17:17 +0200
Are you an idealist? Then you’re probably daydreaming of getting a say in how your organization is run. Once you have access to the decision makers, your great ideas will be heard, and you’re going to change the world together!
Well – unless you’re working for an exceptional organization, you’ll soon find out that the movers and shakers spend most of their time debating rather mundane details. The really important discussions are postponed or don’t result in decisions or actions. Yes, your great ideas will be heard but not much is going to come out of them. Everything will seem to move very slowly. (Except for the occasional surprising move that must have been decided upon when you were not in the room.)
What’s going on? This group of people could change almost everything for the better, yet nothing much happens. What about their passion, creativity, dreams and visions? Or at least, what about the pressing problems that call for swift and strong action?
Here’s what I learned about leadership realities from repeatedly failing to make a difference in “leadership teams” (as the idealistic but powerless guy):
Who said it matters. A lot. As the expert from the lower ranks, you often won’t be taken seriously. They can ignore you just fine regardless of what you said. The words of the powerful inevitably have a lot more weight.
They’re here for the quick wins. This quarter’s project, this year’s money matters. Soft targets like culture or customer satisfaction are less important than hard money and an easy to calculate ROI.
“Best practices” don’t matter. You might be enthusiastic because you finally found the perfect book or article: proof that what you’ve been talking about all along works great for others! Sorry, you will still have a hard time getting people to even think about it.
Facts don’t matter as much as you think: Your well-researched data can easily be dismissed with some anecdotal evidence or inapt metaphor. Because:
People aren’t rational. Most of the time, feeling right is more important to humans than actually being right. (My theory is that engineers are more likely to reflect and analyze rationally because that’s an important part of their job.)
Everyone believes their own lies and exaggerations. They get into the habit of bending the truth a little (it’s done, we have a great company culture, our customers are loving it) because they’ve got to sell something, and soon start living in their own made-up universe.
You cannot convince a group of a dissenting opinion. No matter how well-reasoned your opinion, it needs time to sink in, and groups reinforce the majority’s belief (“group think”). Changing people’s minds is hard.
They don’t really want to know because they’re afraid of change and discomfort. No-one’s intentionally blind, but they'd rather look elsewhere than face an inconvenient truth.
Priorities can kill anything. Often they won’t say you’re wrong: They’ll say you’re right but there are more urgent problems, so let’s take care of this later. (Later, of course, there’ll be new high-priority issues…)
“We’ve got to do something” doesn’t mean it gets done. Even if they agree on doing something, decisions and actions will be postponed whenever possible. Minimal or fake action (scheduling a follow-up meeting, promising to write a concept) is enough to make everyone feel the problem has been addressed.
People don't understand other people’s jobs, and don’t bother trying to. The CEO probably doesn’t know what the QA guy is doing all day, and that’s fine with him.
Some are doing work they don’t love and aren’t passionate about many aspects of their work. Yes, even in upper management.
They’re only striving for “good enough”, not for perfection, so what they get is mediocrity and they’re either fine with that (as long as it makes money) or telling themselves they’re great.
The real values will eventually surface. Honesty. Humility. Empathy. Taking responsibility. Trusting and developing and empowering others. Genuinely caring for customers and employees. Do those words really describe our leaders? (For example, most people are okay with lies as long as it’s them who’s lying.) Sooner or later, you’ll find out.
Do I sound bitter? I don’t mean to. Just needed to write this down so I don’t forget the lessons I learned. (And I’m noticing I’m guilty of some of the above as well…)
I’d love to hear from you: Please teach us your tricks if you succeeded in hacking leadership. (I’m not giving any advice here because I failed at it…) Don’t stop being idealistic, keep changing your part of the world for the better!
(Inspired by the German article Mist im Management by Klaus Schuster, and many other gems linked to from this blog.)
Mon, 28 Apr 2014 22:13:02 +0200
Ralph Windsor – Dropbox Launches Carousel Photo Sharing App – A Game Changer For DAM?:
“Budget or low-end applications like Dropbox's Carousel are going to place further upward pressure on the scope of what existing DAM market participants are expected to provide. Anything that looks like 'low hanging fruit' will increasingly be picked by these more generic products.
[…] That will slice away anyone who just wants to put a DAM product out there and let end users get on with it themselves, unless they plan to do so for little or no cost applied to the end user.”
This is getting interesting: (We) DAM vendors are increasingly focusing on user experience, especially on ease of use for new or casual users. This is a good thing, and reinforced by the “consumerization” trend (“make it as simple as Dropbox or Google search”). In the process, a few power user features are even stripped from the UI.
Now combine this with the German trend to ignore metadata experts or even fire them (as the German newspaper and magazine industry has been doing for years):
Is the end result something that’s as polished and easy to use as Dropbox or Box, with very few additional features (since customers neglect their metadata, which would have made most of the difference) but at a higher price? Sounds like “specialize or die” to me…
Related – Klaus Sonnenleiter in Guru Talk on DAM as a commodity: “[DAM in 5 years] will be fully embedded. […] Digital assets will continue to be managed, but they will be managed inside a larger solution that handles marketing activities, sales platforms, publishing channels or whatever the primary activity of the company is.”
Update – Laurence Hart in Content Management Step One, Capture that Information: “No system where people actively store Content is ever considered a failure. […] If [Box and Dropbox] can get a strong foothold, show consistent high adoption, and while gradually increasing value organizations derive from using them, they are going to be major players. […] My money is on the companies that are innovating and trying new things while not losing sight of the fact that every organization is staffed by Consumers.”
Update II – must read: David Diamond’s Is Dropbox a Digital Asset Management Game-changer?
Thu, 17 Apr 2014 13:53:22 +0200
Oliver Joseph Ash – Inside the Guardian’s CMS: meet Scribe, an extensible rich text editor:
“The problem with all of these off-the-shelf solutions is their lack of extensibility. TinyMCE, for example, does an excellent job of producing the right markup, but much of the user interface for the editor is kept privately within the library, which made it difficult to augment the user experience we desired.
[…] If you’re in need of a rich text editor then we would love for you to try out Scribe. It’s a great starting place for building your own rich text editing experience, as you won’t have to deal with any of the pains introduced by contentEditable.”
Try the demo, and get the source code on GitHub.
Wed, 09 Apr 2014 22:49:54 +0200
Rebekah Campbell – The Surprisingly Large Cost of Telling Small Lies:
“The act of lying plucks you from the present, preventing you from facing what is really going on in your world. Every time you overreport a metric, underreport a cost, are less than honest with a client or a member of your team, you create a false reality and you start living in it.
[…] I know people who seem to have spent their entire careers inflating the truth and then fighting to meet the expectations they have set.”
Excellent post. I’ve always tried to stick to the truth. Partly because I hate it when others are lying to me, and because I’ve experienced the trust-building power of radical openness. “The truth will set you free.”
Tue, 01 Apr 2014 11:53:31 +0200
Seth Godin – Not even one note:
“At no point did someone sit me down and say, "wait, none of this matters if you can't play a single note that actually sounds good."
[…] We add many slides to our presentation before figuring out how to utter a single sentence that will give the people in the room chills or make them think.
[…] The cop-out would be […] to add one more thing to my list of mediocre.”
Thu, 27 Mar 2014 09:04:52 +0100
Last week, a customer reported a problem with DC-X – some linked metadata seemed broken. It turned out that slightly buggy custom code had written DCX_PUBINFO.PUB_DOC_ID = 'doc123 ' (note the trailing space) into the MySQL database, while the referenced column DCX_DOCUMENT.DOC_ID contained 'doc123' (without the space).
This came as a surprise to us: We didn’t expect InnoDB’s referential integrity to allow different values in a foreign key relation. But experiments showed that MySQL in fact ignores appended spaces (rtrim) when comparing values with “=”!
Here’s a test case if you’d like to reproduce it:
create table T (V varchar(255) not null);
insert into T (V) values ('a');
select * from T where V = 'a ';
On MySQL, the SELECT statement returns the row we just inserted. On Oracle, it doesn’t – which seems to make a lot more sense.
The first Stack Overflow post I found, MySQL disable Auto-Trim, suggested that this was somehow acceptable, SQL-standardized behaviour. Weird. The SQL 92 standard seems to recommend MySQL’s padding / trimming (PADSPACE) and describes a NO PAD opt-out (that MySQL doesn’t offer).
Another post, MySQL treatment of ' ', was more informative – apparently LIKE behaves differently:
select * from T where V like 'a ';
And MySQL has a “binary” workaround for SELECT with “=”:
select * from T where binary V = 'a ';
For the full background, and a comparison of different RDBMS, read the PostgreSQL discussion String comparison and the SQL standard. According to this, MySQL and SQL Server always ignore appended spaces as described above. Oracle and PostgreSQL, on the other hand, do what we’d expect the database to do and don’t ignore them – as long as you use VARCHAR not CHAR.
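Until the buggy writer is fixed, the safest guard is application-side. The sketch below – a plain Python illustration, not MySQL itself – models the PAD SPACE comparison that bit us, plus the defensive trim we ended up applying before writing key values:

```python
def mysql_padspace_equal(a: str, b: str) -> bool:
    """Approximate how MySQL's nonbinary '=' compares VARCHAR values:
    trailing spaces are ignored (SQL-92 PAD SPACE behaviour)."""
    return a.rstrip(" ") == b.rstrip(" ")

def normalize_key(value: str) -> str:
    """Defensive fix for application code: strip trailing spaces before
    writing a value that acts as a key or foreign key."""
    return value.rstrip(" ")

# 'doc123 ' and 'doc123' compare as equal under PAD SPACE semantics...
assert mysql_padspace_equal("doc123 ", "doc123")
# ...but embedded or leading spaces still matter.
assert not mysql_padspace_equal("doc 123", "doc123")
assert normalize_key("doc123 ") == "doc123"
```

In other words: on MySQL, the trailing-space row silently “matches”, so the broken foreign key never surfaces as an error. Normalizing before the write (or using the binary comparison shown above) makes the mismatch impossible, respectively visible.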
We’re learning something new every day…
Mon, 31 Mar 2014 10:21:09 +0200
Michael Lopp – Drift:
“See, as system thinkers, we’re trying to build a model that, well, explains everything. To assist in our discovery of everything, we’ve built ingenious ways of gathering data. Whether it’s a feed reader, a set of bookmark tab groups, Facebook, Twitter, or a news aggregator, we’ve constructed a personal machine that allows us to rapidly consume information.
[…] The process of consuming all this data gives my mind mental velocity, but it’s not just the rate of consumption that gets me mentally limber, it’s the map I’m constantly building and refining. I’m exercising and developing my Relevancy Engine. I’m instantly evaluating everything I know and comparing this item to that impression.
[…] The high volume of information consumption has forced my brain into high gear to process and analyze it. Analysis is the catalyst that opens the door to creativity.”
Well put. Just the way I feel. (Related: How I’m blogging.)
Mon, 24 Mar 2014 10:50:24 +0100
In my previous post Web of information vs DAM, DM, CM, KM silos, I asked: “When a photographer’s phone number changes, will you update it in your DAM system? How many places will you have to update it in the DAM – is it stored in a single place, or has it been copied into each photo?”
DAM systems have traditionally focused on files and their metadata. With the metadata only existing in the context of the asset, not as standalone data in its own right. I’ve long been convinced that this is wrong, so it makes me very happy to see a trend in the right direction. A few quotes:
David Diamond – A DAM is no place for an “image”: “With content-focused DAM, you think in terms of, for example, of the words in yesterday’s press release. You don’t think in terms of the press release’s Word and PDF files as being separate entities. They are merely disposable containers for the content. And it is the content that needs metadata, not the files. It is the content that has a lifecycle, not the files. One of the many advantages of the Adaptive Metadata technology that Picturepark developed […], is that metadata can be abstracted from the assets themselves. This means, for example, the metadata can exist entirely on the asset class definition. Those assets assigned to the class inherit the metadata while they remain assigned.”
Louis King in a comment on the LinkedIn DAM group discussion on Why Images Don’t Belong In Your DAM (requires registration): “Each of these chunks of metadata represent investments that provide value to the asset. By separating them into individual but related assets DAM users are not burdened by the complexity of the whole but are focused only on the metadata that is returning value to their role. Very few DAMS do this but trends in metadata are moving rapidly in this direction. Take a look at Open Linked Data to see how that might play out in emerging DAM.”
Ralph Windsor – Digital Asset Management And The Politics Of Metadata Integration: “There are many other [applications] and you could include any system where the key entity is not an asset […]. In these scenarios, the external entity which contains the data of interest has an adjacent or perpendicular relationship with a digital asset. In other words, it is not above or below it in terms of the metadata schema hierarchy and needs to be treated independently (i.e. linked by association rather than part of the same record). […] The staff HR record and the employee photo are independent of each other and different users have to work on them separately from each other to fulfil independent business functions.”
I also like how Rory Brown quotes Douglas McCabe on Twitter: “Content has to be atomised because no one knows what the 4th wave of disruption will be (after desktop, phone, tablet)”
For a nice real-world example, see the BBC News Labs presentation on Storylines, Topics & Tags. Their News Archive doesn’t just store “article” and “image” assets, but also contains a database of people (with properties like “birth place”, “birth date”, “role”), organizations, places, events, themes (“unemployment”), and storylines (“the death of Nelson Mandela”). Each of which can be linked to the assets.
Once we agree on the need for standalone data in the DAM (or linked to the DAM) – asset-independent databases or knowledge bases – the next questions are how to model it, and how to ensure a good user experience. I think Topic Maps are perfect for modeling arbitrary, flexibly structured data. How are you doing it?
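As a rough sketch of what “standalone data linked to assets” could look like (names and structures invented for illustration – this is not how any particular DAM, including DC-X, models it), the BBC-style example might be expressed as entities that assets merely reference:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A standalone thing (person, organization, theme, storyline) that
    exists in its own right, independently of any asset."""
    id: str
    kind: str
    properties: dict

@dataclass
class Asset:
    """A file-backed asset that links to entities instead of copying
    their metadata into itself."""
    id: str
    title: str
    entity_ids: list = field(default_factory=list)

entities = {
    "person-1": Entity("person-1", "person",
                       {"name": "Nelson Mandela", "role": "politician"}),
    "story-1": Entity("story-1", "storyline",
                      {"label": "the death of Nelson Mandela"}),
}
article = Asset("asset-42", "Obituary", ["person-1", "story-1"])

# Resolving the links yields the entity metadata without duplicating it:
linked = [entities[eid] for eid in article.entity_ids]
```

Correcting a person’s “birth date” property then happens in exactly one place, and every linked article picks it up – which is the whole point of treating metadata as first-class data.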
And what DAM systems do already have this functionality? I know of Picturepark with its Adaptive Metadata, ImageSnippets and our own DC-X with its Topic Maps engine. Any others?
Wed, 12 Mar 2014 09:09:42 +0100
Dan Gillmor – Learning about, and deploying IndieWeb tools:
“Already, using easily deployed tools, I’m using this blog to create posts that show up on Twitter, LinkedIn and Google+. […] What I’ve also done, using the IndieWeb plugin — created by a member of the growing community dedicated to making this all work — is to get Twitter replies and retweets to show up as comments on the blog posts.
[…] Ryan Barrett‘s work is key to this. He created something called Bridgy, which sends webmentions for comments, likes, and reshares on Facebook, Twitter, Google+, and Instagram.”
Tue, 11 Mar 2014 09:26:46 +0100
Gerry McGovern – The complexity-simplicity trade off:
“Many of the systems organizations give to their employees are usability monstrosities.
The reason for this is that senior management just doesn’t care. It has abdicated its responsibility when it comes to technology. It sits there listening to presentations about huge savings if only huge amounts of money are spent. It allocates the budget and walks away, because “it’s technology” and that’s too hard to understand for a senior manager.
The problem goes even deeper. Senior managers don’t care about their salaried employees’ time. I’ve been doing web consulting since 1994 and I have yet to meet a senior manager who really cared about making it easier for employees to do their jobs.”
Mon, 10 Mar 2014 09:37:39 +0100
I have spent years of my life making our software work with other software, and I think we have a problem: The “enterprise” is managing overlapping information in disparate systems that don’t interoperate well. There are lots of system flavors: DAM (interesting stuff like photos, videos, articles), DM (boring stuff like forms, business letters, emails), CM for publishing on the Web, KM for experts’ contact info and instructions, CRM, employee directories, project management tools, file sharing, document collaboration… Each with a different focus, but with overlapping data.
Now one system’s asset metadata can be another system’s core asset… Take the Contact Info fields from the IPTC Photo Metadata standard, for instance: When a photographer’s phone number changes, will you update it in your DAM system? How many places will you have to update it in the DAM – is it stored in a single place, or has it been copied into each photo? You’ll probably just update your address book and ignore the DAM. A DAM system simply isn’t a good tool for managing contact information. But it still makes sense for the DAM to display it…
For a more complex example, here’s a typical scenario from our customers: A freelance journalist submits a newspaper article with a photo. It’ll be published in print and online, copied into the newspaper archive, and the journalist is going to get paid. Now when an editor sees that nice photo in her Web CMS and wants to reuse it, can she click on it to see 1) the name of the editor who placed it in the print edition, 2) the photo usage rights, 3) the amount paid to the journalist for the current use, and 4) the journalist’s phone number? No, she can’t. The data for 1) is stored in the print editorial system, 2) in the DAM (rights) and the DM system (contracts), 3) in the SAP accounting system, and 4) in the employee directory.
Of course, all of this can be made to work since each system has some sort of API. With one-off interoperability hacks, for which you need a programmer who’s familiar with the systems involved! Incompatible information silos are hurting the business and wasting a lot of developer time. This is a known problem, and the subject of two more acronyms: II = Information Integration, and MDM = Master Data Management. As a software developer, I see two possible solutions:
First, let’s rule one option out: going back to a monolithic system that does everything at once is not a solution. Neither its user interface nor its backend implementation would be well-suited to the host of different tasks that users need software for.
But we could find a clever, generic way to link information from various systems together so that we can “surf” it in any direction. Linked data in the form of HTML+RDFa is a great way to do this, see my post Publish your data, don’t build APIs. (And Lars Marius Garshol on Semantic integration in practice.)
Or a much more complicated (but fascinating) solution: Product developers stop rolling their own databases and assume they’re going to operate on a shared datastore that is created and managed by someone else. Their software accesses it through a configurable data access layer. Imagine running WordPress and Drupal simultaneously on top of the same MySQL database, working on the same content! A shared datastore would allow for centralized business rules and permissions. But for practical reasons (performance!), this is likely not going to happen. (A baby step in the right direction: Use LDAP instead of creating your own users and groups database tables. We’ve done this and it works great.)
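A toy illustration of what shared identifiers buy you (the data and field names are entirely hypothetical): if each silo refers to the photographer by the same URI instead of copying her contact fields around, a UI can join the pieces on demand and “surf” from the photo to everything the other systems know.

```javascript
// Hypothetical silo contents. The only convention the systems share is
// that they all refer to the photographer by the same URI.
const personURI = "https://example.com/people/jane-doe";

const dam = { photo: "photo-123", creator: personURI, rights: "print + online" };
const accounting = { invoice: "2014-042", payee: personURI, amount: 120 };
const directory = { [personURI]: { name: "Jane Doe", phone: "+49 40 123456" } };

// Given a photo, collect everything the silos know about its creator,
// without any silo having copied the others' data.
function creatorCard(photo) {
  const uri = photo.creator;
  return {
    ...directory[uri],
    rights: photo.rights,
    paid: accounting.payee === uri ? accounting.amount : undefined,
  };
}
```

In real life the “join” would be an HTTP dereference of the URI rather than an in-memory lookup, but the principle is the same: link, don’t copy.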
In real life, information doesn’t stand alone – it lives inside a web of interlinked data. Until our systems can handle this reality, we’ve got to break it down, remodel and copy it for each siloed system. Let’s try to improve on that!
Update: See also Ralph Windsor – Digital Asset Management And The Politics Of Metadata Integration.
Tue, 25 Feb 2014 20:17:42 +0100
We’re doing a lot of customization in our projects. We want a set of configurable UI widgets that can be freely combined when building custom pages, and partners need to be able to add their own widgets. The UI will be based on the Bootstrap framework, and we want to be able to integrate widgets from libraries like jQuery UI.
The MVC (model / view / controller) approach seems to make sense; maybe as implemented in the separable model architecture of Java Swing components: A component can manage its own data, or be configured to share data with other components. Our UI components should be “loosely coupled”, communicating exclusively through events in order to avoid breakage if a component is missing or not initialized (and to make replacing components easier). The Twitter Flight framework has been a wonderful inspiration, make sure to read about it! We’ve extended their event approach a little: Events can collect and return responses using event.result in jQuery custom events (with promises/Deferred for asynchronous results).
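A minimal sketch of the event approach in plain JavaScript (a stand-in for jQuery/Flight, with hypothetical names): components only know the event bus, and a triggered event collects responses from whoever happens to be listening – if no component answers, nothing breaks.

```javascript
// A tiny event bus: components subscribe by event name; trigger() collects
// the return value of every handler, so a missing component simply
// contributes nothing instead of causing an error.
const bus = {
  handlers: {},
  on(event, fn) {
    (this.handlers[event] ||= []).push(fn);
  },
  trigger(event, data) {
    return (this.handlers[event] || []).map(fn => fn(data));
  },
};

// A "search box" component asks who can contribute filters,
// without naming or depending on any specific component.
bus.on("collectFilters", () => ({ field: "type", value: "image" }));
bus.on("collectFilters", () => ({ field: "lang", value: "de" }));

const filters = bus.trigger("collectFilters");
// An event with no listeners is harmless:
const none = bus.trigger("nobodyListens");
```

jQuery’s event.result only keeps the last handler’s return value, which is why collecting all responses (and wrapping asynchronous ones in promises) is a useful extension.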
Small is important to us. The simpler, the better – we need to find a clever, powerful, extensible, future-proof architecture with minimal lines of code. (Good luck with that, I know.)
Wed, 19 Feb 2014 14:54:11 +0100
Last Friday, a project I’ve been involved with was officially launched: filmothek.bundesarchiv.de. It’s a Web site showing contemporary history videos from the German Federal Archives (Bundesarchiv), distributed by their partner Transit Film, implemented by our company Digital Collections (based on our Digital Asset Management system DC-X), with Web design by our partner Pier2Port.
Content is king – the most interesting thing about this is the amazing videos from post-WW2 Germany and beyond (most of them in German): The surrender of Nazi Germany (1945), John F. Kennedy in Germany (1963), the fall of the Berlin Wall (1989). I love that it’s original, unadulterated material. The videos can be viewed freely, no registration required. (Film producers can buy licenses, of course.)
At the moment, there are about 2,300 videos in the archives. In the back end, there’s a standard DC-X installation that holds the video files (in MP4 and WebM format) and the video metadata. Most of the customization is in the importers and the metadata schema. Our customers can edit metadata in the back end, which is then replicated to a front-end server.
There are lots more features to come, and even more videos will be made available – so make sure to come back to the site again in a few months!
Tue, 18 Feb 2014 09:22:55 +0100
Our DC-X DAM systems are often the central content hub at large publishers, with lots of data flowing in from photographers, news agencies, editorial systems, and Web CMSes. These provide data (articles, photos, graphics, ads, pages) in a host of different formats, which means we’re building “importers” all the time to ingest content into the DAM system.
As a developer, I’m often told to estimate how long building an importer will take. I can be sure that there’s some information missing, so here’s my checklist of things I need to know before I can give a rough estimate of the development time:
- Is the data copied into a local “hotfolder” (DC-X default), or does the importer have to fetch it (via FTP, an RSS feed etc.)?
- Which file format does the data come in (XML, HTML, CSV, JPEG, PDF, …)? Can it be in different formats?
- Can you provide the data in a format that the DAM system already supports? (Then we’re done.)
- How large are the files (typically, and maximum)? How many files are expected per hour/day/week?
- Is there a naming convention for directories and files? What should the importer do if files don’t follow that convention?
- Should metadata be read from the file and directory name? Which exactly?
- If some data arrives as a set of multiple files (e.g., a PDF file with an accompanying XML file): When starting with one of the files, how can the importer find the other files in the set (naming convention, file name given in the XML etc.)? Will they arrive roughly at the same time? If the set is incomplete, how long should the importer wait for the missing files to arrive? Should it import anyway when files are missing, or report an error?
- How about duplicate files coming in? Can they simply be rejected by the importer (DC-X default), or is there a need to update or replace data from previous imports? How can the importer detect duplicates? (DC-X default: A checksum on the file’s contents.)
- Should preview images be rendered? (DC-X will do this by default.) Or are preview images provided? Any special requirements when rendering preview images (like adding a watermark)?
- When rendering preview images from graphical file formats, is a colorspace or ICC profile conversion needed? (By default, DC-X will detect CMYK and create RGB previews.)
- Should text be extracted from textual files (PDF, EPS, Word)? (DC-X will do this by default, details depending on file format specifics.)
- Are there special requirements for reading file metadata (EXIF, IPTC, XMP etc.)? (DC-X reads and imports common metadata by default.)
- Have you provided representative samples of the input files?
- What exactly do your XML / CSV files contain? Have you provided a textual description? (It’s great if you’re using a standardized format, but please describe how exactly you’re using that format – most standards leave room for interpretation or extensions.) What metadata fields should the XML tags be mapped into on import?
- Are the files linked in some way? How can the importer find out what links where, and must the files be imported in a certain order to be able to establish these links?
- Does the new data fit in with the existing metadata schema, or will we have to define new fields? Any special expectations regarding searching the new data?
I’m sure this list is incomplete – please let me know what I’m missing!
Tue, 18 Feb 2014 15:05:00 +0100
Here’s an update to our thoughts on the main navigation for our new, simpler DAM UI:
The filter column on the left has been removed in favor of Google-style dropdown lists between the search box and the results. This saves space, and I hope it will encourage filter usage because the filters now appear where the user is actually looking.
The search section indicator (“Bilder” in the screenshot) to the left of the search box has a brighter background; in the old draft you couldn’t really see that it belonged to the search input field.
A nice detail is that the search box now expands once focused. To make space for the larger input field, the links to the right of it switch from icon + text to icon-only while the search box is expanded.
(Please excuse the German screenshot. I took it a few days ago and cannot produce an English one because my development environment is messed up and ugly right now – we’ve switched from frontend to backend experiments for a while.)
If you’re interested in this stuff, you should read the brand new FogBugz Visits the Head(er) Shrinker post by Adam Wishneusky. Looks a bit similar, and also has a search box that grows when you type in it!
Update: The Nielsen Norman Group says we shouldn’t hide the available search sections in a mega dropdown… Jennifer Cardello and Kathryn Whitenton – Killing Off the Global Navigation: One Trend to Avoid: “Even if the global navigation is difficult to design and hard to maintain, most sites will still be better off showing top-level categories to users right away. It's simply one of the most effective ways of helping users quickly understand what the site is about.”
Fri, 07 Feb 2014 15:56:59 +0100
Raph Koster – Self-promotion for game developers:
“If you do not take your field seriously enough to study it, and try to know everything about it, and try to add new knowledge and understanding to the field, then you probably shouldn’t be self-promoting.
[…] You will earn respect for being honest enough to admit mistakes. It will not harm your standing at all. […] You will learn more about those mistakes from writing about them, and that will make your own work better.
[…] Odds are very good that well over half your career will be “dark matter” — stuff that will not be seen by the public. So those parts that are seen matter more than you think.
[…] Say “we” not “I.” Because it’s almost always the truth.
[…] Have your own website, and have a portfolio of some sort on it. Ideally, the website’s domain is your name. […] Slideshare and its widgets will be the detritus of history in fifteen years. Post/host copies of everything you can on your own site.
[…] Get comfortable with public speaking. Develop a sense of humor if you haven’t got one. Be very good at demoing. […] Your marketing dept will start asking for you because devs with these skills are rare and valuable.”
(Via Patrick Durusau.)
Fri, 07 Feb 2014 22:35:15 +0100
James Rourke – DAM for Beginners: User Interface & experience:
“A note to vendors: don’t underestimate the value of how your system looks; you want to wow your client in a demo. A well-functioning system that looks dated or too technical might miss out to a less well-functioning system that looks nicer and easier to use.
[…] This technical UI can be used at the ‘back-end’ of a DAM system, where administrative functions and other complex actions are carried out, whilst the ‘front-end’ remains a user-friendly portal allowing for more basic actions. In this case only a limited number of well-trained, technically-aware users would operate on the ‘database’ UI.”
Exactly what we’re building right now: A friendlier, simpler UI for the casual user, complementing our complex, fully-featured UI.
Tue, 04 Feb 2014 11:51:01 +0100
Stephen Moss – Pete Seeger: five great performances:
“The manner in which he calls on the audience to participate is telling, too. He wasn't the star; the audience was. Music was a vehicle for mass expression. That helps to explain his opposition to Dylan's new course. Confronted with a rock band, the audience were reduced to mere spectators, fans; Seeger wanted participants, activists. He wanted to change the world, not just entertain it.”
Sorry for the off-topic post. But there are lessons in here for software as well… (I’m enjoying the 1963 concert recording a lot, by the way.)
Thu, 30 Jan 2014 22:29:47 +0100
I love this quote from the Re/code interview with Chris Fry, Twitter senior vice president of engineering:
“One of the things I always think about is how to deliver three things to everyone that works for me. One is autonomy, one is mastery and one is purpose.”
These three are exactly what I value and want the most as an employee. Here’s what they mean to me:
Autonomy means that we can take initiative, make decisions, take responsibility, and manage our work on our own. We can only have autonomy if management trusts us to be self-motivating grown-ups, experts who work in the best interest of the company and its customers (even when no-one is supervising us). It also requires transparency and full information sharing – if someone holds back information, he’s keeping us from making the right decisions.
Mastery is two things: First, we want to be able to do great work – we love to learn, to get trained and gain experience. But then, we also want to be allowed to do great work. Stop the mediocrity, the permanent rushing and cutting corners, the overpromising and underdelivering. We want quality and beauty and excellence. Not to selfishly enjoy our pretty code, but for the long-term good of the customer and the company. (Be aware that we keep growing: While you might think we’ve just mastered some programming language, we’ve learned a lot more in the process and strive for quality in every other aspect as well.)
Purpose feels different for everyone, I guess. My goal is to make people happy by building tools that make their jobs easier and more fun. Tools that facilitate knowledge sharing, learning and creativity, which in turn will positively affect even more people. (Building photo databases and newspaper archives, as I’m currently doing, is a pretty good match.)
Want to keep your employees happy and motivated? Money can’t buy you that. Be willing to lose power, to truly care for them and treat them as partners. And give them autonomy, mastery and purpose.
(More daydreaming: If I were a manager)
Mon, 27 Jan 2014 22:40:56 +0100
Chad Fowler – Your most important skill: Empathy:
“I’m also a very strong introvert. I recharge when I’m alone or in very small groups of people (no more than 2 including myself is ideal) and I exhaust myself in crowds or in constant discussion.
[…] The reason crowds of people exhaust me is that I am constantly trying to read and understand the feelings and motivations of those around me. If I could just go through life talking and not listening, hearing but not processing, alone time and time in groups wouldn’t be so different for me.”
That’s totally me.
Tue, 21 Jan 2014 23:03:35 +0100
One of the more technical decisions when building our simpler DAM system user interface: Should we build it like a Web site, i.e. as a set of interlinked but independent Web pages? Or as a fancy, Ajax-powered “single page Web application” that you load just once, with all further interactions taking place within the same page?
The last few times we had to decide this, we went with what was fashionable: DC4 lived on a single page consisting of various frames. DC5 was a Web site with different pages. The current DC-X UI is a single-page app (SPA).
For a good overview of the pros and cons, see Steven Willmott – The Death of the Web page.
My take: It’s better to start with regular Web pages, because their development takes much less time (enabling an agile, incremental development process). Moving to a single-page app later is entirely possible (the other way round is much harder). And initial load time as well as working links matter for casual users (who don’t have the DC-X UI open all day) and for Web interoperability.
What do you think?
Wed, 15 Jan 2014 11:23:28 +0100
Phil Libin, Evernote CEO – On Software Quality and Building a Better Evernote in 2014:
“There comes a time […] when it’s important to pause for a bit and look in rather than up. When it’s more important to improve existing features than to add new ones. More important to make our existing users happier than to just add more new users. […] Intentionally slowing down to focus on details and quality doesn’t come naturally to many of us. Despite this, the best product companies in the world have figured out how to make constant quality improvements part of their essential DNA. Apple and Google and Amazon and Facebook and Twitter and Tesla know how to do this. So will we.
[…] Since all Evernote employees are power users by definition, no one is more motivated to make Evernote better just for the sake of our own productivity and sanity. I’ve never seen people happier to just fix bugs.
[…] We understand that we have to maintain a high level of quality for the long term, if we want Evernote to be seen as a truly high-quality product.
[…] Our goal isn’t to have a product that’s just good enough that users rely on it despite its warts, it’s to have a world class product, built with solid technology and with a fit and finish worthy of our users’ love and loyalty.”
A great post, the likes of which I’d love to read from a lot more CEOs!
Wed, 08 Jan 2014 08:33:23 +0100
Jack Vinson – Out of the Crisis - still relevant:
“Deming repeats the main mantra over and over: Management owns the system. It is the system that generates the results. If those results are unacceptable, it is management’s responsibility to investigate and improve the system. Repeatedly. This is continuous improvement and is the only way to survive. Management should not pin the blame on their employees, the equipment, their suppliers, their customers, the weather, or anything else. Management are responsible. Period.”
Fri, 20 Dec 2013 10:34:21 +0100
Ruben Verborgh – The lie of the API:
“Accessing the website is quite easy: you just go to the URL of an object to visit it. […] Now developers come in. It can’t be as easy as reusing this unique identifier, can it? Of course not, we first have to read the documentation. Here are the steps you need to take: 1. Request an API key. 2. Receive an e-mail with this key. 3. […]
You get what you ask for. I imagine that developers were approached with the question “can you build an API?” And this is what they did.
But the question was wrong. It should have been: “can you add machine access?” That’s what we actually wanted all along, and an API is not the Web way to do that.”
Fri, 29 Nov 2013 14:15:48 +0100
Asset Bank – Introducing Crowd Feature:
“As far as we know Asset Bank is the first vendor of enterprise business software to offer crowdfunding as an option for clients who request the development of a product feature. Crowd Feature is intended for clients who want a new feature, can be flexible about time scales, and have a limited budget to spend on it.
[…] Crowd Feature is a SaaS website, available to any software vendor who is interested in doing the same for their clients.”
Great idea! For add-on features that don’t need changes in the core product, I’d go one step further and add an option to collaboratively develop the feature as open source, or ask a third party to implement it. That way, instead of having to give money to the vendor and wait for him to build it, you (a customer or partner) could invest the time of your own developers or pay someone else. Suddenly you have a core product as a common platform, and a market place where anyone can add value…
Thu, 28 Nov 2013 10:37:24 +0100
Jeff Schmitt – The Silent Company Killer:
“We had all hit the ceiling. To our superiors, we were simply plug-and-play commodities who performed a series of tasks. They ignored our ideas, so we quit sharing them.
[…] That’s the silent company killer: The failure to bring out the best in employees. They focus on executing tasks and fitting people into boxes. […] In their race to get the job done, they forget that the most productive employees are those who are learning, growing, and seeing themselves progress.”
Sat, 16 Nov 2013 23:00:31 +0100
Jeff Jarvis – CMS as Media Salvation. Not.:
“We should take inspiration from Doc Searls’ VRM (vendor relationship management) movement, figuring out how the public should manage us so we can serve them better. We should learn by example from Waze, Twitter, Reddit, Instagram, Craigslist, Facebook, et al and explore the value of offering platforms to communities so they can do what they want and need to do (“elegant organization,” Mark Zuckerberg calls that), with us adding journalistic value to the flow of information that now can exist without us.
If you have media ambitions and want to build an application, build something that is useful to the public, not us. No one in the public will value us because of the CMS we made. They couldn’t and shouldn’t give a damn.”
To quote myself, here’s a related idea from my German blog post Journalismus: Themenzentriertes Arbeiten, vernetzte Beiträge und hilfreiche Software:
“Local media needn’t cover everything themselves, but should be able to act as a platform for any topic. If readers start to write about a new topic of interest on their own blogs or social media accounts, local media can set up a central Web page for that topic, to aggregate and curate what their readers have written. They don’t even have to administer that page themselves, readers might volunteer for doing that. (An example: Combine official traffic information and tweets mentioning traffic jams in a way that’s useful to commuters.)”
Thu, 14 Nov 2013 14:59:36 +0100
Neil Gaiman: Why our future depends on libraries, reading and daydreaming:
“We all – adults and children, writers and readers – have an obligation to daydream. We have an obligation to imagine. It is easy to pretend that nobody can change anything, that we are in a world in which society is huge and the individual is less than nothing: an atom in a wall, a grain of rice in a rice field. But the truth is, individuals change their world over and over, individuals make the future, and they do it by imagining that things can be different.”
Wed, 06 Nov 2013 22:20:59 +0100
I’m an idealist who loves to daydream. For example – what would I do if I were a manager? I work as a “lead software architect”, a fancy title that means I’m still just a developer. (Which has its pros and cons.) Here are the entirely theoretical lessons I’ve learned from watching others manage, and from reading pieces on leadership on the Internet:
First I’d assume that our team is a group of hard-working, intelligent, rational, professional, self-managing grown-ups who do their own independent thinking and are happy to take responsibility. We all want to do a great job and want the company and our colleagues to be great, too. Each of us has unique strengths, sees things that others overlook, and has an opinion that matters as much as everyone else’s. The team organizes itself, everyone takes on the next thing that’s important to work on.
In this environment, management mostly means helping to remove impediments which the team finds keep them from being productive. Interactions with other parties (other departments, customers) need to be scheduled and organized. Information must flow freely and tools work smoothly. The team needs help setting up spaces to review, reflect, encourage and criticize each other’s work. An occasional conflict needs mediation. Decisions have to be made in time (often by the team, but someone’s got to drive the decision making process).
I’d make almost all company information transparent, make sure that everyone has an up-to-date overview of the company financials and the work everyone’s currently doing. And of the work that lies ahead: all the features and deadlines we promised to our customers and partners. Sales people would be asked to bring photos and reports from their visits to potential clients. Developers would, where possible, publish screencasts of their work. We’d tell each other more stories of our day-to-day work, and help everyone see the big picture. This, I’m convinced, would make us grow together, increase motivation, uncover hidden problems and improve our decision making.
And we’d obsess over what is delivered, not over processes and rules. I’d try to spark a passion for the customer, to get everyone closer to the customer, closer to the problem. Make it our own problem, either because we’re using the thing ourselves or because we’re personally being held responsible by a customer. Ain’t it funny how priorities change when a developer visits a customer and brings back a list of things to do, knowing he’ll have to return in two weeks and report on the progress?
Other essentials: Be honest, admit your own faults freely but don’t point others’ out. Be understanding and forgiving and kind. Help others to focus and learn and grow. Find the right combination of pragmatic, elegant simplicity and quality/excellence – we’ll need both for our products to survive.
Let’s hope that I’ll never become a manager… I sure wouldn’t be able to live up to these ideals! (Hey, maybe the leader position can rotate between team members?)
Wed, 06 Nov 2013 22:32:35 +0100
Stefan Tilkov – On Monoliths:
“When a project is started, there is an assumption that it’s the goal of a project to create a single system. This typically goes unquestioned, even though the people or person coming up with the project boundaries often don’t decide this consciously.
[…] In my view, the most important thing to do, then, is to find out how many systems we should be building in the first place. It may be a single one, but it may also be two, five or a dozen (though probably not more) – clearly, the decision should be made very consciously, because whatever system boundaries you pick, you will likely be stuck with them for a very long time.”
Wed, 30 Oct 2013 10:44:30 +0100
Dave Ginsberg at Elegant Workflow – Interview with Chad Beer – Director, Digital Assets and Rights Management at American Express Publishing, and Part 2:
“I think search interfaces can be really, really klunky for something that should be so easy to drill down into, especially if you compare search on DAM systems to search on e-commerce sites. And ease of navigability in sort of a fluid, intuitive sense of how you get from one area of a DAM to another tends to be okay, but nothing that users can really teach themselves when you consider how much people can teach themselves about using apps on smartphones. There’s so much good interface design in the world now that is designed around guiding the user to using a new tool, where they don’t have to sit down and take a class. I just don’t see that kind of sweet user-sensitive design in DAMs.
[…] What I would tell anybody to do who’s getting a new system: Pick two to three primary goals they want their system to achieve, or pain points they want their system to address, and stick with those two or three. And don’t go any further – at least not until the system’s in place and successfully addressing that short list. […] People are disappointed by shaving it down. But then once it’s in place, people forget what you didn’t get. They only remember what it’s doing well, and if it addresses one or two or three things really well, people are happy. And you can build on that satisfaction and that success. You can prove your concept that way and then move forward with it.
[…] Having software at people’s fingertips teaches them so much about 1) what the software does well, 2) what they will absorb about the software and 3) what they really need from it. […] You can’t get to nuanced decisions until you’re actually touching software.”
Tue, 29 Oct 2013 12:55:14 +0100
Seth Gottlieb – CMS Adoption. Think Vertical, Not Horizontal.:
“High vertical adoption means using advanced features of the platform.
[…] Most of those flashy features that you see in a software demo are hardly used and the problem is getting worse, not better. […] I can’t tell you how many customer references I have talked to that only use the most basic features. And the software vendors are as concerned as I am about this. At least they should be. If vertical adoption doesn’t improve, customers will migrate to cheaper, simpler software.”
This applies to DAM software as well…
Wed, 16 Oct 2013 11:11:47 +0200
Naresh Sarwan – Can Current DAM Platforms Survive the Maturity Phase?:
“With a few notable exceptions, many DAM vendors have an almost limitless capacity for misplaced arrogance. They have incorrectly interpreted increased demand as a sign of improving customer satisfaction. This is a loud and clear message to vendors: just because you are selling well right now does not mean that users think your products are good enough!
[…] DAM vendors are falling over themselves to copy each other and building ever more complex platforms with layers of legacy issues that will need to be unwound and replaced repeatedly over the next few years. This will tie them up in knots and provide more agile competitors with an opportunity to make rapid progress at their expense.”
Wed, 09 Oct 2013 11:18:52 +0200
Paul Watson – Hack 70,000 UGC videos from the Storyful archive at MediaHackDay:
“The value of one of the biggest assets that publishing houses hold, their content archives, has yet to be unlocked.
[…] What stories happening now have precursors in our archive?
[…] Finding new ways to search, tag, link, package and push data is vital to the evolution of the modern newsroom.”
I fully agree that there’s huge potential in archives. (Our DAM systems have always been both a newspaper archival system and a news agency content store. When selling, we intentionally downplay the former part because money is made in newspaper production, not in archives.) The conviction that digital creations will have meaning and value later, often in unexpected ways, is at the very heart of Digital Asset Management. Why else spend money to keep yesterday’s stuff? Don’t forget about your archives and librarians!
Mon, 07 Oct 2013 23:13:00 +0200
Texts and images that newsrooms receive from external suppliers – mainly news and picture agencies – arrive digitally and usually with good metadata. But the path into the publisher’s production systems often requires too much manual work, and metadata gets lost along the way. This is because the publisher’s software has technical limitations or hasn’t been configured accordingly. A feedback channel back to the supplier after publication is either missing or cumbersome.
Ideally, an editor would have:
… an overarching view of planning: when agencies or staff will deliver content on which topics, with direct transfer into production planning
… a unified view of digital content from all available sources: a portal or search engine providing access to self-produced texts and images, agency material, offerings from other newsrooms or freelancers, and internal and external archives
… a one-click transfer of any (planned or already available) content into one’s own production, with all metadata (creator details, usage rights and remuneration, captions, keywords, links to planning)
… an automated feedback channel that informs the supplier about planned or completed publications, greatly simplifying usage statistics, billing, and specimen copies
… an easy way to offer one’s own content to other newsrooms
All of this is technically feasible. What needs to be programmed are elegant and understandable interfaces for content providers (e.g. portals of news agencies and image databases) and production systems (editorial systems, CMS). Harder is the necessary standardization of metadata and protocols:
We need conventions on which metadata formats are used and how (e.g. NewsML G2, RightsML). At a minimum, the metadata essential for production (date, embargo, caption, usage rights, copyright) must be exchangeable in a uniform (or at least compatible) way. Topic-centric work requires going much further and creating a shared metadata vocabulary (for people, places, events, topics). That would add considerable value, but it’s difficult: the Semantic Web has been struggling with unifying vocabularies and structures for a long time. News agencies are in the best position to set standards here.
And protocols must be established for the interfaces through which images and texts are offered. If every provider and consumer runs its own data silo with proprietary APIs, no universal content marketplace can emerge. Let’s borrow from the Web how it’s done: every content provider either uses a service provider or hosts a website itself, offering a dedicated HTML page (at a permanent URL) for each text (or image) and each topic, with links and semantic HTML markup (for metadata and rights, e.g. RDFa). Various parties can then build crawlers and search engines on top of that. As an alternative to crawling, XML sitemaps, RSS feeds, and PubSubHubbub can be provided. Content and search engines will mostly not be public but will require a login (which also enables per-user rights, for example).
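The crawler side of this can be sketched in a few lines of Python. This is a toy extractor, not a real RDFa processor (a library like pyRdfa is needed for that), and the schema.org markup in the sample page is my own illustrative choice, not a prescribed convention:

```python
# Toy sketch: extract RDFa name/value pairs from a provider's HTML page
# using only the standard library. Handles only the simple cases of
# property="..." with @content or with element text content.
from html.parser import HTMLParser

class SimpleRdfaExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.triples = []      # collected (property, value) pairs
        self._pending = None   # property waiting for its text content

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        prop = a.get("property")
        if prop:
            if "content" in a:          # value given in @content
                self.triples.append((prop, a["content"]))
            else:                       # value is the element's text
                self._pending = prop

    def handle_data(self, data):
        if self._pending and data.strip():
            self.triples.append((self._pending, data.strip()))
            self._pending = None

page = """<div vocab="http://schema.org/" typeof="ImageObject">
  <span property="name">Harbour at dawn</span>
  <meta property="encodingFormat" content="image/jpeg"/>
  <meta property="copyrightHolder" content="Jane Doe"/>
</div>"""

extractor = SimpleRdfaExtractor()
extractor.feed(page)
print(extractor.triples)
```

A search engine built on such pages would feed extracted pairs like these into its index, one record per permanent URL.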
(See also: Software für Journalismus – zwei Ideen vor dem scoopcamp 2013 and Linked Data for better image search on the Web.)
What do you think – who’s in?
Fri, 27 Sep 2013 15:21:02 +0200
Julien Genestoux – Follow buttons everywhere:
“Generally, users express a future interest when they hit a follow button. Rather than performing a query over past data, they express their interest in future, related events.
Following is to the future what searching is to the past.
They have an amazing value for the service who offers them too, because it’s a strong signal to determine what content is expected by its users on their next visit, or what kind of content they’re willing to be notified for.
[…] We need to decouple the publication platform and the consuming platform for public information. Decoupling would also increase engagement on the publishing platforms because it would open their gates to the logged-out users.”
Great post! Having a curator watch interesting topics for me, and letting me subscribe to these topic feeds, is what I see as an important part of the future of news. (See my – sorry, in German only – blog posts Software für Journalismus and Themenzentriertes Arbeiten.) It’s also why I keep pushing for RSS/Atom feeds and am looking into IndieWeb.
Decoupling is something that search engines are good at. A feed-reading, topic-centric search engine that always sorts by date and uses semantic metadata from the Web pages (especially which topics the author writes about, using URLs instead of hashtags) should be a great hub for “following”.
Tue, 24 Sep 2013 11:18:19 +0200
Tim O’Reilly – How I failed:
“As a management team, you aren’t just working for the company; you have to work on the company, shaping it, tuning it, setting the rules that it will live by. And it’s way too easy to give that latter work short shrift.
[…] I was always pretty good at finding the sweet spot where idealism and business reality meet, but I didn’t spend enough time teaching that skill to everyone on my team. […] If I were starting O’Reilly all over again, I’d spend a lot more time making sure the culture I was trying to create was the one that I actually did create.
[…] Every manager — in fact, every employee — needs to understand the financial side of the business. One of my big mistakes was to let people build products, or do marketing, without forcing them to understand the financial impact of their decisions.
[…] Looking back, I wish we’d worked harder early on to build an organization in which human potential isn’t just expected and taken for granted, but is also nurtured — if necessary, with tough love. […] We ended up building a culture where managers too often compensated for the failings of employees by working around them, either working harder themselves, hiring someone else to fill in the gaps, or just letting the organization be less effective.
[…] I never regretted raising the bar […] but I look back at the many times I let something go by that I shouldn’t have because the team would be upset, and I regret every one of them.”
Sat, 21 Sep 2013 23:40:35 +0200
Laurence Hart – What Constitutes Industry Leadership?:
“People take their cue from the actual Leader. In the Content Management industry, it is usually the vendor(s) that are being copied by other vendors and being brought up in almost every pre-sales discussion.
[…] Pretenders: These are vendors that think they are leaders but aren’t perceived that way. The older the vendor, the more likely they are a Pretender.
[…] You could have the best tech that money can develop, but if the world either doesn’t know about it or can’t seem to get it to work, it doesn’t matter. […] Leadership is about having the best vision, communicating on that vision, and delivering on that vision.
[…] This vision needs to be out there year-round, not when a new product release or annual conference takes place.”
Mon, 16 Sep 2013 10:03:55 +0200
I found a rough concept from last year in a “drawer”… Maybe there’s an interesting idea in it for someone. It’s about topic-centric publishing, interlinking articles, and a suitable tool for editors. (See also my blog post Software für Journalismus – zwei Ideen vor dem scoopcamp 2013.)
Thesis 1: Related content (“more on this topic”, “you might also be interested in”) and the metadata it requires are becoming ever more important, because…
Thesis 2: … for media users, the focus is shifting from individual articles to topic pages: a page that always shows the latest on a topic (a Facebook or Twitter profile page does the same), and below it the chronology (all interesting older articles on the topic, including the archive). (On the “river of news” trend, see also Nachrichten sind Flüsse, keine Seen by Felix Schwenzel and Stop Publishing Web Pages by Anil Dash.) Topic pages already exist today, e.g. at SPIEGEL ONLINE, but they’re often somewhat half-hearted and cover only a few topics.
Thesis 3: You can no longer ignore what other media contribute to a topic. A good topic page includes links to what the others are writing (“people come back to places that send them away”).
Thesis 4: “What the others are writing” includes social media. The point is not to collect comments on an article on your own site, but to display and integrate the discussion taking place on the open Web. Preferably with a focus on your own users/readers (you should know them, e.g. those who have commented on articles).
Thesis 5: Social media will diversify: Facebook and Twitter will face competition, including from protocols that let everyone write on their own website and still interlink with others (IndieWeb). But that will also make it easier to find out what someone is writing about (unambiguous URLs instead of sometimes ambiguous hashtags).
Thesis 6: A local media outlet doesn’t have to write about everything itself, but it should be able to offer every topic a platform. If its own users increasingly write about an emerging topic on blogs and social media, the outlet can simply act as aggregator and curator, offer a central page for the topic, and possibly even hand the page’s maintenance over to users. (For example, traffic information can be compiled from congestion reports and tweets like “Stau auf der A1” into a page that’s particularly useful for commuters.)
Thesis 7: Automated categorization and keyword tagging is needed as a tool, but its quality isn’t sufficient on its own. People must be able to configure, override, and train the automation.
What would a tool for editors look like that supports interlinked, topic-centric publishing?
1. An elegant, fast, automatically updated topic overview: What are we covering today (already published or still in planning/in progress)? Which other topics, currently not covered by us, are happening in other media and on social media? What else is going on there that we can’t easily squeeze into a topic cluster?
2. All content on a topic is displayed together: our own (currently published or in progress), external (other media, social media), and archive content. This applies to all content types (text, images, videos). The list mustn’t get too long: a clever algorithm shows only the most important items (e.g. the newest and the most comprehensive, or the most trustworthy source). A click on “More…” then shows the full list. With one click, any element can be published or linked on the topic page.
3. The editor can also see this combined overview of a topic while writing about it.
4. Straight from this list, you can get in touch with social media commenters: ask follow-up questions, request permission to publish, and so on.
5. Topic detection won’t always work. It must be possible to remove something from a topic with one click if it doesn’t belong there. And the parameters of a topic (signal words, sources, forbidden words, etc.) must be simple to adjust, with the effects immediately visible live: e.g. exclude a wrong term and watch the list get cleaned up, or generalize a topic definition and watch two previously separate topic clusters merge into one.
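Point 5 can be illustrated with a toy sketch: a topic defined by signal words and forbidden words, where tweaking the definition immediately changes which items match. All names and rules here are invented for illustration:

```python
# Toy rule-based topic matching: forbidden words override signal words,
# so editors can clean up a topic by excluding a single wrong term.

def matches(topic, text):
    """Return True if the text belongs to the topic."""
    words = text.lower()
    if any(bad in words for bad in topic["forbidden"]):
        return False
    return any(signal in words for signal in topic["signals"])

# Hypothetical traffic topic: "musikstau" is excluded so that a concert
# review doesn't end up on the commuters' traffic page.
traffic = {"signals": ["stau", "a1"], "forbidden": ["musikstau"]}

items = [
    "Stau auf der A1 bei Hamburg",
    "Konzertbericht: Musikstau in der Innenstadt",
    "Fußball heute Abend",
]

matching = [t for t in items if matches(traffic, t)]
print(matching)  # only the genuine traffic report matches
```

A real implementation would of course add sources, weights, and training, but the point stands: the topic definition is data, so changing it can re-filter the live list instantly.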
Thu, 12 Sep 2013 23:22:18 +0200
[Sorry for the German blog post – I’ll publish an English version soon.]
I work for Digital Collections, a vendor of DAM systems (and more) whose customers come mainly from the publishing industry. We’re watching that industry’s upheaval up close and thinking about it… Here are two ideas for the “journalism of the future” from my perspective as a software developer and documentalist – ideas I’d love to help realize. I welcome additions and questions, by e-mail or Twitter, or in person at scoopcamp 2013 in Hamburg (which is my occasion for writing these ideas down).
(Note: These are my own views; I’m not speaking for my employer here!)
1. “Keeping an eye on your topics” – a topic portal for readers
Getting a current overview – “what interesting things are happening near me and in the rest of the world?” – works fine via the printed newspaper, TV and radio, or a news website. Informing yourself selectively about a specific topic (“Mesut Özil is transferring? I have to read that”) also works quite well via a news site’s search function or search engines.
What’s difficult, however, is staying continuously informed about a topic. I’m interested, for example, in the subject area “Jugendamt” (youth welfare office) and the topic “Snowden/NSA affair”. My wish as a reader: “I want to notice when the press, radio, TV, or blogs publish something important on my topics.” So that I can then read, listen to, or watch it. I’d even be willing to pay now and then.
For football fans there are plenty of offerings; they don’t miss much. Otherwise, I only find a topic page with luck (and only for a single news source – here’s a SPIEGEL ONLINE example). RSS feeds are usually not topic-specific and also come from a single source. And I rarely bother to set up the rather mediocre Google Alerts.
Andreas Fischer asks: “Why isn’t there already a joint portal of our daily newspapers that, much like Google News, directs readers to the individual websites?” Paywalls are multiplying; perhaps a kind of “iTunes Store for publishers’ content” could emerge. But it would have a problem with the fragmentation of the content. With music, pages for artists and albums are a natural fit. The flood of tens of thousands of new articles per day likewise needs sensible grouping to offer reader-friendly access. In my opinion, grouping by topic would be ideal.
So: a website that lists the topics currently covered in the media and offers a page for each topic, updated daily (or more often), linking to matching articles. Articles on the well-known news websites, but also from good blogs, from the archives, background information on Wikipedia, or pointers to TV programs. With publication date, name of the source, headline, length (long/medium/short), author, and a note if the article sits behind a paywall. I can have myself notified by RSS feed or e-mail when new links are added.
That’s what I’d wish for as a reader. And I think it’s feasible!
Update: Here’s a prototype of a topic page for scoopcamp 2013. See also my blog post Journalismus: Themenzentriertes Arbeiten, vernetzte Beiträge und hilfreiche Software.
2. An open network for providers of images and texts – and a search engine for editors
Newspapers have never consisted solely of self-produced content. Freelancers, external contributors, correspondents, news agencies, and picture agencies deliver material, and one’s own production is in turn offered to others. The Internet and digital photography dramatically simplify the distribution of content – and its publication. There are ever more potential providers and consumers of images and texts.
Bringing them together and enabling a simple exchange of content (including metadata on publication rights, payment, and planning) is not that easy, though. My approach: providers should make their content available on Web pages (usually password-protected) and follow a few simple conventions for the data format (HTML+RDFa).
That enables others (e.g. publishers, agencies) to build search engines for these offerings using proven crawler technology. (Such a search engine can of course also include one’s own internal archives.) Ideally, when in-house or agency images are missing, the editor then finds the pictures from the freelance photographer who happened to be on location and is offering them to everyone via the network. Or who uses the network to publish an offer to take photos wherever he is today.
Such a network would be open to arbitrary participants (who naturally have to agree on the usage of the content) and wouldn’t depend on any proprietary software or central instance.
On this topic, see also my blog posts Linked Data for better image search on the Web and Linked Data for public, siloed, and internal images.
What do you think? Is this something nobody needs, something that won’t pay off? Or is there something here that we could tackle together?
Thu, 05 Sep 2013 00:10:18 +0200
IndieWebCamp – Principles:
“Own your data.
Use visible data for humans first, machines second.
[…] Whatever you build should be for yourself. If you aren't depending on it, why should anybody else?
[…] The more your code is modular and composed of pieces you can swap out, the less dependent you are on a particular device, UI, templating language, API, backend language, storage model, database, platform.
[…] We should be able to build web technology that doesn't require us to destroy everything we've done every few years in the name of progress.”
Great principles for all content-centric software, not just the IndieWeb. “Data for humans first, machines second” sounds like RDFa to me…
Thu, 29 Aug 2013 13:44:24 +0200
Teresa Amabile and Steven Kramer for McKinsey – How leaders kill meaning at work:
“Trap 1: Mediocrity signals
[…] Many of the other 65 Karpenter professionals in our study felt that they were doing mediocre work for a mediocre company—one for which they had previously felt fierce pride. By the end of our time collecting data at Karpenter, many of these employees were completely disengaged. Some of the very best had left. […]
Trap 2: Strategic ‘attention deficit disorder’
[…] At another company we studied, strategic ADD appeared to stem from a top team warring with itself. Corporate executives spent many months trying to nail down a new market strategy. Meanwhile, different vice presidents were pushing in different directions, rendering each of the leaders incapable of giving consistent direction to their people. […]
Trap 3: Corporate Keystone Kops
[…] When coordination and support are absent within an organization, people stop believing that they can produce something of high quality. This makes it extremely difficult to maintain a sense of purpose.”
(Via Christiane Pütter at CIO.de)
Thu, 22 Aug 2013 11:33:29 +0200
Facebook, Google+, Twitter, LinkedIn: Semi-closed networks have grown to capture most “social” interactions on the Web as well as a lot of content, and they own many people’s online identities. There’s an emerging trend in the software developer community to move out of these “walled gardens” or “silos” – for lots of good reasons (see the IndieWebCamp “Why” page and the xkcd “Instagram” comic): Freedom, ownership, control, longevity, avoiding censorship. (And “harder to spy on by hosting in Switzerland”, since the NSA/Snowden revelations.)
To get started, read Klint Finley’s Wired.com article Meet the Hackers Who Want to Jailbreak the Internet. You can also listen to Tantek Çelik talking about the Rise of the Indie Web (45 minutes audio).
The IndieWebCamp site seems to be the most comprehensive collection of resources on the topic. (“IndieWeb”, the Independent Web, is the term many people are using. As of today, there’s not even a Wikipedia entry for it…)
POSSE (Publish (on your) Own Site, Syndicate Elsewhere) is a cornerstone of this movement. You’ll usually need new or extended software that runs on your own server and elegantly connects with other Web sites (the protocols for these interactions are still evolving). MediaGoblin is an interesting small DAM system built on IndieWeb principles, idno is a self-hosted social network platform, and storytlr is a micro-blogging tool. The IndieWebCamp site has a list of software projects. See also: unhosted Web apps and PRISM BREAK.
And here are some articles worth reading:
Bastian Allgeier – Let’s build a better web: “We need to make it easy, convincing and enjoyable to move our personal data away from the big players. We need great self-hosted applications, which we can use to manage our emails, personal pictures, documents, private messages with friends, blog posts, etc..”
Tantek Çelik – On Silos vs an Open Social Web [#indieweb]: “All the silos are pressured to clutter and corrupt their UX with ads, "stickiness", "engagement", and all kinds of other garbage in a never-ending hamster-wheel chase of ever more page views. You don't have that problem. Take their best stuff and make it simpler, more elegant by cutting out all that crap. And then iterate.”
Ben Werdmuller – The #indieweb as a minimum viable social web ecosystem: “Many of the prevalent models for social software are hostile to the needs of both businesses and individual users. The IndieWeb aligns software developers with their users, while providing simpler tools for development, and encouraging both wider participation and more experimentation.”
Aral Balkan – Codename Prometheus: “We need open alternatives that are beautiful holistic experiences. Beautiful experiences that happen to be open and private; where you happen to own your own data. Beautiful experiences that you can hack if you so want to.”
Anil Dash – Rebuilding the Web we lost: “Privately-owned public spaces aren't real public spaces. They don't allow for the play and the chaos and the creativity and brilliance that only arise in spaces that don't exist purely to generate profit. And they're susceptible to being gradually gaslighted by the companies that own them.”
Shane Becker – No More Sharecropping!: “Then as we published all of our content on other services, we became dependent on them. We became digital sharecroppers.”
Marco Arment – Own your identity: “If you care about your online presence, you must own it. I do, and that’s why my email address has always been at my own domain, not the domain of any employer or webmail service. […] I’ve always built my personal blog’s content and reputation at its own domain, completely under my control.”
Jon Udell – Networks of first-class peers: “It is possible for various of our avatars — our websites, our blogs, our calendars — to represent us as first-class peers. That means: They use domain names that we own, they converse with other peers in ways that we enable and can control, they store data in systems that we authorize and can manage. Your Twitter and Facebook avatars are not first-class peers on the network in these ways.”
Will Norris – No one cares about your URLs (so buy a domain): “The only way for you to ensure the integrity and longevity of your content is for you to take ownership of how it is accessed. Do yourself a favor and go buy a domain that you use for publishing your content.”
Julien Genestoux – Independence day on the web: “This starts with owning your presence online: a domain name is cheaper than a phone number, easier to remember and will stay with you for as long as you renew it.”
Laurent Eschenauer – What the hell happened to Federated Social Networks?: “The idea is simple: get your own domain, host your site there, and slowly work towards federating with others. […] You get immediate value out of it (you got a blog) and you make exciting progress with a community of likeminded folks.”
Update: Matthias Pfefferle has also written a nice post – The rise of the IndieWeb [in German].
Wed, 21 Aug 2013 22:25:37 +0200
David Diamond – DAM Beauty and Usability:
“Beauty and usability are typically not words associated with digital asset management software, and for good reason. Have you seen the user interfaces of most DAMs?
[…] DAMs should be as close to invisible as possible. No one learns to create digital content just to spend time in a DAM. Let the digital assets be the stars.
[…] Don’t buy the “it can look like anything you want!” excuse. That just means it’s a DAM you can’t afford. Something must be available out of the box. See it. If the UI is ugly or it makes no sense to you, consider that Strike One against the system.”
I love David’s writing, opinionated and funny, and he’s usually right. There’s so much mediocrity in the DAM software market. Let’s point it out and raise the bar. (If you’re writing on the Web, please dare to have an opinion as well, and voice it clearly – this helps to not bore your readers.)
Tue, 20 Aug 2013 09:02:40 +0200
I’m trying to find a good “elevator pitch” for building hypermedia APIs with HTML. How about this:
Don’t build an API – publish your data instead: easy to read for both humans (not just developers) and software, and easy to link to.
After providing read access, the next step is to enable others to modify your data, manually as well as through software. That’s what we would call an API, of course. But I think it helps if you focus on making your data available instead of starting with “let’s build an API”. (I’m tired of APIs, as explained in my Linked Data for better image search blog post.)
Once the data is out there, everyone can “surf your Web of content” (including search engines if you let them). And developers can write code to automate, to glue separate data sources together, to mash them up.
In my opinion, XHTML+RDFa is the best way to reach that goal. But even if you disagree with my choice of format, I hope you can agree with the general point.
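To make the pitch concrete, here’s a minimal sketch of rendering a record as an XHTML+RDFa fragment, so the same page serves humans (a readable definition list) and machines (extractable property/value pairs). The schema.org terms and field names are my illustrative choice, not a fixed schema:

```python
# Sketch: "publish your data" as XHTML+RDFa instead of building an API.
# The asset dict and the schema.org property names are hypothetical.
from xml.sax.saxutils import escape

def to_rdfa(asset):
    """Render an asset dict as an XHTML+RDFa definition list."""
    rows = "\n".join(
        f'  <dt>{escape(label)}</dt>\n'
        f'  <dd property="{prop}">{escape(str(asset[key]))}</dd>'
        for key, prop, label in [
            ("name", "name", "Title"),
            ("format", "encodingFormat", "MIME type"),
            ("size", "contentSize", "File size"),
        ]
    )
    return (f'<dl vocab="http://schema.org/" typeof="ImageObject">\n'
            f'{rows}\n</dl>')

markup = to_rdfa({"name": "Screenshot", "format": "image/png",
                  "size": 48123})
print(markup)
```

A browser shows a plain definition list; an RDFa processor reads the same page as machine data. That’s the “data for humans first, machines second” idea in one artifact.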
Making data more visible has long been a favorite topic of mine. A decade ago, I wrote a simple PHP script that made it easy to browse an Oracle database, because I hated how my valuable data was hidden behind arcane Oracle tools or the sqlplus command line. (Apparently, some people are still using that script. I guess I should start working on it again, and add RDFa and JSON to it.)
Update: Mike Amundsen comments “don't just tell them what's there (data), show what they can do (actions)”. He’s right, this is missing from my pitch. Don’t stop at publishing your data – let people work with it, and make the actions as easy to discover as the data itself!
Mon, 19 Aug 2013 10:10:50 +0200
One year ago, I wrote on Twitter that “my next API will be semantic XHTML”. Since then, I’ve been thinking a lot about Hypermedia APIs with HTML (and have done some prototyping). My dream API would use XHTML with RDFa, link to Atom feeds and offer an alternative JSON-LD representation.
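The “alternative JSON-LD representation” mentioned above could look like this minimal sketch. The concrete schema.org terms are my own assumption, not part of the original post; the point is that the same metadata travels as HTML+RDFa for browsing and as JSON-LD for API clients:

```python
# Sketch: the same (hypothetical) asset metadata as a JSON-LD document,
# the machine-friendly twin of an XHTML+RDFa page.
import json

asset = {
    "@context": "http://schema.org/",
    "@type": "ImageObject",
    "name": "Screenshot",
    "encodingFormat": "image/png",
    "contentSize": "48123",
    "inLanguage": "en",
}

doc = json.dumps(asset, indent=2)
print(doc)

# Round-trip: a client parses the document back into plain data.
parsed = json.loads(doc)
```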
Here’s a few articles on that topic that made me think:
It all started for me with Using HTML as the Media Type for your API by Jon Moore. Make sure to read this. And the “ugly website” Rickard Öberg quote tweeted by Stefan Tilkov.
Combining HTML Hypermedia APIs and Adaptive Web Design by Gustaf Nilsson Kotte is also a great read.
Then watch the full talk (53 minutes) by Jon Moore on Building Hypermedia APIs with HTML.
If you’ve got some time left, I highly recommend the RESTful Web Services book by Leonard Richardson and Sam Ruby. It already said this, back in 2007: “It might seem a little odd to use XHTML […] as a representation format for a web service. I chose it […] because HTML solves many general markup problems and you’re probably already familiar with it. […] Though it’s human-readable and easy to render attractively, nothing prevents well-formed HTML from being processed automatically like XML.” (By the way, the follow-up RESTful Web APIs is going to be published next month.)
I haven’t read the book Building Hypermedia APIs with HTML5 and Node by Mike Amundsen yet, but it sounds interesting.
Please let me know if I missed out on something important…
Wed, 14 Aug 2013 22:29:51 +0200
The Typo3 Neos 2017 WCM Forecast has experts predicting the future of Web content management, with some great quotes.
Karen McGrane: “First, organizations will realize that WCMS doesn’t always support true multi-channel publishing. They will need to invest in new systems to decouple the authoring and storage layer from the presentation and publishing layer. This might mean adding middleware, developing new APIs or even choosing an entirely new CMS. […]”
Perttu Tolvanen: “I believe that in the future our “content management system” will have dozens of different pieces (for photos, for videos, for publications, for people, for projects, for services) and the purpose of our “strategic web content management system” is more about moderating those different sources and streams between different sites than managing the master content for those services. […]”
Martin Goldbach Olsen: “Intranets will be big again […].”
Jacob Floyd: “The latest trend in CMS development has been to make WYSIWYG inline-editing a first-class feature […]. That trend serves content editors very well, however it does not meet the needs of content creators and content authors.”
Mikkel Staunsholm: “We need to find a way to easily navigate and present any information available from a single centralised content hub, spanning all digital platforms.”
I’m reading this with the convergence between WCM and DAM in mind…
(Minor complaint: Let’s hope that in 2017, such an important document will be published as an HTML page instead of a not-too-accessible PowerPoint presentation on Slideshare.)
Mon, 12 Aug 2013 16:16:27 +0200
I’m currently learning/exploring RDFa (try searching my blog for “rdfa”). As a total newbie to the world of RDF and RDFa, these tools and resources have been helpful so far:
First, the W3C RDFa 1.1 Primer is easy reading, a great introduction to RDFa. And it links to the full specifications (which are also well-written).
The W3C RDFa 1.1 Distiller and Parser is a Web page where you enter a URL, then it summarizes the RDFa data it finds there. Good for verifying your own Web site’s RDFa. (Or try it with one of my blog posts or my home page, http://www.strehle.de/tim/ …)
If you’re like me and prefer to analyze your RDFa from the command line, install the pyRdfa distiller/parser library and run “scripts/localRDFa.py -p URL” (-p means RDF/XML output).
RDFa / Play is a Web page where you type in HTML+RDFa code and, as you type, see it turned into a pretty graph visualization. Nice for playing around with the RDFa syntax.
I’m trying to use common vocabulary if possible, often from the schema.org hierarchy.
Of course, the nice thing about RDFa is that you can always “view source” on others’ pages to see what they’re doing.
Are you into RDFa? Please let me know if I’m missing out on something!
Thu, 08 Aug 2013 13:59:45 +0200
A week ago, I wrote on Twitter: “A bit harsh, but: CxOs tend to fantasize, salespeople to lie, developers to underestimate. Poor project managers (and customers).”
This wasn’t intended as a rant: These are common pitfalls which contribute to software projects not being finished on time (or not at all).
It’s a well-known fact that software developers are bad at estimating how much time they need to implement some functionality. There’s an abundance of articles written about estimation (examples: Liz Keogh, Joey Shipley, Anders Abel).
Salespeople have a difficult job; sometimes they’ve got to sell something that doesn’t actually exist but they think can be delivered. And many requested features leave room for interpretation – they get into the habit of saying yes. It’s tempting to remain vague or bend the truth a little just to close the deal.
The CxO’s job is strategic long-term thinking. The potential trap is to become detached from day-to-day business operation. Then she might confuse yesterday’s strategic plans with what little of them development actually managed to implement until today.
There are traps for everyone to fall into (including project managers and customers). Just because the problems and failures of developers are more widely and openly discussed doesn’t mean others have less responsibility for a successful project. (My theory: As engineers, developers are more likely to look for problems, honestly analyze them and publish their solutions.) If we want to do dramatically better, we need to improve on everyone’s role!
Fri, 02 Aug 2013 08:57:03 +0200
Henrik Kniberg – The Solution to Technical Debt:
“Crap gets into the code because programmers put it in! Let me make that crystal clear: Crappy Code is created by programmers.
[…] However, the most probable reason for why you are writing crappy code is: Pressure.
[…] Sometimes the cause of the pressure is the programmers themselves. Developing a feature almost always takes longer than we think, and we really want to be a Good Programmer and make those stakeholders happy, so the pressure builds up from inside.
[…] If you are creating Crappy Code, development is going to get slower and slower over time. There is no business sense in this, and it is certainly not agile.
[…] Tell the world, and the people who you believe are pressuring you into writing code: “We have been writing crappy code. Sorry about that. We’ll stop now.”
[…] The real source of pressure (if there was any) will reveal itself. Quality is invisible in the short term, and that needs to be explained. Take the battle!”
Thu, 01 Aug 2013 14:04:58 +0200
Can you read Derek Sivers’ book Anything You Want (from 2011) and not want to start a company? I’ve been following Derek Sivers’ blog since 2004 so I was familiar with many of the stories he’s telling in the book. But I still loved to read it.
“Business is not about money. It's about making dreams come true for others and for yourself. […] When you make a company, you make a utopia. It’s where you design your perfect world.
[…] The key point is that I wasn’t trying to make a big business. I was just daydreaming about how one little thing would look in a perfect world.
[…] When you say “no” to most things, you leave room in your life to throw yourself completely into that rare thing that makes you say “HELL YEAH!”
[…] Never forget that absolutely everything you do is for your customers. Make every decision – even decisions about whether to expand the business, raise money, or promote someone – according to what’s best for your customers. If you’re ever unsure what to prioritize, just ask your customers the open-ended question, “How can I best help you now?” Then focus on satisfying those requests.
[…] If you want to be useful, you can always start now, with only 1 percent of what you have in your grand vision. It’ll be a humble prototype version of your grand vision, but you’ll be in the game. You’ll be ahead of the rest, because you actually started, while others are waiting for the finish line to magically appear at the starting line.
[…] Starting small puts 100 percent of your energy on actually solving real problems for real people.
[…] When you build your business on serving thousands of customers, not dozens, you don’t have to worry about any one customer leaving or making special demands. If most of your customers love what you do, but one doesn’t, you can just say goodbye and wish him the best, with no hard feelings.
[…] You need to confidently exclude people, and proudly say what you’re not. By doing so, you will win the hearts of the people you want.
[…] That’s the Tao of business: Care about your customers more than about yourself, and you’ll do well.
[…] If you find even the smallest way to make people smile, they’ll remember you more for that smile than for all your other fancy business-model stuff.
[…] There’s a benefit to being naïve about the norms of the world – deciding from scratch what seems like the right thing to do, instead of just doing what others do.”
Sun, 28 Jul 2013 22:54:38 +0200
Jeff Atwood – The Rule of Three:
“We think we've built software that is a general purpose solution to some set of problems, but we are almost always wrong.
[…] To build something truly reusable, you must convince three different audiences to use it thoroughly first.
[…] One customer or user or audience might be a fluke. Two gives you confidence that maybe, just maybe, you aren't getting lucky this time. And three? Well, three is a magic number.
[…] We're spending all our effort slowly, methodically herding the software through these three select partners, one by one, tweaking it and adapting it for each community along the way, making sure that each of our partners is not just happy with our discussion software but ecstatically happy, before we proceed to even tentatively recommend Discourse as any kind of general purpose discussion solution.”
I have long experienced this to be true. (It’s painful, because we’re in the “Enterprise Software” business and generate way too much code used by only one or two clients…)
It’s also true at a smaller scale: The APIs and formats and configuration settings used internally by our software also need multiple use cases to prove that they’re well-designed. It helps that DC-X offers a lot of its functionality via UI, Web service, command line and PHP API (see Five faces of a Web app). Still, lots of areas remained “one-hit wonders” though we tried to make them reusable.
Fri, 19 Jul 2013 09:44:38 +0200
Ralph Windsor on Digital Asset Management News – Telerik Add Digital Asset Management To Sitefinity:
“This does mark a clear point of convergence between WCM and DAM – an outcome which has been talked about for some time and now looks to be definitely happening. It’s interesting to note similar trends with DAM systems starting to offer WCM functionality as discussed earlier this week.”
Ralph refers to WebDAM adding an embedded CMS. Adobe CQ (sorry, Experience Manager) tries to go the other way, tacking a DAM system onto their CMS (missing the opportunity to integrate with existing DAMs, and having issues). The free Koken is an interesting hybrid; it looks like a DAM but focuses on publishing and has an editor for essays/pages.
Like most DAM vendors, we did integrations with various Web content management systems at customer request (WordPress, Drupal, red.web, redFACT). Some are more elegant than others, but it’s the usual integration pains – different APIs, data models, UI extensibility points… And the fundamental problem of duplicated data that has to be kept in sync.
This is similar to the editorial systems (for print publications) we’re integrating, but the WCM / CMS market is unique in that it has a lot more active players. And it’s moving faster; new vendors and versions emerge and requirements are changing quickly.
In an ideal world, there would be both a DAM value chain and a WCM value chain: Well-architected software would allow us to use a DAM as the backend for a CMS. From the CMS, we could take the editing and administration frontend, and the Web rendering and delivery functionality, and bolt these onto a DAM content store that contains all of our assets. (Without having to duplicate the data.) DAMs are usually better at search, scaling to millions of assets, file format and metadata handling. (CMIS might be meant for that, but I haven’t heard of it being used that way. Did I miss something?)
Thu, 18 Jul 2013 08:37:47 +0200
As Tim Bray puts it: “There are lots of perfectly-legal reasons to want privacy. If you act all the time in a way that sensibly preserves yours, when one of those legal reasons becomes important you suddenly won’t be acting different in an attention-catching way.” Back in 2011, I already created an OpenPGP key, then forgot about it. Now seems the right time to actually start encrypting e-mails… Likely too few people will bother setting up their e-mail client for encryption. But I’d still like to understand how it’s done, and be ready for it. (I’m a newbie – if you’re doing encrypted e-mail, you’re welcome to send me a test mail that helps me verify my setup… Thanks!)
I’m on a Mac, using Apple Mail on OS X 10.8 for my personal e-mail (email@example.com). So I installed GPGMail from GPGTools, followed their First steps instructions and soon could use the nice “Encrypt” button when composing an e-mail to myself.
My own key, and the keys of people I want to exchange encrypted e-mails with, are managed in a separate application, GPG Keychain Access (“GPG Schlüsselbund” in German). These keys are stored locally on my computer, but there’s a central registry for OpenPGP keys, the “key servers”. I sent my public key to the key server, so you can retrieve it using the key ID 1F20C9AD or my firstname.lastname@example.org address. As I understand it, one should verify the “fingerprint” of the key after retrieving it from the key server – my key’s fingerprint is “C29E 9A3B 786C F2CD 0943 7763 8B3D A0A0 1F20 C9AD”. (I’m also publishing the key ID, fingerprint, and even the full public key on my homepage.)
There’s an ugly but helpful OpenPGP Keyserver Web interface where you can search by name, e-mail or key ID (prepend the ID with “0x”, i.e. “0x1F20C9AD” for mine).
What’s nice is that GPGTools come with a command line “gpg2” executable that lets me encrypt a file for someone (“gpg2 -se -r email@example.com tmp.txt”, turning tmp.txt into tmp.txt.gpg) and decrypt a file encrypted for me (“gpg2 -d tmp.txt.gpg > tmp.txt”).
Unfortunately, the GPGServices system service can only decrypt text in other OS X applications, not encrypt it. I’m not sure how to work around this; it would be nice to easily both encrypt and decrypt text anywhere.
Tue, 09 Jul 2013 23:47:00 +0200
Jonas Öberg – Developer’s corner: A distributed metadata registry:
“Anyone should be able to run their own registry for their own works or works in which they have an interest.
[…] Standards such as ccREL provide a way in which a user can look up the rights metadata by visiting a URL associated with the work and making use of RDFa metadata on that URL to validate a license. That’s a useful practice, since RDFa provides a machine readable semantic mapping for the metadata while ensuring that the URL could also contain human readable information.
[…] Let’s further imagine that the unique identifier was always a URL.”
Mon, 08 Jul 2013 16:31:17 +0200
Laurence Hart – Box Isn’t Disrupting Because of the Cloud:
“Box is disrupting because they focus on the people using the application. SaaS is the disruptive delivery mechanism that enables the spread of their solution.
All IT vendors are being disrupted in this fashion, not just Content Management. Ease-of-use is driving adoption in a viral nature that is almost unheard of in the space.”
Wed, 19 Jun 2013 00:04:11 +0200
In Web application development, I’m seeing a trend towards reusable components for building the user interface. The idea isn’t new (see Mashups, Portlets, Web Parts or jQuery Plugins): Make it easy to reuse ready-made UI elements built by different developers (e.g. a form field with autocomplete functionality, a date picker, a tree view, a dialog) in your Web application. That should save a lot of developer time.
But in recent years, lots of Web apps (including ours) committed to fat frameworks (Ext JS or YUI 2) which promised rapid development and a huge set of ready-made widgets. The first 60% of the app actually were developed rapidly, but then you were stuck: Extending the framework yourself was hard, and swapping in widgets from other frameworks and libraries was ugly or impossible. To quote Dr. Axel Rauschmayer in Google’s Polymer and the future of web UI frameworks: “Currently, frameworks are largely incompatible: they usually come with their own tool chain, inheritance API, widget infrastructure, etc.”
Most prominently, the official W3C Web Components: “Web Components enable Web application authors to define widgets with a level of visual richness and interactivity not possible with CSS alone, and ease of composition and reuse not possible with script libraries today.” Watch the Web Components: A Tectonic Shift for Web Development video for an in-depth technical introduction.
Pete Hunt from Facebook – Why did we build React?: “React is a library for building composable user interfaces. It encourages the creation of reusable UI components which present data that changes over time.”
Making components interoperable (especially event handling, CSS/looks, consistent behaviour) is hard, there will always be elements that don’t go together well. But a simpler, more accessible approach to component building and packaging should make the lives of Web developers easier. I’ll try to share what I learn…
Fri, 14 Jun 2013 22:39:55 +0200
“ImageSnippets™ is a system for creating structured, transportable metadata for your images. It can be used as a digital asset management tool as well as an image/metadata publishing platform.”
Take a look at the help pages, and read Margaret Warren’s post introducing ImageSnippets to the iptc-photometadata Yahoo! Group – a new system which can help with protecting images from becoming orphans:
“ImageSnippets is a bit of a swiss-army knife prototype at the moment with many new types of terms and features not typically found in current metadata editing environments.
[…] The system creates an HTML+RDFa file containing a link to the image AND all of its metadata is represented as structured data in the file.”
I like that it combines public, application-level and personal datasets. That you can reference an image by its URL, i.e. you don’t have to upload it and can still add metadata for it. (Reminds me of the DAM Value Chains – Metadata article by Ralph Windsor: “separate a digital file from metadata and other associated asset data so you could more easily delegate the task of managing it.”) And I love that it publishes RDFa!
Tue, 28 May 2013 22:57:07 +0200
David Diamond on CMSWire – Five Reasons Why DAM is No Photoshop:
“So what went wrong with the DAM industry? Where is the explosive growth? The IPOs?
[…] DAM vendors lack vision. Just as one could argue that PayPal should have been a product of Western Union, it's easy to argue that DropBox and Google Drive should have come from a DAM vendor.
[…] If a DAM vendor knows anything about DAM, it should be able to speak about it in unique terms, in content authored by its own personnel. Agreeing with Henrik de Gyor, linking to David Riecks articles, or retweeting Real Story Group is not how DAM vendors will move this industry forward.
[…] You can’t just unplug your metadata and assets from one DAM and plug them into another. This is bad news for disgruntled customers, but it’s great news for lazy DAM vendors. Business professionals call it 'high switching costs.'”
Thu, 16 May 2013 22:40:08 +0200
Cameron Morrissey on great leaders – Diary Entry #117 – Jump Under the Bus:
“Any mistake in their area of oversight is their fault – They should have seen it coming, should have prepared better, should have audited work better, or should have set up better processes. They understand that there is always something they could have done to prevent the mistake from occurring, and while the employee or peer may have had culpability as well, ultimately they are the leader.”
Wed, 15 May 2013 10:46:10 +0200
Seth Godin – Lead up:
“We have an astonishing amount of freedom at work. Not just the freedom to call meetings, make phone calls and pitch ideas, but yes, the freedom to quit, to find a new gig, to pick the clients we're going to take on and to decide how we're going to deal with a request from someone who seems to have far more power than we do. "Yes, sir" is one possible answer, but so is leading from below, creating a reputation and an environment where the people around you are transformed into the bosses you deserve.”
Mon, 13 May 2013 09:55:07 +0200
Software Gunslinger – PHP is meant to die:
“No matter how good or clever your idea looked on paper, if you want to keep the processes running forever they will crash, and will do it really fast under load, because of known or unknown reasons. That’s nothing you can really control, it’s because PHP is meant to die. The basic implementation, the core feature of the language, is to be suicidal, no matter what.”
I dare to disagree. We’ve been running PHP daemons on our (many) customers’ production servers for more than 15 years now (yes, on PHP 3 back then) and it has served us well. (The guy who was crazy enough to start this was a PHP Group member, so he knew what he was doing.)
There have been bugs and pain points throughout the years (as with every other technology): We discovered and reported a couple of memory leaks. We got to know gc_collect_cycles() and pcntl_signal_dispatch(). We learned to live with memory_limit (which is actually a feature keeping the processes from making the server swap due to a memory leak) the same way we have to live with JVM memory settings in Java-based software (I’m looking at you, Solr). Currently we have to tell cron to restart certain PHP processes once a day to work around a 5.3 memory leak. We’re using Supervisord for process control. (We’re storing jobs in the database, and a number of PHP CLI worker daemons process them in parallel.)
Still, all things considered, we’ve been able to run rock solid systems on many servers for more than a decade on command line PHP daemons. We’ve got a clean common PHP code base, used by both command line and Web page code. There’s certainly other ways to achieve this, but for us it’s still a very good setup.
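For illustration, here’s a stripped-down sketch of what such a worker loop looks like. (The `fetch_job()` and `run_job()` functions are hypothetical stand-ins for our real database-backed job queue, and the memory threshold is made up; a real worker would also need the pcntl extension and proper error handling.)

```php
<?php
// Hypothetical sketch of a long-running PHP CLI worker daemon.
// fetch_job()/run_job() are demo stubs for a database-backed job queue.

function fetch_job(): ?string {
    static $jobs = ['resize image 1', 'extract metadata 2'];
    return array_shift($jobs);     // null once the demo queue is drained
}

function run_job(string $job): void {
    echo "processing: $job\n";
}

$shutdown = false;

// Handle SIGTERM (what Supervisord sends) for a graceful shutdown.
pcntl_signal(SIGTERM, function () use (&$shutdown) {
    $shutdown = true;
});

while (!$shutdown) {
    pcntl_signal_dispatch();       // deliver any pending signals

    $job = fetch_job();
    if ($job === null) {
        break;                     // a real worker would sleep(1) and continue
    }
    run_job($job);

    gc_collect_cycles();           // free reference cycles between jobs

    // Restart-before-you-leak: exit well under memory_limit and let
    // Supervisord (or cron) respawn a fresh process.
    if (memory_get_usage(true) > 100 * 1024 * 1024) {
        exit(0);
    }
}
```

The key ingredients are all in the loop: explicit signal dispatching instead of ticks, garbage collection between jobs, and voluntarily exiting before memory_limit can kill the process mid-job.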
(Via Jake A. Smith.)
Mon, 29 Apr 2013 12:49:38 +0200
C. Lawrence Wenham – Signs that you're a good programmer:
“In fact, another way to become emotionally detached from code is to put your interest into the outcome instead. The outcome you should be thinking of is a lady who's going to get fired if she doesn't deliver the output of your program at 4:59pm sharp.”
Mon, 29 Apr 2013 22:15:19 +0200
Derek Sivers – Seeking inspiration?:
“The inspiration is not the receiving of information. The inspiration is applying what you’ve received.
People think that if they keep reading articles, browsing books, listening to talks, or meeting people, that they’re going to suddenly get inspired.
[…] You have to pause the input, and focus on your output.”
Wed, 24 Apr 2013 15:53:21 +0200
Eric Smith – We are Principled: 6th Edition:
“There are many reasons to practice the demo but one of the biggest benefit is catching missed requirements. I couldn't list the number of times one team member would demonstrate a feature only to realize they had left something out or made a simple mistake, usually before the person they were demonstrating to could even catch it. The simple act of walking through the feature slowly proved immensely beneficial.
[…] Development, QA and management are all represented, and then somebody will ask QA: "So how come YOU didn't catch this bug?" It's said with an accusatory emphasis on YOU because after all it's QA's job to catch bugs and if they aren't catching them, then what are they doing? The poor QA lead fumbles around, mentions they'll add a test for it so it won't happen next time, and development goes on doing the same thing they've always done.
[…] If a mistake was made it's because development made it, not because QA didn't see it.”
(Via Rich Rogers.)
Thu, 18 Apr 2013 22:09:48 +0200
Shanley Kane – How the Productivity Myth is Killing Your Startup:
“You have to admire the insipid, dogged and naive devotion people have to believing they are going to get this huge of a list of things done.
[…] All 10 projects are delivered late and half-assed. This is most sad for project number 4, which was the most important project of them all, and could have been completed, and damn well, if everyone had just worked on that.
[…] You are going to get way fewer things done than you think you’re going to get done. And those things will take you much longer than you plan for. Much as you must talk to teens about drinking, you must talk to your team about productivity.
[…] Create a culture of truthfulness about productivity by continually comparing plans, roadmaps, and strategies to their actual results — often the number of things that were cut, late, or done poorly will shock and awe. What would you have done differently if you knew what was actually possible from the onset?”
Thu, 18 Apr 2013 22:27:26 +0200
Alex Pukinskis – 3 Ways to Inspect and Adapt at Scale:
“We’re all familiar with top-down change initiatives. Senior leadership gets together, analyzes the problem, designs a solution, and announces it. Everyone else is left to react.
This approach leads to problems for two main reasons. First, people don’t know when change is going to happen, so they live in a constant state of low-level anxiety. Second, the leadership group never has all of the context.
[…] I talk to a handful of people 1-on-1. Sometimes I pair with another person on figuring out a process change. I write a clear, short explanation of what I’m proposing and what the underlying goal is I’m trying to achieve.
Then I ask for consent to move forward. […] I ask if people know specific ways the proposed change will harm the organization.”
He’s pointing to a technique called Holacracy, which sounds promising if a little secretive – I like this quote: “Managers are no longer needed, the leadership function is now distributed.”
Tue, 09 Apr 2013 10:18:49 +0200
I’ve got a nice job and I’m not hunting for a new one. This allows me to have some fun writing an uncommon resume – honest and personal. Not the polished, impersonal copy you’d use in a real-world job application. (Please remind me to delete this blog post if I’m actually searching for a job someday…)
I usually describe myself as “a passionate Web developer working on Digital Asset Management software.” (You’re not familiar with the “DAM” term? Think image databases and newspaper archives.) I do have a passion for Web development, DAM, data structures, quality, user experience, honesty, communication, and customers. (Not in that order.)
If you want to hire me as a developer, note that I’m not a “real programmer” in the sense that I have no Computer Science degree. Yes, I’ve been working as a full-time programmer (“senior developer” and “software architect” if you’re into fancy titles) for 15 years now, but in German bureaucracy that’s not always sufficient. It also means that I don’t do advanced mathematics and I’m not passionate about algorithms. (If you want me to sketch a quick sort algorithm during the job interview, I’m out.)
Instead of learning programming languages, I enjoy exploring related technologies and disciplines: I dived into XML, XSLT, Unicode, LDAP, Nagios, VMware, Solr, Topic Maps, RDFa, Hypermedia APIs. Tried to figure out how to document software and projects. Managed and implemented customer projects from end to end. I love to communicate so I started using screencasts, Wikis, blogging and Twitter. I appreciate having had the freedom to discover and introduce a lot of these things to our company, and I expect similar freedom from my next job.
I regret not having spent more time working on open source projects. Aside from the occasional PHP bug report and a few small tools I published, I didn’t contribute although most of our software is built on open source.
I have the little-known degree of “Diplom-Dokumentar (FH)”, which is roughly equivalent to a bachelor’s degree in information science / information management. This profession is about structuring, researching and disseminating information; I love it. I could work as a newspaper archivist, build taxonomies and metadata guidelines, help researchers find scientific articles and facts, or organize your company library or large intranet. Unfortunately this job market is small in Germany and continues to shrink. (Update: See my blog post Where have all the librarians gone?) But programming was my hobby, so working as a developer to produce software for archivists was something I could identify with.
(Being able to identify with the stuff I’m working on is very important to me. I’m sorry, but I won’t do browser games, software for the financial industry or work on ads. If I think your company is offering boring or pointless services or products, I’m not interested.)
What kind of person am I? First of all, you likely want to hire someone younger than me. Born in 1972, I’m quickly growing too old for the German job market. I’m an uncool non-hipster, not drinking, not partying, no sports. Just a family guy. A bit risk-averse and very loyal, so I’m likely to join you for the long term.
And I’m serious about spending time with my family: I want to travel as little as possible. I prefer office hours from 9 to 5, to be home in time to see the kids. Which doesn’t mean I’m not willing to work more: I’m known for working late, on weekends and even during vacation. My customers have got my mobile number and can call me anytime in case of emergency. But I’m doing extra time on my own terms, at home, when the kids are in bed. You let me go home in time and allow for the occasional “home office” day, and I’ll see that the work gets done (unless the work load gets unreasonably high). I’m also not into after-work activities and weekend retreats. (If you don’t value team building activities enough to do them during work hours, why should I?)
If I may say so, I think I’m a good communicator and can explain things well. I’m very empathic and a good listener; I care about people and harmonious relations. Working in a team – tackling huge tasks together, or playfully exploring and validating ideas – means a lot to me. But I’m also an introvert and sometimes like to focus on a single task all by myself. Then I put on the headphones and ignore everyone around me to get stuff done. (I hate working in a large room full of people, by the way…) You’re welcome to drag me out of my cave if you feel I need to spend more time with the team. I’m full of ideas, I love taking responsibility and having freedom, and I think I’m a good “manager of one” until you’re piling too much work on my desk.
I can be very patient with customers, and very impatient with pointless meetings and dumb policies. I hate lies. From you, I expect good, humble, transparent, team-driven management in some form of “agile” environment. See my blog for lots of quotes on what I consider good management.
How about you? Please send me a link if you dare to publish your own “honest resume”…
Wed, 03 Apr 2013 20:31:59 +0200
A wonderful rant by David Gewirtz on ZDNet – My infuriatingly unsuccessful quest for a good media asset management tool:
“There is a category called "Digital Asset Management" out there as well. These are enterprise-level products, often Web-based. You can begin to tell they'll be trouble because there's no price for the product on the site. Almost all providers of DAM tools have a "let us have an expert call you" button.
[…] I'm also disappointed in the Web-based and enterprise-based solutions.
First, the barrier of entry is huge. There appears to be a disconnect between the needs of a professional designer with thousands of images and a large corporation buying an enterprise package.
Second, most of the Web gallery and enterprise solutions still use relatively primitive upload dialogs and download buttons. There are very few solutions that will let you drag from a Web page into a desktop application, or to the desktop, and do it for a bunch of images, and those that do also seem to think the only type of image that exists is based on bitmaps.”
Tue, 02 Apr 2013 16:27:46 +0200
You’re my colleague, or my boss. I wish you would do something, and we both can agree that it’s a good thing to do. What does it take for you to actually start doing it?
Well, 1) you need to know about it, 2) you need to be able to do it, 3) you have to want to do it, and 4) you need to get started.
I used to think 1) and 2) and 4) are problematic. Luckily, there’s a lot that can be done about them: teaching, spreading information, giving freedom and responsibility, helping you focus. But as I grow older, I keep learning that I vastly underestimated 3). Whether you want something is your personal decision, your own will, and there’s not much I can do about it.
Now why would you agree something is the right thing to do, but still not want to do it? It turns out there are a lot of reasons: You don’t have the time – which means you don’t think it’s that important, you have other priorities. Or you’d rather have someone else do it. It’s also probably risky or uncomfortable or hard work, and you want to avoid that.
People rarely change. If my dreams or future rely on other people changing their will, I have a serious problem. That’s why you often read that hiring the right people, or choosing the right co-founders, is the most important success factor. (Unless you’re a magician like Steve Jobs who was great at influencing people – Guy Kawasaki remembers having learned from him: “The starting point of changing the world is changing a few minds. This is the greatest lesson of all that I learned from Steve.”)
Wed, 27 Mar 2013 09:19:31 +0100
Scott Adams – The Management-free Organization:
“Our decision-making so far seems to follow a rational model that goes like this:
1. We discuss the question (by email or Skype).
2. Everyone gives an opinion or adds information.
3. The smartest choice becomes obvious to all.
4. The end.
That decision-making model might not work in your company if some of your coworkers are worthless. There's always the one person in every meeting who keeps changing the topic, or doesn't understand the issue, or insists he knows more than he does, or is bluffing to cover his ass, or is jockeying for a promotion, and so on. To put it in clearer terms: Management exists to minimize the problems created by its own hiring mistakes.”
(I don’t think anyone is “worthless”. But I sure agree that mistakes in hiring, training, growing and empowering and motivating people poison companies. Let’s not just track and fix the bugs we developers put into our software, but these management failures as well.)
Mon, 25 Mar 2013 16:59:50 +0100