2014-06-18

Eric Steven Raymond: Basics of the Unix Philosophy

I keep coming back to Eric Steven Raymond’s 2003 Basics of the Unix Philosophy:

“Rule of Modularity: Write simple parts connected by clean interfaces.

Rule of Clarity: Clarity is better than cleverness.

Rule of Composition: Design programs to be connected to other programs.

Rule of Separation: Separate policy from mechanism; separate interfaces from engines.

Rule of Simplicity: Design for simplicity; add complexity only where you must.

Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.

Rule of Transparency: Design for visibility to make inspection and debugging easier.

Rule of Robustness: Robustness is the child of transparency and simplicity.

Rule of Representation: Fold knowledge into data so program logic can be stupid and robust.

Rule of Least Surprise: In interface design, always do the least surprising thing.

Rule of Silence: When a program has nothing surprising to say, it should say nothing.

Rule of Repair: When you must fail, fail noisily and as soon as possible.

Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.

Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.

Rule of Optimization: Prototype before polishing. Get it working before you optimize it.

Rule of Diversity: Distrust all claims for "one true way".

Rule of Extensibility: Design for the future, because it will be here sooner than you think.”

Wed, 18 Jun 2014 08:50:48 +0000
2014-06-17

David Diamond, Jeremiah Karpowicz: Fighting DAM Ignorance with Education and Cooperation

Jeremiah Karpowicz interviews David Diamond – Fighting DAM Ignorance with Education and Cooperation:

“Because the long-term benefits of DAM are so horribly obscured at the beginning, DAM always seems to have more downside than upside.

[…] DAM vendors like to spew best-practice advice that tells prospects to do their homework and carefully determine their needs.

[…] Vendors tend to introduce half-baked features that don't get the planning and UX considerations they deserve.

[…] I'm far more a DAM user than I am a marketing director. So rather than just deal with these situations, I become a screaming, maniacal customer-from-hell who expects it all to be fixed today, and I want a handwritten apology for my troubles too.

[…] What people hate about DAM is not ugly icons; people hate all the jumping around the UI they must do in order to get anything meaningful done. Nothing good will happen there until UX designers join R&D teams and DAM employees start actually using their own software.

[…] I'm never shy about referring people to a library sciences professional.

[…] I see [DAM] as a metadata-managed global file system that every program can use and every service can access. When I connect to my corporate network, what I see from my Open/Save dialog boxes is my organization's DAM”.

Must read (as usual).

Tue, 17 Jun 2014 13:40:01 +0000
2014-06-05

Cloud software, local files: A hybrid DAM approach

There have been two interesting articles on hybrid Digital Asset Management systems this week: Jeff Lawrence’s Finding the Perfect Balance Between SaaS and In-House DAM, and Ralph Windsor’s Combining On-Premise And SaaS DAM Strategies.

I don’t know which DAM products already work the way Jeff is describing – a “tightly integrated hybrid DAM solution” that keeps work in progress in a local system, pushing finished assets to a SaaS component for external distribution. [Update: Jeff says “Picturepark, SCC, Kaltura and many others”.] I’ve been thinking about hybrid DAM for quite a while from the developer’s perspective. Here’s an idea that I haven’t gotten around to implementing yet:

[Diagram: Local server handles file storage, delivery and processing; the DAM cloud runs the software (user interface etc.), metadata database and search engine.]

The primary benefit of a hybrid DAM is fast internal file transfer because the files remain inside the local network. So let’s assume the asset files (images, PDFs, videos etc.) are stored on a local server. That local server will also deliver the files via a simple Web server, and run minimal file processing software to be able to ingest files and accept uploads, create renditions and extract file metadata.

The rest of the DAM software will run “in the cloud”: the user interface, metadata database and search engine index. When you run a search in the DAM UI, the system will know your files’ URLs and point your Web browser to load them from the (fast) local network. (This is just what image search engines on the Web do: they copy text and metadata into their index and provide the search interface, while the image files you’re seeing are downloaded from the original servers.)
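A minimal sketch of this split, with made-up record shapes and hostnames: the cloud index holds only metadata plus a path, and search results resolve that path against the local server, so the browser fetches the actual bytes over the LAN.

```python
# Hypothetical sketch: the cloud search index stores metadata plus a path;
# search results resolve the path against the local file server, so the
# browser loads file bytes over the fast local network. All names are
# illustrative, not a real DAM API.

LOCAL_SERVER = "http://dam-files.intranet.example"  # assumed local hostname

# What the cloud index might hold: metadata only, no file bytes.
cloud_index = [
    {"id": "a1", "title": "Spring campaign hero",
     "keywords": ["spring", "hero"], "path": "/assets/a1/original.jpg"},
    {"id": "a2", "title": "Logo pack",
     "keywords": ["logo", "brand"], "path": "/assets/a2/original.zip"},
]

def search(query):
    """Return matching records with URLs resolved against the local server."""
    hits = [r for r in cloud_index
            if query in r["title"].lower() or query in r["keywords"]]
    # The UI only ever sees a URL; the bytes never pass through the cloud.
    return [{**r, "url": LOCAL_SERVER + r["path"]} for r in hits]

print(search("logo")[0]["url"])
```

The design point is that the cloud component never proxies file data; it only knows where the files live.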

Upload and ingestion will be a two-step process with the files going onto the local server, which then sends all metadata to the cloud DAM (to put it into its database and search engine index). Instructions for creating renditions (how many, how large) can be fetched from the cloud.
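The two-step ingest could be sketched like this; every function and field name here is hypothetical, and the HTTP calls a real system would make are replaced by plain function calls.

```python
# Hypothetical sketch of the two-step ingest: the file lands on the local
# server (step 1), and only the extracted metadata is pushed to the cloud
# DAM (step 2). Rendition rules come down from the cloud.
import hashlib

def fetch_rendition_rules():
    """The local server asks the cloud which renditions to create.
    Hardcoded here; in practice this would be an HTTP call to the DAM cloud."""
    return [{"name": "thumb", "max_px": 256}, {"name": "preview", "max_px": 1024}]

def ingest_locally(filename, data):
    """Step 1: store the file locally and extract basic metadata."""
    asset_id = hashlib.sha1(data).hexdigest()[:8]
    return {
        "id": asset_id,
        "filename": filename,
        "size": len(data),
        "renditions": [r["name"] for r in fetch_rendition_rules()],
    }

def push_metadata_to_cloud(metadata, cloud_db):
    """Step 2: send only the metadata to the cloud's database and index."""
    cloud_db[metadata["id"]] = metadata

cloud_db = {}
meta = ingest_locally("hero.jpg", b"\xff\xd8fake-jpeg-bytes")
push_metadata_to_cloud(meta, cloud_db)
print(cloud_db[meta["id"]]["renditions"])
```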

We’ll now have decoupled “software as a service” from “storage as a service”: the storage, delivery and processing of files is cleanly separated from the DAM software, the metadata database and the search engine index. The latter – which require a lot more ongoing maintenance (software updates, search performance tuning etc.) – will be handled nicely by the DAM provider in their cloud. The local file server component can be installed relatively easily, or run from a pre-packaged virtual machine appliance or even a hardware offering (“your local DAM storage box”).

Now what about distribution of assets to the outside world, which doesn’t have access to your local network? If you expect low traffic, your local file server could be made available on the Internet and directly serve the files. Or files to be distributed could be copied to Internet-connected storage (in the DAM cloud or at any other storage provider). Maybe you’ll just want to copy smaller renditions of the files into the cloud, and redirect download requests for large files to your local server.
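That last routing idea could look like the following sketch; the size threshold and both hostnames are assumptions for illustration.

```python
# Hypothetical routing sketch: small renditions are copied to cloud storage
# and served from there, while requests for large originals are redirected
# to the local file server.

SIZE_LIMIT = 5 * 1024 * 1024  # assumed threshold: renditions up to 5 MB go to the cloud
CLOUD_STORAGE = "https://cdn.dam-cloud.example"      # assumed cloud storage host
LOCAL_SERVER = "http://dam-files.intranet.example"   # assumed local server host

def resolve_download(asset):
    """Decide where a public download request should be served from."""
    if asset["size"] <= SIZE_LIMIT:
        return CLOUD_STORAGE + asset["path"]   # a copy lives in the cloud
    return LOCAL_SERVER + asset["path"]        # redirect to the local server

small = {"path": "/assets/a1/thumb.jpg", "size": 80_000}
large = {"path": "/assets/a1/master.tif", "size": 900_000_000}
print(resolve_download(small))
print(resolve_download(large))
```

In a real deployment the second branch would be an HTTP redirect, so large transfers never touch the cloud at all.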

I’d add a small component that keeps copies of the metadata records on the local server. If the Internet connection fails or the DAM cloud goes down, you’ll be able to perform basic searches on your local server. (Or easily move to another DAM cloud provider since all the data is still under your control.)
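A minimal version of that local fallback, assuming the metadata copy is just a JSON file the local server keeps in sync:

```python
# Hypothetical sketch: keep a copy of the cloud's metadata records on the
# local server, so basic search still works if the Internet connection or
# the DAM cloud goes down. Record shapes are illustrative.
import json
import os
import tempfile

def sync_metadata_copy(records, path):
    """Write the cloud's metadata records to a local JSON file."""
    with open(path, "w") as f:
        json.dump(records, f)

def offline_search(query, path):
    """Basic substring search over the locally cached metadata."""
    with open(path) as f:
        records = json.load(f)
    return [r for r in records if query in r["title"].lower()]

cache = os.path.join(tempfile.gettempdir(), "dam_metadata_cache.json")
sync_metadata_copy([{"id": "a1", "title": "Spring campaign hero"},
                    {"id": "a2", "title": "Logo pack"}], cache)
print([r["id"] for r in offline_search("logo", cache)])
```

Since the cache holds all the records in an open format, the same file would also serve as the export needed to move to another DAM cloud provider.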

If you wanted to get really creative, imagine the local server software running directly on your Windows or Mac client computer. Asset files could remain on your hard disk, while the heavy DAM machinery runs in the cloud. Or the “local server” would actually be running in a different cloud. With the protocol between DAM cloud and local server being open and well-documented, there could be multiple interoperable implementations. How about a distributed, “peer-to-peer” DAM with many local servers contributing to the same DAM cloud instance?

I’m pretty sure someone’s already doing this. Any pointers? [Update: Jason Wehling of NetXposure writes that NetX can sync portions of the repository onto local drives or shares.]

Thu, 05 Jun 2014 20:50:28 +0000
2014-06-04

Short links (2014-06-05)

Wed, 04 Jun 2014 22:06:08 +0000