DAM Has Been in a Slump for 30 Years

We have all been in a slump for 30 years. We didn’t even know it.

Humans are visual thinkers. Our dreams are in images, not text, as are our nightmares. When we say “Picture This!” that is exactly what we mean. We watch videos and movies and still photos, and the saying “a picture is worth a thousand words” is not just a cliché; it is a truth. And with each passing day, our society spends more time absorbing information from visual stimuli than from books.

But since the beginning of digital imaging 30 years ago, the only way to find an image or video was first to attach words to it, and then to expect others to use those exact words when looking for it. Guess the wrong words, and you never find the visual object. Or, as is increasingly the case, there may be little or no metadata at all. Good luck finding things.

For 30 years, we have simply accepted that this is reality, and it has forced compromises on us. For example, when we write a search query, we use as few words as possible, because the more words we use, the lower the chance they will all be the “right” ones that appear in an object’s metadata. This is as good as it gets, so we learn to play the game, trudging along to get the job done and living with the frustration of sometimes being unable to find the right object at all. The person with the best memory wins, and when they retire or leave, there goes our institutional knowledge, despite our DAM.

Our slump has ended. There is light shining through.

Instead of guessing the words someone else attached to an object, what if you could do a purely visual search? Perhaps there is a solution that allows you to use as many words as you want to describe the image or video you are looking for, and be pretty certain you will get it. What if your DAM “thinks” visually, just like you do?

What does “purely visual search” mean? It means your DAM actually studies each picture and each clip in every video, and really comprehends the visual content. And more than that, it means your DAM understands what you are looking for, and the more words you use to describe a scene or a clip, the better it likes it. No more “exact word matches”.

Purely visual search can certainly be helped by, and work closely with, metadata. If you want a picture of two people kissing in Madrid, metadata identifying the place as Madrid takes care of one thing, while a visual understanding of people kissing takes care of the other.
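To make the Madrid example concrete, here is a minimal sketch of how such a hybrid search might work under the hood: a metadata filter narrows the candidate pool, and visual similarity scores rank what remains. Everything here is an assumption for illustration only — the asset list, the `search` function, and the toy three-number embeddings stand in for whatever vision model a real DAM would use; this is not NOMAD’s actual implementation.

```python
import math

# Hypothetical sketch, not NOMAD's implementation: metadata filters
# the pool, then toy "visual embeddings" rank by similarity. A real
# system would get embeddings from a vision-language model.

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy asset catalog: each asset has metadata (city) plus an embedding.
ASSETS = [
    {"id": "kiss_madrid",  "city": "Madrid", "embedding": [0.9, 0.1, 0.0]},
    {"id": "kiss_paris",   "city": "Paris",  "embedding": [0.9, 0.1, 0.0]},
    {"id": "plaza_madrid", "city": "Madrid", "embedding": [0.0, 0.2, 0.9]},
]

def search(query_embedding, city=None, top_k=1):
    # Metadata handles the "in Madrid" part; embeddings handle
    # the "people kissing" part.
    pool = [a for a in ASSETS if city is None or a["city"] == city]
    pool.sort(key=lambda a: cosine(query_embedding, a["embedding"]),
              reverse=True)
    return [a["id"] for a in pool[:top_k]]

# Toy query embedding standing in for the phrase "two people kissing".
print(search([1.0, 0.0, 0.0], city="Madrid"))  # -> ['kiss_madrid']
```

The point of the sketch is the division of labor: the exact-match metadata filter does what metadata is good at (place, date, rights), while the similarity ranking does what words alone cannot — matching the visual content of the query to the visual content of the asset.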

End the slump! Find out about purely visual search, which we call NOMAD™.