Bringing Up Merlin’s NOMAD: Those Troubling Teenage Years….

When we talk about MerlinOne’s extraordinary NOMAD™ AI engine, we tell people it is like having a really smart 13-year-old who knows about every single visual asset in your DAM system. We say that because NOMAD™ was initially trained on several hundred million images scraped from the web, along with their accompanying captions, and like a good 13-year-old, NOMAD™ studies and remembers every single one of them.
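For readers curious what “studying” hundreds of millions of image–caption pairs looks like in practice, the sketch below shows the general idea behind CLIP-style contrastive pretraining, a common approach for models of this kind. It is a minimal illustration under that assumption, not NOMAD™’s actual training recipe, and every name and number in it is a placeholder.

```python
import torch
import torch.nn.functional as F

# CLIP-style contrastive objective (an assumed, generic approach, not NOMAD's
# actual code): embeddings of an image and its own caption are pulled together,
# while mismatched image/caption pairs in the batch are pushed apart.
def contrastive_loss(image_emb, text_emb, temperature=0.07):
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.T / temperature   # (N, N) similarity matrix
    targets = torch.arange(len(logits))             # image i matches caption i
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# Toy batch: 8 image-caption pairs with random 64-dim embeddings.
loss = contrastive_loss(torch.randn(8, 64), torch.randn(8, 64))
print(float(loss))
```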

It turns out having a smart 13-year-old in your corner as you search for visual objects is a pretty powerful thing! Describe what you want to see, and NOMAD™ gets it for you by understanding your query and performing a purely visual search.
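At query time, that kind of search is typically done by mapping the text query and every image into the same embedding space and ranking images by similarity. The sketch below illustrates the pattern with tiny stand-in encoders and random data; it is an assumption about the general technique, not NOMAD™’s implementation.

```python
import torch
import torch.nn.functional as F

# Tiny stand-in encoders: in a real system these would be large pretrained
# text and image towers that map captions and pictures into one shared space.
EMBED_DIM = 64
text_encoder = torch.nn.Linear(300, EMBED_DIM)    # pretend 300-dim text features
image_encoder = torch.nn.Linear(2048, EMBED_DIM)  # pretend 2048-dim image features

def search(query_features, library_features, top_k=5):
    """Rank library images by cosine similarity to the text query."""
    q = F.normalize(text_encoder(query_features), dim=-1)            # (1, D)
    library = F.normalize(image_encoder(library_features), dim=-1)   # (N, D)
    scores = library @ q.T                                           # (N, 1)
    return torch.topk(scores.squeeze(1), k=top_k)

# Usage: one query vector against a library of 1,000 images (random stand-ins).
query = torch.randn(1, 300)
library = torch.randn(1000, 2048)
values, indices = search(query, library)
print(indices.tolist())   # ids of the best-matching assets
```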

But like any smart teenager, NOMAD™ has a constant need to absorb new things. And you want NOMAD™ to keep up with all sorts of things, like COVID (why do people wear masks?), and Omicron, and the Ukraine war. Our 13-year-old NOMAD™ will soon be 14 and should know about more things as part of their development.

And not just academic things: NOMAD™ can already identify emotions, but like any teenager it needs to learn more about social interactions, recognize more verbs, and learn about weddings and romance and pollution and alternative energy and body language.

It turns out training an AI model involves some of the same questions that come up when raising a teenager. If we want NOMAD™ to learn something new, do we throw a ton of examples at it all at once (like making a kid drink a bunch of beers to learn the downsides of drinking), or do we introduce things gradually (in AI we talk about the “batch size” of new data)? Or does it depend on the subject, and how familiar NOMAD™ is with related topics?
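In code, the batch size is just a dial on the data pipeline: how many new examples the model digests before each weight update. A minimal illustration, using PyTorch and a toy dataset as stand-ins (the numbers are arbitrary):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for a pool of newly collected image-caption examples.
new_examples = TensorDataset(torch.randn(10_000, 512),
                             torch.randint(0, 100, (10_000,)))

# "Drink it all at once": a few very large optimizer steps per pass.
big_batches = DataLoader(new_examples, batch_size=4096, shuffle=True)

# "Introduce things gradually": many small steps, noisier but often steadier.
small_batches = DataLoader(new_examples, batch_size=64, shuffle=True)

print(len(big_batches), "large steps vs.", len(small_batches), "small steps per epoch")
```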

If a real-world child is home-schooled, do they get more information on topics their parents are passionate about? Should we make sure NOMAD™ does not get a disproportionate amount of sports data, to keep it well rounded? Or is some extended expertise in sports OK (or does it crowd out other subjects in NOMAD™’s model)?
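One common way to keep the curriculum balanced is to weight how often each subject is sampled during training. The snippet below is a sketch of that idea using PyTorch’s WeightedRandomSampler; the category mix is invented purely for illustration and says nothing about NOMAD™’s real data.

```python
from torch.utils.data import WeightedRandomSampler

# Hypothetical category labels for a training pool where sports dominates.
categories = ["sports"] * 8000 + ["news"] * 1500 + ["weddings"] * 500

# Weight each example inversely to its category's frequency, so no single
# subject crowds out the rest of the curriculum.
counts = {c: categories.count(c) for c in set(categories)}
weights = [1.0 / counts[c] for c in categories]

sampler = WeightedRandomSampler(weights, num_samples=len(categories), replacement=True)
# Passing `sampler=sampler` to a DataLoader yields a roughly balanced mix per epoch.
```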

And just like kids, some AI models are better suited to some things than others. Kids may have an aptitude for cross-country running, or for video games, and they dive into an area they are interested in and gain domain knowledge by watching YouTube videos and reading blogs and articles online. Similarly, we can “fine-tune” NOMAD™ to gain a lot of expertise in a narrow subject area, if that helps a specific customer.
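Fine-tuning usually means starting from the pretrained model, keeping most of what it already knows frozen, and training a small amount of new capacity on the customer’s narrow domain. The sketch below shows that pattern with a stand-in backbone and a hypothetical two-class task; it is illustrative, not NOMAD™’s actual pipeline.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained backbone (the real one would be loaded from a
# checkpoint); here it is a small random network just to make the sketch run.
backbone = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))

# Freeze the general-purpose knowledge learned during pretraining...
for p in backbone.parameters():
    p.requires_grad = False

# ...and attach a small trainable head for the customer's narrow domain,
# e.g. telling a team's home and away uniforms apart (hypothetical task).
domain_head = nn.Linear(128, 2)
optimizer = torch.optim.AdamW(domain_head.parameters(), lr=1e-4)

x, y = torch.randn(32, 512), torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(domain_head(backbone(x)), y)
loss.backward()
optimizer.step()
```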

Kids sometimes fall under the influence of someone with strong prejudices (“what do you mean not everything on the Internet is true?”), and their understanding of the world has to be corrected. The same thing can happen with teenager NOMAD™: if it is trained on biased data, its model can become distorted, and that has to be monitored and detected so corrective action can be taken.
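One simple form of monitoring is to track a quality metric for different groups or subjects in a held-out evaluation set and flag large gaps. The numbers and threshold below are entirely made up for illustration; a real bias audit is far more thorough than this.

```python
# Illustrative bias check: compare a quality metric (e.g. retrieval precision)
# across groups and flag any group that lags the best one by too much.
per_group_precision = {"group_a": 0.91, "group_b": 0.88, "group_c": 0.74}

TOLERANCE = 0.10  # assumed threshold, chosen only for this example
best = max(per_group_precision.values())
flagged = {g: p for g, p in per_group_precision.items() if best - p > TOLERANCE}

if flagged:
    print("Potential bias, schedule corrective retraining for:", sorted(flagged))
```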

Sometimes kids take a quantum leap forward by coming up with a new way of thinking about something. NOMAD™, being multi-modal (working in both the text domain and the visual domain), can sometimes benefit from a swap of its text engine or its visual engine as the state of the art improves.
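A dual-encoder design is what makes that kind of upgrade practical: the text tower and the image tower are separate modules that meet in a shared embedding space, so either one can be replaced (and then re-aligned with further training) without rebuilding the whole system. A minimal sketch of the structure, with tiny stand-in towers rather than real models:

```python
import torch.nn as nn

class DualEncoder(nn.Module):
    """Minimal dual-encoder: independent text and image towers that project
    into a shared embedding space, so either tower can be swapped out."""
    def __init__(self, text_tower: nn.Module, image_tower: nn.Module):
        super().__init__()
        self.text_tower = text_tower
        self.image_tower = image_tower

    def forward(self, text_feats, image_feats):
        return self.text_tower(text_feats), self.image_tower(image_feats)

# Stand-in towers; real ones would be large pretrained language/vision models.
model = DualEncoder(nn.Linear(300, 64), nn.Linear(2048, 64))

# When a better vision model comes along, only that tower is replaced;
# the text side and everything around it stay in place.
model.image_tower = nn.Linear(2048, 64)
```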

Finally, AI engines have all sorts of parameters that can be tuned for best results, not unlike a teenage athlete learning to adjust their diet for greater performance.
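In practice that tuning often looks like a systematic search: try a handful of settings, measure each on a held-out validation set, and keep the winner. The sketch below uses a placeholder scoring function purely to show the shape of the loop; the parameter values are arbitrary.

```python
import itertools

# Hypothetical tuning loop over two training "diet" settings.
learning_rates = [1e-5, 1e-4, 1e-3]
batch_sizes = [64, 256, 1024]

def validate(lr, bs):
    # Placeholder for "train briefly, then measure validation accuracy".
    return 1.0 - abs(lr - 1e-4) * 1000 - abs(bs - 256) / 10_000

best = max(itertools.product(learning_rates, batch_sizes),
           key=lambda combo: validate(*combo))
print("Best settings found:", best)
```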

The rewards are tremendous in both cases: watching your child grow in understanding and develop great value systems that will serve them in their lives going forward, and, for us, watching NOMAD™ become “wiser” and more useful in helping thousands of people get their jobs done!

This piece was written by David Tenenbaum, CEO of MerlinOne and an advocate for constant DAM innovation. Connect with him on LinkedIn or email him directly at dmt@merlinone.com.