Notes for DAM Europe presentation – Getting Real with AI and ML

Posted on June 26, 2019 (updated November 27, 2020) by Peter Krogh
These notes are prepared for the attendees of my talk at Henry Stewart DAM Europe summer 2019. In this talk I show how you can use Lightroom and the Anyvision plugin to run a collection of images through a Machine Learning tagging service (Google Cloud Vision) and evaluate whether the tags may be of use for your collection and your users.
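
Anyvision handles all of the Google Cloud Vision calls for you, so no code is required for this workflow. For context, though, here is a minimal sketch of the kind of request the service answers behind the scenes, written in Python against the public Cloud Vision REST endpoint (it assumes you have your own API key; the key and file name are placeholders):

```python
# Minimal sketch of a direct Google Cloud Vision request -- roughly what an
# ML tagging plugin does on your behalf. Requires the third-party "requests"
# library (pip install requests) and a Cloud Vision API key.
import base64
import requests

API_KEY = "your-api-key"  # placeholder
ENDPOINT = f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}"

def tag_image(path):
    """Send one image for label, landmark and text detection."""
    with open(path, "rb") as f:
        content = base64.b64encode(f.read()).decode("ascii")
    body = {
        "requests": [{
            "image": {"content": content},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 50},
                {"type": "LANDMARK_DETECTION"},
                {"type": "TEXT_DETECTION"},
            ],
        }]
    }
    resp = requests.post(ENDPOINT, json=body).json()
    # Each label comes back with a description and a 0-1 confidence score.
    return [(label["description"], label["score"])
            for label in resp["responses"][0].get("labelAnnotations", [])]

print(tag_image("sample.jpg"))
```

Each label is returned with a 0–1 confidence score; those scores are what the plugin’s threshold settings (discussed below) filter on.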

Lightroom

If you don’t already have it, you’ll need to get Adobe Lightroom Classic (or one of the previous versions, Lightroom 5.7 or later). This comes with an Adobe Creative Cloud subscription. There is also a “photographer’s plan” at £9.98/month in the UK that gives you Lightroom and Photoshop. Here’s a link:
https://www.adobe.com/uk/creativecloud/photography.html

Anyvision Plugin

Here’s where to get the Anyvision plug-in by John R. Ellis:
http://www.johnrellis.com/lightroom/anyvision.htm#overview
The plugin is licensed on a “pay what you think is fair” model. It’s a very nice piece of work. If you’re using it for a corporate collection, $30 or $40 seems fair.

Suggested workflow

Prior to testing, make sure you have Lightroom Classic (or another compatible version of Lightroom), then download and install the plugin.

  1. Create a sample collection of at least a few thousand images to test with. I suggest a broad range of subject matter and sources.
  2. Add these images to a Lightroom catalog dedicated to the test.
  3. If you want to test the ML tags only, strip all other info first.
  4. Close the catalog and make a duplicate of the entire catalog; this will be useful in later testing. Now let’s run the first test to see the entire universe of tags that Google might assign.
  5. Select all images and run Plugins>Anyvision>Analyze
  6. Set the options per the screenshot below.

Some notes on the settings:
  • I have set all thresholds to 0 to get the largest number of tags. In all likelihood, we’re going to want to set these to a higher number like 75 (with the exception of Landmarks, which seem to include very few false positives). See the sketch after this list for how a threshold filters the raw tags.
  • I have this set to write any OCR’d text to the Headline field, which is often empty. You could also write it to the Caption (also known as Description) field; Caption is a more broadly accessible field.
  • I have included the scores, which will only show up in the Anyvision section of Lightroom’s Metadata panel.
  • I have checked the box to have Anyvision make letter-based subgroups of returned results to help keep the tags visually organized in the keywords panel.
  • I’ve also asked it to add GPS data whenever it recognizes a landmark.
  • I’ve checked the Reanalyze box, although this is only of use when running these images through a second time for comparison purposes.
  • I only run the translation on the OCR text, but if you need to make the keywords available in multiple languages, you could do that here.
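
To make the threshold setting concrete, here is a small, purely illustrative sketch of how raising the threshold from 0 to 75 thins out the tag list for a single image. The labels and scores are invented for the example, not real Cloud Vision output:

```python
# Hypothetical labels for a single image with made-up confidence scores.
labels = [
    ("dog", 0.97), ("mammal", 0.93), ("snout", 0.81),
    ("grass", 0.74), ("fun", 0.52), ("sky", 0.31),
]

def keep(labels, threshold_pct):
    """Keep only labels whose 0-1 score meets the threshold (a percentage)."""
    return [name for name, score in labels if score * 100 >= threshold_pct]

print(keep(labels, 0))    # everything, including the marginal guesses
print(keep(labels, 75))   # only the high-confidence tags: dog, mammal, snout
```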

Making multiple catalogs

Once you’ve run the images through Anyvision, you can repeat the process at different confidence levels to see what level is optimal for your own collection and metadata usage. I did that by running it at 0, 50, 75 and 90. To run again, here’s what I suggest:

  1. Take the duplicate catalog made above, and duplicate it again.
  2. Rename the catalog for the confidence level at which you would like to run the process.
  3. Run the process and compare the results; a rough comparison sketch follows below.
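
Once you have run the same images at a few different confidence levels, the comparison can be as simple as diffing the keyword lists from each test catalog. Here is a rough sketch, assuming you have exported the keywords from two of the catalogs to plain text files with one keyword per line (the file names are hypothetical):

```python
# Compare the keyword lists produced by two threshold runs.
def load_keywords(path):
    """Read one keyword per line into a set."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

kw_50 = load_keywords("keywords_threshold_50.txt")  # hypothetical export
kw_75 = load_keywords("keywords_threshold_75.txt")  # hypothetical export

print("Dropped when raising the threshold to 75:", sorted(kw_50 - kw_75))
print("Tags kept at both levels:", len(kw_50 & kw_75))
```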


Posted in Appearances, DAM, How to, Lightroom, Machine Learning, Metadata
