Category Archives: The DAM Book 3 Sneak Peek

Using Google Cloud Vision for OCR

Editor's Note: This post combines a couple of threads I've been writing about. I'll provide some real-world methods for converting visible text to searchable metadata, as discussed in Digitizing Your Photos. In the course of this, I'll also be fleshing out a real-world workflow for Computational Tagging, as discussed in The DAM Book 3 Sneak Peeks.

In the book Digitizing Your Photos, I made the case for digitizing textual documents as part of any scanning project. This includes newspaper clippings, invitations, yearbooks and other memorabilia. These items can provide important context for your image archive and the people and events that are pictured.

Ideally, you'll want to change the visible text into searchable metadata. Instead of typing it out, you can use Optical Character Recognition (OCR) to automate the process. OCR is one of the earliest Machine Learning technologies, and it's commonly found in scanners, fax machines and PDF software. But there have not been easy ways to automatically convert OCR text into searchable image metadata.

In Digitizing Your Photos,  I show how you can manually run images through Machine Learning services and convert any text in the image into metadata through cut-and-paste. And I promised to post new methods for automating this process as I found them. Here’s the first entry in that series.

The Any Vision Lightroom Plugin
I've been testing a Lightroom plugin that automates the process of reading visible text in an image and pasting it into a metadata field. Any Vision from developer John Ellis uses Google's Cloud Vision service to tag your images with several types of information, including visible text. You can tell Any Vision where you want the text to be written, choosing one of four fields, as shown below.

Here is part of the Any Vision interface, with only OCR selected. As you can see, you can target any found text to the Caption, Headline, Title or Source field. I have opted to use the Headline field myself, since I don't use it for anything else.
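
To make the round trip concrete, here is a rough sketch of what the underlying Cloud Vision call looks like outside of Lightroom. To be clear, this is not Any Vision's own code (the plugin is built on Lightroom's SDK); it's a minimal Python illustration that assumes a recent version of the google-cloud-vision client library is installed and authenticated, and that exiftool is available on the command line to write the Headline field. The file name is just a placeholder.

```python
# Rough sketch: send an image to Google Cloud Vision for text detection,
# then write the recognized text into the image's Headline field.
# Assumes the google-cloud-vision library is installed and authenticated,
# and that exiftool is available on the command line.
import subprocess
from google.cloud import vision

def ocr_to_headline(image_path):
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    response = client.text_detection(image=image)
    annotations = response.text_annotations
    if not annotations:
        return None  # no readable text was found

    # The first annotation holds the full block of detected text.
    found_text = annotations[0].description

    # Write it into the Headline field so it becomes searchable metadata.
    subprocess.run(
        ["exiftool", "-overwrite_original", f"-Headline={found_text}", image_path],
        check=True,
    )
    return found_text

print(ocr_to_headline("scanned_clipping.jpg"))
```

Any Vision does the equivalent from inside Lightroom and writes straight into your catalog, so you never have to touch the API yourself.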

Results
Here are my findings, in brief:

  • Text that appears in real-life photos (as opposed to copies of textual documents) might be readable, but the results seem a lot less useful.
  • Google does a very good job reading text on many typewritten or typeset documents. If you have scanned clippings or a scrapbook, yearbook or other typeset documents, standard fonts seem to be recognized reasonably well.
  • Google mostly did a poor job of organizing columns of text. It simply read across the columns as though they were one long line of nonsensical text. Microsoft Cognitive Services does a better job, but I'm not aware of an easy way to bring this into image metadata.
  • Handwriting is typically ignored.
  • For some reason, the translate function did not work for me. I was scanning some Danish newspapers and the text was transcribed but not translated. I will test this further.

Examples
(Click on images to see a larger version)

Let's start with an image that shows why I'm targeting the Headline field rather than the Caption field. This image by Paul H. J. Krogh already has a caption, and adding a bunch of junk to it would not help anybody.
You can also see that the sign in the background is partially recognized, but lettering in red is not seen and player numbers are ignored even though they are easily readable.

In the example below, from my mother’s Hollins College yearbook, you can see that the text is read straight across, creating a bit of nonsense. However, since the text is searchable, this would still make it easy to find individual names or unbroken phrases in a search of the archive.
You can also see that the handwriting on the page is not picked up at all.

In the next example, you can see that Google was able to use the boxes of text, the changes of font and the underlining to help parse the text more accurately.

And in this last example, you can see that Google had a terrible time with the Gothic font in this certificate, picking out only a small fraction of the letters correctly.

The Bottom Line
If you have a collection of scanned articles or other scanned textual documents in Lightroom, this is a great way to make visible text searchable. While Google's OCR is not the best available, Any Vision makes it the easiest way I know of to add the text to image metadata automatically.

Any Vision is pretty geeky to install and use, but the developer has laid out very clear instructions for getting it up and running and for signing up for your Google Cloud Vision account. Read all about it here.

Cost
Google's Cloud Vision is very inexpensive: it's priced at $0.0015 per image, which works out to $1.50 for 1,000 images. Google will currently give you a $300 credit when you create an account, so you can test this very thoroughly before you run up much of a bill.

Watch for another upcoming post where I outline some of the other uses of Any Vision's tagging tools.

Computational Tagging – What is it good for? (Absolutely something!)

This post is adapted from the forthcoming The DAM Book 3.

There is a lot of hype and hazy discussion about the future of AI, but it's often very loosely defined. In a previous blog post, I made the case for lumping a lot of this into a category I'm calling Computational Tagging. In the second post, I made a distinction between Artificial Intelligence, Machine Learning, and Deep Learning. In this post, I'll outline a number of the capabilities that fall under the rubric of Computational Tagging.

What can computers tag for?

The subject matter will be an ever-growing list, determined in large part by the willingness of people and companies to pay for these services. But as of this writing, the following categories are becoming pretty common.

  • Objects shown – This was one of the first goals of AI services, and has come a long way. Most computational tagging services can identify objects, landscapes and other generically identifiable elements.
  • People and activities shown – AI services can usually identify if a person appears in a photo, although they may not know who it is unless it is a celebrity or unless the service has been trained for that particular person. Many activities can now be recognized by AI services, running the gamut from sports to work to leisure.
  • Specific People – Some services can be trained to recognize specific people in your library. Face tagging is part of most consumer-level services and is also found in some trainable enterprise services.
  • Species shown – Not long ago, it was hard for Artificial Intelligence to tell the difference between a cat and a dog. Now it's common for some services to be able to tell you which breed of cat or dog (as well as many other animals and plants). This is a natural fit for a machine learning project, since plants and animals form a well-categorized training set and there are a lot of apparent use cases.
  • Adult content – Many computational tagging services can identify adult content, which is quite useful for automatic filtering. Of course, notions of what constitutes adult content vary greatly by culture.
  • Readable text – Optical Character Recognition has been a staple of AI services since the very beginning. This is now being extended to handwriting recognition.
  • Natural Language Processing – It's one thing to be able to read text; it's another thing to understand its meaning. Natural Language Processing (NLP) is the study of the way that we use language. NLP allows us to understand slang and metaphors in addition to strict literal meaning (e.g. we understand what the phrase "how much did those shoes set you back?" actually means). NLP is important in tagging, but even more important in the search process.
  • Sentiment analysis – Tagging systems may be able to add some tags that describe sentiments. (e.g. It’s getting common for services to categorize facial expressions as being happy, sad or mad.) Some services may also be able to assign an emotion tag to images based upon subject matter, such as adding the keyword “sad” to a photo of a funeral.
  • Situational analysis – One of the next great leaps in Computational Tagging will be true machine learning capability for situational analysis. Some of this is straightforward (e.g. "this is a soccer game"). Some is more difficult ("This is a dangerous situation."). At the moment, a lot of situational analysis is actually rule-based (e.g. add the keyword "vacation" when you see a photo of a beach); a minimal sketch of this kind of rule appears after this list.
  • Celebrities – There is a big market for celebrity photos, and there are excellent training sets.
  • Trademarks and products – Trademarks are also easy to identify, and there is a ready market for trademark identification (e.g. alert me whenever our trademark shows up in someone’s Instagram feed). When you get to specific products, you probably need to have a trainable system.
  • Graphic elements – ML services can evaluate images according to nearly any graphic component, including the shapes and colors in an image. These can be used to find similar images within a single collection or on the web at large. This was an early capability of rule-based AI services, and it remains an important goal for both ML and DL services.
  • Aesthetic ranking – Computer vision can do some evaluation of image quality. It can find faces, blinks and smiles. It can also check for color, exposure and composition and make some programmatic ranking assessments.
  • Image Matching services – Image matching as a technology is pretty mature, but the services built on image matching are just beginning. Used on the open web, for instance, image matching can tell you about the spread of an idea or meme. It can also help you find duplicate or similar images within your own system, company or library.
  • Linked data – There is an unlimited body of knowledge about the people, places and events shown in an image collection – far more than could ever be stuffed into a database. Linking media objects to data stacks will be a key tool for understanding the subject matter of a photo in a programmatic context.
  • Data exhaust – I use this term to mean the personal data that you create as you move through the world, which could be used to help understand the meaning and context of an image. Your calendar entries, texts or emails all contain information that is useful for automatically tagging images. There are lots of difficult privacy issues related to this, but it’s the most promising way to attach knowledge specific to the creator to the object.
  • Language Translation – We’re probably all familiar with the ability to use Google Translate to change a phrase from one language to another. Building language translation into image semantics will help to make it a truly transcultural communication system.
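
To make the rule-based end of that list concrete, here is a minimal sketch of the kind of logic behind "add the keyword vacation when you see a photo of a beach." The rules and tag names are invented for illustration; real services layer rules like these on top of the tags their vision models have already produced.

```python
# Minimal sketch of rule-based situational tagging: derive extra keywords
# from tags that a vision service has already produced. The rules and tag
# names here are invented for illustration.

RULES = {
    frozenset({"beach", "ocean"}): "vacation",
    frozenset({"ball", "goal", "grass"}): "soccer game",
    frozenset({"cake", "candles"}): "birthday",
}

def apply_rules(existing_tags):
    """Return the original tags plus any rule-derived keywords."""
    tags = set(existing_tags)
    for trigger, derived in RULES.items():
        if trigger <= tags:  # all of the trigger tags are present
            tags.add(derived)
    return sorted(tags)

print(apply_rules(["beach", "ocean", "people"]))
# -> ['beach', 'ocean', 'people', 'vacation']
```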

Computational Tagging – Artificial Intelligence, Machine Learning, and Deep learning

This post is adapted from the forthcoming The DAM Book 3.

There is a lot of hype and hazy discussion about the future of AI, but it’s often very loosely defined.  In a previous blog post, I made the case for lumping a lot of this into a category I’m calling Computational Tagging. In this post, I’ll split that into some large component parts. (Read the next post here).

What’s the difference between Computational Tagging, Artificial Intelligence, Machine Learning, and Deep Learning?

While the definitions of these processes have a lot of overlap, we can draw some useful distinctions. Let’s use a Venn diagram to illustrate the relationships.

Computational tagging refers to any system of automated tagging that is done by a computer. This includes the metadata added by your camera. It also includes network-accessible information, like a Wikipedia page, that could be added by simple linking.
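
As a concrete example of computational tagging that involves no AI at all, here is a small sketch that simply reads back the metadata your camera already wrote into a file's EXIF block. It assumes the Pillow imaging library; the file name is a placeholder, and the exact tags present will vary by camera.

```python
# Sketch: the simplest computational tagging is reading what the camera
# already recorded. Assumes the Pillow library; the file name is a placeholder.
from PIL import ExifTags, Image

def camera_tags(image_path):
    exif = Image.open(image_path).getexif()
    # Map numeric EXIF tag IDs to readable names like "DateTime" or "Model".
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = camera_tags("IMG_0001.jpg")
print(tags.get("DateTime"), tags.get("Model"))
```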

Artificial Intelligence (AI) encompasses any computer technology that appears to emulate human reasoning. AI could be as simple as a set of rules that creates intelligent-looking behavior (e.g. a self-driving car could be taught the "rule" that you don't want to cross a double yellow line). AI also includes the more complex services outlined below.

Machine Learning (ML) is a subset of AI that is more complex. Instead of just following an established set of rules, an ML system can be trained to discover the rules. An ML system for identifying species, for instance, uses a training set of tagged images to figure out what a Labrador retriever looks like.
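
Here is a deliberately tiny sketch of what "trained to discover the rules" means, using a nearest-neighbor classifier from scikit-learn. The features and labels are invented for illustration; a real species classifier learns from image pixels rather than two hand-measured numbers.

```python
# Toy sketch of machine learning: instead of writing rules, we hand the
# system labeled examples and let it work out how to separate them.
# The numbers and labels are invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Training set: (weight in kg, ear length in cm) from already-tagged photos.
features = [[30, 10], [28, 11], [32, 9],   # labrador retriever
            [4, 6], [5, 7], [3, 6]]        # house cat
labels = ["labrador", "labrador", "labrador", "cat", "cat", "cat"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(features, labels)

# A new, untagged example: the model predicts the label it has learned.
print(model.predict([[29, 10]]))  # -> ['labrador']
```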

Deep Learning (DL) is a specific type of ML that makes use of a predictive model in its learning process, a process that actually mimics the way the brain works. A Deep Learning system does not just look at results; it uses a predictive model to train itself, constantly testing a hypothesis against results and adjusting the hypothesis according to those results.

Here's how it works in your brain. The central nervous system provides constant input stimulus. Your brain then makes constant predictions about what the next input should be. When the input does not match the prediction, it recalibrates. You experience this process when you taste something you expect to be sweet and it's salty, or when you take a step and the level of the ground is not where you expect it to be.
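
The predict, compare and recalibrate loop described above can be boiled down to a few lines. This is only a toy illustration: the observations and learning rate are made up, and a real Deep Learning system adjusts millions of parameters this way rather than a single prediction.

```python
# Toy illustration of the predict/compare/recalibrate loop.
# The observations and learning rate are invented for illustration.
prediction = 0.0
learning_rate = 0.3
observations = [1.0, 1.0, 0.9, 1.1, 1.0]

for observed in observations:
    error = observed - prediction        # how wrong was the prediction?
    prediction += learning_rate * error  # recalibrate toward the observation
    print(f"observed={observed:.1f}  new prediction={prediction:.2f}")
```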

Read the next post here.

Computational Tagging

In my SXSW panel this year, Ramesh Jain and Anna Dickson and I delved into the implications of Artificial Intelligence (AI) becoming a commodity, which will be a commonplace reality by the end of 2017.  We looked at several classes of services and considered what they were good for.

I’ve been spending a lot of time on the subject over the last few months writing The DAM Book 3. Clearly AI will be important in collection management and the deployment of images for various types of communication.

But I hate using the term AI to describe the array of services that help you make sense of your photos. There's actually a bunch of useful stuff that is not technically AI. Adding date or GPS info is definitely not AI. And linking to other data (like a Wikipedia page) is not really AI; it's actually just linking. Machine Learning and programmatic tagging come in a lot of forms – some are really basic, and some are complex.

The term Computational Imaging was pretty obscure when the last version of The DAM Book was published, but it's become a very common term. I think this is a useful concept to extend to the whole AI/Machine Learning/Data Scraping/Programmatic Tagging stack.

In The DAM Book 3, I'm using the term Computational Tagging to refer to all the computer-based tagging methods that involve some level of automation. This runs from the tags made by the computer in my camera to the sophisticated AI environments of the future. At the moment, it's not a widely used term (Google shows 138 instances on the web), but I think it's the best general description for the automatic and computer-assisted tagging that is becoming an essential part of working with images.