As part of my recent repair and maintenance jag, here’s a video showing how to replace the dead batteries in an old Quantum Battery 1. Even though this is more than 30 years old, these batteries are still quite useful in powering modern flashes.
If you don’t already have it, you’ll need to get Adobe Lightroom Classic (or one of the previous versions, Lightroom 5.7 or later). This comes with an Adobe Creative Cloud subscription. There is also a “photographer’s plan,” £9.98/month in the UK, that gives you Lightroom and Photoshop. Here’s a link:
Prior to testing, you need to make sure you have Lightroom Classic (or another compatible version of Lightroom), download the plugin, and install it.
- Create a sample collection of at least a few thousand images to test with. I suggest a broad range of subject matter and sources.
- Add these images to a Lightroom catalog dedicated to the test
- If you want to test ML tags only, strip all other info first
- Close the catalog and make a duplicate of the entire catalog. This will be useful in later testing.

Now let’s run the first test to see the entire universe of tags that Google might assign.
- Select all images and run Plugins>Anyvision>Analyze
- Set per the screenshot below.

Some notes on the settings:
- I have set all thresholds to 0 to get the largest number of tags. In all likelihood, we’re going to want to set these to a higher number like 75. (The exception is Landmarks, which seems to include very few false positives.)
- I have this set to write any OCRd text to the Headline field, which is often empty. You could also write it to the Caption (also known as Description) field, which is more broadly accessible.
- I have included the scores, which will only show up in the Anyvision panel in Lightroom’s metadata panel.
- I have checked the box to have Anyvision make letter-based subgroups of returned results to help keep the tags visually organized in the keywords panel.
- I’ve also asked it to add GPS data whenever it recognizes a landmark.
- I’ve checked the Reanalyze box, although this is only of use when running these images through a second time for comparison purposes.
- I only run the translation on the OCR text, but if you need to make the keywords available in multiple languages, you could do that here.
Making multiple catalogs
Once you’ve run the images through Anyvision, you can repeat the process at different confidence levels to see what level is optimal for your own collection and metadata usage. I did that by running it at 0, 50, 75 and 90. To run again, here’s what I suggest:
- Take the duplicate catalog made above, and duplicate it again.
- Rename the catalog for the confidence level at which you would like to run the process.
- Run the process and compare the results
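The comparison step can be sketched in a few lines of Python, assuming you have exported the keyword list from each duplicate catalog. The tag sets below are hypothetical stand-ins for real exports:

```python
# Compare keyword sets produced at different confidence thresholds.
# Each set stands in for the keywords exported from one duplicate catalog.
runs = {
    0:  {"tree", "sky", "dog", "vehicle", "mammal", "carnivore"},
    50: {"tree", "sky", "dog", "vehicle", "mammal"},
    75: {"tree", "sky", "dog"},
    90: {"tree", "dog"},
}

baseline = runs[0]  # threshold 0 returns the entire universe of tags
for threshold in sorted(runs):
    kept = runs[threshold]
    dropped = baseline - kept
    print(f"threshold {threshold:>2}: {len(kept)} tags kept, "
          f"dropped vs. 0: {sorted(dropped)}")
```

Scanning the dropped tags at each level makes it easy to see where useful keywords start disappearing along with the false positives.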
It’s Panel Picker time again! Please take a moment and vote for my session proposal for SXSW 2019. Once again, I’ve teamed up with Anna Dickson to explore the use of visual media and the data that is connected to it.
Small Photos, Big Data: A Connectivity Manifesto
On the mobile web, images serve a greater purpose than simple visual description. Rich media images are increasingly used to connect people, events, institutions, ideas, advocacy and commerce. As we move into a new era of visual communication, this trend is accelerating. While the use of connected images blossomed on social media services, it reaches far beyond walled gardens into API-based interchange on the open web. Machine learning and linked data are creating new methods to make connections, and the Data Transfer Project is opening up access to the underlying graph for portability and innovation. In this presentation, we will explore the current state of visual media connectivity, what it can do for you, how to enhance your own image connectivity, and how to avoid costly mistakes.
I’ll be headed to Los Angeles in mid-October for Adobe Max, my third time there. Over the last several years, the conference has grown like crazy, including the addition of a lot of photo-related programming. In each of the years I’ve attended the conference, I walked away with a much better understanding of the emerging media landscape.
Here is a highlight video from 2017. It gives you a peek at the type of content at Adobe Max.
There is a fascinating mix of programming at Max: breakout presentations, workshops, multi-day pre-conference workshops, and plenary sessions. The big plenary sessions were the most interesting to me, including inspirational talks from Annie Griffiths and Jonathan Adler.

If you are interested in where Artificial Intelligence and Machine Learning are going, Max provides a showcase for Adobe’s massive undertaking, Sensei. Sensei is purpose-built for the creative, marketing and communication industries, and it is poised to have far-ranging effects on the way visual media is created and deployed.
The Sneaks are a look at experimental development efforts, including products that are still on the drawing board. They are always fun and popular, and hosted by people like Nick Offerman or Kumail Nanjiani.

As you can see, there is a lot of content available on free video channels. So why go? Like all good conferences, the value is frequently found in the personal connections you make rather than strictly in the programming. And at the best conferences, you open your mind with new programming at the same time you are making connections with new people.
There’s also a pretty good party at the end of the thing, usually including good live music, a ton of great food and drink, along with other fun and games.
Max is not cheap – list price is $1595, and the discounted price of $1295 is only available until July 31. I have still not cracked the code to get a presenter slot at Max, but this year I’m going as a TA. I’ll help out someone’s classes, learn, and meet new people. If you are looking for a hint of what the future of media will bring, I suggest you give Max a try.
There has been a flurry of companies producing white papers on the ways that blockchain applications can help solve the challenges of independent creators. These range from new distribution networks to services that claim to solve the attribution/ownership issues.
In response to a tweet by my friend Leora Kornfeld, I launched a little fusillade at her, explaining why I think that all of the proposals I have seen are worse than worthless – they actually provide negative benefit to independent creators. Here is that tweet storm in paragraph form.
I come at this as a person who makes the bulk of his income by selling intellectual property – photos, books and videos – print, DVD and download. None of the challenges I face are going to be helped by blockchain.
The main challenges are, in this order:
1. Making content worth buying
2. Developing, maintaining and expanding an audience
3. Finding technical solutions to make the production and sale of materials possible and profitable
There is a database problem in this task set, primarily audience relationship management. But this task is not going to be helped by using a public, immutable, distributed database. (GDPR, anyone?)
We have a valuable long-term relationship with our readers. We need to keep records of what they bought, how to contact them, what their communication preferences are, whether we’ve done seminars with them, what problems they encounter, whether they are nice (almost all are).
This calls for Filemaker (or some other dedicated contact/customer management tool), not a blockchain.
Okay, so maybe blockchain is not going to help you sell, but can’t it help protect your stuff? Put it in the blockchain and it’s protected…somehow.
In the US at least, putting something in a blockchain gets you exactly zero protection. If you want to protect your stuff, you need to register it with the Copyright office. (A blockchain app might do the registration, but the blockchain part does not get you any benefit.)
And it’s basically impossible to use a blockchain “fingerprint” for any enforcement (unless it’s accompanied by a copyright registration – the real lever).
The “fingerprint” of any digital image or video will change each time it’s uploaded to a new service and recompressed. So the “immutable record” turns into “this might be the same photo/video”, but only if you have other matching software running separately from the blockchain.
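To make the recompression point concrete, here is a minimal Python sketch using a cryptographic hash as the “fingerprint.” The image bytes are simulated stand-ins; the point is that any byte-level change breaks an exact-match fingerprint:

```python
import hashlib

# Simulated image bytes: the "original" upload, and the same photo after a
# service has recompressed it. Even a tiny byte-level change is enough.
original = b"\xff\xd8\xff\xe0" + b"pixel data" * 100
recompressed = b"\xff\xd8\xff\xe0" + b"pixel data" * 99 + b"pixel dat_"

fp_original = hashlib.sha256(original).hexdigest()
fp_recompressed = hashlib.sha256(recompressed).hexdigest()

# The exact fingerprints no longer match, even though a human (or separate
# perceptual-matching software) would call these the "same" photo.
print(fp_original == fp_recompressed)  # False
```

A perceptual-similarity match would require that separate matching software, which is exactly the part the blockchain itself does not provide.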
So, in the end, what is left for the blockchain? Payment? How will that be better than Visa, Paypal or Venmo? These can all be converted to local currency nearly anywhere in the world. We sell directly to customers in dozens of countries. Shopify handles most of these transactions seamlessly for a 2% transaction fee.
But wait, it gets worse. The investment in blockchain vaporware takes money and focus away from real solutions.
This could include small claims copyright remedies, new distribution channels that have a chance of functioning, international agreements, new methods to monetize owned content, etc.
As a metadata and asset management nerd, I think in data structures. I’m always looking for new paradigms, new uses. But I just can’t find a good structural use for a blockchain.
Since you’re reading this, you might want to take a look at my books. They deal with the intersection of digital technology and visual media. https://theDAMbook.com
But there is no blockchain in them…
This post is adapted from The DAM Book 3.0. In this post, I outline the structural approaches for media management and how they are changing in the cloud/mobile era.
Back in the early digital photography days, there was a debate about where the authoritative version of a file’s metadata should live. People who liked file browsers would say “the truth should be in the file.” People like me who advocated for database management would say “the truth should be in the database.”
The argument here was how to store and manage metadata, and especially how to handle changes and conflicts between different versions of image metadata. This is a fundamental DAM architecture question.
For a number of years, the argument was largely settled – the only way to effectively manage large collections required the use of a catalog database to be the source of truth. This still holds true for most of my readers. But there’s a new paradigm for managing metadata/versions/collaboration, and eventually it’s going to be the best way forward.
The truth can also live in the cloud. And that’s the way that app-managed library software is being designed. It’s what we see with Lightroom CC, Google Photos, and Apple Photos. Because the cloud is connected to all versions of a collection, it can resolve differences between them and keep different instances synchronized. Typically, it does this by letting the most recent change “win,” and propagating those to the other versions.
Allowing a cloud-based application to synchronize versions and resolve conflicts is really the only way to provide access across multiple devices, or multiple users and keep everything unified.
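The “most recent change wins” rule above can be sketched in a few lines of Python. The field names and record shapes here are hypothetical, just to illustrate the resolution logic a sync service might apply when the same metadata field is edited on two devices:

```python
from datetime import datetime, timezone

def resolve(versions):
    """Last-write-wins: return the version with the most recent edit time."""
    return max(versions, key=lambda v: v["modified"])

# The same photo's title, edited on two devices while offline.
laptop = {"title": "Beach sunset",
          "modified": datetime(2018, 7, 1, 9, 30, tzinfo=timezone.utc)}
phone = {"title": "Sunset, Rockport",
         "modified": datetime(2018, 7, 1, 9, 45, tzinfo=timezone.utc)}

winner = resolve([laptop, phone])
print(winner["title"])  # the phone's later edit propagates to all devices
```

Real services layer more sophistication on top (per-field merging, tombstones for deletions), but last-write-wins is the basic pattern.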
The truth in the cloud is also the paradigm for enterprise cloud DAM like Widen and Bynder. It’s fast becoming the preferred method to allow distributed collaboration, even for people in the same office.
But there’s a rub, at least for now.
Cloud-based applications will not work for some people – at least not yet. The library may be so large that it’s too costly to store it in the cloud. Or you may not have enough bandwidth to upload and download everything in a reasonable time frame. Or storing stuff on other people’s computers may make you uncomfortable. Some of these problems will be solved by the march of technology and some may never be solved.

At the moment, it’s often best to take a hybrid approach where the ultimate source of truth lives in a private archive that is stored on hardware in your own possession. Files can be pushed to the cloud component to be used for distribution and collaboration.
As you decide which system best suits your needs, understanding where “the truth” lives is an essential component for creating distributed access to your collection.
We’ve created an index for The DAM Book 3.0. While this was not terribly necessary for electronic versions of the book, it’s quite helpful for the print version (at the printer now – expected delivery before the end of July).

I’ve never personally created an index before, so this was a learning experience for me. It ended up being a tremendous amount of work – maybe 50 hours of combing through the book, making entries, organizing information and then reorganizing it.
If you have already bought the PDF, you’ll soon get an announcement of the update along with a download link. If you don’t have a copy of the book, the index will give you a very good idea of the breadth and depth of the content it includes.
Here’s a PDF of the Index. You can click the top right to see it full screen, or download it onto your computer.
This post is adapted from The DAM Book 3.0. In that book, I describe the ways that connectivity is changing the way we use visual images. In this post, I outline how embedded media can enable new kinds of connections between people, ideas and commerce.
As connected images become more essential for communication and engagement, image embedding creates a new opportunity to gather and disseminate information. A traditional web page uses images packaged up as JPEGs and sent out as freestanding files. But images can also be displayed using embedding techniques. Embedded images (like embedded videos) reside on a third-party server and are displayed in a frame or window on another site’s web page.

Embedded media offers a direct connection from the server, through the web page or application, all the way to the end user. This can provide a two-way flow of information, as well as the ability to customize the embedded media to suit the needs of the end user with updates, custom advertising or other messaging.
Let’s call these embedded objects, because they are actually more complicated than freestanding images. A YouTube video embedded on a web page is an example of an embedded object. The web page draws a box and asks the YouTube media server to fill that box with a video stream.
There is a live link which runs through the web page, between the viewer’s device and the YouTube server. Because there is a link between YouTube and the viewer, there is a two-way flow of data. This allows YouTube to gather all kinds of information, and it allows YouTube to push customized information back out through the window.
The media server can know who sees an image, how they got there, what they are interested in, who they interact with, what other sites they go to, what they search on and more. And the media server can present customized information to the end viewers based on what it knows about them. Remember, these windows are basically open pipelines that serve up the media on-demand.
Once only for video, now for still images too
Of course, the practice outlined above has been part of the business model for video services for a long time. Videos on web pages have historically been hosted by third-party servers, and we have been accustomed to YouTube ads for a decade. But it’s relatively new for still images, which could always be easily and cheaply added to web pages as JPEGs. The most significant marker for change was the introduction of free embedding by Getty Images.
When the stock photography giant decided to make vast numbers of images available for free embedding, it signaled that embedded objects were going to be an important part of its strategy moving forward. Getty has opened up millions of individual pipelines through blogs and other web pages, with the ability to collect and serve information in service of new business strategies.
The use case for images as platforms for two-way communication should be favorable moving forward. Mobile devices increasingly rely on photos instead of text headlines, and methods for connectivity are improving. In the last few years, we’ve seen several companies hang their business models on embedded image objects.
At this writing, Getty has gotten the most traction in such a service, but others are trying. Retailers are using embedded images as mini storefronts, and mission-driven organizations can use them to spread their messages in a viral manner.
What can you do with Embedded objects?
There are several valuable things you can do with embedded objects that are much harder or impossible with standard JPEGs.
• You can add a level of copyright protection that disables right-click saving.
• You can enable deep zoom features that are managed by the server.
• You can add purchase buttons or “more info” links directly onto the image.
• You can update the image when something changes (e.g. product updates).
Okay, I’m interested – now what?
Making use of embedded media for still photos is an emerging capability. Several companies have taken a run at it, but none has fully cracked the code yet (and even Getty has not publicly disclosed how it intends to monetize the technology). SmartFrame is offering embedding as a service that bolts on to your DAM. The thing I like about its business model is that it works in service of the image owner rather than the middleman, unlike Getty and YouTube.
SmartFrame can help you with security, sharing, tracking and monetizing.
And the International Image Interoperability Framework is also building around this concept. (“Come for the deep zoom, stay for the great metadata interchange.”) I’ll have more on this project in another post.
I’m keeping close watch on this capability, and I’ll report as more information comes in. I first wrote about this topic in 2013 in this post.
Last fall, I did a presentation at B&H on using your camera as a scanner, based on my book Digitizing Your Photos. The webinar provided a pretty detailed overview of the camera scanning process for prints, slides and negatives. For those unfamiliar with the process, or for people who have been struggling to get high-quality scans, there is a lot of good information in here.
I just finished leading my first Maine Media Workshop. It was a week-long intensive workshop that focused on total collection management using Lightroom. Each of my students brought in a large body of work, from 10,000 images up to 400,000. We focused on the processes that would help them preserve, organize and curate their photos.

We had a great dinner Thursday night at Skip’s house.
First a quick shout out to the class – Charlotte, Gary, George, Nancy and Skip. They were an outstanding group to spend a week with: passionate about their photos, eager to learn all they could, and patiently allowing me to spend individual time with each person in the class. They were also a delight to spend a week with – interesting, funny and kind.
I’d also like to thank my teaching assistant Sophie Schwartz and Alyson, who both helped to keep things running smoothly.
The Maine Media Workshop (MMW) provides a great environment for focusing on an area like collection management. It’s a no-frills camp setting on the outskirts of picturesque Rockport/Camden. The total immersion approach allows the class to push through barriers and make substantial progress.

I hope I’ll be back again next year.
True to its new name, there was a lot more at the MMW than photography. The roster of classes included a number on video production, podcasting and writing. The end-of-week slideshow included some very nice work done by the different groups.
It’s been a long time since I’ve done workshops, and it was very gratifying. Now that we have two new books out, we are actively working to get some future workshops lined up. If you are interested in taking a workshop, we have a place for you to tell us what you’re interested in on this page.