In a little more than a week, I’ll be headed to SXSW for the fifth year in a row. I’ll be speaking again this year, discussing how to programmatically add meaning to photos and other visual media (more on that later). I will have the pleasure of sharing the stage with Anna Dickson from Google and Ramesh Jain, professor of Information and Computer Science at UC Irvine.
2016 SXSW presentation with Dennis Keeley
Anna and I have been talking about these issues for years, ever since we met at the Palm Springs Photo Festival in 2013. We've been on stage together a number of times, and it's always an entertaining and enlightening discussion with her. Anna's current work at Google is centered on deriving a deeper level of context about photographs through computer vision, linked data, and more.
I met Ramesh at the LDV Vision Summit last year, and we immediately hit it off over a shared interest in pushing computer vision beyond simple recognition of objects and into the complex realm of meaning. He's working with his grad students on a data model for an Internet of Events that can represent and link geotemporal events. He's a brilliant guy, and coincidentally was Thomas Knoll's professor at the University of Michigan when Knoll wrote the first version of Photoshop. What goes around, comes around.
In our presentation, we'll be examining how to think beyond what computer vision and analysis alone can add. How does the intent of the user get factored in? How can you use external data to understand visual media objects, and how can visual media, as a carrier of rich data, help build out a better understanding of real-world events?
Thanks to PhotoShelter for helping to make this possible. I'm really excited that our CEO Andrew Fingerman will be attending.