If necessity is the mother of invention, serendipity might often be the midwife.
On a lazy Sunday morning in 1999, art conservator Paul Messier was trying to keep his two young children entertained while waiting for a hardware store to open. He took them into a photo shop to let them look at the old cameras the owner had lying out for just such occasions.
For Messier, the visit turned out to be much more than a way to kill a few minutes. It proved a pivotal moment both for his career and for a field—computational art history—that at the time had no name. Messier, a well-respected figure in the small fraternity of conservators and appraisers of historic photography, mentioned to the shop owner that he was having difficulty authenticating some photos credited to the early 20th-century sociologist and photographer Lewis Hine.
"I was looking for a materials baseline to compare the questioned prints against, and I assumed that might have existed, almost like wallpaper samples at a hardware store," Messier said. "And you would go through them systematically until you find one that matches the thing you are trying to put into context. That didn't exist anywhere. I ended up talking to the guy about my problem, and he said, 'I have some old manufacturers' sample books in the back,' and he gave them to me."
The exchange was an "Aha!" moment for Messier. Like many an aficionado at the time, he knew photography was on the brink of epochal change as digital cameras all but entirely replaced film. He also gambled that somebody had to establish that baseline of the art's historic raw materials—specifically, the properties of photographic papers dating back more than a century—before they ended up in garbage dumps. He went home, opened an eBay account, and began buying up as many pristine samples of historic photographic paper as possible.
Messier's collection of virgin photographic papers grew to more than 5,000 samples and is considered the authoritative global reference for what he calls photography's "genome." In September 2015, Messier accepted a position as the inaugural head of the Lens Media Lab at Yale University's Institute for the Preservation of Cultural Heritage, which acquired the collection and helped accelerate collaborations he had already begun as an independent conservator, in a consultancy he still runs, to create a globally accessible resource of these papers. That resource, obviously, had to be digital.
Those collaborations, built on relationships with James Coddington, chief conservator at the Museum of Modern Art (MoMA) in New York, and C. Richard Johnson Jr., a pioneer of computational art history at Cornell University, are exploring fundamental signal processing approaches to building a globally accessible dataset for the study of photo paper texture. With Coddington, Messier began honing his ideas in a study of MoMA's Thomas Walther Collection of historic photos; part of the project, the Historic Photographic Paper Classification (HPPC) challenge, conducted under Johnson's direction, has garnered worldwide interest.
"Most packages of photographic paper give you some sort of qualitative sense of four basic variables: texture, color, gloss, and thickness," Messier said. "Thickness, color, and gloss are very easy to measure. There are instruments to do it, methodologies to do it, and standards to apply. But texture was really difficult. We wanted to create a method that was relatively inexpensive, and also highly repeatable so we could deploy it widely."
Messier first recruited his brother, an engineer at the Lincoln Laboratory of the Massachusetts Institute of Technology, to build a homemade LED array on a halved plastic baseball, dubbed the "monkey brain," to map paper texture using reflectance transformation imaging (RTI). In RTI, a texture map is built from a set of digital photographs of an object, each taken by a stationary camera under a different lighting direction.
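The core idea behind an RTI-style capture can be sketched with a photometric-stereo calculation: if each pixel's brightness is assumed to vary with the angle between the surface and each known light direction (a Lambertian model), a small least-squares problem per pixel recovers the surface orientation. This is a minimal illustration of the principle, not the RTI software Messier's team used; all function and variable names here are invented for the example.

```python
import numpy as np

def estimate_normals(images, light_dirs):
    """Estimate per-pixel surface normals from RTI-style captures.

    images:     array of shape (k, H, W), one grayscale image per light.
    light_dirs: array of shape (k, 3), unit vectors pointing at each light.

    Assumes a Lambertian surface, so intensity ~ albedo * (normal . light);
    each pixel then yields a 3-unknown least-squares problem.
    """
    k, H, W = images.shape
    I = images.reshape(k, -1)                           # (k, H*W)
    # Solve light_dirs @ G = I in the least-squares sense; G = albedo * normal.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)              # unit normals
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```

With four or more well-spread light directions, the per-pixel normals describe the paper's surface relief, from which a texture map can be rendered under any virtual light.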
While RTI at first seemed a suitable approach, Messier ultimately created a training set of micrographs using single-point, obliquely angled raking light, which uses pixel brightness as a proxy for height (the brighter the pixel, the higher the point). The approach is natively digital, and is already widely used not only in art scholarship but in many other disciplines, such as satellite imaging of the Earth's surface.
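Once brightness stands in for height, a raking-light micrograph can be reduced to numerical texture descriptors that are easy to compare across labs. The features below (RMS roughness and a radially averaged power spectrum) are common, illustrative choices, not the measures Messier's lab actually publishes:

```python
import numpy as np

def texture_features(micrograph):
    """Simple texture descriptors for a raking-light micrograph.

    Treats pixel brightness as a proxy for surface height (brighter = higher).
    Returns RMS roughness plus the radially averaged power spectrum, a
    rotation-insensitive signature of the paper's dominant texture scales.
    """
    h = micrograph - micrograph.mean()       # zero-mean "height" field
    rms = np.sqrt((h ** 2).mean())           # RMS roughness

    # Power spectrum, averaged over rings of equal spatial frequency.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(h))) ** 2
    H, W = h.shape
    y, x = np.indices((H, W))
    r = np.hypot(y - H // 2, x - W // 2).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), spectrum.ravel()) / np.maximum(counts, 1)
    return rms, radial
```

Because the inputs are just grayscale images taken under a standardized light, two labs imaging the same paper stock should land on closely matching descriptors, which is what makes wide deployment plausible.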
"We can produce highly repeatable images that can be made anywhere and shared anywhere," he said.
Raking light images, also widely used in the study of paintings, are readable by the unaided human eye, but as Messier, Johnson, and their co-authors noted in the 2014 paper reporting the results of the HPPC challenge, "the sheer number and diversity of textures used for historic papers prohibits efficient visual classification."
Four teams of researchers, drawn from three institutions in France, Tilburg University in the Netherlands, Worcester Polytechnic Institute in Worcester, MA, and the University of Wisconsin, successfully created classification schemes for the training dataset Messier created (a fifth team also met the challenge post-publication). Now, Messier said, the task is to create data platforms and collaborative relationships that can make the texture data globally available. To some extent, he said, he will look to the relationships Johnson built within the fine art community through the Thread Count Automation Project, begun in 2007, which uses signal processing algorithms to analyze the weave of canvas used in paintings, particularly those painted before 1900.
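At its simplest, the classification task amounts to ranking reference papers by how closely their texture signatures match an unknown print. The challenge teams used far more sophisticated methods; the cosine-similarity ranking below is only a hypothetical stand-in to show the shape of the problem, with invented names throughout:

```python
import numpy as np

def rank_matches(query, references):
    """Rank reference papers by texture similarity to a query print.

    query:      1-D feature vector for the unknown print
                (e.g., a radially averaged power spectrum).
    references: dict mapping paper IDs to feature vectors.

    Returns paper IDs ordered from best to worst cosine similarity.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(references, key=lambda k: cos(query, references[k]),
                  reverse=True)
```

A conservator could then inspect only the top-ranked candidates by eye, turning an infeasible visual search over thousands of samples into a short, targeted one.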
Both Johnson and Messier say the two projects are not strictly coupled, but conceptually, Messier said, "they are definitely linked in that they are bringing fairly sophisticated computer science techniques to the humanities, to the study of works of art, and this is something that is really very much in the beginning stage. Humanities people are so used to working with our subjective perception. Our training is experience-based. It takes so long to become a conservator and have this sensibility drilled into you."
Messier said it is now his mission, as he builds the visibility and credibility of digitizing historic photography's "genome," to convince traditionalists within the arts community that developing computational methods of analysis will not be an either-or proposition for the profession.
"The methodologies and ideas are based on materials science, based on these new quantification methods, based on numbers, yes. But what those numbers can do is unlock new interpretations that provide new insights, that can unlock the 'genome' in ways previous tools did not provide; put it to work in the service of scholars on the humanities side, and make it accessible to them."
Gregory Goth is an Oakville, CT-based writer who specializes in science and technology.