Computational Photography

Custom light domes
New camera designs
Novel printing strategies
Computational photography combines hardware and algorithms to capture and reproduce visual information.

In traditional film photography, the scene was projected onto a photosensitive material that was either viewed directly (after development) or used to create an analogue print. Within this framework, the light had to be captured exactly as we wished to view it. As photography evolved, this constraint was slightly relaxed, first by negative film and later by Bayer filters and digital sensors, but by and large we still capture the image as we wish to view it, at least spatially. One of the key advantages of the computational photography framework is that we are not restricted to this ‘identity’ mapping and can instead consider other, more flexible, invertible mappings. This flexibility can lead to cheaper and more efficient devices, as well as to increased precision that enables new sensing possibilities.
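The idea of replacing the identity mapping with a more general invertible one can be sketched in a few lines. The example below is a toy illustration, not a description of any specific camera: a hypothetical coded-capture matrix `A` stands in for the optics, the measurements are no longer directly viewable, and the image is recovered in software by inverting the mapping.

```python
import numpy as np

# Traditional capture: y = x (the identity mapping; measurements ARE the image).
# Computational capture: y = A @ x for some invertible A, with the image
# recovered afterwards in software as x = A^{-1} y.
rng = np.random.default_rng(0)

n = 16                             # tiny "image" with n pixels, flattened
x = rng.random(n)                  # ground-truth scene

A = rng.standard_normal((n, n))    # hypothetical coded-capture matrix
                                   # (a random Gaussian matrix is invertible
                                   # with probability 1)
y = A @ x                          # measurements: not directly viewable

x_hat = np.linalg.solve(A, y)      # computational reconstruction
print(np.allclose(x_hat, x))       # recovered scene matches the original
```

In practice the mapping is fixed by the hardware design rather than chosen at random, and reconstruction must also contend with noise, but the principle is the same: as long as the mapping is invertible, the captured data need not look anything like the final image.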

My current research in this area is motivated by Lippmann photography, a Nobel prize winning technique that captures colour using interference. Having fully modelled this historical process, we are now bringing Gabriel Lippmann’s approach into the realm of computational photography. From a consumer photography viewpoint, this work brings us closer to Lippmann’s perfect photograph: a photograph indistinguishable from looking at the scene through a window. Beyond this, the tools are not limited to the visible spectrum, and applications extend well beyond consumer photography.

Related publications

Journal papers
Conference papers