The contrast in real world scenes is often beyond what
consumer cameras can capture. For these situations, High
Dynamic Range (HDR) images can be generated by taking
multiple exposures of the same scene. When fusing information
from different images, however, even a slight change in the
scene can introduce artifacts in the resulting image that
dramatically limit the potential of this approach. We
present a technique capable of dealing with a large amount
of movement in the scene: we find, in all the available exposures,
patches consistent with a reference image previously
selected from the stack. We generate the HDR image by
averaging the radiance estimates of all such regions and
we compensate for camera calibration errors by removing
potential seams. We show that our method works even in
cases with many moving objects covering large regions of
the scene.
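The fusion step can be pictured with a short sketch. The Python code below is only an illustration under simplifying assumptions: a linear (or pre-linearized) camera response, known exposure times, and a simple relative-error test in the radiance domain standing in for the consistency measure described in the paper; the patch size and tolerance are arbitrary, and the seam-removal step is omitted.

sketch (Python):
import numpy as np

def to_radiance(img, exposure_time):
    # Radiance estimate under an assumed linear (or pre-linearized) response.
    return img.astype(np.float64) / exposure_time

def fuse_patch_based(stack, exposure_times, ref_index, patch=16, tol=0.15):
    # Fuse an exposure stack patch by patch: for each patch, average the
    # radiance estimates of the exposures that are consistent with the
    # reference; inconsistent patches fall back to the reference alone.
    rads = [to_radiance(img, t) for img, t in zip(stack, exposure_times)]
    ref = rads[ref_index]
    hdr = np.zeros_like(ref)
    h, w = ref.shape[:2]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            sl = (slice(y, min(y + patch, h)), slice(x, min(x + patch, w)))
            acc, count = ref[sl].copy(), 1
            for k, rad in enumerate(rads):
                if k == ref_index:
                    continue
                # Simplified consistency test: mean relative radiance error
                # against the reference patch must stay below a threshold.
                err = np.mean(np.abs(rad[sl] - ref[sl]) / (ref[sl] + 1e-6))
                if err < tol:
                    acc += rad[sl]
                    count += 1
            hdr[sl] = acc / count
    return hdr

In this toy version, a block that disagrees with the reference (for example because an object moved) simply keeps the reference radiance, which avoids ghosting at the cost of more noise in that region.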
News:
09-04-2009: The results shown in the paper, exactly as output by our algorithm, are now available for download
here.
07-09-2009: The stacks used in the paper are now available for download
here. Please see the text file included in the zip for information about usage and licensing.
bibtex:
@inproceedings{GalloICCP09,
    title = {Artifact-free High Dynamic Range Imaging},
    author = {Gallo, O. and Gelfand, N. and Chen, W. and Tico, M. and Pulli, K.},
    booktitle = {IEEE International Conference on Computational Photography (ICCP)},
    year = {2009},
    month = {April}
}