Situated between computer graphics and computer vision, Image-Based Modeling and Rendering (IBMR) methods rely on a set of images of a scene (Image-Based) to generate a three-dimensional model (Modeling) and/or novel views (Rendering) of that scene.
The traditional approach of computer graphics has been to create a geometric model in 3D and reproject it onto a two-dimensional image. Computer vision, conversely, is mostly focused on detecting features in pictures and interpreting them as three-dimensional clues. Image-Based Modeling and Rendering makes it possible to use one or several two-dimensional images to generate novel two-dimensional images directly, skipping the manual modeling stage.
Instead of considering only the physical model of a solid, IBMR methods usually focus more on light modeling. The fundamental concept behind IBMR is therefore the plenoptic illumination function, which is a parametrisation of the light field. The plenoptic function describes the light rays contained in a given volume. It can be represented with seven dimensions: a ray is defined by its position $ (x,y,z) $, its orientation $ (\theta,\phi) $, its wavelength $ (\lambda) $ and its time $ (t) $: $ P (x,y,z,\theta,\phi,\lambda,t) $. IBMR methods try to approximate the plenoptic function in order to render a novel set of two-dimensional images from an existing one. Given the high dimensionality of this function, most methods impose constraints to reduce the number of dimensions (typically to two to four).
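As an illustration of such a dimensionality reduction, the sketch below assumes a static scene with fixed wavelength and rays restricted to a bounding volume, which reduces the 7D plenoptic function to a 4D light field in the common two-plane parametrisation: a ray is indexed by where it crosses a camera plane $(u,v)$ and a focal plane $(s,t)$. The array layout and the interpolation scheme are illustrative assumptions, not a specific published implementation.

```python
import numpy as np

def sample_light_field(lf, u, v, s, t):
    """Look up the radiance of the ray (u, v, s, t) in a 4D light field.

    lf: 4D array indexed as lf[u, v, s, t], storing radiance samples.
    u, v: continuous coordinates on the camera plane.
    s, t: continuous coordinates on the focal plane.
    """
    # Round (u, v) to the nearest stored camera position, and
    # bilinearly interpolate over the focal-plane coordinates (s, t).
    ui, vi = int(round(u)), int(round(v))
    s0, t0 = int(np.floor(s)), int(np.floor(t))
    ds, dt = s - s0, t - t0
    s1 = min(s0 + 1, lf.shape[2] - 1)
    t1 = min(t0 + 1, lf.shape[3] - 1)
    return ((1 - ds) * (1 - dt) * lf[ui, vi, s0, t0]
            + ds * (1 - dt) * lf[ui, vi, s1, t0]
            + (1 - ds) * dt * lf[ui, vi, s0, t1]
            + ds * dt * lf[ui, vi, s1, t1])

# Toy example: radiance varies linearly with s, so bilinear
# interpolation reproduces intermediate values exactly.
lf = np.zeros((2, 2, 4, 4))
for s in range(4):
    lf[:, :, s, :] = s
print(sample_light_field(lf, 0, 0, 1.5, 2.0))  # 1.5
```

Sampling this reduced function for every pixel of a virtual camera is, in essence, how light-field rendering methods such as the Lumigraph synthesize novel views without an explicit geometric model.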
Some well-known IBMR methods and algorithms include the following: View Morphing generates transitions between images; QuickTime VR renders panoramas using image mosaics; the Lumigraph relies on a dense sampling of the scene; and Space Carving generates a 3D model based on a photo-consistency check.
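To give a flavour of the simplest of these ideas, the sketch below shows only the blending step used in morphing-style view interpolation: given two images already warped into a common rectified frame, an in-between view at parameter alpha is a linear combination of the two. Full View Morphing additionally prewarps the inputs so their image planes are parallel and postwarps the result; those steps are omitted here, and the function name is illustrative.

```python
import numpy as np

def blend_views(img0, img1, alpha):
    """Linearly interpolate two aligned views; alpha in [0, 1].

    alpha = 0 returns img0, alpha = 1 returns img1, and
    intermediate values give a smooth cross-dissolve.
    """
    return (1.0 - alpha) * img0 + alpha * img1

# Toy example: halfway between a dark and a bright view.
img0 = np.zeros((2, 2))        # dark view
img1 = np.full((2, 2), 100.0)  # bright view
mid = blend_views(img0, img1, 0.5)
print(mid[0, 0])  # 50.0
```

On its own this is just a cross-dissolve; the prewarping step is what makes the interpolated result correspond to a physically valid intermediate viewpoint.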