This month I completed a project compositing 3D rendered images into photographs. It helped me better understand how to composite 3D render layers and passes. The term ‘compositing’ has a few different meanings and is sometimes misunderstood, so I want to try to explain it here.
In most of the popular 3D applications, the user has the option to render in separate layers or elements. For instance, in a scene with a boy and his dog standing in a park beside a tree, the tree and the rest of the environment could exist on one layer, while the boy and the dog could be rendered on another with proper alpha channels to preserve transparency. These layers can then be broken down further into passes: diffuse, specular, shadow, reflection/refraction, translucency, ambient occlusion, indirect lighting, and others as needed. This allows maximum control over every part of the image during post-processing.
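The two operations described above can be sketched in a few lines of numpy. This is a minimal illustration, not any particular application's implementation: the function names are mine, the passes are assumed to be linear float RGB arrays, and the foreground is assumed to be premultiplied by its alpha (the usual convention for the "over" operator).

```python
import numpy as np

def recombine_passes(diffuse, specular, reflection):
    """Additive passes simply sum back into the 'beauty' render.
    All inputs are HxWx3 float arrays in linear color."""
    return diffuse + specular + reflection

def over(fg_rgb, fg_alpha, bg_rgb):
    """Porter-Duff 'over': layer a premultiplied foreground (e.g. the
    boy-and-dog layer) on top of a background (e.g. the environment).
    fg_alpha is HxWx1 so it broadcasts across the RGB channels."""
    return fg_rgb + bg_rgb * (1.0 - fg_alpha)

# Tiny worked example: a half-transparent green foreground over a red background.
bg = np.full((1, 1, 3), [1.0, 0.0, 0.0])
fg = np.full((1, 1, 3), [0.0, 0.5, 0.0])   # premultiplied by alpha = 0.5
alpha = np.full((1, 1, 1), 0.5)
result = over(fg, alpha, bg)
```

Rendering shadow or ambient-occlusion passes separately works the same way, except those are typically multiplied onto the result rather than added, which is exactly why having them on their own layers gives so much control in post.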
The links below are the result of close to a month of work on CG lighting and compositing. I shot the images using a 6″ chrome gazing ball, which I rigged to a tripod with a cheap DIY mount I came up with. To create the HDRIs, I used a combination of Photoshop and HDRShop 1.0, then converted them to lat-long (equirectangular) coordinates. I then used the images to learn how to composite 3D models onto photo backgrounds.
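The mirror-ball-to-lat-long conversion that HDRShop performs can be sketched as an inverse resample: for each pixel of the output panorama, compute its world direction, find the ball surface normal that reflects the camera's view ray into that direction, and look up the corresponding ball pixel. The sketch below is my own simplification (hypothetical function name, orthographic camera looking down -Z, nearest-neighbour sampling), not HDRShop's actual code.

```python
import numpy as np

def latlong_from_mirrorball(ball, out_h=256, out_w=512):
    """Resample a square mirror-ball photo into a lat-long panorama.
    ball: HxWx3 float array; the ball is assumed to fill the frame.
    Assumes an orthographic camera looking down -Z at the ball."""
    # World direction for each lat-long pixel.
    theta = (np.arange(out_w) + 0.5) / out_w * 2 * np.pi - np.pi  # azimuth
    phi = np.pi / 2 - (np.arange(out_h) + 0.5) / out_h * np.pi    # elevation
    phi, theta = np.meshgrid(phi, theta, indexing="ij")
    dx = np.cos(phi) * np.sin(theta)
    dy = np.sin(phi)
    dz = -np.cos(phi) * np.cos(theta)   # -Z points from camera toward the ball
    # The ball normal that reflects view ray v=(0,0,-1) into d is
    # proportional to d - v = (dx, dy, dz + 1).
    nx, ny, nz = dx, dy, dz + 1.0
    norm = np.sqrt(nx**2 + ny**2 + nz**2)
    norm = np.maximum(norm, 1e-8)       # guard the rim singularity (d = -v)
    u = nx / norm                        # normal's x/y give ball coords in [-1, 1]
    v = ny / norm
    h, w = ball.shape[:2]
    col = np.clip(((u + 1) / 2 * (w - 1)).round().astype(int), 0, w - 1)
    row = np.clip(((1 - (v + 1) / 2) * (h - 1)).round().astype(int), 0, h - 1)
    return ball[row, col]
```

Note the one direction a single ball photo cannot capture well: the region directly behind the ball (the rim), which is why probe photos are often shot from two angles and merged, as I effectively did when assembling these HDRIs.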
I hope these will be of use to someone besides just me. The two .zip files contain the 23 HDRIs that I created.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
Based on a work at stringhamd.wordpress.com.