HDRI panorama pipeline

A semi-automatic workflow for creating HDR panorama images for 3D renderings using Python with OpenCV

tl;dr: OpenCV offers a workable approach for generating HDRIs for 3D rendering, but PTGui remains essential for the final stitching phase.

Capturing lighting from real-world locations and replicating it in a virtual 3D environment has long been a powerful technique in achieving photorealistic rendering. HDRI panoramas allow us to record ambient lighting conditions using a camera setup and then translate them into 3D scenes, preserving the intensity, color, and reflections of the environment in a way that’s nearly impossible to simulate from scratch.

Capturing process

I shot exposure bracket sequences of five images each, spaced 2 exposure values (EV) apart. I used my full-frame Nikon D750 DSLR1 with a 14–24mm f/2.8 lens, set to 14mm at f/8 (ISO 100).

1/125 sec
1/4000 sec
1/1000 sec
1/15 sec
1/2 sec

To capture a full 360° panorama I shot 12 sequences with the camera pitched 45° up (rotating 30° on the yaw axis between sequences) and 12 sequences with the camera pitched 45° down. To be on the safe side I also captured one sequence at a pitch angle of 90° up. All images were captured as full-resolution *.NEF files (Nikon’s proprietary RAW format) in 16 bit.

Capturing HDRIs on a historic location (Bergfried/Donjon Burgruine Kallmünz) with a tripod, a panorama head (Roundabout NP), the Nikon D750 and the Nikkor 14–24mm f/2.8. (Photo by Henrike – thank you!)

The Standard Workflow

There’s already a well-established process for creating high-quality HDRI panoramas, as detailed by Greg Zaal in his excellent tutorial on Poly Haven. Greg even developed a custom tool to automate parts of the process.

That said, the pipeline he describes is fairly complex and fragile. It depends on a specific software stack, including:

  • Hugin (version 2021)
  • PTGui
  • Luminance HDR (version 2.4.0)
  • Blender (version 2.79)
  • RawTherapee
  • And Greg’s own custom application

While the output is generally solid, I’ve encountered issues with artifacts in some HDRIs (see example image below). Additionally, I’ve found the color reproduction to be overly saturated in some cases—not quite true to life.

Demo output when following the pipeline: There seem to be issues in the HDR merging, so there are visible differences between the exposure bracket sequences. Although the stitching is correct, the bottom row is too dark and blurry (left arrow points to the seams). Also, the color rendition is too vibrant (right arrow).

My Approach: Simplify with Python and OpenCV

To streamline this pipeline, I decided to replace several components (i.e. Hugin, Luminance HDR, Blender, and Greg’s tool) with a single Python script. The idea is to simplify HDRI creation: instead of juggling multiple apps in the background, the user simply provides a folder path and the number of exposures per bracket. The script then:

  1. Reads exposure times directly from the images’ EXIF metadata using exifread
  2. Groups the images into brackets based on exposure values
  3. Generates HDRIs from these brackets using OpenCV’s cv.createMergeDebevec() function (a minimal sketch follows below).
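The core of the script boils down to a few exifread and OpenCV calls. Here is a minimal sketch of those three steps, assuming the bracket images have already been converted to 8-bit TIFs, are sorted by filename in capture order, and share a fixed number of exposures per bracket (the folder layout and helper names are mine, not taken from the actual tool):

import glob
import cv2 as cv
import numpy as np
import exifread

def read_exposure(path):
    # Step 1: read the exposure time from the EXIF metadata
    with open(path, "rb") as f:
        tags = exifread.process_file(f, details=False)
    return float(tags["EXIF ExposureTime"].values[0])

def merge_brackets(folder, shots_per_bracket):
    files = sorted(glob.glob(f"{folder}/*.tif"))
    hdris = []
    # Step 2: group the images into brackets of `shots_per_bracket` exposures
    for i in range(0, len(files), shots_per_bracket):
        bracket = files[i:i + shots_per_bracket]
        images = [cv.imread(p) for p in bracket]  # 8-bit BGR (see the caveat below)
        times = np.array([read_exposure(p) * 10000 for p in bracket],
                         dtype=np.float32)        # exposure scaling, explained below
        # Step 3: merge the bracket into a single 32-bit float HDR image
        hdr = cv.createMergeDebevec().process(images, times)
        hdris.append(hdr)
    return hdris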

Caveat: OpenCV Only Accepts 8-bit Input

Here’s a big limitation: OpenCV’s merging function only works with 8-bit images. That means no 16-bit RAW or TIF input. My tool handles the conversion from RAW to 8-bit automatically, but the question remains: how much does this impact quality?

import cv2 as cv
import numpy as np

# Convert whatever comes in (e.g. a 16-bit TIF loaded with
# cv.imread(path, cv.IMREAD_UNCHANGED)) to the 8-bit input that
# cv.createMergeDebevec() expects
if img.dtype != np.uint8:
    if img.dtype == np.uint16:
        # 16-bit integers: rescale the 0..65535 range down to 0..255
        img = cv.convertScaleAbs(img, alpha=255.0 / 65535.0)
    elif img.dtype == np.float32 or img.dtype == np.float64:
        # Floating-point data: normalize to 0..255, then cast
        img_norm = cv.normalize(img, None, 0, 255, cv.NORM_MINMAX)
        img = img_norm.astype(np.uint8)
    else:
        # Anything else: take the absolute value and saturate to 8-bit
        img = cv.convertScaleAbs(img)

Later on, I’ll provide a side-by-side comparison to evaluate whether using 16-bit RAWs processed via RawTherapee (and exported as 16-bit TIFs) is really necessary, or if 8-bit images are “good enough” for practical use.

The Critical Bit: Exposure Time Handling

Proper HDR merging requires exact exposure time mapping for each image in a bracket. My script extracts these times from the EXIF metadata, which usually stores them either as:

  • Fractions: 1/8000 sec or 1/400 sec, etc.
  • Decimals: 0.6 sec or 2.0 sec, etc.

Now here’s the tricky part: OpenCV’s documentation is, frankly, misleading about how to format these exposure times. It claims that you must provide inverted exposure values, so 1/400 sec should become 400.2 Other tutorials contradict this and suggest using actual exposure times (1/400),3 or converting them to floats (0.0025 for 1/400).

None of these methods worked reliably.

Through trial and error, I discovered the following approach:

  • Do not invert the exposure times.
  • Convert them to float (1/400 becomes 0.0025)
  • Then multiply by 10,000, so that an EXIF reading of 1/400 becomes 25

This step is undocumented anywhere I could find, but it made all the difference. OpenCV’s createMergeDebevec() began delivering consistent and meaningful HDR results only after applying this transformation.

# `tags` comes from exifread.process_file(); the ExposureTime value is a rational number
tag = tags.get("EXIF ExposureTime")
corrected_exposure = float(tag.values[0] * 10000)

This snippet demonstrates quite well what needs to be done: first, the exposure time is extracted from the EXIF metadata by looking up the key "EXIF ExposureTime". This value is then multiplied by 10,000 and cast to float (this cast might not be necessary – I will investigate this!).


Code availability

The source code, along with some basic instructions, is available on GitHub.

Test Setup: Comparing HDRI Pipelines for Raytrace Rendering

To evaluate the effectiveness of different HDRI workflows, I set up a simple test scene in Blender using the Cycles renderer. Each scene was rendered with 100 samples, standard light bounce settings, and no fast GI approximation. Since each HDRI variant produced noticeably different exposure levels (i.e., overall image brightness), I manually adjusted the environment intensity within Blender’s node editor to normalize the output across tests.
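For context, this is roughly what that intensity adjustment looks like when done through Blender’s Python API rather than by hand in the node editor (a sketch only; the node names are Blender’s defaults, the file path is a placeholder, and the strength value is adjusted per HDRI):

import bpy

# Light the scene with the HDRI via the world's node tree (Cycles)
world = bpy.context.scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
links = world.node_tree.links

env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("/path/to/panorama.exr")  # placeholder path

background = nodes["Background"]
background.inputs["Strength"].default_value = 2.0           # normalize brightness per HDRI

links.new(env.outputs["Color"], background.inputs["Color"])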

Screenshot of the test scene from the shading node editor in Blender

I deliberately used an HDRI that was captured in a rather dark environment, with muted colors and a single very direct light source. Have a look at this environment (compressed to 8bit via tone mapping) here:
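(As an aside, producing such an 8-bit tone-mapped preview from a merged HDR only takes a few lines in OpenCV. The sketch below assumes the merged result is a float32 array called hdr; the tone-mapping operator and its parameters are my choice, not necessarily what was used for the preview above.)

import cv2 as cv
import numpy as np

# `hdr` is the 32-bit float output of cv.createMergeDebevec().process(...)
tonemap = cv.createTonemapDrago(gamma=2.2)
ldr = tonemap.process(hdr)                              # roughly 0..1 float image
preview = np.clip(ldr * 255, 0, 255).astype(np.uint8)   # compress to 8 bit
cv.imwrite("preview_8bit.jpg", preview)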

Workflows Compared

I tested the following five combinations:

1. Greg’s Full Workflow (Poly Haven Reference)

Image capture as 16-bit RAW. Conversion to 16-bit TIFs using RawTherapee. HDRI merging and panorama stitching via Greg’s custom tool (which uses LuminanceHDR, Hugin, and Blender under the hood). Final panorama stitching in PTGui. Import into Blender as a 32-bit EXR

Rendering with pipeline 1. No exposure compensation in Blender’s shader editor (Strength = 1.000).

2. My OpenCV-Based Test Workflow

Image capture as 16-bit RAW. Conversion to 16-bit TIFs using RawTherapee. Conversion to 8-bit for compatibility. HDRI merging using OpenCV. Final panorama stitching with PTGui. Import into Blender as a 32-bit HDR

Rendering with pipeline 2. Active exposure compensation in Blender’s shader editor (Strength = 2.000).

3. Naive PTGui Workflow

Image capture as 16-bit RAW. Direct import of exposure sequences into PTGui. Merging to HDRI and stitching panorama entirely within PTGui. Import into Blender as a 32-bit EXR

Rendering with pipeline 3. Exposure compensation active in Blender’s shader editor (Strength = 10.000).

4. OpenCV + Hugin Hybrid Workflow

Image capture as 16-bit RAW. Conversion to 16-bit TIFs using RawTherapee. Conversion to 8-bit for OpenCV. HDRI merging via OpenCV. Final panorama stitching using Hugin. Import into Blender as a 32-bit HDR

Rendering with pipeline 4. No exposure compensation in Blender’s shader editor (Strength = 1.000).

5. Control Group

Image capture as 16-bit RAW. Conversion to 16-bit TIFs with RawTherapee. HDRI merging and stitching using Greg’s tool (again relying on LuminanceHDR, Hugin, and Blender). Final panorama creation with PTGui. Import into Blender as an 8-bit TIF (!) instead of a 32-bit EXR

Rendering with pipeline 5. Exposure compensation active in Blender’s shader editor (Strength = 0.500). This rendering was done to evaluate the differences between using true 32bit HDRIs and 8bit LDRIs when lighting a scene.

Conclusions

Hugin as stitching software

I agree with Greg4 to the extent that it’s hardly possible to produce meaningful results with Hugin as the panorama stitching software. Although I didn’t have any problems with the stitching per se, the color rendering of the output panorama takes some getting used to. (Hugin also takes significantly more time than PTGui to search for control points, because the default settings do not use the GPU – this can be changed easily, though.) Hugin seems to have a hard time finding enough control points, even when the source images were shot under optimal conditions.

Section from a panorama that was stitched with Hugin: Some strange artifacts appear and the color rendition seems to be off.

Using 8bit images as source

Depending on the scene, it is possible to produce usable HDRIs from 8bit source images, as long as your highlights are not burned out. Personally, I preferred the rendering that used OpenCV and 8bit TIFs (workflow No. 2 above), although more tests are necessary.


Footnotes

  1. It is worth mentioning that I did not use my Nikon D800, since it only allows a maximum spacing of 1 EV in automatic bracketing mode, which is rarely sufficient for most lighting situations. ↩︎
  2. See OpenCV’s documentation here: http://man.hubwiz.com/docset/OpenCV.docset/Contents/Resources/Documents/d3/db7/tutorial_hdr_imaging.html ↩︎
  3. See e.g. https://learnopencv.com/high-dynamic-range-hdr-imaging-using-opencv-cpp-python/ ↩︎
  4. https://blog.polyhaven.com/how-to-create-high-quality-hdri/ ↩︎