Differentiable Environment Map Failing Backpropagation in Mitsuba 3

I'm trying to set up a scene in Mitsuba 3 where I optimize an environment map parameter, emitter.data. This seems like it should be possible given their caustics optimization tutorial and the fact that they do exactly this in Mitsuba 2.

I've confirmed that manually changing my environment map's bitmap, emitter.data, changes the lighting of the rendered image as expected, but backprop fails with an error stating that the loss does not depend on this parameter.


Minimal Example
Below is a short script showing the issue. All it does is create a small, uniform environment map, load a basic scene, render it, and compute a dummy loss against a constant reference image (just to test gradient flow).

import mitsuba as mi
import drjit as dr
import numpy as np

mi.set_variant('llvm_ad_rgb')  # I also tried 'cuda_ad_rgb' with the same result

# Create a uniform environment map
env_width, env_height = 256, 128
env_data = mi.Bitmap(np.full((env_height, env_width, 3), 0.5, dtype=np.float32))

# Define a minimal scene that uses the above environment map
scene_dict = {
    "type": "scene",
    "emitter": {
        "type": "envmap",
        "bitmap": env_data,    # The parameter in question
        "scale": 1.0
    },
    "integrator": {
        "type": "path",
        "max_depth": 4
    },
    "sensor": {
        "type": "perspective",
        "fov": 45,
        "to_world": mi.ScalarTransform4f().look_at(
            origin=[0, 0, 5],
            target=[0, 0, 0],
            up=[0, 1, 0]
        ),
        "film": {
            "type": "hdrfilm",
            "width": 256,
            "height": 256
        },
        "sampler": {
            "type": "independent",
            "sample_count": 16
        }
    },
}

scene = mi.load_dict(scene_dict)

# Access and enable gradient tracking on the environment map data
params = mi.traverse(scene)
params.keep(['emitter.data'])
dr.enable_grad(params['emitter.data'])

# Render (forward pass)
image = mi.render(scene, spp=16)

# Create a dummy reference image (as a tensor, so it can be subtracted
# from the rendered image) with the same shape as the render
dummy_reference = mi.TensorXf(np.full((256, 256, 3), 0.5, dtype=np.float32))

# Compute a dummy loss for testing backprop
loss = dr.mean(dr.square(image - dummy_reference))

# Attempt backprop - this is where the error below occurs
dr.backward(loss)

Error Message

RuntimeError: drjit.backward_from(): the argument does not depend on the input variable(s) being differentiated...

Observations

  • The forward pass definitely depends on emitter.data (e.g., if I multiply it by 0.1 before rendering, the scene darkens; see the sketch after this list).
  • Backprop, however, claims the final loss has no dependency on emitter.data.
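
For reference, here is roughly the check behind the first observation (a minimal sketch, assuming the scene and params objects from the script above):

# Scale the environment map down by 10x and push the change into the scene
params['emitter.data'] = params['emitter.data'] * 0.1
params.update()

# Re-rendering now produces a visibly darker image,
# confirming the forward pass reads emitter.data
darker_image = mi.render(scene, spp=16)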

Questions

  • Has anyone else run into this issue when differentiating environment maps in Mitsuba 3?
  • Am I missing something, or is this a bug?

Thanks!


1 Answer


Answer thanks to njroussel:

The call to mi.render needs to be given the differentiable parameters explicitly:

mi.render(scene, params=params, spp=16)
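
For context, only the render call changes; the rest of the original script stays as-is. A minimal sketch of the corrected tail, assuming the same scene, params, and dummy_reference as above:

# Passing params registers the differentiable scene parameters with
# mi.render's custom AD operation, so the rendered image (and the
# loss computed from it) actually depends on emitter.data
image = mi.render(scene, params=params, spp=16)

loss = dr.mean(dr.square(image - dummy_reference))
dr.backward(loss)  # gradients now flow back to emitter.data

Without the params argument, the rendering runs detached from the AD graph, which is why manual edits to emitter.data still change the forward image while backprop reports no dependency.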
