I'm trying to set up a scene in Mitsuba 3 where I optimize an environment map parameter, emitter.data. This seems like it should be possible given the caustics optimization tutorial and the fact that they do exactly this in Mitsuba 2.
I've confirmed that manually changing my environment map's bitmap (emitter.data) properly changes the lighting of the rendered image, but during backprop I get an error stating that the loss does not depend on this parameter.
Minimal Example
Below is a short script showing the issue. All it does is create a small, uniform environment map, load a basic scene, render, then compute a dummy loss against a constant reference image (just to test gradient flow).
import mitsuba as mi
import drjit as dr
import numpy as np

mi.set_variant('llvm_ad_rgb')  # I also tried 'cuda_ad_rgb' with the same result

# Create a uniform environment map
env_width, env_height = 256, 128
env_data = mi.Bitmap(np.full((env_height, env_width, 3), 0.5, dtype=np.float32))

# Define a minimal scene that uses the above environment map
scene_dict = {
    "type": "scene",
    "emitter": {
        "type": "envmap",
        "bitmap": env_data,  # The parameter in question
        "scale": 1.0
    },
    "integrator": {
        "type": "path",
        "max_depth": 4
    },
    "sensor": {
        "type": "perspective",
        "fov": 45,
        "to_world": mi.ScalarTransform4f().look_at(
            origin=[0, 0, 5],
            target=[0, 0, 0],
            up=[0, 1, 0]
        ),
        "film": {
            "type": "hdrfilm",
            "width": 256,
            "height": 256
        },
        "sampler": {
            "type": "independent",
            "sample_count": 16
        }
    },
}
scene = mi.load_dict(scene_dict)

# Access and enable gradient tracking on the environment map data
params = mi.traverse(scene)
params.keep(['emitter.data'])
dr.enable_grad(params['emitter.data'])

# Render (forward pass)
image = mi.render(scene, spp=16)

# Create a dummy reference image with the same dimensions as the rendered image
dummy_reference = mi.TensorXf(np.full((256, 256, 3), 0.5, dtype=np.float32))

# Compute a dummy loss for testing backprop
loss = dr.mean(dr.square(image - dummy_reference))

# Attempt backprop - fails with the error below
dr.backward(loss)
Error Message
RuntimeError: drjit.backward_from(): the argument does not depend on the input variable(s) being differentiated...
Observations
- The forward pass definitely depends on emitter.data (e.g., if I multiply it by 0.1 before rendering, the scene darkens).
- Backprop, however, claims the final loss has no dependency on emitter.data.
Questions
- Has anyone else run into this issue when differentiating environment maps in Mitsuba 3?
- Am I missing something, or is this a bug?
Thanks!
asked Feb 23 at 0:17 by Anson Savage

1 Answer
Answer (thanks to njroussel):
The call to render needs to specify the parameters:

image = mi.render(scene, params=params, spp=16)
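Folding the fix back into the script from the question, a minimal end-to-end sketch might look like the following. Two details beyond the original post are assumptions on my part: the integrator is swapped to 'prb' (path replay backpropagation), which is the Mitsuba 3 integrator intended for reverse-mode differentiation, and the resolutions are reduced so the example runs quickly.

import mitsuba as mi
import drjit as dr
import numpy as np

mi.set_variant('llvm_ad_rgb')

# Minimal scene: a constant environment map seen by a perspective camera.
scene = mi.load_dict({
    "type": "scene",
    "emitter": {
        "type": "envmap",
        "bitmap": mi.Bitmap(np.full((64, 128, 3), 0.5, dtype=np.float32)),
    },
    "integrator": {"type": "prb", "max_depth": 4},
    "sensor": {
        "type": "perspective",
        "to_world": mi.ScalarTransform4f().look_at(
            origin=[0, 0, 5], target=[0, 0, 0], up=[0, 1, 0]),
        "film": {"type": "hdrfilm", "width": 32, "height": 32},
    },
})

params = mi.traverse(scene)
key = 'emitter.data'
dr.enable_grad(params[key])
params.update()  # re-propagate the now-tracked tensor into the emitter

# Passing `params` is the crucial part: it lets mi.render() connect the
# rendered image to the differentiated scene parameters.
image = mi.render(scene, params, spp=4)

reference = mi.TensorXf(np.full((32, 32, 3), 0.1, dtype=np.float32))
loss = dr.mean(dr.square(image - reference))
dr.backward(loss)

grad = dr.grad(params[key])  # non-zero now that params was passed to render

With this change the gradient of the loss with respect to emitter.data is populated, and the same pattern drops straight into an optimization loop (e.g., with mi.ad.Adam).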