Animated 3DGS Avatars in Diverse Scenes with Consistent Lighting and Shadows

¹Tübingen AI Center, University of Tübingen  ²Snap Inc.
Teaser

Our method renders consistent lighting and soft shadows for animated 3DGS avatars interacting with 3DGS scenes. Avatars both cast shadows onto the scene and receive scene illumination via SH-based relighting, yielding coherent compositions across diverse environments.

Abstract

We present a method for consistent lighting and shadows when animated 3D Gaussian Splatting (3DGS) avatars interact with 3DGS scenes or with dynamic objects inserted into otherwise static scenes.

Our key contribution is Deep Gaussian Shadow Maps (DGSM)—a modern analogue of the classical shadow mapping algorithm tailored to the volumetric 3DGS representation. Building on the classic deep-shadow mapping idea, we show that 3DGS admits closed-form light accumulation along light rays, enabling volumetric shadow computation without meshing.

For each estimated light, we tabulate transmittance over concentric radial shells and store the values in octahedral atlases that modern GPUs can sample in real time per query, attenuating affected scene Gaussians so that shadows are cast and received consistently.

To relight moving avatars, we approximate the local environment illumination with HDRI probes represented in a spherical-harmonic (SH) basis and apply a fast per-Gaussian radiance transfer, avoiding explicit BRDF estimation or offline optimization.

We demonstrate environment-consistent lighting for avatars from AvatarX and ActorsHQ, composited into ScanNet++, DL3DV, and SuperSplat scenes, and show interactions with inserted objects. Across single- and multi-avatar settings, DGSM and SH relighting operate fully in the volumetric 3DGS representation, yielding coherent shadows and relighting while avoiding meshing.

Key Contributions

Deep Gaussian Shadow Maps (DGSM)

A volumetric deep-shadow formulation for Gaussian splats, with closed-form light accumulation and efficient octahedral-atlas storage for fast GPU sampling. Unlike classical shadow maps designed for meshes, DGSM operates directly on the continuous volumetric Gaussian representation.

Fast Avatar Relighting via SH HDRI Probes

A per-frame, per-Gaussian spherical harmonic transfer that approximates local environment lighting without explicit BRDFs or meshing. This enables dynamic avatar motion and scene edits without offline optimization.

Coherent Lighting for Dynamic 3DGS Scenes

An integrated pipeline that enables avatars and inserted objects to cast shadows and exhibit scene-matched lighting, validated across ScanNet++, DL3DV, and SuperSplat scenes with AvatarX/ActorsHQ avatars.

Method Overview

Deep Gaussian Shadow Maps: We build a DGSM on concentric spheres radiating out from each light source by computing the light absorbed by inserted Avatar/Object Gaussians at each radial distance. An octahedral map takes a 3D unit vector d and maps it to a 2D location in the atlas, so each radial shell of absorption values gets its own 2D octahedral map. The radial distances are discretized into K bins, and the resulting stack of octahedral maps forms a volumetric shadow map that can be sampled to cast shadows on Scene Gaussians.
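
To make the construction concrete, here is a minimal Python/NumPy sketch that fills such an atlas shell by shell. The resolution, uniform shell spacing, and the optical_depth_fn callback, which stands in for the closed-form integral of step 2 below, are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def oct_decode(u, v):
    """Map octahedral texture coordinates in [0,1]^2 back to a unit 3D direction."""
    p = np.array([2.0 * u - 1.0, 2.0 * v - 1.0])
    z = 1.0 - abs(p[0]) - abs(p[1])
    if z < 0.0:                                    # fold for the lower hemisphere
        p = (1.0 - np.abs(p[::-1])) * np.copysign(1.0, p)
    d = np.array([p[0], p[1], z])
    return d / np.linalg.norm(d)

def build_dgsm(light_pos, optical_depth_fn, K=16, res=128, r_max=5.0):
    """Tabulate transmittance on K concentric shells around one light.
    optical_depth_fn(origin, direction, r): accumulated optical depth out to
    radius r, e.g. the closed-form per-Gaussian integral of step 2 below."""
    atlas = np.empty((K, res, res))
    radii = (np.arange(K) + 1) * (r_max / K)       # shell radii
    for v in range(res):
        for u in range(res):
            d = oct_decode((u + 0.5) / res, (v + 0.5) / res)
            for k, r in enumerate(radii):
                atlas[k, v, u] = np.exp(-optical_depth_fn(light_pos, d, r))
    return atlas
```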

How It Works

1. Light Source Estimation

We estimate a compact set of point light sources from the Gaussian scene representation. Using spherical-harmonic (SH) coefficients from multiple viewpoints near the character, we derive photometric cues (mean/max luminance, angular stability, and DC dominance) to score candidate lights, keeping the dominant ones via greedy distance-based non-maximum suppression (NMS).
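
A minimal sketch of the suppression step only, assuming candidates arrive with precomputed photometric scores; the suppression radius and light budget are hypothetical parameters.

```python
import numpy as np

def select_lights(positions, scores, radius, max_lights=4):
    """Greedy distance-based NMS over candidate point lights.
    positions: (N, 3) candidate locations; scores: (N,) photometric scores
    combining the cues above (mean/max luminance, angular stability, DC dominance)."""
    order = np.argsort(-scores)                    # strongest candidates first
    kept = []
    for i in order:
        if len(kept) >= max_lights:
            break
        # keep candidate i only if it is farther than `radius` from every kept light
        if all(np.linalg.norm(positions[i] - positions[j]) > radius for j in kept):
            kept.append(i)
    return kept
```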

2. Volumetric Visibility in Closed Form

We model the light absorption field as a Gaussian mixture with an explicit relationship between the absorption coefficient and Gaussian opacity. The optical depth factorizes per Gaussian, allowing us to compute transmittance (visibility) using error functions in closed form—without meshing or voxelization.
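
A minimal sketch of this integral for a single Gaussian, assuming a density of the form kappa * exp(-1/2 (x-mu)^T Sigma^{-1} (x-mu)) along the ray x(t) = o + t*d, with kappa standing in for the paper's opacity-to-absorption mapping, which may differ in detail.

```python
import numpy as np
from scipy.special import erf

def gaussian_optical_depth(o, d, mu, Sigma_inv, kappa, t0=0.0, t1=np.inf):
    """Optical depth of one Gaussian along the ray x(t) = o + t*d, t in [t0, t1]."""
    delta = o - mu
    A = d @ Sigma_inv @ d              # quadratic term; > 0 for unit d, SPD Sigma
    B = d @ Sigma_inv @ delta
    C = delta @ Sigma_inv @ delta
    # Complete the square: 0.5*(A t^2 + 2 B t + C) = 0.5*A*(t + B/A)^2 + 0.5*(C - B^2/A)
    peak = np.exp(-0.5 * (C - B * B / A))
    s = np.sqrt(A / 2.0)
    integral = np.sqrt(np.pi / (2.0 * A)) * (erf(s * (t1 + B / A)) - erf(s * (t0 + B / A)))
    return kappa * peak * integral

def transmittance(o, d, gaussians, t1):
    """T = exp(-sum of per-Gaussian optical depths) from the light o out to distance t1."""
    tau = sum(gaussian_optical_depth(o, d, mu, Si, kap, 0.0, t1)
              for mu, Si, kap in gaussians)
    return np.exp(-tau)
```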

3. Octahedral Atlas Storage

Directions on the sphere are encoded with an octahedral parameterization, turning each spherical function into a single contiguous 2D texture. This enables precomputation, compact storage, and fast GPU sampling. Distance is discretized into K radial bins, creating a 3D table that can be efficiently sampled via trilinear interpolation.
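
For concreteness, here is a sketch of the standard octahedral encoding together with a trilinear lookup into a (K, H, W) atlas; the array layout and uniform bin spacing are assumptions, not the paper's exact storage scheme.

```python
import numpy as np

def oct_encode(d):
    """Map a unit 3D direction to [0,1]^2 octahedral coordinates."""
    d = d / np.sum(np.abs(d))                      # project onto the unit octahedron
    u, v = d[0], d[1]
    if d[2] < 0.0:                                 # fold the lower hemisphere outward
        u, v = ((1.0 - abs(v)) * np.copysign(1.0, u),
                (1.0 - abs(u)) * np.copysign(1.0, v))
    return 0.5 * np.array([u, v]) + 0.5

def sample_dgsm(atlas, d, r, r_max):
    """Trilinear lookup: bilinear over the 2D octahedral map, linear over radius.
    atlas: (K, H, W) table of per-shell transmittance values."""
    K, H, W = atlas.shape
    u, v = oct_encode(d) * np.array([W - 1, H - 1])
    k = np.clip(r / r_max, 0.0, 1.0) * (K - 1)
    u0, v0, k0 = int(u), int(v), int(k)
    u1, v1, k1 = min(u0 + 1, W - 1), min(v0 + 1, H - 1), min(k0 + 1, K - 1)
    fu, fv, fk = u - u0, v - v0, k - k0

    def bilerp(shell):                             # bilinear within one radial shell
        return ((1 - fu) * (1 - fv) * shell[v0, u0] + fu * (1 - fv) * shell[v0, u1]
                + (1 - fu) * fv * shell[v1, u0] + fu * fv * shell[v1, u1])

    return (1 - fk) * bilerp(atlas[k0]) + fk * bilerp(atlas[k1])
```

At render time, a scene Gaussian affected by the light would query sample_dgsm with its direction and distance from the light and multiply the returned transmittance into its shading.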

4. HDRI-based Relighting

We construct an approximate environment by rendering the 3DGS scene on cube faces at the avatar location, then fit SH coefficients via weighted least squares. For each Gaussian, we perform per-channel lighting transfer using a cosine lobe, yielding efficient image-based lighting consistent with the estimated environment.
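
A sketch under common conventions: an order-2 SH fit via weighted least squares over cube-face samples, followed by the standard clamped-cosine convolution of Ramamoorthi and Hanrahan. The SH order, solid-angle weighting, and the use of a per-Gaussian normal n are assumptions; the paper's exact transfer may differ.

```python
import numpy as np

def sh_basis(d):
    """Real spherical-harmonic basis up to order 2 (9 terms) for unit direction d."""
    x, y, z = d
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def fit_sh(dirs, radiance, weights):
    """Weighted least-squares SH fit.
    dirs: (N, 3) sample directions from the rendered cube faces,
    radiance: (N, 3) RGB samples, weights: (N,) per-sample solid angles."""
    B = np.stack([sh_basis(d) for d in dirs])      # (N, 9) design matrix
    sw = np.sqrt(weights)[:, None]
    coeffs, *_ = np.linalg.lstsq(B * sw, radiance * sw, rcond=None)
    return coeffs                                  # (9, 3) per-channel coefficients

# Band factors of the clamped-cosine lobe (Ramamoorthi & Hanrahan).
A_COS = np.array([np.pi] + [2.0 * np.pi / 3.0] * 3 + [np.pi / 4.0] * 5)

def shade_gaussian(coeffs, n):
    """Per-channel diffuse lighting for a Gaussian with (approximate) normal n."""
    return sh_basis(n) @ (A_COS[:, None] * coeffs)
```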

Relighting Results

Environment-Consistent Relighting: Our SH-based relighting approach modulates avatar appearance to match the ambient characteristics of surrounding scenes, producing visually coherent results without explicit BRDF estimation.

Results

Qualitative Results: Our method produces consistent shadows and scene-matched lighting for animated avatars from AvatarX and ActorsHQ, composited into diverse 3DGS scenes from ScanNet++, DL3DV, and SuperSplat.

Ablation Studies

Ablation Studies: We evaluate shadow-map design choices, including opacity-to-absorption mapping strategies, octahedral maps versus cubemaps, and sampling methods (Monte Carlo versus center sampling).

Conclusion

We have presented a lighting-and-shadowing framework that operates directly in the continuous Gaussian domain to render view-consistent shadows and scene-matched relighting for animated avatars and inserted objects in 3DGS scenes.

By deriving a closed-form volumetric transmittance for Gaussian splats and storing light-space accumulation in compact octahedral atlases, our Deep Gaussian Shadow Maps (DGSM) enable efficient, dynamic shadow queries on modern GPUs.

Limitations: Our method assumes the scene around each light is static and depends on the quality of light estimation. The single-scattering approximation may miss strong interreflections, caustics, or highly specular/anisotropic effects.

Future Work: We see promising directions in handling dynamic illumination and deforming environments, integrating learned global illumination within 3DGS, extending to participating media and glossy materials, and exploring end-to-end differentiable training that unifies light estimation, DGSM construction, and avatar appearance.

BibTeX

@misc{mir2026animated3dgsavatarsdiverse,
      title={Animated 3DGS Avatars in Diverse Scenes with Consistent Lighting and Shadows}, 
      author={Aymen Mir and Riza Alp Guler and Jian Wang and Gerard Pons-Moll and Bing Zhou},
      year={2026},
      eprint={2601.01660},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2601.01660}, 
}