We present DepthLab, an open-source library for real-time 3D interaction
with depth maps in mobile augmented reality.
DepthLab encapsulates a variety of depth-based UI/UX paradigms,
including geometry-aware rendering;
real-time visual effects such as rain, snow,
flooding, and relighting; and
surface interaction behaviors such as surface splatting,
real-time mesh generation, and physics simulation.
With a single RGB camera, we transform the low-level depth map from the ARCore Depth API
into reusable, customizable, and self-contained modules.
For each camera image, we compute a raw depth map and generate the depth array, depth mesh, and depth texture data structures for different use cases.
We offer a set of conversion utilities that streamline a developer's workflow and
allow switching between screen-space and world-space representations.
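As an illustration of such a conversion, here is a minimal sketch of unprojecting a screen-space pixel with metric depth into a camera-space 3D point using a pinhole camera model. The function name and the camera intrinsics (`fx`, `fy`, `cx`, `cy`) are assumptions for this example, not DepthLab's actual API.

```python
def screen_to_world(u, v, depth_m, fx, fy, cx, cy):
    """Unproject screen-space pixel (u, v) with metric depth into a
    camera-space 3D point, assuming a pinhole camera model.
    fx, fy: focal lengths in pixels; cx, cy: principal point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical intrinsics with the principal point at the center
# of a 640x480 depth map.
point = screen_to_world(400, 300, 2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

The inverse mapping (world to screen) follows by projecting: `u = fx * x / z + cx`, and similarly for `v`.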
Furthermore, we break down the taxonomy of depth usage into localized depth,
surface depth, and dense depth-based applications,
and describe our real-time algorithms for interaction and rendering tasks.
Localized depth uses the depth array to operate on a small number of points
directly on the CPU.
For example: rendering an oriented reticle, computing physical measurements,
placing a virtual chair while checking whether it collides with the physical environment,
and navigating an AR character while respecting the
geometry of the physical environment.
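To give a flavor of localized depth, here is a sketch of estimating a surface normal (as needed for an oriented reticle) from a few depth-array samples via central differences. The helper names and intrinsics are assumptions for illustration; DepthLab's actual normal estimation may differ.

```python
import math

def estimate_normal(depth, u, v, fx, fy, cx, cy):
    """Estimate the camera-space surface normal at pixel (u, v) from
    four neighboring depth samples, using central differences."""
    def unproject(uu, vv):
        d = depth[vv][uu]
        return ((uu - cx) * d / fx, (vv - cy) * d / fy, d)

    right, left = unproject(u + 1, v), unproject(u - 1, v)
    down, up = unproject(u, v + 1), unproject(u, v - 1)
    du = tuple(r - l for r, l in zip(right, left))
    dv = tuple(d2 - u2 for d2, u2 in zip(down, up))
    # Cross product of the two tangent vectors gives the normal.
    n = (du[1] * dv[2] - du[2] * dv[1],
         du[2] * dv[0] - du[0] * dv[2],
         du[0] * dv[1] - du[1] * dv[0])
    length = math.sqrt(sum(c * c for c in n)) or 1.0
    return tuple(c / length for c in n)
```

Only a handful of depth-array lookups are touched per query, which is why this class of operations runs comfortably on the CPU.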
Surface depth leverages the CPU, or compute shaders on the GPU,
to create and update depth meshes in real time, thus enabling
collision, physics, and geometry-aware shadows;
throwing paintballs that leave texture decals;
and touching your phone screen to freeze a textured mesh of the world.
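The core of depth-mesh generation can be sketched as a grid triangulation: one vertex per depth pixel, two triangles per 2x2 quad. This is a minimal CPU illustration under assumed pinhole intrinsics; a production version (and DepthLab's compute-shader path) would also discard triangles that span depth discontinuities.

```python
def depth_to_mesh(depth, fx, fy, cx, cy):
    """Triangulate a dense depth map into a camera-space mesh:
    one vertex per pixel, two triangles per 2x2 pixel quad."""
    h, w = len(depth), len(depth[0])
    vertices = [((u - cx) * depth[v][u] / fx,
                 (v - cy) * depth[v][u] / fy,
                 depth[v][u])
                for v in range(h) for u in range(w)]
    triangles = []
    for v in range(h - 1):
        for u in range(w - 1):
            i = v * w + u  # top-left vertex index of this quad
            triangles.append((i, i + 1, i + w))          # upper triangle
            triangles.append((i + 1, i + w + 1, i + w))  # lower triangle
    return vertices, triangles
```

Because the grid topology is fixed, only the vertex positions need updating each frame, which keeps real-time updates cheap.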
Dense depth ships a depth texture to the
GPU and renders screen-space effects with fragment shaders.
It is useful for many computational photography tasks such as
relighting the world with virtual light sources,
simulating a DSLR camera with aperture effects such as depth of field,
and occluding virtual objects with the physical environment.
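The per-pixel occlusion test at the heart of the last effect is simple: hide a virtual fragment whenever the physical scene is closer. The sketch below mimics what a fragment shader would do for one pixel; the function name is illustrative, and a real shader would typically soften the boundary rather than use a hard cutoff.

```python
def occlude(virtual_rgba, virtual_depth_m, real_depth_m):
    """Per-pixel occlusion: make the virtual fragment transparent
    when the physical surface at this pixel is closer to the camera."""
    r, g, b, a = virtual_rgba
    visible = virtual_depth_m <= real_depth_m
    return (r, g, b, a if visible else 0.0)

# A red virtual fragment 2 m away, behind a wall at 1.5 m, is hidden.
hidden = occlude((1.0, 0.0, 0.0, 1.0), 2.0, 1.5)
```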
Next, we showcase how DepthLab components accelerate the mobile
AR development process using Unity prefabs and template scripts.
External developers have already reused many DepthLab components to
develop an AR demo experience with geometry-aware occlusion, shadows,
collision, and splatting.
Finally, we demonstrate more real-time examples enabled by DepthLab.
First, we show occlusions in the wild using our virtual tiger,
easily prototyped using DepthLab components.
Fog-like effects are simple to achieve with the depth texture:
fog density increases with world distance across the image.
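One common way to realize this is exponential fog, blending each pixel toward a fog color as its depth grows. This sketch mirrors what a fragment shader would compute per pixel from the depth texture; the function name and density constant are assumptions, not DepthLab's exact formulation.

```python
import math

def fog_blend(color, fog_color, depth_m, density=0.35):
    """Exponential fog: the visibility factor decays with depth,
    so distant pixels blend toward the fog color."""
    f = math.exp(-density * depth_m)  # 1 at the camera, -> 0 far away
    return tuple(f * c + (1.0 - f) * fc
                 for c, fc in zip(color, fog_color))
```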
Surface interactions can enable physics on ramps, staircases, and curved surfaces like tree branches.
With lightweight, screen-space path planning from depth,
along with surface shadows, your virtual characters can behave more
realistically, and you can then immerse them in snowy or rainy landscapes!
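To make the path-planning idea concrete, here is a simplified stand-in: a breadth-first search over depth-map pixels that treats a step as traversable only when the depth change between neighbors is small (i.e., the surface is roughly continuous). DepthLab's actual algorithm may differ; the function name and `max_step` threshold are assumptions.

```python
from collections import deque

def plan_path(depth, start, goal, max_step=0.1):
    """BFS over (u, v) pixels of a depth map. A move between neighbors
    is allowed only if their depth difference is at most max_step (m).
    Returns the pixel path from start to goal, or None if unreachable."""
    h, w = len(depth), len(depth[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        u, v = queue.popleft()
        if (u, v) == goal:
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nu, nv = u + du, v + dv
            if (0 <= nu < w and 0 <= nv < h and (nu, nv) not in prev
                    and abs(depth[nv][nu] - depth[v][u]) <= max_step):
                prev[(nu, nv)] = (u, v)
                queue.append((nu, nv))
    return None  # goal unreachable under the traversability constraint
```

Working in screen space keeps the search small enough to run every few frames on a phone.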
We envision that DepthLab may help mobile AR developers amplify their
prototyping efforts, empowering them to unleash their creativity
and effortlessly integrate depth into mobile AR experiences.
Thank you very much, and please refer to our paper for more detail.
