The following examples use Rerun to create visual walkthroughs of papers. They are typically forks of the official open-source implementations that add Rerun as the visualizer. Check out the respective READMEs for installation instructions. For the simplest possible examples showing how to use each API, check out Loggable Data Types.
Finding a textured mesh decomposition from a collection of posed images is a very challenging optimization problem. “Differentiable Blocks World” by @t_monnier et al. shows impressive results using differentiable rendering. I visualized how this optimization works using the Rerun SDK.

https://www.youtube.com/watch?v=Ztwak981Lqg?playlist=Ztwak981Lqg&loop=1&hd=1&rel=0&autoplay=1

In “Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives” the authors describe an optimization of a background icosphere, a ground plane, and multiple superquadrics. The goal is to find the shapes and textures that best explain the observations. The optimization is initialized with a set of superquadrics, a ground plane, and a sphere for the background. From here, the optimization can only reduce the number of blocks, not add additional ones.

https://www.youtube.com/watch?v=bOon26Zdqpc?playlist=bOon26Zdqpc&loop=1&hd=1&rel=0&autoplay=1

A key difference to other differentiable renderers is the addition of transparency handling. Each mesh has an associated opacity that is optimized. When the opacity drops below a threshold, the mesh is discarded in the visualization, which lets the optimization adjust the number of meshes (see the code sketch at the end of this walkthrough).

https://www.youtube.com/watch?v=d6LkS63eHXo?playlist=d6LkS63eHXo&loop=1&hd=1&rel=0&autoplay=1

To stabilize the optimization and avoid local minima, a three-stage optimization is employed: 1. the texture resolution is reduced by a factor of 8, 2. the full-resolution texture is optimized, and 3. transparency-based optimization is deactivated, and only the opaque meshes are optimized from here on.

https://www.youtube.com/watch?v=irxqjUGm34g?playlist=irxqjUGm34g&loop=1&hd=1&rel=0&autoplay=1

Check out the project page, which also contains examples of physical simulation and scene editing enabled by this kind of scene decomposition. Also make sure to read the paper by Tom Monnier, Jake Austin, Angjoo Kanazawa, Alexei A. Efros, and Mathieu Aubry. It is an interesting study of how to approach such a difficult optimization problem.
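As a rough illustration, here is a minimal sketch of the opacity-threshold discard and of logging the surviving blocks with the Rerun Python SDK. The block data, the threshold value, and the entity paths are made up for the example, and it assumes the current `rr.Mesh3D` logging API.

```python
import numpy as np
import rerun as rr  # the Rerun SDK used for the walkthrough

OPACITY_THRESHOLD = 0.1  # hypothetical cutoff; not the paper's exact value

rr.init("differentiable_blocks_world", spawn=True)

# `blocks` stands in for the optimized superquadric meshes: each entry holds
# vertices (N, 3), triangle faces (M, 3) and a learned opacity in [0, 1].
blocks = [
    {"vertices": np.random.rand(32, 3), "faces": np.random.randint(0, 32, (48, 3)), "opacity": 0.85},
    {"vertices": np.random.rand(32, 3), "faces": np.random.randint(0, 32, (48, 3)), "opacity": 0.02},
]

for i, block in enumerate(blocks):
    # Blocks whose learned opacity falls below the threshold count as removed
    # from the scene and are skipped in the visualization.
    if block["opacity"] < OPACITY_THRESHOLD:
        continue
    rr.log(
        f"world/blocks/superquadric_{i}",
        rr.Mesh3D(vertex_positions=block["vertices"], triangle_indices=block["faces"]),
    )
```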
Tracking any point in a video is a fundamental problem in computer vision. The paper “TAPIR: Tracking Any Point with per-frame Initialization and temporal Refinement” by Carl Doersch et al. significantly improved over the prior state of the art.

https://www.youtube.com/watch?v=5EixnuJnFdo?playlist=5EixnuJnFdo&loop=1&hd=1&rel=0&autoplay=1

“TAPIR: Tracking Any Point with per-frame Initialization and temporal Refinement” proposes a two-stage approach: 1. compare the query point's feature with the target image features to estimate an initial track, and 2. iteratively refine by taking neighboring frames into account. In the first stage, the image features in the query image at the query point are compared to the feature maps of the other images using the dot product. The resulting similarity map gives a high score for similar image features (see the sketch below).

https://www.youtube.com/watch?v=dqvcIlk55AM?playlist=dqvcIlk55AM&loop=1&hd=1&rel=0&autoplay=1

From here, the position of the point is predicted as a heatmap. In addition, the probabilities that the point is occluded and that its position is accurate are predicted. Only when predicted as non-occluded and accurate is a point classified as visible for a given frame.

https://www.youtube.com/watch?v=T7w8dXEGFzY?playlist=T7w8dXEGFzY&loop=1&hd=1&rel=0&autoplay=1

The previous step gives an initial track, but it is still noisy since the inference is done on a per-frame basis. Next, the position, occlusion, and accuracy probabilities are iteratively refined using spatially and temporally local feature volumes.

https://www.youtube.com/watch?v=mVA_svY5wC4?playlist=mVA_svY5wC4&loop=1&hd=1&rel=0&autoplay=1

Check out the paper by Carl Doersch, Yi Yang, Mel Vecerik, Dilara Gokay, Ankush Gupta, Yusuf Aytar, Joao Carreira, and Andrew Zisserman. It also includes a nice visual comparison to previous approaches.
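The first-stage comparison boils down to a dot product between the query feature and every location of a target feature map. A minimal sketch with toy shapes (real TAPIR features come from its backbone network):

```python
import numpy as np

def similarity_map(query_feature: np.ndarray, feature_map: np.ndarray) -> np.ndarray:
    """Dot-product similarity between one query feature (C,) and a target
    feature map (H, W, C); high values mark visually similar locations."""
    return np.einsum("c,hwc->hw", query_feature, feature_map)

query = np.random.rand(256)           # feature at the query point in the query frame
target = np.random.rand(64, 64, 256)  # feature map of another frame
heatmap = similarity_map(query, target)

# The peak of the heatmap gives a (coarse) initial per-frame position estimate.
initial_estimate = np.unravel_index(heatmap.argmax(), heatmap.shape)
```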
Novel view synthesis has made remarkable progress in recent years, but most methods require per-scene optimization on many images. In their CVPR 2023 paper, Yilun Du et al. propose a method that works with just two views. I created a visual walkthrough of the work using the Rerun SDK.

https://www.youtube.com/watch?v=dc445VtMj_4?playlist=dc445VtMj_4&loop=1&hd=1&rel=0&autoplay=1

“Learning to Render Novel Views from Wide-Baseline Stereo Pairs” describes a three-stage approach: image features are extracted for each input view, features are collected along the target rays, and the color is predicted using cross-attention. To render a pixel, its corresponding ray is projected onto each input image. Instead of uniformly sampling along the ray in 3D, the samples are distributed such that they are equally spaced on the image plane. The same points are also projected onto the other view.

https://www.youtube.com/watch?v=PuoL94tBxGI?playlist=PuoL94tBxGI&loop=1&hd=1&rel=0&autoplay=1

The image features at these samples are used to synthesize new views. The method learns to attend to the features close to the surface. Here we show the attention maps for one pixel, and the resulting pseudo-depth maps obtained by interpreting the attention as a probability distribution (sketched below).

https://www.youtube.com/watch?v=u-dmTM1w7Z4?playlist=u-dmTM1w7Z4&loop=1&hd=1&rel=0&autoplay=1

Make sure to check out the paper by Yilun Du, Cameron Smith, Ayush Tewari, and Vincent Sitzmann to learn about the details of the method.
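The pseudo-depth visualization comes from treating the per-sample attention weights of one target pixel as a probability distribution over depth. A minimal sketch with made-up numbers:

```python
import numpy as np

def pseudo_depth(attention: np.ndarray, sample_depths: np.ndarray) -> float:
    """Expected depth under the attention weights for one target pixel,
    i.e. the attention interpreted as a probability distribution over the
    samples along that pixel's ray."""
    weights = attention / attention.sum()  # normalize in case they don't sum to 1
    return float((weights * sample_depths).sum())

# Attention peaked around the third sample yields a depth close to 1.4.
attention = np.array([0.05, 0.10, 0.60, 0.20, 0.05])
depths = np.array([0.8, 1.1, 1.4, 1.8, 2.3])
print(pseudo_depth(attention, depths))
```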
OpenAI has released two models for text-to-3D generation: Point-E and Shap-E. Both of these methods are fast and interesting, but still low fidelity for now.

https://www.youtube.com/watch?v=f9QWkamyWZI?playlist=f9QWkamyWZI&loop=1&hd=1&rel=0&autoplay=1

First off, how do these two methods differ from each other? Point-E represents its 3D shapes via point clouds. It uses a three-step generation process: first, it generates a single synthetic view using a text-to-image diffusion model; second, it produces a coarse 3D point cloud using a second diffusion model that conditions on the generated image; third, it generates a fine 3D point cloud using an upsampling network. Finally, another model is used to predict an SDF from the point cloud, and marching cubes turns it into a mesh (see the sketch at the end of this section). As you can tell, the results aren’t very high quality, but they are fast.

https://www.youtube.com/watch?v=37Rsi7bphQY?playlist=37Rsi7bphQY&loop=1&hd=1&rel=0&autoplay=1

Shap-E improves on this by representing 3D shapes implicitly. This is done in two stages. First, an encoder is trained that takes images or a point cloud as input and outputs the weights of a NeRF. In the second stage, a diffusion model is trained on a dataset of NeRF weights generated by the encoder. This diffusion model is conditioned on either images or text descriptions. The resulting NeRF also outputs SDF values, so meshes can again be extracted using marching cubes. Here we see the prompt "a cheeseburger" turn into a 3D mesh and a set of images.

https://www.youtube.com/watch?v=oTVLrujriiQ?playlist=oTVLrujriiQ&loop=1&hd=1&rel=0&autoplay=1

When compared to Point-E on both image-to-mesh and text-to-mesh generation, Shap-E converges faster and reaches comparable or better sample quality, despite modeling a higher-dimensional, multi-representation output space.

https://www.youtube.com/watch?v=DskRD5nioyA?playlist=DskRD5nioyA&loop=1&hd=1&rel=0&autoplay=1

Check out the respective papers to learn more about the details of both methods: "Shap-E: Generating Conditional 3D Implicit Functions" by Heewoo Jun and Alex Nichol; "Point-E: A System for Generating 3D Point Clouds from Complex Prompts" by Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen.
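Both pipelines finish with the same classic step: evaluate the predicted SDF on a regular grid and run marching cubes on its zero level set. A minimal sketch using scikit-image, with a toy analytic SDF (a sphere) in place of the learned model:

```python
import numpy as np
from skimage import measure  # scikit-image's marching cubes implementation

# Toy signed distance field for a sphere of radius 0.4 on a 64^3 grid; in
# Point-E / Shap-E this grid would instead be filled by querying the SDF model.
grid = np.linspace(-1.0, 1.0, 64)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.4

# The zero level set of the SDF is the surface; marching cubes extracts it as a mesh.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)
```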
Human-made environments contain a lot of straight lines, which are currently not exploited by most mapping approaches. With their recent work "3D Line Mapping Revisited", Shaohui Liu et al. take steps towards changing that.

https://www.youtube.com/watch?v=UdDzfxDo7UQ?playlist=UdDzfxDo7UQ&loop=1&hd=1&rel=0&autoplay=1

The work covers all stages of line-based structure-from-motion: line detection, line matching, line triangulation, track building, and joint optimization. As shown in the figure, detected points and their interactions with lines are also used to aid the reconstruction. LIMAP matches detected 2D lines between images and computes 3D candidates for each match. These are scored, and only the best candidate is kept. To remove duplicates and reduce noise, candidates are grouped together when they likely belong to the same line (see the sketch below).

https://www.youtube.com/watch?v=kyrD6IJKxg8?playlist=kyrD6IJKxg8&loop=1&hd=1&rel=0&autoplay=1

Focusing on a single line, LIMAP computes a score for each candidate. These scores are used to decide which line candidates belong to the same line. The final line, shown in red, is computed from the candidates that were grouped together.

https://www.youtube.com/watch?v=JTOs_VVOS78?playlist=JTOs_VVOS78&loop=1&hd=1&rel=0&autoplay=1

Once the lines are found, LIMAP further uses point-line associations to jointly optimize lines and points. Often, 3D points lie on lines or on intersections thereof. Here we highlight the line-point associations in blue.

https://www.youtube.com/watch?v=0xZXPv1o7S0?playlist=0xZXPv1o7S0&loop=1&hd=1&rel=0&autoplay=1

Human-made environments often contain a lot of parallel and orthogonal lines. LIMAP can globally optimize the lines by detecting sets that are likely parallel or orthogonal. Here we visualize these parallel lines; each color is associated with one vanishing point.

https://www.youtube.com/watch?v=qyWYq0arb-Y?playlist=qyWYq0arb-Y&loop=1&hd=1&rel=0&autoplay=1

There is a lot more to unpack, so check out the paper by Shaohui Liu, Yifan Yu, Rémi Pautrat, Marc Pollefeys, and Viktor Larsson. It also gives an educational overview of the strengths and weaknesses of both line-based and point-based structure-from-motion.
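To give a feel for the grouping step, here is a crude stand-in for LIMAP's actual scoring: two 3D candidate segments are merged when their directions are nearly parallel and one lies close to the infinite line through the other. The thresholds are made up for illustration.

```python
import numpy as np

def same_line(a: np.ndarray, b: np.ndarray, max_angle_deg: float = 5.0, max_dist: float = 0.05) -> bool:
    """Heuristic check whether two 3D segments (each a (2, 3) array of endpoints)
    plausibly belong to the same line."""
    da = a[1] - a[0]
    db = b[1] - b[0]
    da, db = da / np.linalg.norm(da), db / np.linalg.norm(db)
    # Nearly parallel directions?
    angle = np.degrees(np.arccos(np.clip(abs(da @ db), 0.0, 1.0)))
    if angle > max_angle_deg:
        return False
    # Perpendicular distance of b's endpoints to the infinite line through a.
    offsets = b - a[0]
    perp = offsets - np.outer(offsets @ da, da)
    return bool(np.linalg.norm(perp, axis=1).max() < max_dist)

seg_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
seg_b = np.array([[0.5, 0.01, 0.0], [1.5, 0.02, 0.0]])
print(same_line(seg_a, seg_b))  # True: the segments are nearly collinear
```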
SimpleRecon is a back-to-basics approach to 3D scene reconstruction from posed monocular images by Niantic Labs. It offers state-of-the-art depth accuracy and competitive 3D scene reconstruction, which makes it well suited for resource-constrained environments.

https://www.youtube.com/watch?v=TYR9_Ql0w7k?playlist=TYR9_Ql0w7k&loop=1&hd=1&rel=0&autoplay=1

SimpleRecon's key contributions include using a 2D CNN with a cost volume, incorporating metadata via an MLP, and avoiding the computational cost of 3D convolutions. The different frustums in the visualization show the source frames used to compute the cost volume. Features are extracted from these source frames and back-projected onto the current frame's depth-plane hypotheses (see the sketch below).

https://www.youtube.com/watch?v=g0dzm-k1-K8?playlist=g0dzm-k1-K8&loop=1&hd=1&rel=0&autoplay=1

SimpleRecon only uses camera poses, depths, and surface normals for supervision, which allows out-of-distribution inference, e.g. from an ARKit-compatible iPhone.

https://www.youtube.com/watch?v=OYsErbNdQSs?playlist=OYsErbNdQSs&loop=1&hd=1&rel=0&autoplay=1

The method works well for applications such as robotic navigation, autonomous driving, and AR. It takes input images, their intrinsics, and relative camera poses to predict dense depth maps, combining monocular depth estimation and MVS via plane sweep. Metadata incorporated in the cost volume improves depth estimation accuracy and 3D reconstruction quality. The lightweight and interpretable 2D CNN architecture benefits from the added per-frame metadata, leading to better performance. If you want to learn more about the method, check out the paper by Mohamed Sayed, John Gibson, Jamie Watson, Victor Prisacariu, Michael Firman, and Clément Godard.
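The core of the plane-sweep cost volume is a per-depth-plane feature comparison. The sketch below skips the actual warping and the metadata MLP and just shows a dot-product matching cost between reference features and source features that are assumed to be already warped onto each depth hypothesis; all shapes are toy values.

```python
import numpy as np

def cost_volume(ref_feat: np.ndarray, warped_src_feats: np.ndarray) -> np.ndarray:
    """Dot-product matching cost between reference features (C, H, W) and source
    features warped onto each depth-plane hypothesis (D, N, C, H, W), averaged
    over the N source frames -> a (D, H, W) cost volume."""
    per_view = np.einsum("chw,dnchw->dnhw", ref_feat, warped_src_feats)
    return per_view.mean(axis=1)

# 16 feature channels, 4 source frames, 8 depth hypotheses, 32x32 feature maps.
ref = np.random.rand(16, 32, 32)
warped = np.random.rand(8, 4, 16, 32, 32)
volume = cost_volume(ref, warped)

# Per-pixel best depth hypothesis; SimpleRecon instead feeds the (metadata-augmented)
# volume into its 2D CNN to regress the final depth map.
best_plane = volume.argmax(axis=0)
```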
By combining Meta AI's Segment Anything Model (SAM) and Multiview Compressive Coding (MCC) we can get a 3D object from a single image.

https://www.youtube.com/watch?v=kmgFTWBZhWU?playlist=kmgFTWBZhWU&loop=1&hd=1&rel=0&autoplay=1

The basic idea is to use SAM to create a generic object mask so we can exclude the background.

https://www.youtube.com/watch?v=7qosqFbesL0?playlist=7qosqFbesL0&loop=1&hd=1&rel=0&autoplay=1

The next step is to generate a depth image. Here we use the awesome ZoeDepth to get realistic depth from the color image.

https://www.youtube.com/watch?v=d0u-MoNVR6o?playlist=d0u-MoNVR6o&loop=1&hd=1&rel=0&autoplay=1

With depth, color, and an object mask we have everything needed to create a colored point cloud of the object from a single view (sketched below).

https://www.youtube.com/watch?v=LI0mE7usguk?playlist=LI0mE7usguk&loop=1&hd=1&rel=0&autoplay=1

MCC encodes the colored points and then creates a reconstruction by sweeping through the volume, querying the network for occupancy and color at each point.

https://www.youtube.com/watch?v=RuHv9Nx6PvI?playlist=RuHv9Nx6PvI&loop=1&hd=1&rel=0&autoplay=1

This is a really great example of how a lot of cool solutions are built these days: by stringing together more targeted pre-trained models. The details of the three building blocks can be found in the respective papers:
- Segment Anything by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick
- Multiview Compressive Coding for 3D Reconstruction by Chao-Yuan Wu, Justin Johnson, Jitendra Malik, Christoph Feichtenhofer, and Georgia Gkioxari
- ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth by Shariq Farooq Bhat, Reiner Birkl, Diana Wofk, Peter Wonka, and Matthias Müller
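Turning the masked depth and color into a point cloud is a standard pinhole back-projection. A minimal sketch with toy inputs standing in for the SAM mask, the ZoeDepth output, and the camera intrinsics, plus a hedged Rerun call to show the result (entity path and intrinsics are made up):

```python
import numpy as np
import rerun as rr

def backproject(depth: np.ndarray, rgb: np.ndarray, mask: np.ndarray, K: np.ndarray):
    """Lift masked pixels into a colored 3D point cloud using pinhole intrinsics K.
    depth: (H, W) metric depth, rgb: (H, W, 3), mask: (H, W) boolean object mask."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = mask & (depth > 0)
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=-1), rgb[valid]

# Toy data; in the example these come from the camera image, ZoeDepth, and SAM.
H, W = 48, 64
depth = np.full((H, W), 2.0)
rgb = np.random.randint(0, 255, (H, W, 3), dtype=np.uint8)
mask = np.zeros((H, W), dtype=bool)
mask[10:30, 20:40] = True
K = np.array([[60.0, 0.0, W / 2], [0.0, 60.0, H / 2], [0.0, 0.0, 1.0]])

points, colors = backproject(depth, rgb, mask, K)
rr.init("single_image_to_3d", spawn=True)
rr.log("world/object_points", rr.Points3D(points, colors=colors))
```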
SLAHMR robustly tracks the motion of multiple moving people filmed with a moving camera and works well on “in-the-wild” videos. It’s a great showcase of how to build working computer vision systems by intelligently combining several single-purpose models.

https://www.youtube.com/watch?v=eGR4H0KkofA?playlist=eGR4H0KkofA&loop=1&hd=1&rel=0&autoplay=1

“Decoupling Human and Camera Motion from Videos in the Wild” combines the outputs of ViTPose, PHALP, DROID-SLAM, HuMoR, and SMPL over three optimization stages. It’s interesting to see how the result becomes more and more consistent with each step. The input to the method is a video sequence. ViTPose is used to detect 2D skeletons, PHALP for 3D shape and pose estimation of the humans, and DROID-SLAM to estimate the camera trajectory. Note that the 3D poses are initially quite noisy and inconsistent.

https://www.youtube.com/watch?v=84hWddApYtI?playlist=84hWddApYtI&loop=1&hd=1&rel=0&autoplay=1

In the first stage, the 3D translation and rotation predicted by PHALP are optimized to better match the 2D keypoints from ViTPose (a simplified version of this data term is sketched at the end of this section).

https://www.youtube.com/watch?v=iYy1sfDZsEc?playlist=iYy1sfDZsEc&loop=1&hd=1&rel=0&autoplay=1

In the second stage, in addition to 3D translation and rotation, the scale of the world and the shape and pose of the bodies are optimized. To do so, priors on joint smoothness, body shape, and body pose are added to the previous optimization term.

https://www.youtube.com/watch?v=XXMKn29MlRI?playlist=XXMKn29MlRI&loop=1&hd=1&rel=0&autoplay=1

This step is crucial in that it finds the correct scale, such that the humans don't drift in the 3D world. This can best be seen by overlaying the two estimates.

https://www.youtube.com/watch?v=FFHWNnZzUhA?playlist=FFHWNnZzUhA&loop=1&hd=1&rel=0&autoplay=1

Finally, in the third stage, a motion prior is added to the optimization, and the ground plane is estimated to enforce realistic ground contact. This step further removes some jerky and unrealistic motions. Compare the highlighted blue figure.

https://www.youtube.com/watch?v=6rsgOXekhWI?playlist=6rsgOXekhWI&loop=1&hd=1&rel=0&autoplay=1

For more details check out the paper by Vickie Ye, Georgios Pavlakos, Jitendra Malik, and Angjoo Kanazawa.
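To make the first stage concrete, here is a heavily simplified stand-in for its reprojection data term: translate the predicted 3D joints, project them with the camera intrinsics, and penalize the confidence-weighted distance to the detected 2D keypoints. The real objective also optimizes rotation and uses a robust loss; the numbers below are toy values.

```python
import numpy as np

def reprojection_loss(joints_3d, trans, K, keypoints_2d, confidence) -> float:
    """Confidence-weighted squared pixel error between projected 3D joints and
    detected 2D keypoints. joints_3d: (J, 3), trans: (3,), K: (3, 3) intrinsics,
    keypoints_2d: (J, 2), confidence: (J,)."""
    cam_points = joints_3d + trans          # body-to-camera translation only
    proj = (K @ cam_points.T).T
    proj = proj[:, :2] / proj[:, 2:3]       # perspective division to pixels
    residual = proj - keypoints_2d
    return float((confidence * (residual**2).sum(axis=1)).mean())

# Toy example: 4 joints in front of the camera, confident 2D detections.
joints = np.random.rand(4, 3) + np.array([0.0, 0.0, 3.0])
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
keypoints = np.random.rand(4, 2) * 100 + 250
confidence = np.ones(4)
print(reprojection_loss(joints, np.zeros(3), K, keypoints, confidence))
```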