The following examples illustrate using the Rerun logging SDK in potential real-world (if toy) use cases. They all require additional data to be downloaded, so an internet connection is needed at least once. Each example downloads its own data, so no additional steps are needed. For the simplest possible examples showing how to use each API, check out Types.
This example visualizes the ARKitScenes dataset using Rerun. The dataset contains color images, depth images, the reconstructed mesh, and labeled bounding boxes around furniture.
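The kind of logging involved can be sketched with the Python SDK; the entity paths and placeholder arrays below are illustrative assumptions, not the example's actual code:

```python
import numpy as np
import rerun as rr  # pip install rerun-sdk

rr.init("arkit_scenes_sketch", spawn=True)

# Placeholder arrays standing in for one ARKitScenes frame.
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.zeros((480, 640), dtype=np.float32)

rr.log("world/camera/rgb", rr.Image(rgb))
rr.log("world/camera/depth", rr.DepthImage(depth, meter=1.0))

# One labeled 3D bounding box around a piece of furniture.
rr.log(
    "world/annotations/chair",
    rr.Boxes3D(
        centers=[[0.0, 0.0, 0.5]],
        half_sizes=[[0.3, 0.3, 0.5]],
        labels=["chair"],
    ),
)
```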
This example integrates Rerun into Hugging Face's ControlNet example. ControlNet makes it possible to condition Stable Diffusion on various modalities; in this example we condition on edges detected by the Canny edge detector. A video preview is available at https://vimeo.com/870289439. To run it, and to specify your own image and prompts, see the sketched commands below.
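The original commands were elided here; the snippet below is a guess at the repository's usual run pattern, and the `--img-path`/`--prompt` flags are hypothetical stand-ins for the example's actual options:

```sh
pip install -r requirements.txt
python main.py

# Hypothetical flags for a custom image and prompt:
python main.py --img-path my_image.jpg --prompt "a red sports car in the countryside"
```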
A more elaborate example running Depth Guided Stable Diffusion 2.0. For more info see here.
Another, more elaborate example applies simple object detection and segmentation to a video using the Hugging Face `transformers` library. Tracking across frames is performed using the CSRT tracker from OpenCV, sketched below. For more info see here.
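A minimal sketch of the OpenCV side of this, assuming a placeholder video path and an initial bounding box handed over from the detector:

```python
import cv2  # pip install opencv-contrib-python

cap = cv2.VideoCapture("input.mp4")  # placeholder path
ok, frame = cap.read()

# Initialize CSRT with a detector-provided (x, y, w, h) box.
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, (100, 100, 50, 80))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    success, (x, y, w, h) = tracker.update(frame)
    if success:
        cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)
```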
Example using a DICOM MRI scan. This demonstrates the flexible tensor slicing capabilities of the Rerun viewer.
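A sketch of that kind of tensor logging, using a random placeholder volume in place of the actual DICOM slices:

```python
import numpy as np
import rerun as rr

rr.init("dicom_mri_sketch", spawn=True)

# Random volume standing in for a stack of DICOM slices.
volume = np.random.randint(0, 256, size=(64, 256, 256), dtype=np.uint16)

# Named dimensions let the viewer expose sliders to slice along any axis.
rr.log("mri/volume", rr.Tensor(volume, dim_names=("depth", "height", "width")))
```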
Use the MediaPipe Face Detector and Landmarker solutions to detect and track a human face in images, videos, and camera streams. CLI usage help is available using the `--help` option. Here is an overview of the options specific to this example:

- *Running modes*: By default, this example streams images from the default webcam. Another webcam can be used by providing a camera index with the `--camera` option. Alternatively, images can be read from a video file or a single image file. A demo image with two faces can also be automatically downloaded and used with `--demo-image`.
- *Max face count*: The maximum number of faces detected by MediaPipe Face Landmarker can be set using `--num-faces NUM`. It defaults to 1, in which case the Landmarker applies temporal smoothing. This parameter doesn't affect MediaPipe Face Detector, which always attempts to detect all faces present in the input images.
- *Image downscaling*: By default, this example logs and runs on the native resolution of the provided images. Input images can be downscaled to a given maximum dimension using `--max-dim DIM`.
- *Limiting frame count*: When running from a webcam or a video file, this example can be set to stop after a given number of frames using `--max-frame MAX_FRAME`.
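Putting those options together, a plausible invocation might look like the following; the `main.py` script name is an assumption:

```sh
# Track up to two faces from the default webcam, downscale inputs to
# at most 640 px, and stop after 200 frames.
python main.py --num-faces 2 --max-dim 640 --max-frame 200

# Run once on the bundled demo image containing two faces.
python main.py --demo-image --num-faces 2
```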
Use the MediaPipe Pose solution to detect and track a human pose in video.
A very simple example of capturing from a live camera and running the OpenCV Canny edge detector on the image stream; the core loop is sketched below.
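A sketch of that loop; the entity paths and thresholds are illustrative choices, not the example's actual values:

```python
import cv2
import rerun as rr

rr.init("live_camera_edge_sketch", spawn=True)

cap = cv2.VideoCapture(0)  # default webcam
frame_nr = 0
while cap.isOpened():
    ok, bgr = cap.read()
    if not ok:
        break
    rr.set_time_sequence("frame", frame_nr)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=50, threshold2=200)
    rr.log("camera/rgb", rr.Image(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)))
    rr.log("camera/edges", rr.Image(edges))
    frame_nr += 1
```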
A minimal example of streaming frames live from an Intel RealSense depth sensor; the capture loop is sketched below.
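A sketch of that loop with `pyrealsense2`, assuming the common 1 mm depth unit (a real implementation should query the device's actual depth scale):

```python
import numpy as np
import pyrealsense2 as rs
import rerun as rr

rr.init("realsense_sketch", spawn=True)

pipeline = rs.pipeline()
pipeline.start()  # default streams on the first connected device
try:
    for frame_nr in range(100):
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        color = frames.get_color_frame()
        if not depth or not color:
            continue
        rr.set_time_sequence("frame", frame_nr)
        # meter=1000 assumes the typical 1 mm-per-unit RealSense depth scale.
        rr.log("camera/depth", rr.DepthImage(np.asanyarray(depth.get_data()), meter=1000.0))
        rr.log("camera/rgb", rr.Image(np.asanyarray(color.get_data())))
finally:
    pipeline.stop()
```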
Example of using the Rerun SDK to log the Objectron dataset. The Objectron dataset is a collection of short, object-centric video clips, which are accompanied by AR session metadata that includes camera poses, sparse point-clouds and characterization of the planar surfaces in the surrounding environment.
Uses `pyopf` to load and display a photogrammetrically reconstructed 3D point cloud in the Open Photogrammetry Format (OPF). Requires Python 3.10 or higher because of `pyopf`.
This example demonstrates how to use the Rerun SDK to log raw 3D meshes and their transform hierarchy. Simple material properties are supported.
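A sketch of both pieces with a recent Python SDK, using a single hand-made triangle instead of a loaded mesh (paths and values are illustrative):

```python
import numpy as np
import rerun as rr

rr.init("raw_mesh_sketch", spawn=True)

# Transforms logged on a parent entity apply to everything beneath it,
# which is how the hierarchy is expressed.
rr.log("world/node", rr.Transform3D(translation=[0.0, 0.0, 1.0]))

# A single triangle standing in for a loaded mesh.
rr.log(
    "world/node/mesh",
    rr.Mesh3D(
        vertex_positions=np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=np.float32),
        triangle_indices=[[0, 1, 2]],
        vertex_colors=[[255, 0, 0], [0, 255, 0], [0, 0, 255]],
    ),
)
```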
Example using a dataset from New York University with RGB and depth channels.
A minimal example of creating a ROS node that subscribes to topics and converts the messages to Rerun log calls. The solution here is mostly a toy example to show how ROS concepts can be mapped to Rerun. For more information on future improved ROS support, see the tracking issue: #1527

NOTE: Unlike many of the other examples, this one requires a system installation of ROS in addition to the packages from `requirements.txt`. It was developed and tested on top of ROS 2 Humble Hawksbill and the turtlebot3 navigation example. Installing ROS is outside the scope of this example, but you will need the equivalent of the desktop, navigation2, and turtlebot3 packages for your distribution. In addition to installing the dependencies from `requirements.txt` into a venv, you will also need to source the ROS setup script.

First, in one terminal, launch the nav2 turtlebot demo. As described in the nav demo, use the rviz window to initialize the pose estimate and set a navigation goal. You can then connect to the running ROS system by running the example node, sketched below.
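The core idea, a subscriber callback that turns each ROS message into a Rerun log call, can be sketched as follows; the topic name and image encoding are placeholder assumptions:

```python
import rclpy
from cv_bridge import CvBridge
from rclpy.node import Node
from sensor_msgs.msg import Image

import rerun as rr


class RerunBridge(Node):
    """Toy node: forward one image topic to Rerun."""

    def __init__(self) -> None:
        super().__init__("rerun_bridge")
        self.bridge = CvBridge()
        # "/camera/image_raw" is a placeholder topic name.
        self.create_subscription(Image, "/camera/image_raw", self.on_image, 10)

    def on_image(self, msg: Image) -> None:
        rr.log("camera/rgb", rr.Image(self.bridge.imgmsg_to_cv2(msg, "rgb8")))


def main() -> None:
    rr.init("ros_bridge_sketch", spawn=True)
    rclpy.init()
    rclpy.spin(RerunBridge())


if __name__ == "__main__":
    main()
```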
Example of using Rerun to log and visualize the output of Meta AI's Segment Anything model. For more info see here.
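A sketch of the model side using Meta's `segment-anything` package, assuming a downloaded ViT-H checkpoint and a placeholder input image:

```python
import cv2
import numpy as np
import rerun as rr
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

rr.init("segment_anything_sketch", spawn=True)

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("input.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts with a boolean "segmentation"

# Merge the boolean masks into one id image and log it as a segmentation overlay.
seg = np.zeros(image.shape[:2], dtype=np.uint16)
for i, m in enumerate(masks):
    seg[m["segmentation"]] = i + 1

rr.log("image", rr.Image(image))
rr.log("image/masks", rr.SegmentationImage(seg))
```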
Generate Signed Distance Fields for arbitrary meshes using both traditional methods and the one described in the DeepSDF paper, and visualize the results using the Rerun SDK. _Known issue_: On macOS, this example may present artefacts in the SDF and/or fail.
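One way to obtain such samples in Python is the third-party `mesh-to-sdf` package; that this example uses it is an assumption, and the mesh path is a placeholder:

```python
import numpy as np
import rerun as rr
import trimesh
from mesh_to_sdf import sample_sdf_near_surface  # pip install mesh-to-sdf

rr.init("sdf_sketch", spawn=True)

mesh = trimesh.load("model.obj", force="mesh")  # placeholder path
points, sdf = sample_sdf_near_surface(mesh, number_of_points=25000)

# Color the samples by sign: red inside the surface, green outside.
colors = np.where(sdf[:, None] < 0.0, [255, 0, 0], [0, 255, 0]).astype(np.uint8)
rr.log("world/sdf_samples", rr.Points3D(points, colors=colors, radii=0.002))
```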
An example using Rerun to log and visualize the output of COLMAP's sparse reconstruction. COLMAP is a general-purpose Structure-from-Motion and Multi-View Stereo pipeline with a graphical and command-line interface. In this example a short video clip has been processed offline by the COLMAP pipeline, and we use Rerun to visualize the individual camera frames, estimated camera poses, and resulting point clouds over time.
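The per-frame logging this implies can be sketched as follows, with random placeholder data standing in for COLMAP's output:

```python
import numpy as np
import rerun as rr

rr.init("colmap_sketch", spawn=True)

# One reconstructed frame: the sparse point cloud plus the estimated
# camera pose and intrinsics (all values are placeholders).
rr.set_time_sequence("frame", 0)
rr.log("world/points", rr.Points3D(np.random.rand(100, 3), colors=[200, 200, 200]))
rr.log("world/camera", rr.Transform3D(translation=[0.0, 0.0, -2.0], mat3x3=np.eye(3)))
rr.log("world/camera/image", rr.Pinhole(focal_length=500.0, width=640, height=480))
```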
This example shows how to use Rerun's C++ API to log and view VRS files. VRS is a file format optimized to record and play back streams of sensor data, such as images, audio samples, and data from any other discrete sensors, stored in per-device streams of time-stamped records.