NIST Manufacturing Objects and Assemblies Dataset (MOAD)
Funded by the National Institute of Standards and Technology (NIST) under awards 70NANB25H124, 70NANB24H047, and 70NANB22H114
A collaboration between NIST and the UMass Lowell NERVE Center. This page serves as the access point for all MOAD data collected so far, as well as instructions for replicating the object scanning rig. It is also the place to contact, and make suggestions to, the groups collecting object data. A Google Sheet showcasing the object data can be found below.
This release (v2) keeps the sensor module configuration used for version 1. Five camera modules, each containing a Canon Rebel S3 DSLR camera and an Intel RealSense D455 depth camera, are positioned in a quarter-circle arc around a motorized turntable. For each scan, data is captured in 5° turntable rotation increments, yielding 360 DSLR images and 360 colored point clouds. Several poses of each object are captured as separate scans to cover all viewing angles; for the vast majority of objects, 2 poses were captured. MOADv2 currently contains a total of 76 objects, including all subcomponents for NIST-ATB 1, 2, 3, and 4.
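The per-scan counts follow directly from the rig geometry: a 5° step gives 72 turntable stops per rotation, and 72 stops across 5 cameras yields 360 views. A minimal sketch of that arithmetic (the constants come from the description above):

```python
# Capture pattern implied by the MOADv2 rig description above:
# 5 camera modules, 5-degree turntable steps, and (typically) 2 poses per object.
NUM_CAMERAS = 5
TURNTABLE_STEP_DEG = 5
POSES_PER_OBJECT = 2  # the vast majority of objects

turntable_stops = range(0, 360, TURNTABLE_STEP_DEG)      # 72 stops per rotation
views_per_pose = NUM_CAMERAS * len(turntable_stops)      # 5 * 72 = 360 images / point clouds
views_per_object = views_per_pose * POSES_PER_OBJECT     # 720 views for a 2-pose object

print(f"{len(turntable_stops)} stops, {views_per_pose} views per pose, "
      f"{views_per_object} views per object")
```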
Each object has the following data types associated with it, available for download (a short loading sketch follows this list):
Object CAD model (.stl)
Full scans of 2 object poses, each containing:
360 high-resolution RGB images (24 megapixel, .png)
360 colored point clouds (640x480, .ply)
Camera transforms file (.json)
Dense NeRF scene reconstruction point cloud (.ply)
Data created by fusing object poses together:
A dense object point cloud (raw_cloud, .ply)
Colored watertight 3D mesh (raw_mesh, .ply)
Cleaned object 3D mesh (.obj, .usd)
Blender file (.blend)
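As a rough illustration of working with a downloaded object, the sketch below loads the fused point cloud, the cleaned mesh, and a camera transforms file. It assumes Open3D for the .ply/.obj files and standard JSON for the transforms; the directory layout and file names are placeholders inferred from the list above, not guaranteed paths in the actual download.

```python
import json
from pathlib import Path

import open3d as o3d  # third-party: pip install open3d

# Placeholder directory layout; actual file names in a MOADv2 download may differ.
obj_dir = Path("moad/objects/example_object")

# Fused, dense object point cloud (raw_cloud, .ply)
cloud = o3d.io.read_point_cloud(str(obj_dir / "raw_cloud.ply"))
print(f"point cloud: {len(cloud.points)} points")

# Cleaned object mesh (.obj)
mesh = o3d.io.read_triangle_mesh(str(obj_dir / "cleaned_mesh.obj"))
print(f"mesh: {len(mesh.vertices)} vertices, {len(mesh.triangles)} triangles")

# Camera transforms for one scan pose (schema not specified here; inspect the keys)
with open(obj_dir / "pose_1" / "transforms.json") as f:
    transforms = json.load(f)
print(f"transforms file keys: {list(transforms.keys())}")

# Quick visual sanity check
o3d.visualization.draw_geometries([cloud])
```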
An easily configurable Python script for downloading whichever data types suit your use case is available here:
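The linked script should be used for real downloads; purely to illustrate the idea of selecting data types per object, here is a minimal hypothetical sketch. The base URL, object names, and file names below are placeholders, not real MOAD endpoints.

```python
"""Hypothetical sketch of a per-data-type downloader.
The BASE_URL, object names, and file naming scheme are placeholders;
use the official MOAD download script for the real paths."""
from pathlib import Path

import requests  # third-party: pip install requests

BASE_URL = "https://example.org/moad"               # placeholder, not the real host
OBJECTS = ["atb1_gear_large", "atb2_connector"]     # placeholder object names
DATA_TYPES = ["raw_cloud.ply", "cleaned_mesh.obj"]  # pick the types you need

out_dir = Path("moad_downloads")
out_dir.mkdir(exist_ok=True)

for obj in OBJECTS:
    for dtype in DATA_TYPES:
        url = f"{BASE_URL}/{obj}/{dtype}"
        dest = out_dir / obj / dtype
        dest.parent.mkdir(parents=True, exist_ok=True)
        resp = requests.get(url, timeout=60)
        resp.raise_for_status()
        dest.write_bytes(resp.content)
        print(f"saved {dest}")
```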
Additional data is planned for future collection, following a methodology similar to that of the YCB Object and Model Set:
Download links for retrieving particular portions of the dataset (e.g., ATB1 USD models)
Segmentation masks for images
More file type options for ease of use (URDF)
Synthetic datasets generated using the reconstructed models
Pretrained ATB object detection models
Benchmarking datasets for object detection and pose estimation tasks
This dataset was the first round of full data collection for all ATB taskboards. While still available for download, it lacks many of the qualities introduced with MOADv2, such as a fully controlled lighting environment, a wide camera depth of field that keeps each object fully in focus at all times, and reconstructed object point clouds and meshes. MOADv1 contains only image data, colored point clouds, and some scan information in a text file for each object. Individual object data can be accessed by clicking the links in the spreadsheets below, or data can be downloaded in batches using this Python script: