7.2.4. Publishing Camera’s Data
7.2.4.1. Learning Objectives
In this tutorial, we demonstrate how to programmatically set up publishers for Isaac Sim Cameras at an approximate frequency.
7.2.4.2. Getting Started
Prerequisite
Completed the ROS2 Cameras tutorial
Completed ROS & ROS 2 Installation so that the necessary environment variables are set and sourced before launching Omniverse Isaac Sim.
Read through the Sensor Axes Representation (LiDAR, Cameras)
7.2.4.3. Setup a camera in a scene
To begin this tutorial, we first set up an environment with an omni.isaac.sensor Camera object. Running the script below loads a simple warehouse environment with a camera placed in the scene.
import carb
from omni.isaac.kit import SimulationApp
import sys

import argparse

parser = argparse.ArgumentParser(description="Ros2 Bridge Sample")
parser.add_argument(
    "--ros2_bridge",
    default="omni.isaac.ros2_bridge",
    nargs="?",
    choices=["omni.isaac.ros2_bridge", "omni.isaac.ros2_bridge-humble"],
)
args, unknown = parser.parse_known_args()

BACKGROUND_STAGE_PATH = "/background"
BACKGROUND_USD_PATH = "/Isaac/Environments/Simple_Warehouse/warehouse_with_forklifts.usd"

CONFIG = {"renderer": "RayTracedLighting", "headless": False}

# Example ROS2 bridge sample demonstrating the manual loading of stages and manual publishing of images
simulation_app = SimulationApp(CONFIG)
import omni
import numpy as np
from omni.isaac.core import SimulationContext
from omni.isaac.core.utils import stage, extensions, nucleus
import omni.graph.core as og
import omni.replicator.core as rep
import omni.syntheticdata._syntheticdata as sd

from omni.isaac.core.utils.prims import set_targets
from omni.isaac.sensor import Camera
import omni.isaac.core.utils.numpy.rotations as rot_utils
from omni.isaac.core.utils.prims import is_prim_path_valid
from omni.isaac.core_nodes.scripts.utils import set_target_prims

# Enable the ROS2 bridge extension
extensions.enable_extension(args.ros2_bridge)

simulation_app.update()

simulation_context = SimulationContext(stage_units_in_meters=1.0)

# Locate Isaac Sim assets folder to load environment and robot stages
assets_root_path = nucleus.get_assets_root_path()
if assets_root_path is None:
    carb.log_error("Could not find Isaac Sim assets folder")
    simulation_app.close()
    sys.exit()

# Load the warehouse environment
stage.add_reference_to_stage(assets_root_path + BACKGROUND_USD_PATH, BACKGROUND_STAGE_PATH)


############### Some Camera helper functions for setting up publishers. ###############

# Paste functions from the tutorials here

########################################################################################

# Create a Camera prim.
camera = Camera(
    prim_path="/World/camera",
    position=np.array([0.0, 0.0, 2.0]),
    frequency=20,
    resolution=(256, 256),
    orientation=rot_utils.euler_angles_to_quats(np.array([0, 0, 0]), degrees=True),
)

simulation_app.update()
camera.initialize()

############### Calling Camera publishing functions ###############

# Setup publishers.
# publish_camera_tf(camera)
# publish_camera_info(camera, 30)
# publish_rgb(camera, 30)
# publish_depth(camera, 30)
# publish_pointcloud_from_depth(camera, 30)

####################################################################

# Initialize physics before stepping the simulation
simulation_context.initialize_physics()
simulation_context.play()

while simulation_app.is_running():
    # Run with a fixed step size
    simulation_context.step(render=True)

simulation_context.stop()
simulation_app.close()
7.2.4.4. Publish camera intrinsics to CameraInfo topic
The following snippet publishes the camera intrinsics of an omni.isaac.sensor Camera to a sensor_msgs/CameraInfo topic.
def publish_camera_info(camera: Camera, freq):
    # The following code will link the camera's render product and publish the data to the specified topic name.
    render_product = camera._render_product_path
    step_size = int(60/freq)
    topic_name = camera.name+"_camera_info"
    queue_size = 1
    node_namespace = ""
    frame_id = camera.prim_path.split("/")[-1]  # This matches what the TF tree is publishing.

    stereo_offset = [0.0, 0.0]

    writer = rep.writers.get("ROS2PublishCameraInfo")
    writer.initialize(
        frameId=frame_id,
        nodeNamespace=node_namespace,
        queueSize=queue_size,
        topicName=topic_name,
        stereoOffset=stereo_offset,
    )
    writer.attach([render_product])

    gate_path = omni.syntheticdata.SyntheticData._get_node_path(
        "PostProcessDispatch" + "IsaacSimulationGate", render_product
    )

    # Set step input of the Isaac Simulation Gate nodes upstream of ROS publishers to control their execution rate
    og.Controller.attribute(gate_path + ".inputs:step").set(step_size)
    return
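The simulation gate is why the publishing frequency is only approximate: the IsaacSimulationGate node triggers the downstream publisher once every step_size render frames, so with a 60 Hz render loop the achievable rates are integer divisors of 60. The following standalone sketch mirrors the tutorial's int(60/freq) computation; the helper name gate_step_size is ours for illustration and also clamps the step to at least 1, which the tutorial code does not do.

```python
def gate_step_size(render_hz: int, target_hz: int) -> int:
    """Number of render frames between publisher triggers.

    The gate fires the publisher once every `step` frames, so the
    achieved rate is roughly render_hz / step, not exactly target_hz.
    """
    return max(1, int(render_hz / target_hz))

# At a 60 Hz render rate:
print(gate_step_size(60, 30))   # step of 2 -> publishes at ~30 Hz
print(gate_step_size(60, 25))   # step of 2 -> publishes at ~30 Hz, not 25 Hz
print(gate_step_size(60, 100))  # step clamped to 1 -> capped at ~60 Hz
```

This is also why requesting a frequency above the render rate cannot make the publisher run any faster than the render loop itself.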
7.2.4.5. Publish pointcloud from depth images
In the following snippet, we publish a pointcloud as a sensor_msgs/PointCloud2 message. The pointcloud is reconstructed from the depth image using the intrinsics of the camera.
def publish_pointcloud_from_depth(camera: Camera, freq):
    # The following code will link the camera's render product and publish the data to the specified topic name.
    render_product = camera._render_product_path
    step_size = int(60/freq)
    topic_name = camera.name+"_pointcloud"  # Set topic name to the camera's name
    queue_size = 1
    node_namespace = ""
    frame_id = camera.prim_path.split("/")[-1]  # This matches what the TF tree is publishing.

    # Note, this pointcloud publisher will simply convert the Depth image to a pointcloud using the Camera intrinsics.
    # This pointcloud generation method does not support semantic labelled objects.
    rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(
        sd.SensorType.DistanceToImagePlane.name
    )

    writer = rep.writers.get(rv + "ROS2PublishPointCloud")
    writer.initialize(
        frameId=frame_id,
        nodeNamespace=node_namespace,
        queueSize=queue_size,
        topicName=topic_name
    )
    writer.attach([render_product])

    # Set step input of the Isaac Simulation Gate nodes upstream of ROS publishers to control their execution rate
    gate_path = omni.syntheticdata.SyntheticData._get_node_path(
        rv + "IsaacSimulationGate", render_product
    )
    og.Controller.attribute(gate_path + ".inputs:step").set(step_size)

    return
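Conceptually, the depth-to-pointcloud conversion back-projects each depth pixel through the pinhole camera model. The following plain-NumPy sketch illustrates that reconstruction; it is not the code the ROS2PublishPointCloud writer runs internally, and the function name and parameters (fx, fy, cx, cy for the focal lengths and principal point) are our own illustration.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (distance to the image plane, in meters)
    into an (N, 3) pointcloud in the camera's optical frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# A flat wall 2 m in front of a tiny 4x4 camera: the pixel at the principal
# point back-projects to a point straight ahead on the optical axis.
points = depth_to_pointcloud(np.full((4, 4), 2.0), fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

Because the back-projection uses only the depth value and the intrinsics, any per-pixel metadata such as semantic labels is lost, which is why this method does not support semantically labelled objects.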
7.2.4.6. Publish RGB images
def publish_rgb(camera: Camera, freq):
    # The following code will link the camera's render product and publish the data to the specified topic name.
    render_product = camera._render_product_path
    step_size = int(60/freq)
    topic_name = camera.name+"_rgb"
    queue_size = 1
    node_namespace = ""
    frame_id = camera.prim_path.split("/")[-1]  # This matches what the TF tree is publishing.

    rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(sd.SensorType.Rgb.name)
    writer = rep.writers.get(rv + "ROS2PublishImage")
    writer.initialize(
        frameId=frame_id,
        nodeNamespace=node_namespace,
        queueSize=queue_size,
        topicName=topic_name
    )
    writer.attach([render_product])

    # Set step input of the Isaac Simulation Gate nodes upstream of ROS publishers to control their execution rate
    gate_path = omni.syntheticdata.SyntheticData._get_node_path(
        rv + "IsaacSimulationGate", render_product
    )
    og.Controller.attribute(gate_path + ".inputs:step").set(step_size)

    return
7.2.4.7. Publish depth images
def publish_depth(camera: Camera, freq):
    # The following code will link the camera's render product and publish the data to the specified topic name.
    render_product = camera._render_product_path
    step_size = int(60/freq)
    topic_name = camera.name+"_depth"
    queue_size = 1
    node_namespace = ""
    frame_id = camera.prim_path.split("/")[-1]  # This matches what the TF tree is publishing.

    rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(
        sd.SensorType.DistanceToImagePlane.name
    )
    writer = rep.writers.get(rv + "ROS2PublishImage")
    writer.initialize(
        frameId=frame_id,
        nodeNamespace=node_namespace,
        queueSize=queue_size,
        topicName=topic_name
    )
    writer.attach([render_product])

    # Set step input of the Isaac Simulation Gate nodes upstream of ROS publishers to control their execution rate
    gate_path = omni.syntheticdata.SyntheticData._get_node_path(
        rv + "IsaacSimulationGate", render_product
    )
    og.Controller.attribute(gate_path + ".inputs:step").set(step_size)

    return
7.2.4.8. Publish a TF Tree for the camera pose
The pointcloud published by the function above is expressed in the ROS camera axes convention (-Y up, +Z forward). To make this pointcloud easy to visualize in RViz, the following snippet publishes a TF Tree to the /tf topic containing two frames:

1. {camera_frame_id}: This is the camera’s pose in the ROS camera convention (-Y up, +Z forward). Pointclouds are published in this frame.

2. {camera_frame_id}_world: This is the camera’s pose in the world axes convention (+Z up, +X forward). This reflects the true pose of the camera.

The TF Tree looks like this:

world -> {camera_frame_id} is a dynamic transform from the origin to the camera in the ROS camera convention, following any movement of the camera.

{camera_frame_id} -> {camera_frame_id}_world is a static transform consisting of a rotation only, with zero translation. This static transform can be represented by the quaternion [0.5, -0.5, 0.5, 0.5] in [w, x, y, z] convention.

Since the pointcloud is published in {camera_frame_id}, it is encouraged to set the frame_id of the pointcloud topic to {camera_frame_id}. The resulting pointclouds can then be visualized in the world frame in RViz.
def publish_camera_tf(camera: Camera):
    camera_prim = camera.prim_path

    if not is_prim_path_valid(camera_prim):
        raise ValueError(f"Camera path '{camera_prim}' is invalid.")

    try:
        # Generate the camera_frame_id. OmniActionGraph will use the last part of
        # the full camera prim path as the frame name, so we will extract it here
        # and use it for the pointcloud frame_id.
        camera_frame_id = camera_prim.split("/")[-1]

        # Generate an action graph associated with camera TF publishing.
        ros_camera_graph_path = "/CameraTFActionGraph"

        # If a camera graph is not found, create a new one.
        if not is_prim_path_valid(ros_camera_graph_path):
            (ros_camera_graph, _, _, _) = og.Controller.edit(
                {
                    "graph_path": ros_camera_graph_path,
                    "evaluator_name": "execution",
                    "pipeline_stage": og.GraphPipelineStage.GRAPH_PIPELINE_STAGE_SIMULATION,
                },
                {
                    og.Controller.Keys.CREATE_NODES: [
                        ("OnTick", "omni.graph.action.OnTick"),
                        ("IsaacClock", "omni.isaac.core_nodes.IsaacReadSimulationTime"),
                        ("RosPublisher", "omni.isaac.ros2_bridge.ROS2PublishClock"),
                    ],
                    og.Controller.Keys.CONNECT: [
                        ("OnTick.outputs:tick", "RosPublisher.inputs:execIn"),
                        ("IsaacClock.outputs:simulationTime", "RosPublisher.inputs:timeStamp"),
                    ]
                }
            )

        # Generate 2 nodes associated with each camera: TF from world to ROS camera convention, and world frame.
        og.Controller.edit(
            ros_camera_graph_path,
            {
                og.Controller.Keys.CREATE_NODES: [
                    ("PublishTF_"+camera_frame_id, "omni.isaac.ros2_bridge.ROS2PublishTransformTree"),
                    ("PublishRawTF_"+camera_frame_id+"_world", "omni.isaac.ros2_bridge.ROS2PublishRawTransformTree"),
                ],
                og.Controller.Keys.SET_VALUES: [
                    ("PublishTF_"+camera_frame_id+".inputs:topicName", "/tf"),
                    # Note if topic_name is changed to something else besides "/tf",
                    # it will not be captured by the ROS tf broadcaster.
                    ("PublishRawTF_"+camera_frame_id+"_world.inputs:topicName", "/tf"),
                    ("PublishRawTF_"+camera_frame_id+"_world.inputs:parentFrameId", camera_frame_id),
                    ("PublishRawTF_"+camera_frame_id+"_world.inputs:childFrameId", camera_frame_id+"_world"),
                    # Static transform from ROS camera convention to world (+Z up, +X forward) convention:
                    ("PublishRawTF_"+camera_frame_id+"_world.inputs:rotation", [0.5, -0.5, 0.5, 0.5]),
                ],
                og.Controller.Keys.CONNECT: [
                    (ros_camera_graph_path+"/OnTick.outputs:tick",
                        "PublishTF_"+camera_frame_id+".inputs:execIn"),
                    (ros_camera_graph_path+"/OnTick.outputs:tick",
                        "PublishRawTF_"+camera_frame_id+"_world.inputs:execIn"),
                    (ros_camera_graph_path+"/IsaacClock.outputs:simulationTime",
                        "PublishTF_"+camera_frame_id+".inputs:timeStamp"),
                    (ros_camera_graph_path+"/IsaacClock.outputs:simulationTime",
                        "PublishRawTF_"+camera_frame_id+"_world.inputs:timeStamp"),
                ],
            },
        )
    except Exception as e:
        print(e)

    # Add target prims for the USD pose. All other frames are static.
    set_target_prims(
        primPath=ros_camera_graph_path+"/PublishTF_"+camera_frame_id,
        inputName="inputs:targetPrims",
        targetPrimPaths=[camera_prim],
    )
    return
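As a sanity check on the static rotation above, the quaternion [0.5, -0.5, 0.5, 0.5] (in [w, x, y, z] convention) can be converted to a rotation matrix to confirm it is a pure 90-degree axis permutation with determinant +1, i.e. a proper rotation and not a reflection. This sketch is for verification only and is not part of the tutorial script.

```python
import numpy as np

def quat_to_matrix(w, x, y, z):
    """Standard quaternion-to-rotation-matrix conversion ([w, x, y, z] order)."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

R = quat_to_matrix(0.5, -0.5, 0.5, 0.5)
print(R)                 # every entry is 0, 1, or -1: a pure axis permutation
print(np.linalg.det(R))  # +1.0: a proper rotation, no reflection
```

Because the matrix only permutes and flips axes, the transform between the ROS camera convention frame and the world-convention frame involves no scaling and no translation, matching the zero-translation static transform published above.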
7.2.4.9. Running the Example
Save the above script (with the helper functions pasted in and the publisher calls uncommented) and run it using python.sh in the Isaac Sim folder. You should expect to see the following:
ros2 topic list
/camera_camera_info
/camera_depth
/camera_pointcloud
/camera_rgb
/clock
/rosout
/rosout_agg
/tf
The frames published by TF will look like the following:
Now, we can visualize the pointcloud and depth images via RViz2. Open RViz2, and set the “Fixed Frame” field to world.
Then, enable the topics for the pointclouds and depth images. You should be able to see them.
7.2.4.10. Summary
This tutorial demonstrated how to programmatically set up publishers for Isaac Sim Cameras at an approximate frequency.