Robotics Engineer | ROS2 Specialist | AI Enthusiast
Master's in Robotics graduate with extensive experience in ROS2, computer vision, and autonomous systems. Passionate about developing robust robotics solutions and sharing knowledge with the community.
Currently working on cutting-edge robotics projects involving ROS2, sensor fusion, and machine learning for autonomous navigation and manipulation.
```bash
#!/bin/bash
# Name: Sai Jagadeesh Muralikrishnan
# Location: College Park, MD
#
# Education:
# - M.Eng in Robotics @ University of Maryland (2023-2025)
# - B.Eng in Mechatronics @ REC Chennai (2018-2022)
#
# Experience:
# - Robotics Intern @ Kick Robotics
# - Embedded Engineer @ TuTr Hyperloop
# - Research Assistant @ Maryland Robotics Center
#
# Skills:
# - Robotics: ROS2, SLAM, Sensor Fusion
# - Programming: Python, C++, CUDA
# - ML/CV: PyTorch, OpenCV, YOLOv8
# - Simulation: Gazebo, Isaac Sim
#
# Contact:
# - Email: saijagadeesh.muralikrishnan@gmail.com
# - LinkedIn: linkedin.com/in/sai-jagadeesh-m
# - Website: saijagadeesh.com
```
Quick setup guide for TurtleBot3 with ROS2 Humble on Ubuntu 22.04:
```bash
# Install TurtleBot3 packages
sudo apt install ros-humble-turtlebot3-*

# Set TurtleBot3 model (burger, waffle, waffle_pi)
echo "export TURTLEBOT3_MODEL=waffle_pi" >> ~/.bashrc
source ~/.bashrc

# Launch simulation
ros2 launch turtlebot3_gazebo turtlebot3_world.launch.py

# Launch teleop
ros2 run turtlebot3_teleop teleop_keyboard
```
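Once the simulation is up, a quick way to confirm sensor data is flowing is to listen on the LiDAR topic. A minimal sketch, assuming the default `/scan` LaserScan topic published by the TurtleBot3 Gazebo worlds:

```python
# scan_check.py - minimal sanity check; assumes the default /scan topic
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan


class ScanCheck(Node):
    def __init__(self):
        super().__init__('scan_check')
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, msg):
        # Report the closest valid range reading, then exit
        valid = [r for r in msg.ranges if msg.range_min <= r <= msg.range_max]
        if valid:
            self.get_logger().info(f'Closest obstacle: {min(valid):.2f} m')
        raise SystemExit


def main():
    rclpy.init()
    node = ScanCheck()
    try:
        rclpy.spin(node)  # SystemExit raised in the callback breaks the spin
    except SystemExit:
        pass
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Run it with `python3 scan_check.py` while the simulation is running; if nothing prints, check `ros2 topic list` for the actual scan topic name.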
Create and activate a Python 3.10 virtual environment for ROS2 development:
```bash
# Install Python 3.10 if not present
sudo apt install python3.10 python3.10-venv

# Create virtual environment
# (tip: add --system-site-packages if you need apt-installed packages
#  such as rclpy to be importable inside the venv)
python3.10 -m venv ~/ros2_venv

# Activate environment
source ~/ros2_venv/bin/activate

# Install ROS2 build tooling (after sourcing ROS2)
pip install -U colcon-common-extensions

# Install other dependencies
pip install numpy opencv-python pyyaml
```
Add this to your `~/.bashrc` to open Terminator with 4 panes automatically:
```bash
# Function to launch terminator with 4 panes
quadterminator() {
    terminator --layout ros2quad &
    sleep 0.5
    xdotool key Super+Up
}

# Create the layout file if it doesn't exist
if [ ! -f ~/.config/terminator/config ]; then
    mkdir -p ~/.config/terminator
    cat > ~/.config/terminator/config << 'EOL'
[global_config]
  title_transmit_fg_color = "#2e3436"
  title_transmit_bg_color = "#3465a4"
  title_inactive_fg_color = "#ffffff"
  title_inactive_bg_color = "#555753"
[keybindings]
[profiles]
  [[default]]
    cursor_color = "#aaa"
    foreground_color = "#ffffff"
    background_color = "#000000"
    palette = "#2e3436:#cc0000:#4e9a06:#c4a000:#3465a4:#75507b:#06989a:#d3d7cf:#555753:#ef2929:#8ae234:#fce94f:#729fcf:#ad7fa8:#34e2e2:#eeeeec"
[layouts]
  [[ros2quad]]
    [[[child0]]]
      type = Window
      parent = ""
    [[[child1]]]
      type = HPaned
      parent = child0
    [[[child2]]]
      type = VPaned
      parent = child1
    [[[child3]]]
      type = VPaned
      parent = child1
      position = 50
    [[[terminal4]]]
      type = Terminal
      parent = child2
      profile = default
      command = "echo 'TOP LEFT - ROS COMMANDS'"
    [[[terminal5]]]
      type = Terminal
      parent = child2
      profile = default
      command = "echo 'BOTTOM LEFT - BUILD/LOGS'"
    [[[terminal6]]]
      type = Terminal
      parent = child3
      profile = default
      command = "echo 'TOP RIGHT - EDITOR'"
    [[[terminal7]]]
      type = Terminal
      parent = child3
      profile = default
      command = "echo 'BOTTOM RIGHT - MONITORING'"
[plugins]
EOL
fi
```
Run `quadterminator` in your terminal to launch the pre-configured layout.
Setup script for integrating VOXL2 with ROS2 and PX4 Autopilot:
```bash
#!/bin/bash
# Install required dependencies
sudo apt-get update
sudo apt-get install -y ros-humble-mavros ros-humble-mavros-extras \
    ros-humble-mavros-msgs ros-humble-geographic-msgs \
    ros-humble-tf2-eigen ros-humble-tf2-ros

# Download PX4 messages
git clone https://github.com/PX4/px4_msgs.git ~/px4_ros2/src/px4_msgs

# Build the workspace
cd ~/px4_ros2
colcon build --symlink-install

# Configure VOXL2 for ROS2
# (note: `sudo echo ... >> file` fails because the redirection runs as the
#  normal user, so append with `sudo tee -a` instead)
echo "export MAVLINK_DIALECT=common" | sudo tee -a /etc/profile.d/voxl.sh
echo "export ROS_DOMAIN_ID=42" | sudo tee -a /etc/profile.d/voxl.sh
echo "source /opt/ros/humble/setup.bash" | sudo tee -a /etc/profile.d/voxl.sh
echo "source ~/px4_ros2/install/setup.bash" | sudo tee -a /etc/profile.d/voxl.sh

# Setup PX4 communication
cat > ~/px4_ros2/src/run_px4.sh << 'EOL'
#!/bin/bash
micrortps_agent -t UDP -n voxl2 &
ros2 run px4_ros_com mavlink_router -e px4_sender -t UDP -d 14550 -r 14540 &
EOL
chmod +x ~/px4_ros2/src/run_px4.sh

echo "VOXL2 ROS2 setup complete. Reboot to apply changes."
```
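After a reboot, you can verify the link by listing topics: with the micrortps bridge, PX4 uORB topics typically appear under a `/fmu/...` prefix (exact names depend on your px4_ros_com release), so `ros2 topic list | grep fmu` is a quick sanity check.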
Built ROS2 validation pipelines in Python and C++ using PyTest and GoogleTest, covering motion, sensor integrity, and firmware stability tests.
```python
# Example test for motion validation
import pytest
import rclpy
from example_interfaces.msg import Float32MultiArray


class TestMotionNodes:
    @classmethod
    def setup_class(cls):
        rclpy.init()
        cls.node = rclpy.create_node('test_motion')

    @classmethod
    def teardown_class(cls):
        cls.node.destroy_node()
        rclpy.shutdown()

    def test_velocity_publisher(self):
        pub = self.node.create_publisher(Float32MultiArray, '/cmd_vel', 10)

        # Test publishing
        msg = Float32MultiArray(data=[0.1, 0.0])  # Linear x, angular z
        pub.publish(msg)

        # Give some time for the message to be processed
        rclpy.spin_once(self.node, timeout_sec=0.1)

        # Add your assertions here
        assert True  # Replace with actual assertions
```
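In practice these tests run through `colcon test --packages-select <your_package>` (which drives pytest for ament_python packages) or directly with `python3 -m pytest`. Note the `/cmd_vel` topic and `Float32MultiArray` type here are illustrative; most robot bases expect `geometry_msgs/Twist` on `/cmd_vel`.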
Implemented multi-view image registration in ROS2 using OpenCV with feature matching and homography estimation.
```python
import cv2
import numpy as np
import rclpy
from rclpy.node import Node
from cv_bridge import CvBridge
from sensor_msgs.msg import Image


class ImageRegistrationNode(Node):
    def __init__(self):
        super().__init__('image_registration')
        self.bridge = CvBridge()

        # Subscribers
        self.image1_sub = self.create_subscription(
            Image, '/camera1/image_raw', self.image1_callback, 10)
        self.image2_sub = self.create_subscription(
            Image, '/camera2/image_raw', self.image2_callback, 10)

        # Feature detector and matcher
        self.orb = cv2.ORB_create()
        self.bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        self.last_image1 = None
        self.last_image2 = None

    def image1_callback(self, msg):
        self.last_image1 = self.bridge.imgmsg_to_cv2(msg, "bgr8")
        self.process_images()

    def image2_callback(self, msg):
        self.last_image2 = self.bridge.imgmsg_to_cv2(msg, "bgr8")
        self.process_images()

    def process_images(self):
        if self.last_image1 is None or self.last_image2 is None:
            return

        # Find keypoints and descriptors
        kp1, des1 = self.orb.detectAndCompute(self.last_image1, None)
        kp2, des2 = self.orb.detectAndCompute(self.last_image2, None)
        if des1 is None or des2 is None:
            return

        # Match features, best matches first
        matches = sorted(self.bf.match(des1, des2), key=lambda m: m.distance)
        if len(matches) < 4:
            return  # findHomography needs at least 4 correspondences

        # Extract matched points
        src_pts = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst_pts = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # Find homography (maps points from camera1 into camera2)
        h_matrix, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
        # You can now use h_matrix to transform images or points between views


def main(args=None):
    rclpy.init(args=args)
    node = ImageRegistrationNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```
Step-by-step guide to calibrating cameras in ROS2 using OpenCV's checkerboard method.
```bash
mkdir -p ~/ros2_ws/src
cd ~/ros2_ws/src
ros2 pkg create --build-type ament_python camera_calibration
cd camera_calibration
```
```bash
sudo apt install ros-${ROS_DISTRO}-cv-bridge ros-${ROS_DISTRO}-image-transport
pip install opencv-python numpy
```
Create a file `camera_calibration/calibration_node.py`:
```python
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2
import numpy as np
from datetime import datetime


class CameraCalibrationNode(Node):
    def __init__(self):
        super().__init__('camera_calibration_node')

        # Parameters
        self.declare_parameter('chessboard_width', 9)
        self.declare_parameter('chessboard_height', 6)
        self.declare_parameter('square_size', 0.024)  # meters
        self.declare_parameter('image_topic', '/camera/image_raw')

        # Initialize variables
        self.chessboard_size = (
            self.get_parameter('chessboard_width').value,
            self.get_parameter('chessboard_height').value)
        self.square_size = self.get_parameter('square_size').value

        self.bridge = CvBridge()
        self.image_sub = self.create_subscription(
            Image,
            self.get_parameter('image_topic').value,
            self.image_callback, 10)

        # Calibration variables
        self.criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
        self.objp = np.zeros((self.chessboard_size[0] * self.chessboard_size[1], 3), np.float32)
        self.objp[:, :2] = np.mgrid[0:self.chessboard_size[0],
                                    0:self.chessboard_size[1]].T.reshape(-1, 2) * self.square_size

        self.objpoints = []  # 3D points in real world space
        self.imgpoints = []  # 2D points in image plane
        self.calibration_done = False

    def image_callback(self, msg):
        if self.calibration_done:
            return
        try:
            cv_image = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
            gray = cv2.cvtColor(cv_image, cv2.COLOR_BGR2GRAY)

            # Find chessboard corners
            ret, corners = cv2.findChessboardCorners(
                gray, self.chessboard_size,
                cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_NORMALIZE_IMAGE)

            if ret:
                self.get_logger().info('Chessboard detected!')

                # Refine corner locations
                corners2 = cv2.cornerSubPix(
                    gray, corners, (11, 11), (-1, -1), self.criteria)

                # Draw and display the corners
                cv2.drawChessboardCorners(cv_image, self.chessboard_size, corners2, ret)
                cv2.imshow('Calibration', cv_image)
                cv2.waitKey(500)  # Display for 500 ms

                # Add to calibration data
                self.objpoints.append(self.objp)
                self.imgpoints.append(corners2)

                if len(self.objpoints) >= 15:  # Use at least 15 images
                    self.calibrate_camera(gray.shape[::-1])
        except Exception as e:
            self.get_logger().error(f'Error processing image: {str(e)}')

    def calibrate_camera(self, image_size):
        self.get_logger().info('Starting calibration...')
        try:
            ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
                self.objpoints, self.imgpoints, image_size, None, None)
            if ret:
                self.get_logger().info('Calibration successful!')
                self.save_calibration(mtx, dist)
                self.calibration_done = True
        except Exception as e:
            self.get_logger().error(f'Calibration failed: {str(e)}')

    def save_calibration(self, camera_matrix, distortion_coeffs):
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
        filename = f'camera_calibration_{timestamp}.npz'
        np.savez(filename,
                 camera_matrix=camera_matrix,
                 distortion_coeffs=distortion_coeffs)
        self.get_logger().info(f'Saved calibration to {filename}')


def main(args=None):
    rclpy.init(args=args)
    node = CameraCalibrationNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```
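To use the result, load the saved `.npz` and undistort a frame with OpenCV. A minimal sketch — the filename below is a placeholder for whatever timestamped file the node wrote:

```python
import cv2
import numpy as np

# Load the calibration saved by the node (replace with your actual filename)
data = np.load('camera_calibration_20250101_120000.npz')
mtx = data['camera_matrix']
dist = data['distortion_coeffs']

img = cv2.imread('test_frame.jpg')
h, w = img.shape[:2]

# Refine the camera matrix for this resolution, then undistort
new_mtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
undistorted = cv2.undistort(img, mtx, dist, None, new_mtx)
cv2.imwrite('undistorted.jpg', undistorted)
```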
Create `launch/calibration.launch.py`:
```python
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='camera_calibration',
            executable='calibration_node',
            name='camera_calibration',
            parameters=[{
                'chessboard_width': 9,
                'chessboard_height': 6,
                'square_size': 0.024,
                'image_topic': '/camera/image_raw'
            }]
        )
    ])
```
```python
# Add this to setup.py if not present
entry_points={
    'console_scripts': [
        'calibration_node = camera_calibration.calibration_node:main',
    ],
},
```

```bash
# Build and run
cd ~/ros2_ws
colcon build --packages-select camera_calibration
source install/setup.bash
ros2 launch camera_calibration calibration.launch.py
```
Setup Gazebo Fortress with ROS2 for PX4 simulation with cameras and LiDAR.
```bash
sudo sh -c 'echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable `lsb_release -cs` main" > /etc/apt/sources.list.d/gazebo-stable.list'
wget https://packages.osrfoundation.org/gazebo.key -O - | sudo apt-key add -
sudo apt-get update
# Gazebo Fortress ships as the ignition-fortress metapackage
sudo apt-get install ignition-fortress
```
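If the install succeeded, `ign gazebo --version` should report a Fortress (6.x) release; Fortress still ships under the `ign` command-line tool rather than `gz`.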
```bash
sudo apt install ros-${ROS_DISTRO}-ros-gz-sim ros-${ROS_DISTRO}-ros-gz-bridge

# Launch Gazebo with the ROS2 bridge (-r runs the sim immediately, -v 4 sets verbosity)
ros2 launch ros_gz_sim gz_sim.launch.py gz_args:="-r -v 4 empty.sdf"

# In another terminal, spawn a model
ros2 service call /world/empty/create ros_gz_interfaces/srv/SpawnEntity \
'{
  entity_factory: {
    name: "drone",
    allow_renaming: false,
    pose: {
      position: {x: 0.0, y: 0.0, z: 1.0},
      orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0}
    },
    sdf: "<sdf version=\"1.7\">
            <model name=\"drone\">
              <include>
                <uri>model://iris</uri>
              </include>
              <plugin filename=\"libignition-gazebo-multicopter-motor-model-system.so\"
                      name=\"ignition::gazebo::systems::MulticopterMotorModel\">
                <robotNamespace>iris</robotNamespace>
                <jointName>iris::rotor_0_joint</jointName>
                <linkName>iris::rotor_0</linkName>
                <turningDirection>ccw</turningDirection>
                <timeConstantUp>0.0125</timeConstantUp>
                <timeConstantDown>0.025</timeConstantDown>
                <maxRotVelocity>1000</maxRotVelocity>
                <motorConstant>8.54858e-06</motorConstant>
                <momentConstant>0.06</momentConstant>
                <commandSubTopic>command/motor_speed</commandSubTopic>
                <motorNumber>0</motorNumber>
                <rotorDragCoefficient>8.06428e-05</rotorDragCoefficient>
                <rollingMomentCoefficient>1e-06</rollingMomentCoefficient>
                <motorSpeedPubTopic>motor_speed/0</motorSpeedPubTopic>
                <rotorVelocitySlowdownSim>10</rotorVelocitySlowdownSim>
              </plugin>
            </model>
          </sdf>"
  }
}'
```
Modify the SDF to include sensors:
```xml
<!-- In the model SDF, add this before the closing </model> tag -->
<link name="camera_link">
  <pose>0.1 0 0.1 0 1.57 0</pose>
  <visual name="camera_visual">
    <geometry>
      <box>
        <size>0.05 0.05 0.05</size>
      </box>
    </geometry>
  </visual>
  <sensor name="camera" type="camera">
    <camera>
      <horizontal_fov>1.047</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
      </image>
      <clip>
        <near>0.1</near>
        <far>100</far>
      </clip>
    </camera>
    <always_on>1</always_on>
    <update_rate>30</update_rate>
    <visualize>true</visualize>
    <topic>camera</topic>
    <plugin filename="libignition-gazebo-ros-camera-system.so"
            name="ignition::gazebo::systems::Camera">
      <ros>
        <namespace>/model/iris</namespace>
      </ros>
      <camera_name>front_camera</camera_name>
      <frame_name>camera_link</frame_name>
    </plugin>
  </sensor>
</link>
```
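With Fortress, sensor data lands on a Gazebo transport topic rather than directly on the ROS2 graph; the usual pattern is to forward it with ros_gz_bridge, e.g. `ros2 run ros_gz_bridge parameter_bridge /camera@sensor_msgs/msg/Image@ignition.msgs.Image` (adjust the topic to match the sensor's `<topic>` tag).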
This script builds the ROS2 package you are currently working in, with separate Debug and Release configurations for development and release.
```bash
#!/bin/bash
# Exit on error
set -e

# Define build type (dev or release)
# (Release mode already defines NDEBUG via CMake, so no extra flag is needed)
if [ "$1" == "--release" ]; then
    BUILD_TYPE="Release"
    BUILD_FLAGS="--cmake-args -DCMAKE_BUILD_TYPE=Release"
    echo "Building in Release mode"
else
    BUILD_TYPE="Debug"
    BUILD_FLAGS="--cmake-args -DCMAKE_BUILD_TYPE=Debug"
    echo "Building in Debug mode"
fi

# Clear previous build
if [ "$2" == "--clean" ]; then
    echo "Cleaning previous build..."
    rm -rf build install log
fi

# Source ROS setup
if [ -z "$ROS_DISTRO" ]; then
    echo "ROS environment not set, sourcing..."
    if [ -f "/opt/ros/humble/setup.bash" ]; then
        source /opt/ros/humble/setup.bash
    else
        echo "ERROR: ROS setup file not found. Update the path or install ROS."
        exit 1
    fi
fi

# Find the package directory by walking up until a package.xml appears
PKG_DIR=$(pwd)
while [ "$PKG_DIR" != "/" ] && [ ! -f "$PKG_DIR/COLCON_IGNORE" ] && [ ! -f "$PKG_DIR/package.xml" ]; do
    PKG_DIR=$(dirname "$PKG_DIR")
done

if [ -f "$PKG_DIR/package.xml" ]; then
    echo "Building ${PKG_DIR##*/} package in $BUILD_TYPE mode"
    cd "$PKG_DIR/../.." || exit  # Move to workspace root (assumes src/<package> layout)
else
    echo "No package.xml found in any parent directory."
    echo "Please run this script from within your ROS2 package directory."
    exit 1
fi

# Build commands
colcon build \
    $BUILD_FLAGS \
    --symlink-install \
    --event-handlers console_direct+ \
    --packages-select "${PKG_DIR##*/}" \
    --parallel-workers $(nproc)

echo "Build complete. Source the workspace with:"
echo "source ./install/setup.bash"
```
Usage: `./build_ros2.sh [--release] [--clean]`
Convert ROS2 bag files containing image topics to mp4 videos with timestamps.
```bash
#!/bin/bash
if [ "$#" -ne 4 ]; then
    echo "Usage: $0 <input_bag> <image_topic> <output_video> <fps>"
    exit 1
fi

INPUT_BAG=$1
IMAGE_TOPIC=$2
OUTPUT_VIDEO=$3
FPS=$4

# Create temp directory
TEMP_DIR=$(mktemp -d)
echo "Created temp directory: $TEMP_DIR"

# Extract images from bag file
# (parameters use ROS2 `--ros-args` syntax; parameter names follow the
#  ROS2 port of image_view -- check `ros2 param list` if they differ)
echo "Extracting images from bag file..."
ros2 run image_view extract_images --ros-args \
    -p sec_per_frame:=0.0 \
    -p filename_format:="$TEMP_DIR/frame_%04d.jpg" \
    -r image:="$IMAGE_TOPIC" &
EXTRACT_PID=$!

ros2 bag play "$INPUT_BAG" --rate 1.0 --topics "$IMAGE_TOPIC"
kill $EXTRACT_PID

# Convert images to video
echo "Converting images to video..."
ffmpeg -y -r "$FPS" -i "$TEMP_DIR/frame_%04d.jpg" -c:v libx264 -vf "fps=$FPS,format=yuv420p" "$OUTPUT_VIDEO"

# Clean up
rm -r "$TEMP_DIR"
echo "Done. Output video saved to $OUTPUT_VIDEO"
```
Usage: `./bag_to_video.sh input.bag /camera/image_raw output.mp4 30`
A simple Bash script to monitor the status of ROS2 nodes with color coding.
```bash
#!/bin/bash
# ROS2 Node Status Monitor

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[0;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Check ROS environment
if [ -z "$ROS_DISTRO" ]; then
    echo -e "${RED}ERROR: ROS2 environment not sourced!${NC}"
    exit 1
fi

# Count the indented entries under a section header of `ros2 node info`
# (relies on the two-space indentation of section headers in its output;
#  a plain `grep -c "Publishers:"` would always return 1, the header line)
count_entries() {
    echo "$1" | awk -v sec="$2" \
        '$0 ~ "^  "sec":" {f=1; next} /^  [A-Z]/ {f=0} f && NF {c++} END {print c+0}'
}

# Main loop
while true; do
    clear
    echo -e "${BLUE}ROS2 Node Status Monitor - $(date)${NC}\n"

    # Get node list
    NODE_LIST=$(ros2 node list 2>/dev/null)

    if [ -z "$NODE_LIST" ]; then
        echo -e "${YELLOW}No active ROS2 nodes detected.${NC}"
    else
        echo -e "${GREEN}Active ROS2 Nodes:${NC}"
        while read -r node; do
            # Check node info
            NODE_INFO=$(ros2 node info "$node" 2>/dev/null)
            if [ $? -eq 0 ]; then
                # Count publishers, subscribers, services, and clients
                PUB_COUNT=$(count_entries "$NODE_INFO" "Publishers")
                SUB_COUNT=$(count_entries "$NODE_INFO" "Subscribers")
                SRV_COUNT=$(count_entries "$NODE_INFO" "Service Servers")
                CLI_COUNT=$(count_entries "$NODE_INFO" "Service Clients")

                # Split node name and namespace
                NS=$(echo "$node" | awk -F '/' '{OFS="/"; $NF=""; print $0}')
                NODE_NAME=$(echo "$node" | awk -F '/' '{print $NF}')

                # Print node info
                echo -e "\n${BLUE}Node: ${NC}${NODE_NAME} ${YELLOW}(ns: ${NS:-/})${NC}"
                echo -e "  ${GREEN}Publishers:${NC} $PUB_COUNT  ${GREEN}Subscribers:${NC} $SUB_COUNT"
                echo -e "  ${GREEN}Services:${NC} $SRV_COUNT  ${GREEN}Clients:${NC} $CLI_COUNT"
            else
                echo -e "\n${RED}Error getting info for node: $node${NC}"
            fi
        done <<< "$NODE_LIST"
    fi

    # List visible parameters
    PARAM_LIST=$(ros2 param list 2>/dev/null)
    if [ -n "$PARAM_LIST" ]; then
        echo -e "\n${BLUE}Global Parameters:${NC}"
        echo "$PARAM_LIST" | while read -r param; do
            echo -e "  - $param"
        done
    fi

    echo -e "\n${YELLOW}Press Ctrl+C to exit...${NC}"
    sleep 2
done
```
Run this script to continuously monitor your ROS2 system status.
I'm always open to discussing robotics projects, collaborations, or just chatting about the latest in ROS2 and autonomous systems.