Setting up SLAM for the RASCAPP robot

SLAM (Simultaneous Localization and Mapping) is a family of algorithms that let a mobile robot build a map of its environment while simultaneously keeping track of its own position within that map. Combined with path planning, it allows a robot to autonomously generate trajectories from one point in an environment to another and execute them, usually avoiding obstacles along the way.

To get our domestic mobile manipulator robot, RASCAPP, to move from one point in the house to another, we need it to perform some form of SLAM-based navigation. In this work, I use the move_base node, the core of the popular ROS navigation stack. To get move_base working on RASCAPP, we need to provide four essential sets of data to the move_base node:

  1. All the necessary transforms (from each sensor frame to the robot's body frames)
  2. Odometry data of the robot (xyz position, quaternion orientation)
  3. Laser scan data
  4. Map of environment

Following this requirements list, I began working on the transforms. RASCAPP's upper body (from the waist upwards) is essentially the Baxter robot made by Rethink Robotics, and thankfully Rethink makes all of Baxter's transforms available in their SDK, so there was not much work there. For RASCAPP's lower body, I simply broadcast a static transform from the mobile base to the waist of the Baxter robot, another static transform from the Primesense 3D camera mounted on RASCAPP's head to Baxter's head link, and finally a static transform from the RPLidar A1 laser scanner to Baxter's base. Together these satisfied all the transform requirements of the move_base node. A minimal sketch of such a static transform broadcaster is shown below.
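Here is a rough sketch of how those static transforms can be broadcast with tf2 in Python. The frame names and offsets below are placeholders, not RASCAPP's actual values, so treat them as assumptions to be replaced with your own measurements:

```python
#!/usr/bin/env python
# Minimal static-transform broadcaster sketch (ROS 1, tf2).
# All frame names and offsets are hypothetical placeholders.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

def make_static_tf(parent, child, x, y, z, qx=0.0, qy=0.0, qz=0.0, qw=1.0):
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = parent
    t.child_frame_id = child
    t.transform.translation.x = x
    t.transform.translation.y = y
    t.transform.translation.z = z
    t.transform.rotation.x = qx
    t.transform.rotation.y = qy
    t.transform.rotation.z = qz
    t.transform.rotation.w = qw
    return t

if __name__ == '__main__':
    rospy.init_node('rascapp_static_tf')
    broadcaster = tf2_ros.StaticTransformBroadcaster()
    broadcaster.sendTransform([
        # mobile base -> Baxter waist (hypothetical 0.9 m vertical offset)
        make_static_tf('mobile_base_link', 'torso', 0.0, 0.0, 0.9),
        # Baxter head link -> Primesense camera on RASCAPP's head
        make_static_tf('head', 'camera_link', 0.1, 0.0, 0.05),
        # Baxter base -> RPLidar A1 laser scanner
        make_static_tf('base', 'laser', 0.15, 0.0, -0.8),
    ])
    rospy.spin()
```

An equivalent approach is to run one static_transform_publisher node from the tf package per transform in a launch file. Note also that on older tf2 releases, sendTransform may not accept a list, in which case each transform needs its own broadcaster.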

To get the odometry data, instead of using the conventional combination of wheel encoders and an IMU, which is notorious for drift, I opted for the much more expensive but accurate and reliable indoor GPS modules developed by Marvelmind Robotics. They are reputed to have ±2 cm accuracy and do not experience drift. I set the modules up in the lab following Marvelmind's instructions, and installed their dashboard software to monitor the modules and ensure each one had a line of sight to every other module. This proved much more tedious than I anticipated, mainly because the dashboard software wasn't exactly user-friendly; it took two full days before everything worked as I needed it to.

I then cloned and built their ROS package and read the position and orientation topics it published. The messages on these topics weren't in the standard geometry_msgs/PoseStamped and sensor_msgs/Imu formats, so I had to write a quick ROS subscriber-publisher node that subscribed to them and republished them in the standard ROS message formats for use by the move_base node. Finally, I wrote an odometry broadcaster to combine the pose messages and IMU messages into the nav_msgs/Odometry format required by move_base (sketched below). I got this working after a few hours.
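The sketch below shows the shape of that odometry broadcaster: it caches the latest IMU message and, on each incoming pose, fuses the two into a nav_msgs/Odometry message. The topic and frame names are hypothetical stand-ins, not the actual names published by the Marvelmind package:

```python
#!/usr/bin/env python
# Odometry broadcaster sketch: combine the (already-converted)
# PoseStamped from the indoor GPS beacons with the Imu orientation
# into a single nav_msgs/Odometry for move_base.
# Topic and frame names are placeholders.
import rospy
from geometry_msgs.msg import PoseStamped
from sensor_msgs.msg import Imu
from nav_msgs.msg import Odometry

class OdomBroadcaster(object):
    def __init__(self):
        self.latest_imu = None
        self.odom_pub = rospy.Publisher('/odom', Odometry, queue_size=10)
        rospy.Subscriber('/rascapp/imu', Imu, self.imu_cb)
        rospy.Subscriber('/rascapp/pose', PoseStamped, self.pose_cb)

    def imu_cb(self, msg):
        # Cache the most recent IMU reading; used for orientation.
        self.latest_imu = msg

    def pose_cb(self, msg):
        if self.latest_imu is None:
            return  # wait until we have an orientation estimate
        odom = Odometry()
        odom.header.stamp = msg.header.stamp
        odom.header.frame_id = 'odom'
        odom.child_frame_id = 'mobile_base_link'
        odom.pose.pose.position = msg.pose.position
        odom.pose.pose.orientation = self.latest_imu.orientation
        self.odom_pub.publish(odom)

if __name__ == '__main__':
    rospy.init_node('rascapp_odom_broadcaster')
    OdomBroadcaster()
    rospy.spin()
```

Keep in mind that move_base also expects the matching odom → base frame transform to be published on tf; depending on your setup, the same node can broadcast that transform alongside the Odometry message.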

Laser scan data was to be provided by the RPLidar A1 laser scanner. I cloned and built its ROS package and ran its test launch file. Thankfully, everything worked as I expected: the laser scans were published on the '/scan' topic. A quick way to verify this is sketched below.
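As a quick sanity check on the scans (not part of the original requirements, just an illustrative example), a few lines of rospy are enough to subscribe to '/scan' and report the closest valid return:

```python
#!/usr/bin/env python
# Quick sanity check on the RPLidar output: subscribe to /scan and
# log the closest obstacle range for each incoming scan.
import rospy
from sensor_msgs.msg import LaserScan

def scan_cb(scan):
    # Drop invalid returns (inf/NaN or outside the sensor's rated range).
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if valid:
        rospy.loginfo('closest obstacle: %.2f m', min(valid))

if __name__ == '__main__':
    rospy.init_node('scan_check')
    rospy.Subscriber('/scan', LaserScan, scan_cb)
    rospy.spin()
```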

The final requirement of the move_base node was the map of the environment. I opted to use gmapping, a popular mapping node in ROS, to map out the lab.


[TO BE CONTINUED]
