This post is a follow-up to the previous post, JETTY: Jarvis Serial to ROS-2 Transport Layer.
The Arduino firmware, on the one hand, implements the JETTY protocol for communicating with ROS and, on the other hand, takes care of all low-level hardware communication, including:
The Arduino Firmware section of the Jarvis booklet presents the firmware implementation in detail. It covers the following topics:
Follow-up reading at: https://doc.iohub.dev/jarvis/Ym9vazovLy9jXzIvc18yL0lOVFJPLm1k/Arduino_Firmware.md
My ROS-based DIY robot (presented in the previous post) uses the NVIDIA Jetson Nano for high-level robotic algorithms with the ROS 2 middleware. The Jetson is connected to an Arduino via a serial link for low-level hardware interaction and control.
Since the Arduino handles low-level communication with actuators/sensors, we need a software transport layer on top of the physical serial link (Jetson - Arduino) to stream (sensor) data and commands between the Arduino and ROS 2. On Dolly (my previous robot version), which used ROS 1, this was handled by Rosserial, a protocol for wrapping standard ROS serialized messages and multiplexing multiple topics and services over a serial link. On ROS 2, however, Rosserial is not available. Alternative solutions exist, but they are not mature enough, and some implementations require more computational resources than the Arduino Mega 2560 can provide.
So I decided to implement a dedicated transport layer for Jarvis called JETTY (Jarvis SErial to ROS-2 TransporT LaYer). I am not aiming at a generic ROS-to-serial protocol like Rosserial. Instead, the transport layer is specific to this robot. However, the protocol must be easy to extend to accommodate future upgrades of the robot, such as adding more sensors/actuators.
Requirements for the transport layer:
In brief, we need an efficient and reliable delimiting/synchronization scheme that detects frame boundaries with a short recovery time.
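As an illustration only (the actual JETTY frame format is described in the booklet linked below), the sketch below shows one common delimiting approach: a start marker, a length byte, and a checksum, resynchronizing by scanning for the next marker when a frame fails its checksum. The marker value, field sizes, and checksum are assumptions made for this example, and a real implementation would also need byte stuffing if the marker byte can appear inside a payload.

```python
import struct

START = 0xAA  # assumed start-of-frame marker (illustrative, not JETTY's actual value)

def checksum(data: bytes) -> int:
    """Simple 8-bit additive checksum over length byte + payload."""
    return sum(data) & 0xFF

def encode_frame(payload: bytes) -> bytes:
    """Frame = START | length (1 byte) | payload | checksum (1 byte)."""
    body = struct.pack("B", len(payload)) + payload
    return bytes([START]) + body + bytes([checksum(body)])

def decode_frames(buffer: bytearray):
    """Extract complete frames from a receive buffer; resync on the next START after errors."""
    frames = []
    while True:
        try:
            start = buffer.index(START)
        except ValueError:
            buffer.clear()          # no marker yet: drop garbage, keep waiting
            break
        del buffer[:start]          # discard bytes before the marker
        if len(buffer) < 3:
            break                   # header + checksum not received yet
        length = buffer[1]
        end = 2 + length + 1        # marker + length byte + payload + checksum
        if len(buffer) < end:
            break                   # frame not fully received yet
        body, chk = buffer[1:end - 1], buffer[end - 1]
        if checksum(body) == chk:
            frames.append(bytes(body[1:]))  # valid frame: keep the payload
            del buffer[:end]
        else:
            del buffer[0]           # corrupted: skip this marker and rescan
    return frames

# Example: round-trip a payload through the framing layer
rx = bytearray(encode_frame(b"odom:1.0,0.5"))
print(decode_frames(rx))            # [b'odom:1.0,0.5']
```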
The details of the protocol and algorithm choices, as well as the implementation, are presented in a section of my Jarvis booklet, accessible via the following link:
It has been a while since I started building an upgraded version of Dolly, my first DIY ROS-based robot. This upgraded version is named Jarvis.
Changes from the previous version:
As a work in progress, I'm now writing a booklet that details the building process of the robot, covering both the hardware and the software (Arduino, ROS 2), as well as some application use cases. The initial plan is:
All further updates on the booklet can be found here: https://doc.iohub.dev/jarvis/.
Stay tuned!!!
Ladies and gentlemen, please meet "Dolly the robot", the first version of my DIY mobile robot. My goal in this DIY project is to build a low-cost yet feature-rich ROS (Robot Operating System) based mobile robot that allows me to experiment with autonomous robotics at home. To that end, Dolly is designed with all the basic features needed. To keep the bill of materials as low as possible, I tried to recycle all of my spare hardware parts.
PhaROS is a collection of Pharo libraries that implements the ROS (Robot Operating System) client protocol. It allows developing robotic applications right in the Pharo environment by providing an abstraction layer between Pharo and ROS. This guide assumes that readers already have some basic knowledge of ROS; if this is not the case, please check the following links before going any further on this page:
This is a demonstration of my current work on controlling a robot using ROS and PhaROS. For that task, I've developed a dedicated PhaROS package that defines:
When evaluating the performance of a SLAM algorithm, quantifying the quality of the produced map is one of the most important criteria. Often, the produced map is compared either (1) with a ground-truth map (which can easily be obtained in simulation) or (2) with another existing map that is considered accurate (in real-world experiments, where the ground truth is not always available).
Basically, grid maps are images, so image similarity metrics can be used in this case. In this post, we consider three different metrics: Mean Square Error (MSE), K-nearest based normalized error (NE), and the Structural Similarity Index (SSIM).
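To make the idea concrete, here is a minimal sketch of computing the three metrics, assuming both maps are already aligned and loaded as 2D NumPy arrays of the same shape with occupancy values in [0, 1]. The function names, the occupancy threshold, and the exact normalization used for the K-nearest error are assumptions for this example, not necessarily the definitions used in the post.

```python
import numpy as np
from scipy.spatial import cKDTree
from skimage.metrics import structural_similarity

def mse(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Mean Square Error between two aligned grid maps of equal shape."""
    return float(np.mean((map_a.astype(float) - map_b.astype(float)) ** 2))

def knn_normalized_error(map_a, map_b, occ_threshold=0.65, k=1):
    """One possible K-nearest based error: average distance from each occupied
    cell of map_a to its k nearest occupied cells in map_b, normalized by the
    map diagonal so the result is resolution independent."""
    occ_a = np.argwhere(map_a >= occ_threshold)
    occ_b = np.argwhere(map_b >= occ_threshold)
    if len(occ_a) == 0 or len(occ_b) == 0:
        return float("nan")
    dists, _ = cKDTree(occ_b).query(occ_a, k=k)
    diag = np.hypot(*map_a.shape)
    return float(np.mean(dists) / diag)

def ssim(map_a, map_b):
    """Structural Similarity Index, treating the grids as grayscale images."""
    return float(structural_similarity(map_a.astype(float),
                                       map_b.astype(float),
                                       data_range=1.0))

# Example with two small synthetic occupancy grids
gt = np.zeros((50, 50)); gt[10:40, 10] = 1.0        # a wall in the ground truth
slam = np.zeros((50, 50)); slam[11:41, 11] = 1.0    # the same wall, slightly shifted
print(mse(gt, slam), knn_normalized_error(gt, slam), ssim(gt, slam))
```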
I've had a fun time playing around with the Gazebo simulator for autonomous robot exploration. One thing I've encountered is that the odometry data provided by Gazebo is so perfect that it sometimes makes the simulation less realistic. I used a Turtlebot model as the robot model in my simulations. Googling around, I didn't find any solution for adding noise to the odometry data of this robot (via the URDF file), so I decided to develop a dedicated ROS node that adds random noise to Gazebo's odometry data.
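As a starting point, here is a minimal sketch of such a node, assuming ROS 1 with rospy; the topic and parameter names (odom, noisy_odom, ~xy_stddev) are assumptions to adapt to your setup. It only adds independent Gaussian jitter to the reported position on every message; the odometry motion model discussed next gives a more realistic, drift-accumulating alternative.

```python
#!/usr/bin/env python
import random
import rospy
from nav_msgs.msg import Odometry

def callback(msg):
    # Naive version: perturb x and y independently with zero-mean Gaussian noise.
    # (The odometry motion model below is a more principled way to inject noise.)
    msg.pose.pose.position.x += random.gauss(0.0, XY_STDDEV)
    msg.pose.pose.position.y += random.gauss(0.0, XY_STDDEV)
    pub.publish(msg)

if __name__ == '__main__':
    rospy.init_node('noisy_odom')
    XY_STDDEV = rospy.get_param('~xy_stddev', 0.02)   # metres, assumed default
    pub = rospy.Publisher('noisy_odom', Odometry, queue_size=10)
    rospy.Subscriber('odom', Odometry, callback)
    rospy.spin()
```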
First things first, we need to understand the robot motion model. There are many motion models, but in the scope of this article we focus only on the odometry motion model. Often, odometry is obtained by integrating sensor readings from wheel encoders; it measures the relative motion of the robot between times \(t-1\) and \(t\), i.e. over \((t-1,t]\). In a 2D environment, a robot pose is represented by a point \((x,y)\) and an orientation (rotation angle) \(\theta\), so the robot poses at times \(t-1\) and \(t\) are denoted by:
$$p_{t-1}=(x_{t-1},y_{t-1},\theta_{t-1})$$
$$p_{t}=(x_t,y_t,\theta_t)$$
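The relative motion between these two poses is commonly decomposed into an initial rotation, a translation, and a final rotation (the standard odometry motion model). The sketch below perturbs each of these components with zero-mean Gaussian noise and re-integrates the motion to obtain a noisy pose; the noise parameters alpha1..alpha4 are illustrative assumptions, not the values used on the robot.

```python
import math, random

def sample_noisy_motion(prev_pose, curr_pose, alpha=(0.05, 0.05, 0.05, 0.05)):
    """Decompose the relative motion p_{t-1} -> p_t into rot1, trans, rot2,
    perturb each component with Gaussian noise scaled by the alpha parameters,
    and return the resulting noisy pose (x, y, theta)."""
    x0, y0, th0 = prev_pose
    x1, y1, th1 = curr_pose

    # Relative motion expressed as initial rotation, translation, final rotation
    trans = math.hypot(x1 - x0, y1 - y0)
    rot1 = math.atan2(y1 - y0, x1 - x0) - th0 if trans > 1e-6 else 0.0
    rot2 = th1 - th0 - rot1

    # Perturb each component; the noise grows with the amount of motion
    a1, a2, a3, a4 = alpha
    rot1_n = rot1 + random.gauss(0.0, math.sqrt(a1 * rot1**2 + a2 * trans**2))
    trans_n = trans + random.gauss(0.0, math.sqrt(a3 * trans**2 + a4 * (rot1**2 + rot2**2)))
    rot2_n = rot2 + random.gauss(0.0, math.sqrt(a1 * rot2**2 + a2 * trans**2))

    # Re-integrate the noisy motion starting from the previous pose
    x = x0 + trans_n * math.cos(th0 + rot1_n)
    y = y0 + trans_n * math.sin(th0 + rot1_n)
    th = th0 + rot1_n + rot2_n
    return (x, y, th)

# Example: a 0.1 m forward step with a small turn
print(sample_noisy_motion((0.0, 0.0, 0.0), (0.1, 0.0, 0.05)))
```

In a complete node, prev_pose would be the previously published noisy pose, so that the injected errors accumulate over time just as real wheel-encoder drift does.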
This setup was performed and documented on an Ubuntu system with the following software stack:
sudo apt-get install git
To follow this post, some basic knowledge of ROS is needed: