Vision Based Obstacle Avoidance

This blog post is the result of the work I did as part of my MS thesis, “Local Autonomy for Continuum Robots”. In the process of getting results I tried many things, and along the way I learned about and came upon some work that was entirely unintentional.

Avoiding obstacles using LIDAR, SONAR, etc. is quite common, and almost all undertakings in this domain use the aforementioned sensors. These sensors, however, may not be suitable in every scenario because of their weight; especially in continuum and soft robots we want to keep the assembly as light as possible while delivering maximum functionality. Vision offers one such solution. It may not be perfect, but it certainly works in some cases.

Optical Flow Based Robot Obstacle Avoidance

This post is largely based on the above-mentioned paper. It is not an exact reproduction, as I wasn’t able to figure out the calculation of the Focus of Expansion (FOE). (Actually I calculated it, but the results weren’t good, so I bypassed it. You are free to experiment on your own, and if successful, share the results with me as well 🙂 )

The idea :

As quoted in the paper: “The Optical flow contains information about both the layout of surfaces, the direction of the point of observation called the Focus of expansion (FOE), the Time To Contact (TTC), and the depth.” This quoted line is the crux of our implementation. I would like to cover a few terms before diving into the implementation details.

Optical Flow :

Optical flow is the apparent change in the position of pixels as the camera view changes, either due to the motion of the camera or of the scene/object. If we consider two consecutive images of the same scene and assume that motion has occurred, the corresponding points in the subsequent images will have some displacement. This displacement can be described by a vector, and when such vectors are drawn for all the points in an image we get a vector field, which we call optical flow. Consider the image below.

optical_flow_basic1
Image Courtesy: Wikipedia article on Optical Flow

The picture shows a ball moving in 5 consecutive frames. The arrow depicts the displacement vector as the ball moves between successive frames.
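In code, once feature correspondences between two consecutive frames are available (OpenCV’s pyramidal Lucas–Kanade tracker is one common way to obtain them), each flow vector is simply the per-point displacement. A minimal self-contained sketch, with a plain struct standing in for `cv::Point2f`:

```cpp
#include <vector>
#include <cmath>

// Minimal stand-in for cv::Point2f so the sketch is self-contained.
struct Pt { float x, y; };

// Flow vector for each tracked feature: the displacement from the
// previous frame to the next frame.
std::vector<Pt> flow_vectors(const std::vector<Pt>& prev_pts,
                             const std::vector<Pt>& next_pts)
{
    std::vector<Pt> flow;
    for (size_t i = 0; i < prev_pts.size(); ++i)
        flow.push_back({next_pts[i].x - prev_pts[i].x,
                        next_pts[i].y - prev_pts[i].y});
    return flow;
}

// Magnitude (length) of a flow vector.
float magnitude(const Pt& v) { return std::sqrt(v.x * v.x + v.y * v.y); }
```

A stationary point yields a zero vector; a point that moved 3 pixels right and 4 pixels down yields a vector of magnitude 5.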

Focus of Expansion :

Simply put, it is the point in the image from which all the flow vectors appear to originate. If you have found the optical flow vectors, you can calculate the FOE as the point where those flow vectors (extended as lines) intersect.

FOE

In my case I made use of the method mentioned in this paper to calculate the focus of expansion. The function below shows the calculation.

Point2f calculate_FOE(vector<Point2f> prev_pts, vector<Point2f> next_pts)
{
    MatrixXf A(next_pts.size(),2);
    MatrixXf b(next_pts.size(),1);
    Point2f d;

    for(int i=0;i<next_pts.size();i++)
    {
        // Flow direction for feature i.
        d = next_pts[i]-prev_pts[i];

        // The FOE lies on the line through prev_pts[i] with direction d:
        //   d.y * x - d.x * y = d.y * px - d.x * py
        A.row(i)<<d.y,-d.x;
        b.row(i)<<(d.y*prev_pts[i].x)-(d.x*prev_pts[i].y);
    }

    // Least-squares solution of A * FOE = b via the normal equations.
    Matrix<float,2,1> FOE;
    FOE=((A.transpose()*A).inverse())*A.transpose()*b;
    return Point2f(FOE(0),FOE(1));
}

The function “calculate_FOE” can be used for calculating the FOE using OpenCV and Eigen. “prev_pts” are the feature points in the previous frame and “next_pts” are the corresponding points in the next frame, which you obtain via the optical flow calculation.
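As a sanity check on the least-squares line-intersection idea, here is a self-contained sketch using plain C++ in place of Eigen and OpenCV (the `P2` struct and the direct 2×2 solve are stand-ins, and the radial flow field in the usage below is synthetic). Given a known FOE, flow vectors pointing radially away from it should let us recover that FOE:

```cpp
#include <vector>
#include <cmath>

struct P2 { float x, y; };

// Least-squares intersection of the lines defined by the flow vectors.
// Each flow vector lies on a line through prev_pts[i] with direction
// d = next - prev, i.e.  d.y*x - d.x*y = d.y*px - d.x*py.
// The stacked system A * FOE = b is solved via the 2x2 normal equations.
P2 estimate_FOE(const std::vector<P2>& prev_pts,
                const std::vector<P2>& next_pts)
{
    // Accumulate A^T*A (2x2, symmetric) and A^T*b (2x1).
    double ata00 = 0, ata01 = 0, ata11 = 0, atb0 = 0, atb1 = 0;
    for (size_t i = 0; i < prev_pts.size(); ++i) {
        double dx = next_pts[i].x - prev_pts[i].x;
        double dy = next_pts[i].y - prev_pts[i].y;
        double a0 = dy, a1 = -dx;                            // row of A
        double bi = dy * prev_pts[i].x - dx * prev_pts[i].y; // entry of b
        ata00 += a0 * a0; ata01 += a0 * a1; ata11 += a1 * a1;
        atb0  += a0 * bi; atb1  += a1 * bi;
    }
    // Solve (A^T A) * FOE = A^T b by Cramer's rule.
    double det = ata00 * ata11 - ata01 * ata01;
    return { static_cast<float>(( ata11 * atb0 - ata01 * atb1) / det),
             static_cast<float>((-ata01 * atb0 + ata00 * atb1) / det) };
}
```

Feeding this a purely expanding flow field centered at, say, (100, 50) returns that point, which is a quick way to validate an implementation before running it on real tracked features.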

Motion Parallax :

Motion parallax refers to the depth cues that result as we move. Because of motion parallax, objects that are closer to us appear to move faster than objects that are farther away. Motion parallax affects optical flow: closer objects give rise to larger flow vectors and bias the flow in their direction. In addition, objects that are closer (and therefore larger in the image) acquire more features, so the summed magnitude of the flow vectors for a nearby obstacle is larger.
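The depth dependence can be made concrete with the standard pinhole-camera relation (a general geometric fact, not taken from the paper): for a camera translating laterally with speed T, a point at depth Z moves across the image at roughly f·T/Z pixels per frame, so halving the depth doubles the flow magnitude:

```cpp
#include <cmath>

// Approximate image-plane flow magnitude (pixels/frame) for lateral
// camera motion under the pinhole model: flow = f * T / Z.
//   f: focal length in pixels
//   T: camera translation per frame (same units as Z)
//   Z: depth of the scene point
float flow_magnitude(float f, float T, float Z)
{
    return f * T / Z;
}
```

With f = 500 px and T = 0.02 m/frame, a point 1 m away moves about 10 px/frame while a point 0.5 m away moves about 20 px/frame, which is exactly the bias the control law below exploits.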


Motion Parallax.png


Control Law :

Flow Vectors.jpg

Consider the figure above, which has been divided into four quadrants. The point to notice is the direction of the vectors in each of the four quadrants: each quadrant has its own direction for the flow vectors. We can exploit this effect to avoid obstacles. The figure also shows an obstacle in front of the robot as the robot travels towards it.

We only detect features in a predefined patch in the image, shown by the black rectangle for visualization purposes (it is not centered perfectly here, but consider it perfectly centered in the actual implementation). The reason for selecting this central patch, and not the whole image, for feature extraction is that we only need to avoid obstacles that are right in front of the robot; objects that are not directly in the line of sight do not need to be avoided. The black vertical line in the image divides the feature patch into left and right halves. We calculate the sum of the magnitudes of all the flow vectors in each half.

Left Right Flow.PNG

In order to avoid an obstacle, the robot simply has to turn away from the side (half) with the greater total flow.
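This control law can be sketched as follows (plain structs stand in for the OpenCV types, and the deadband `threshold`, which keeps the robot from reacting to noise when the two halves are nearly balanced, is a hypothetical tuning parameter, not from the paper):

```cpp
#include <vector>
#include <cmath>

// A flow vector (x, y) measured at image location (px, py).
struct Flow { float x, y, px, py; };

enum class Turn { Left, Right, Straight };

// Sum the flow magnitudes in the left and right halves of the feature
// patch (split at center_x) and steer away from the side with more
// flow, since the nearer obstacle produces the larger flow.
Turn steer(const std::vector<Flow>& flows, float center_x, float threshold)
{
    float left = 0.0f, right = 0.0f;
    for (const Flow& f : flows) {
        float mag = std::sqrt(f.x * f.x + f.y * f.y);
        if (f.px < center_x) left  += mag;
        else                 right += mag;
    }
    // Only react when the imbalance is significant.
    if (left  - right > threshold) return Turn::Right; // obstacle on the left
    if (right - left  > threshold) return Turn::Left;  // obstacle on the right
    return Turn::Straight;
}
```

With large flow concentrated in the left half, `steer` commands a right turn; with no features (or a balanced field) it commands the robot to keep going straight.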

GitHub link for the code: Optical Flow Based Obstacle Avoidance
