Testing structure from Motion using Farneback’s optical flow

Here’s structure from motion using Farneback optical flow, a dense optical flow algorithm, in contrast to the Lucas–Kanade method, which is a sparse optical flow. I created this rough version mainly using OpenCV.

The following is my whole ‘workflow’; please note beforehand that what I did is an amateurish example.

  • First, calibrate your camera(s) to obtain the intrinsic parameters and distortion coefficients.
  • For each pair of images, invoke calcOpticalFlowFarneback, which takes grayscale versions of those images. E.g.: calcOpticalFlowFarneback( prev_gray, next_gray, flow, 0.5, 3, 10, 3, 5, 1.2, 0 );
  • Create two vectors / arrays of points. The first vector contains the coordinates of each pixel in the first image (or you can space them out accordingly), and the second vector contains the corresponding coordinates in the second image calculated with optical flow.
vector<Point2f> left_points, right_points;
        for ( int y = 0; y < prev_img.rows; y += 6 ) {
            for ( int x = 0; x < prev_img.cols; x += 6 ) {      /* iterate over columns here, not rows */
                Point2f f = flow.at<Point2f>( y, x );           /* flow is a 2-channel float matrix */
                left_points.push_back( Point2f( x, y ) );
                right_points.push_back( Point2f( x + f.x, y + f.y ) );
            }
        }
  • Undistort these two arrays of points using the intrinsic parameters and distortion coefficients.
  • Find the fundamental matrix. E.g.: Mat fundamental = findFundamentalMat( left_points, right_points, FM_RANSAC, 0.1, 0.99 );
  • Calculate the essential matrix by: Mat essential = cam_matrix.t() * fundamental * cam_matrix;
  • Decompose the essential matrix into rotation and translation components by:
    SVD svd(essential);
    Mat W = (Mat_<double>(3, 3) <<
        0, -1, 0,
        1, 0, 0,
        0, 0, 1);
    
    Mat_<double> R1 = svd.u * W * svd.vt;
    Mat_<double> T1 = svd.u.col( 2 );
    
    /* The second candidate pair, not really used in our example here */
    Mat W_inv = W.inv();
    Mat_<double> R2 = svd.u * W_inv * svd.vt;
    Mat_<double> T2 = -svd.u.col( 2 );
    
    Mat P1 = Mat::eye( 3, 4, CV_64FC1 );
    Mat P2 = ( Mat_<double>(3, 4) <<
        R1(0, 0), R1(0, 1), R1(0, 2), T1(0),
        R1(1, 0), R1(1, 1), R1(1, 2), T1(1),
        R1(2, 0), R1(2, 1), R1(2, 2), T1(2));
    
    
  • Then invoke triangulatePoints( P1, P2, left_points, right_points, output ); to obtain a matrix which has one triangulated point in each column. Please note that those points are in homogeneous coordinates, thus you’d need to divide each point by its w component later on.

Here’s a sample of a 3D point cloud created from these two images:

[Images: IMG_0290, IMG_0289]
