Channel: OpenCV Q&A Forum - Latest question feed

How to compute the translation and rotation in stitching(opencv module)?

In the OpenCV stitching module (stitching_detailed.cpp):

```
warper->warp(img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);
```

**img_warped** is the output image and **img** is the input image. From **cameras[img_idx].R** we can compute the rotation about each axis: theta_x, theta_y, theta_z. But how do we compute the translation along each axis? Does anybody have some tips? As far as I know, **cameras[img_idx].t** does not contain the translation information. I modified t, but nothing happened.

(stitching_detailed.cpp):

```
if (blender.empty())
{
    blender = Blender::createDefault(0, try_gpu);
    Size dst_sz = resultRoi(corners, sizes).size();
    float blend_width = sqrt(static_cast<float>(dst_sz.area())) * 5 / 100.f;
    if (blend_width < 1.f)
        blender = Blender::createDefault(Blender::NO, try_gpu);
    blender->prepare(corners, sizes);
}
// Blend the current image
blender->feed(img_warped_s, mask_warped, corners[img_idx]);
```

I find that **corners** and **sizes** influence the result. **corners** is a `vector<Point>`: the set of top-left points of the sub-images. **sizes** is a `vector<Size>`: the set of sizes of the sub-images. They are updated via **warper->warpRoi** (stitching_detailed.cpp):

```
// Update corner and size
Size sz = full_img_sizes[i];
if (std::abs(compose_scale - 1) > 1e-1)
{
    sz.width = cvRound(full_img_sizes[i].width * compose_scale);
    sz.height = cvRound(full_img_sizes[i].height * compose_scale);
}

Mat K;
cameras[i].K().convertTo(K, CV_32F);
Rect roi = warper->warpRoi(sz, K, cameras[i].R);
corners[i] = roi.tl();
sizes[i] = roi.size();
```

The code of **warper->warpRoi** (warpers_inl.hpp):

```
template <class P>
Rect RotationWarperBase<P>::warpRoi(Size src_size, const Mat &K, const Mat &R)
{
    projector_.setCameraParams(K, R);

    Point dst_tl, dst_br;
    detectResultRoi(src_size, dst_tl, dst_br);

    return Rect(dst_tl, Point(dst_br.x + 1, dst_br.y + 1));
}
```

The code of **projector_.setCameraParams(K, R)** (warpers.cpp):

```
void ProjectorBase::setCameraParams(const Mat &K, const Mat &R, const Mat &T)
{
    CV_Assert(K.size() == Size(3, 3) && K.type() == CV_32F);
    CV_Assert(R.size() == Size(3, 3) && R.type() == CV_32F);
    CV_Assert((T.size() == Size(1, 3) || T.size() == Size(3, 1)) && T.type() == CV_32F);

    Mat_<float> K_(K);
    k[0] = K_(0,0); k[1] = K_(0,1); k[2] = K_(0,2);
    k[3] = K_(1,0); k[4] = K_(1,1); k[5] = K_(1,2);
    k[6] = K_(2,0); k[7] = K_(2,1); k[8] = K_(2,2);

    Mat_<float> Rinv = R.t();
    rinv[0] = Rinv(0,0); rinv[1] = Rinv(0,1); rinv[2] = Rinv(0,2);
    rinv[3] = Rinv(1,0); rinv[4] = Rinv(1,1); rinv[5] = Rinv(1,2);
    rinv[6] = Rinv(2,0); rinv[7] = Rinv(2,1); rinv[8] = Rinv(2,2);

    Mat_<float> R_Kinv = R * K.inv();
    r_kinv[0] = R_Kinv(0,0); r_kinv[1] = R_Kinv(0,1); r_kinv[2] = R_Kinv(0,2);
    r_kinv[3] = R_Kinv(1,0); r_kinv[4] = R_Kinv(1,1); r_kinv[5] = R_Kinv(1,2);
    r_kinv[6] = R_Kinv(2,0); r_kinv[7] = R_Kinv(2,1); r_kinv[8] = R_Kinv(2,2);

    Mat_<float> K_Rinv = K * Rinv;
    k_rinv[0] = K_Rinv(0,0); k_rinv[1] = K_Rinv(0,1); k_rinv[2] = K_Rinv(0,2);
    k_rinv[3] = K_Rinv(1,0); k_rinv[4] = K_Rinv(1,1); k_rinv[5] = K_Rinv(1,2);
    k_rinv[6] = K_Rinv(2,0); k_rinv[7] = K_Rinv(2,1); k_rinv[8] = K_Rinv(2,2);

    Mat_<float> T_(T.reshape(0, 3));
    t[0] = T_(0,0); t[1] = T_(1,0); t[2] = T_(2,0);
}
```

The fields above are defined here (warpers.hpp):

```
struct CV_EXPORTS ProjectorBase
{
    void setCameraParams(const Mat &K = Mat::eye(3, 3, CV_32F),
                         const Mat &R = Mat::eye(3, 3, CV_32F),
                         const Mat &T = Mat::zeros(3, 1, CV_32F));

    float scale;
    float k[9];
    float rinv[9];
    float r_kinv[9];
    float k_rinv[9];
    float t[3];
};
```

The code of **detectResultRoi(src_size, dst_tl, dst_br)** (warpers_inl.hpp):

```
template <class P>
void RotationWarperBase<P>::detectResultRoi(Size src_size, Point &dst_tl, Point &dst_br)
{
    float tl_uf = std::numeric_limits<float>::max();
    float tl_vf = std::numeric_limits<float>::max();
    float br_uf = -std::numeric_limits<float>::max();
    float br_vf = -std::numeric_limits<float>::max();

    float u, v;
    for (int y = 0; y < src_size.height; ++y)
    {
        for (int x = 0; x < src_size.width; ++x)
        {
            projector_.mapForward(static_cast<float>(x), static_cast<float>(y), u, v);
            tl_uf = std::min(tl_uf, u); tl_vf = std::min(tl_vf, v);
            br_uf = std::max(br_uf, u); br_vf = std::max(br_vf, v);
        }
    }

    dst_tl.x = static_cast<int>(tl_uf);
    dst_tl.y = static_cast<int>(tl_vf);
    dst_br.x = static_cast<int>(br_uf);
    dst_br.y = static_cast<int>(br_vf);
}
```

I want to compute the translation along x, y, and z. Can I use the corners to represent the translation in x and y? The problem is that corners[id].x and corners[id].y can be less than zero, and I do not know how to deal with that. The translation in z may correspond to the scaling ratio of the image. How can I compute that ratio? I have no idea. I hope you can give me some suggestions! If I have not expressed my question clearly, please add a comment and I will reply soon. Thanks in advance!


