Channel: OpenCV Q&A Forum - Latest question feed

Real-time video stitching from initial stitching computation

I have two fixed, identical, synchronized camera streams positioned 90 degrees from each other, and I would like to stitch them into a single stream in real time. After grabbing the first frame from each side, I perform a full OpenCV stitch and I'm very satisfied with the result:

    imgs.push_back(imgCameraLeft);
    imgs.push_back(imgCameraRight);
    Stitcher stitcher = Stitcher::createDefault(false);
    stitcher.stitch(imgs, pano);

I would like to continue stitching the video stream by reapplying the same parameters and avoiding recalculation (especially of the homography and so on). How can I extract as much data as possible from the Stitcher class after the initial computation, such as:

- the homography and the rotation matrix applied to each side
- the zone in each frame that will be blended

I'm OK with keeping the same settings and applying them to the stream, since real-time performance matters more than stitching precision. By "applying them" I mean reapplying the same transformation and blending, either in OpenCV or in a direct GPU implementation. The cameras don't move, they have the same exposure settings, and the frames are synchronized, so keeping the same transformation/blending should give a decent result.

Question: how can I get all the data from the initial stitch to build an optimized real-time stitcher?
