Channel: OpenCV Q&A Forum - Latest question feed

Tracking movement in an area

Hello Forum. I need some help (well, a lot really, please). I am trying to track movement in an area. Imagine a cat walking across a lawn: it starts at one side and walks across to the other. How do I count the number of times this is done? Thanks in advance. PeteUK
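One way to frame this, assuming you already get a centroid for the moving object each frame (e.g. from MOG2 background subtraction plus the largest contour): define a vertical line across the lawn and count sign changes of the centroid's x-coordinate relative to it. A minimal pure-Python sketch of just the counting step:

```python
def count_crossings(xs, line_x):
    """Count how many times a tracked centroid's x-coordinate
    crosses a vertical line at line_x (in either direction)."""
    crossings = 0
    for prev, cur in zip(xs, xs[1:]):
        # a sign change of (x - line_x) between frames means a crossing
        if (prev - line_x) * (cur - line_x) < 0:
            crossings += 1
    return crossings

# Hypothetical centroid x-positions over time; the line is at x=100
print(count_crossings([20, 60, 110, 150, 90, 40, 130], 100))  # -> 3
```

To count full lawn traversals rather than line touches, you could place two such lines near the edges and only count a crossing of one line if the other was crossed since the last count.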

Object detection rectangle doesn't cover the whole object

Hello everyone, I just started working with OpenCV. I decided to use this library to detect an object in an image and cut it out. I trained a cascade; unfortunately it doesn't work very well. I tried it on several images, and every time the rectangle doesn't cover the whole object. Here are examples: [1](https://pp.vk.me/c637826/v637826722/2d3cb/X3yMDYPNpcc.jpg), [2](https://pp.vk.me/c637826/v637826722/2d3c2/_9A9FCAMT4g.jpg), [3](https://pp.vk.me/c637826/v637826722/2d3b9/EWoprUX4_PM.jpg)

```python
import cv2

image = cv2.imread("data/test/4.jpg")
resized = image
#resized = cv2.resize(image, (500, 500))
gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
detector = cv2.CascadeClassifier("output/cascade/cascade.xml")
rects = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=10,
                                  minSize=(100, 100))

# loop over the detections and draw a rectangle around each
for (i, (x, y, w, h)) in enumerate(rects):
    cv2.rectangle(resized, (x, y), (x + w, y + h), (0, 0, 255), 2)

# show the detected objects
cv2.imshow("Bags", resized)
cv2.waitKey(0)
```

Could anyone help me understand what I need to tune or fix? I used handbag images from ImageNet.

Default color space for frames in video

Hi, I have a video with a YUV color space and the H264 codec. I am using OpenCV's VideoCapture to read frames. I just want to know whether the frame's Mat object contains BGR or YUV pixels. Thanks in advance.

rectA & rectB not working

I am trying to get the intersection of 2 rectangles. What happens if the 2 rectangles do not intersect at all? Will it give me an error? Because the output below is what I am getting. The limits are (0, 0, 1242, 375). I don't know the rectangles I am passing in; I just want to keep those that intersect, and to retain the intersection.

```
[1170 x 232 from (72, 0)]
[1242 x 375 from (0, 0)]
[1242 x 375 from (0, 0)]
[967 x 211 from (275, 0)]
[1242 x 375 from (0, 0)]
[1242 x 375 from (0, 0)]
[1242 x 375 from (0, 0)]
[2147449647 x 375 from (2147483647, 0)]
[1242 x 375 from (0, 0)]
```

I am using this temporary fix after the intersection operator to remove all erroneous rectangles:

```cpp
if (rect.tl().x < 0 || rect.tl().y < 0 ||
    rect.br().x >= image.cols || rect.br().y >= image.rows ||
    rect.area() >= image.cols * image.rows || rect.area() == 0)
```
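For reference, `rectA & rectB` does not throw when there is no overlap; it yields an empty rectangle, so `(rectA & rectB).area() == 0` is the intended check. The huge `2147483647` (INT_MAX) coordinates suggest one of the input rectangles was uninitialized rather than the operator itself failing. A pure-Python sketch of the same clipping logic:

```python
def intersect(a, b):
    """Intersection of two (x, y, w, h) rectangles, mirroring
    cv::Rect's operator&: returns (0, 0, 0, 0) when they don't overlap."""
    x1 = max(a[0], b[0])
    y1 = max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    if x2 <= x1 or y2 <= y1:
        return (0, 0, 0, 0)  # empty rectangle, no error raised
    return (x1, y1, x2 - x1, y2 - y1)

print(intersect((72, 0, 1170, 232), (0, 0, 1242, 375)))  # -> (72, 0, 1170, 232)
print(intersect((0, 0, 10, 10), (20, 20, 5, 5)))         # -> (0, 0, 0, 0)
```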

How to create a Mat from RGB 2D arrays?

```java
Mat om = new Mat();
double[] rx = new double[3];
for (int i = 0; i < sizeA.height; i++) {
    for (int j = 0; j < sizeA.width; j++) {
        rx[0] = r3[i][j];
        rx[1] = g3[i][j];
        rx[2] = b3[i][j];
        om.put(i, j, rx);
    }
}
```

I am using this code but I am not getting any output from it. r3, g3 and b3 are the RGB arrays, and sizeA is the Size of the Mat.
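A likely cause (hedged guess): `new Mat()` allocates no pixel data, so `put()` has nowhere to write; constructing it with a size and type first, e.g. `new Mat(sizeA, CvType.CV_64FC3)`, should help. In the Python bindings the same construction is a one-liner, since OpenCV Mats are NumPy arrays there; a sketch with hypothetical 2x2 channel planes:

```python
import numpy as np

# Hypothetical 2x2 planes standing in for r3, g3, b3
r3 = [[255, 0], [0, 255]]
g3 = [[0, 255], [0, 255]]
b3 = [[0, 0], [255, 255]]

# Stack the planes depth-wise in B, G, R order (OpenCV's convention)
om = np.dstack([b3, g3, r3]).astype(np.uint8)
print(om.shape)           # -> (2, 2, 3)
print(om[0, 0].tolist())  # -> [0, 0, 255]: blue=0, green=0, red=255
```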

Decoding a h264 (High) stream with OpenCV's ffmpeg on Ubuntu

I am working with a video stream (no audio) from an IP camera on Ubuntu 14.04. I am also a beginner with Ubuntu and everything on it. Everything was going great with a camera that has these parameters (from FFmpeg):

```
Input #0, rtsp, from 'rtsp://*private*:8900/live.sdp':
  Metadata:
    title           : RTSP server
  Stream #0:0: Video: h264 (Main), yuv420p(progressive), 352x192, 29.97 tbr, 90k tbn, 180k tbc
```

But then I changed to a newer camera, which has these parameters:

```
Input #0, rtsp, from 'rtsp://*private*/media/video2':
  Metadata:
    title           : VCP IPC Realtime stream
  Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1280x720, 25 fps, 25 tbr, 90k tbn, 50 tbc
```

My C++ program uses OpenCV 3 to process the stream. By default OpenCV uses FFmpeg to decode and display the stream with VideoCapture:

```cpp
VideoCapture vc;
vc.open(input_stream);
while ((vc >> frame), !frame.empty())
{
    // do work
}
```

With the new camera stream I get errors like these (from FFmpeg):

```
[h264 @ 0x7c6980] cabac decode of qscale diff failed at 41 38
[h264 @ 0x7c6980] error while decoding MB 41 38, bytestream (3572)
[h264 @ 0x7c6980] left block unavailable for requested intra mode at 0 44
[h264 @ 0x7c6980] error while decoding MB 0 44, bytestream (4933)
[h264 @ 0x7bc2c0] SEI type 25 truncated at 208
[h264 @ 0x7bfaa0] SEI type 25 truncated at 206
[h264 @ 0x7c6980] left block unavailable for requested intra mode at 0 18
[h264 @ 0x7c6980] error while decoding MB 0 18, bytestream (14717)
```

The image is sometimes glitched, sometimes completely frozen. After a few seconds to a few minutes the stream freezes completely without an error. I tried appending `?tcp` to the input stream, but the video still froze without an error after ~10 seconds. On VLC, however, it plays perfectly.

I installed the newest version (3.2.2) of FFmpeg with `./configure --enable-gpl --enable-libx264`. Now, playing directly with ffplay (instead of launching from source code with OpenCV's VideoCapture), the stream plays better and doesn't freeze, but sometimes still displays warnings:

```
[NULL @ 0x7f834c008c00] SEI type 25 size 896 truncated at 320
[h264 @ 0x7f834c0d5d20] SEI type 25 size 896 truncated at 319
[rtsp @ 0x7f834c0008c0] max delay reached. need to consume packet
[rtsp @ 0x7f834c0008c0] RTP: missed 1 packets
[h264 @ 0x7f834c094740] concealing 675 DC, 675 AC, 675 MV errors in P frame
[NULL @ 0x7f834c008c00] SEI type 25 size 896 truncated at 320
```

Changing the camera hardware is not an option. The camera can be set to encode to h265 or MJPEG, but when encoding to MJPEG it can output only 5 fps, which is not enough. Decoding to a static video is not an option either, because I need to display real-time results about the stream. Maybe I should switch to some other decoder and player? From my research I conclude that I have these options:

- Somehow get OpenCV to use the FFmpeg build from another directory, where it is compiled with libx264.
- Somehow get OpenCV to use libvlc instead of FFmpeg. One example of switching to VLC is [here](http://answers.opencv.org/question/65932/how-to-stream-h264-video-with-rtsp-in-opencv-partially-answered/), but I don't understand it well enough to say if that is what I need. Or maybe I should be [parsing](http://stackoverflow.com/questions/30345495/ffmpeg-h264-parsing) the stream in code? I don't rule out that this could be some basic problem due to a lack of dependencies, because, as I said, I'm a beginner with Ubuntu.
- Use VLC to preprocess the stream, as suggested [here](http://stackoverflow.com/questions/23529620/opencv-and-network-cameras-or-how-to-spy-on-the-neighbors). This is probably slow, which again is bad for real-time results.

Any suggestions and comments will be appreciated.

Open a UDP video stream with GStreamer

Hello, I have a host which sends a video stream with OpenCV and GStreamer. The writer looks like this:

```cpp
writer.open("appsrc ! videoconvert ! x264enc noise-reduction=10000 tune=zerolatency byte-stream=true threads=4 "
            "! mpegtsmux ! udpsink host=192.168.0.148 port=7002",
            0, (double)30, Size(640, 480), true);
```

I can open it with GStreamer with these parameters, and everything works correctly:

```
gst-launch-1.0 udpsrc port=7002 ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! ximagesink sync=false
```

But if I try to open this stream in OpenCV with the following command:

```cpp
VideoCapture cap("udpsrc port=7002 ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! appsink");
```

the program halts during execution of this line:

```cpp
cap >> frame;
```

P.S. OpenCV is built with support for gstreamer-1.0. Thank you for your attention, anunuparov

Access channels of CV_8UC Image

I want to convert a *float* matrix to an *unsigned char* matrix with the same number of channels.

```cpp
cv::Mat imgA(3, 3, CV_32FC(5));
// image is filled with something
cv::Mat imgB;
imgA.convertTo(imgB, CV_8UC(imgA.channels()));
```

How can the additional channels of a CV_8UC image be accessed? The following gives a segmentation fault:

```cpp
std::cout << (int) imgB.at<uchar>(0, 0, 1) << "\n";
```
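In C++, per-channel access on a multi-channel Mat needs a Vec type matching the channel count, e.g. `imgB.at<cv::Vec<uchar, 5>>(0, 0)[1]`; `at<uchar>` on a 5-channel Mat reads out of bounds. A NumPy sketch of the equivalent conversion and channel access (hypothetical 3x3, 5-channel ramp data):

```python
import numpy as np

# 3x3 image with 5 float channels, filled with a ramp
imgA = np.arange(45, dtype=np.float32).reshape(3, 3, 5)

# convertTo(CV_8UC(n)) with default scaling is a saturating cast
imgB = np.clip(imgA, 0, 255).astype(np.uint8)

# channel 1 of pixel (0, 0)
print(int(imgB[0, 0, 1]))  # -> 1
```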

Why does cv::CascadeClassifier work slower in version 3.2.0?

I have found in one of my projects that cv::CascadeClassifier from the new OpenCV 3.2.0 works slower compared to 3.1.0. Why is that? I manually built both versions with pretty much the same settings. After some research I also found that if the object size becomes smaller, the detection speed increases in the new version, whereas in the old version the speed decreases. It seems the scaling method has been changed from a downscale scheme to an upscale one. Am I right? Did someone else notice this?

DNN module - Can I pass in an arbitrary sized input ?

The deep NN module in OpenCV requires me to specify the input size (http://docs.opencv.org/trunk/d5/de7/tutorial_dnn_googlenet.html):

```cpp
resize(img, img, Size(224, 224));       // GoogLeNet accepts only 224x224 RGB images
dnn::Blob inputBlob = dnn::Blob(img);   // convert Mat to dnn::Blob image batch
```

Is it possible to send in an arbitrary-sized input, given that some recent papers accept arbitrary-sized inputs?
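Generally, networks with fully-connected layers (like GoogLeNet's classifier) fix the input size, so arbitrary-sized images have to be resized first; fully-convolutional architectures are the usual exception in those papers. A hedged sketch of aspect-preserving resizing (letterboxing) to the required size, NumPy only with nearest-neighbour sampling for self-containedness (in practice you would use cv2.resize):

```python
import numpy as np

def letterbox(img, size=224):
    """Resize to size x size preserving aspect ratio (nearest-neighbour),
    padding the remainder with zeros."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh = max(1, int(round(h * scale)))
    nw = max(1, int(round(w * scale)))
    # nearest-neighbour index maps for rows and columns
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    out = np.zeros((size, size) + img.shape[2:], dtype=img.dtype)
    out[(size - nh) // 2:(size - nh) // 2 + nh,
        (size - nw) // 2:(size - nw) // 2 + nw] = resized
    return out

print(letterbox(np.ones((448, 224, 3), np.uint8)).shape)  # -> (224, 224, 3)
```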

opencv VideoCapture property

Hi, is there any API in OpenCV (or any programmatic way) to get the range of a particular property of the VideoCapture class, like exposure or brightness?
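As far as I know, OpenCV exposes no call that reports a property's valid range; that information lives in the backend driver (e.g. V4L2). One workaround is to probe: `set()` candidate values and read them back with `get()`. A sketch, with a hypothetical stub class standing in for a real `cv2.VideoCapture` (the property id `10` is arbitrary here):

```python
def probe_range(cap, prop, candidates):
    """Return the candidate values the capture device actually accepts:
    set() each one, then check it survives a get() round-trip."""
    accepted = []
    original = cap.get(prop)
    for v in candidates:
        cap.set(prop, v)
        if abs(cap.get(prop) - v) < 1e-6:
            accepted.append(v)
    cap.set(prop, original)  # restore the original setting
    return accepted

# Stub standing in for cv2.VideoCapture: clamps the value to [0, 100]
class FakeCap:
    def __init__(self):
        self.val = 50.0
    def get(self, prop):
        return self.val
    def set(self, prop, v):
        self.val = min(100.0, max(0.0, float(v)))

print(probe_range(FakeCap(), 10, [-10, 0, 50, 100, 150]))  # -> [0, 50, 100]
```

Real drivers may quantize values rather than clamp them, so a tolerance looser than `1e-6` can be needed in practice.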

Robust human detection and tracking in a crowded area

Hello! I am working on an application where I need to detect and track people in a crowded indoor area (like a mall). Right now I am using the OpenCV background subtraction class (MOG2) to detect blobs, and a Kalman filter plus the Hungarian algorithm for tracking (based on this video: https://www.youtube.com/watch?v=2fW5TmAtAXM). The issues I'm having are:

1. blobs merging together when two people come close to each other;
2. parts of a person not getting detected, which leads to false and multiple detections on one person;
3. the background subtraction itself leading to too many false detections.

I would like to know your suggestions for improving this and any solutions to fix these problems. Thanks in advance!

How to do matching in OpenCV Android

![image description](/upfiles/14829299266523964.jpg) I'm doing an Android project to scan the code on the card (held by a girl in the above image) and identify a person. I searched on Google and came across OpenCV. I'm learning, but I'm not getting the idea of how to do it. Can you please suggest how to do this in Android using OpenCV? Sample code would be very helpful.

How to find the angle from x, y points?

I have an image which contains the 12 center points of rectangles. I am not sure how to write code to rotate (deskew) the image from those points: [C:\fakepath\deskew.png](/upfiles/14829318308820113.png). Please help me with some code in OpenCV and Python.
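One common approach, assuming the 12 centers lie roughly along a line whose tilt is the skew angle: fit a least-squares line through the points and take the angle from its slope. A NumPy sketch with hypothetical points on a line tilted 5 degrees:

```python
import numpy as np

def skew_angle_deg(points):
    """Least-squares line through (x, y) points; returns its angle
    from the horizontal in degrees."""
    pts = np.asarray(points, dtype=float)
    slope, _ = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return float(np.degrees(np.arctan(slope)))

# Hypothetical centers along a line tilted ~5 degrees
xs = np.arange(12) * 40.0
ys = xs * np.tan(np.radians(5.0)) + 100.0
angle = skew_angle_deg(np.column_stack([xs, ys]))
print(round(angle, 2))  # -> 5.0
```

The negated angle can then be fed to `cv2.getRotationMatrix2D` and `cv2.warpAffine` to deskew the image.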


How to use Aruco from Java?

Hello all. I built OpenCV and the Aruco module from Git. Build script:

```
mkdir -p build_android_armeabi-v7a
cd build_android_armeabi-v7a
cmake \
  -DANDROID_ABI=armeabi-v7a \
  -DANDROID_NATIVE_API_LEVEL=android-14 \
  -DANDROID_SDK=/Users/admin/Library/Android/sdk \
  -DANDROID_NDK=/Users/admin/Library/Android/sdk/ndk-bundle \
  -DANT_EXECUTABLE=/Users/admin/Documents/apache-ant/bin/ant \
  -DJAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_112.jdk/Contents/Home \
  -DJAVA_JVM_LIBRARY=/Library/Java/JavaVirtualMachines/jdk1.8.0_112.jdk/Contents/Home/include/jni.h \
  -DJAVA_INCLUDE_PATH=/Library/Java/JavaVirtualMachines/jdk1.8.0_112.jdk/Contents/Home/include \
  -DBUILD_FAT_JAVA_LIB=ON \
  -DBUILD_SHARED_LIBS=OFF \
  -DCMAKE_TOOLCHAIN_FILE=../android/android.toolchain.cmake $@ ../..
```

Everything went well. I included the libraries and generated Java classes in Android Studio. OpenCV is working; I tried several functions and they work. But something is wrong with Aruco: it always crashes on **Aruco.detectMarkers**:

```java
Mat inputImage = inputFrame.gray();
List<Mat> corners = new ArrayList<>();
Mat ids = new Mat();
Dictionary dictionary = Aruco.getPredefinedDictionary(Aruco.DICT_6X6_250);
Aruco.detectMarkers(inputImage, dictionary, corners, ids);
```

If I use this instead:

```java
List<Mat> rejectedImgPoints = new ArrayList<>();
DetectorParameters detectorParameters = DetectorParameters.create();
Aruco.detectMarkers(inputImage, dictionary, corners, ids, detectorParameters, rejectedImgPoints);
```

I always get an assert on this line:

```cpp
CV_Assert(params->adaptiveThreshWinSizeMin >= 3 && params->adaptiveThreshWinSizeMax >= 3);
```

**Please help!** I would not want to use C++ code. I want to write the program in Java only.

openmp with sections directive

Hi, I have a tracking algorithm with two main parts: 1. the tracking algorithm itself, and 2. video overlay. A lot of stuff needs to be overlaid and it takes a lot of time. I was thinking of parallelizing the two parts using OpenMP with minimal effort, so I thought of using the `sections` directive available in OpenMP. The following code is just a crude form of what I am trying to achieve:

```cpp
#include "opencv2\highgui\highgui.hpp"
#include "opencv2\core\core.hpp"
#include "opencv2\imgproc\imgproc.hpp"
#include <iostream>
#include <omp.h>
#include "Timer.h"

using namespace std;
using namespace cv;

int main()
{
    VideoCapture cap(0); // start the webcam
    Mat frame, roi;
    Timer t;             // timer class
    int frameNo = 0;
    double summ = 0;
    while (true)
    {
        cap >> frame;
        frameNo++;
        // extract a deep copy of the region of interest, for tracking purposes
        roi = frame(Rect(100, 100, 300, 300)).clone();
        t.start(); // start the timer
        #pragma omp parallel sections
        {
            #pragma omp section // first section: tracking algorithm
            {
                // some tracking algorithm which uses only the "roi" variable
                GaussianBlur(roi, roi, Size(5, 5), 0, 0, BORDER_REPLICATE);
            }
            #pragma omp section // second section: video overlay
            {
                // a lot of overlay in different video parts, using only "frame"
                putText(frame, "string 1", Point(10, 10), 1, 1, Scalar(1));
                putText(frame, "string 2", Point(20, 20), 1, 1, Scalar(1));
                putText(frame, "string 3", Point(30, 30), 1, 1, Scalar(1));
                putText(frame, "string 4", Point(40, 40), 1, 1, Scalar(1));
                putText(frame, "string 5", Point(50, 50), 1, 1, Scalar(1));
                putText(frame, "string 6", Point(60, 60), 1, 1, Scalar(1));
                putText(frame, "string 7", Point(70, 70), 1, 1, Scalar(1));
                putText(frame, "string 8", Point(80, 80), 1, 1, Scalar(1));
                putText(frame, "string 9", Point(90, 90), 1, 1, Scalar(1));
                putText(frame, "string 10", Point(100, 100), 1, 1, Scalar(1));
            }
        }
        t.stop(); // stop the timer
        summ += t.getElapsedTimeInMilliSec();
        if (frameNo % 10 == 0) // average total time over 10 frames
        {
            cout << summ / 10 << endl;
            summ = 0;
        }
        imshow("frame", frame);
        if (waitKey(10) == 27)
            break;
    }
    return 0;
}
```

I don't seem to see a performance boost in the timing analysis, and in some cases the timing with OpenMP gets worse, even though I am using different variables in my sections. My question is whether I am using the right approach (the `sections` directive) for my case, or is there a better way to parallelize my existing code using OpenMP with minimal effort? Thanks.

Color band on SURF descriptors

I'm reading [this](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6847226) paper on obtaining better VLAD descriptors. The main difference is the use of the so-called **CSURF**, which is (quoting the paper):

> In order to extract CSURF, the image is first transformed to grayscale and interest points are computed using the standard SURF algorithm. Then, instead of computing the SURF descriptor of each interest point on the intensity channel, CSURF computes three SURF descriptors, one on each color band.

How could I implement this in OpenCV? All the descriptors that I've seen so far (SIFT, SURF, etc.) are computed on the grayscale image. How could I use SURF to describe keypoints based on one color band (red, green or blue)?

Problem getting Mat from GigE camera (Sapera SDK)

Hi, I am trying to get a Sapera SDK SapBuffer object (Genie Nano GigE camera) as a cv::Mat. From the documentation for this SDK I know that "The SapBuffer Class includes the functionality to manipulate an array of buffer resources." From the SDK I know that `SapBuffer* Buffers` holds a 1024x1280 image, type 32, depth 8. I am not sure if I am correctly acquiring the Mat object (the rest of the code is from an example for this SDK):

```cpp
SapLocation loc(serverName, 0);
Acq = new SapAcquisition(loc, FALSE); // ETHERNET camera doesn't work with this?
AcqDevice = new SapAcqDevice(loc, FALSE);
Buffers = new SapBufferWithTrash(2, AcqDevice);
View = new SapView(Buffers, SapHwndAutomatic);
Xfer = new SapAcqDeviceToBuf(AcqDevice, Buffers, AcqCallback, View);

// Create acquisition object
if (AcqDevice && !*AcqDevice && !AcqDevice->Create())
    goto FreeHandles;

// Create buffer object
if (Buffers && !*Buffers && !Buffers->Create())
    goto FreeHandles;

// Create transfer object
if (Xfer && !*Xfer && !Xfer->Create())
    goto FreeHandles;

// Create view object
if (View && !*View && !View->Create())
    goto FreeHandles;

// Start continuous grab
Xfer->Grab();

src1 = new Mat(1024, 1280, CV_8U, Buffers);
std::cout << "Total(): " << src1->total() << std::endl;
//Buffers->GetAddress(pData);
imwrite("E:\\OpenCV\\Gray_Image.jpg", *src1);
delete src1;

printf("Press any key to stop grab\n");
CorGetch();

// Stop grab
Xfer->Freeze();
if (!Xfer->Wait(5000))
    printf("Grab could not stop properly.\n");
```

It is part of an example; my own code is only `Mat *src1 = NULL;` and the OpenCV part. After running, it shows "The program stopped running". Please help.

Can't build OpenCV 2.4.11 with Python

When I try to build OpenCV 2.4.11 with both C++ and Python, an error comes up for Python:

```
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named numpy.distutils
```

The thing is, numpy is installed for Python 2.7.6 and not for Python 2.7.9, which is what OpenCV is trying to build against. How do I rectify this issue? How can I make OpenCV build with Python 2.7.6 in particular, or install numpy for Python 2.7.9?

