Thursday, March 18, 2010
It's time to pick a simple example to gain more confidence! Finished the 'drawing.c' example. It goes through the OpenCV drawing functions: lines, rectangles, and so on. They are all pretty rich APIs, going well beyond the basic drawing primitives. The cvPolyLine API is a bit difficult to understand at first; it turns out it can draw a number of contours with a single call. The angle arguments of cvEllipse are confusing.
OpenCV uses a liberally licensed vector font called Hershey. Interesting story behind it:
http://idlastro.gsfc.nasa.gov/idl_html_help/Hershey_Vector_Font_Samples.html
The example uses the random number generator APIs provided by OpenCV. Equally impressive, they can produce either a uniform or a normal distribution. Useful for generating an array of numbers, such as a 'noisy screen'.
Wednesday, March 17, 2010
With some experimentation, it turns out that adjusting the following parameters is useful:
- command-line thresholds: increasing the range reduces the noise seen in the 'Diff window'
- perimeter ratio argument to cvSegmentFGMask(): reducing this keeps smaller areas from being thrown away. It works well once the noise has been reduced.
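The threshold effect is easy to see in a bare-bones frame-difference sketch (plain C for intuition, not the demo's actual OpenCV calls; the names are mine):

```c
#include <stdlib.h>

/* Mark a pixel as foreground when it differs from the background by more
 * than thresh. Raising thresh is the same move as widening the demo's
 * command-line threshold range: small sensor flicker falls below it and
 * disappears from the 'Diff window'. Returns the foreground pixel count. */
static int diff_mask(const unsigned char *bg, const unsigned char *cur,
                     unsigned char *mask, int n, int thresh)
{
    int fg = 0;
    for (int i = 0; i < n; i++) {
        int d = abs((int)cur[i] - (int)bg[i]);
        mask[i] = (unsigned char)(d > thresh ? 255 : 0);
        if (mask[i])
            fg++;
    }
    return fg;
}
```

With the flicker suppressed this way, a smaller perimeter ratio has a chance to keep genuinely small regions instead of noise blobs.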
Notes: a car leaving the parking lot leaves a permanent 'change'. Passing cars leave marks too. Headlight beams introduce a big block of 'differences'. A low frame rate makes fast-moving objects blurry. A person is more likely to show up on the 'radar' wearing light-colored clothes; skin reads as 'dark'.
The number of frames to learn MATTERS: use a small value if the expected movement is small, as in the tennis lesson videos. Otherwise, all the subsequent movements will still be 'in the dark'.
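Why the frame count matters shows up even in a plain running-average background model (a much simpler stand-in than the codebook model, sketched here just for intuition): the per-frame learning rate is roughly 1 over the number of learning frames.

```c
/* Running-average background update: bg += alpha * (frame - bg), with
 * alpha ~ 1 / number-of-learning-frames. A large alpha (few frames)
 * soaks slow or small movement into the background quickly, so it stops
 * registering as a difference; a small alpha keeps it visible longer. */
static void accumulate_background(float *bg, const unsigned char *frame,
                                  int n, float alpha)
{
    for (int i = 0; i < n; i++)
        bg[i] += alpha * ((float)frame[i] - bg[i]);
}
```

So for the tennis videos, fewer learning frames means the small, repeated swings are absorbed fast; too few and everything afterwards is 'in the dark'.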
Finally, having trouble editing SVN log messages. Need to set up the pre-revprop-change hook. The templates that come with the SVN installation are Unix shell scripts. Found a great working one for Windows here (awesome!):
http://ayria.livejournal.com/33438.html
Saturday, March 13, 2010
Finally able to run the bgfg_codebook demo. A few hours spent getting the videos ready: a surveillance-camera video posted on YouTube, downloaded as MP4 (discovered that short ones, like a 0:09 clip, don't seem to work). The next step was converting it to a suitable input format. MediaCoder does that job (the latest 0.7xx version keeps failing, though). The format chosen is Huffyuv (requires a simple codec installation) in an AVI container. I suppose the requirement is to have something like Motion JPEG, where every frame is a full frame, unlike MP4. For some reason this PC is unable to play Motion JPEG in an AVI container.
Using the default parameters is not very good. The 'Frame Differences' output seems alright, but the 'Connected Component' output is often unable to 'connect the dots'. Tested using the Car-Theft-At-JoJo video.
Downloaded the Tennis Lesson #1 from "Fuzzy Yellow Balls". Wonder if that would give a better result.
Friday, March 12, 2010
Spent the last few days on the bgfg_codebook sample. At this point, slowly examining the source code of the sample program, which exercises the functions of the cvaux module cvbgfg_codebook. Have yet to run the program. Along the way, explored a little of the dynamic storage provided by CvCore: CvMemStorage and CvSeq. Another topic touched on is the built-in contour tracing, supposedly the same as edge detection / perimeter finding. The cvApproxPoly() function uses something called the Douglas-Peucker algorithm to reduce the number of points needed to represent a curve. Wonder if that is what Inkscape does in 'Simplify Path'. Amazed by the depth of this OpenCV library.
Saturday, March 6, 2010
cvAdaptiveSkinDetector
It turns out that the argument to cvWaitKey() is a timeout, in milliseconds, for waiting for user input. Passing zero (0) makes it wait until the user presses a key.
Spent a few hours today looking through the AdaptiveSkinDetector sample program. It is an exercise of the cvAdaptiveSkinDetector module. The program expects either a series of frame pictures split from a video or frames grabbed live from the webcam input. The skin threshold is just a range of hue values. The adaptive part comes in as it adjusts that range based on the histogram collected from frame to frame.
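A rough plain-C sketch of that idea (illustrative only -- the module's real logic is richer, and these names are mine): classify by a hue interval, then re-center the interval on the peak of a hue histogram collected over recent frames.

```c
/* Skin test: a pixel passes if its hue falls inside [lo, hi]. */
static int is_skin_hue(int hue, int lo, int hi)
{
    return hue >= lo && hue <= hi;
}

/* One crude way to 'adapt': re-center the range, width unchanged, on the
 * peak of a hue histogram gathered from recent frames, clamped to the
 * valid bin range. */
static void adapt_hue_range(const int *hist, int bins, int *lo, int *hi)
{
    int peak = 0, half = (*hi - *lo) / 2;
    for (int h = 1; h < bins; h++)
        if (hist[h] > hist[peak])
            peak = h;
    *lo = peak - half;
    *hi = peak + half;
    if (*lo < 0) *lo = 0;
    if (*hi > bins - 1) *hi = bins - 1;
}
```

Run frame to frame, the range drifts toward wherever the skin-colored mass actually sits under the current lighting.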