Black and White Marker Detection

In this article, I try to give some technical pointers on how to implement a simple marker detection engine similar to the technology behind the well-known ARToolKit software.

We are going to focus on the simplest case: a marker that contains black and white squares, such as the one shown below.

A black and white marker that can encode 1024 combinations (using a Hamming code)

While digging around the web a few months ago, I found a tremendous little library: ARUCO. Released under a permissive BSD license, it provides a basic marker detection implementation. What is interesting is that it demonstrates that one can build augmented reality projects with OpenCV alone.

Augmented Reality with ARUCO and Ogre

Based on the explanations on the ARUCO site and my own experience, here is a summary of the general steps needed to achieve frame-to-frame marker detection:

  1. Access the camera image: video capture is not simple and is OS-dependent. Fortunately, OpenCV graciously provides a video capture module named “cvCapture”, based on ffmpeg on one side and on Windows™ DirectShow on the other.
  2. Find the camera's intrinsic calibration parameters: a focal length and an optical center should be enough, but one can also estimate distortion coefficients and other terms.
  3. Provide an algorithm that can detect image edges/borders. We can usually select one of the following options (depending on the robustness we want to reach):
    • Simple threshold (fast but not robust to lighting variations),
    • Canny edge detection (quite expensive in terms of CPU usage, but more robust),
    • Adaptive threshold (an extension of the simple threshold that uses neighboring pixels to find the correct threshold value locally),
    • … others.
    • These functions are available in OpenCV (cvAdaptiveThreshold, cvCanny, …) and are quite well optimized.
  4. Convert border pixels into polygons (OpenCV: cvFindContours), and keep the outer polygons that contain exactly 4 segments (inner polygons should lie inside the black and white marker).
  5. For each polygon in the list of candidates:
    • Project the current polygon onto a square (perspective rectification).
    • Verify that there is a black border around the center area of the marker.
    • Try to identify the code contained inside the marker. ARUCO uses a hamming code so that errors can be detected.
  6. Once the marker is identified, it is possible to estimate its position and orientation (also named pose) according to the camera reference. To do that, there exists multiple possibilities:
    • Estimate the exact marker pose by solving a polynomial equation. This equation needs 3 points and can yield zero, one (ideally), or up to four solutions. Usually, the 4th point is used to pick the best solution: its 3D coordinates are projected onto the image plane using the candidate pose and the camera calibration, and the distance between this projected 2D point and the measured coordinates gives an error to minimize. The problem with this technique is that there is no exact solution, because the calibration is always an approximation of the real projective geometry. The resulting pose is therefore a poor approximation, and a “jitter” effect appears in the frame-to-frame process (an example of object tracking with jitter effects on youtube.com).
    • Since the first approach only gives an exact solution for 3 points, we can instead look for a non-exact solution obtained by minimizing a reprojection distance over all 4 points. To do that, we can use a standard Gauss–Newton algorithm (or a Gauss–Newton-based Levenberg–Marquardt algorithm, to avoid strange parallax inversions when the four possible exact solutions are close to each other). Note that, once again, OpenCV provides functions to solve this kind of problem (see cvFindExtrinsicCameraParams2 for more information).
  7. Last but not least, you have to integrate a rendering module into your library, since it is mandatory (is it?) to display virtual objects in augmented reality applications.
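To make steps 3 and 4 concrete, here is a minimal sketch using OpenCV's Python bindings (cv2). The threshold parameters, the polygon-approximation tolerance and the minimum-area filter below are arbitrary illustration values, not the ones used by ARUCO:

```python
import cv2
import numpy as np

def find_quad_candidates(gray):
    """Steps 3-4: threshold the image, extract contours, keep convex 4-gons."""
    # Adaptive threshold: more robust to lighting variations than a global one.
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 7, 7)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    quads = []
    for c in contours:
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.03 * peri, True)
        # Keep convex polygons with exactly 4 vertices and a reasonable area.
        if (len(approx) == 4 and cv2.isContourConvex(approx)
                and cv2.contourArea(approx) > 100):
            quads.append(approx.reshape(4, 2))
    return quads
```

Each returned quad is a 4×2 array of corner coordinates, ready for the per-candidate checks of step 5.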
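About step 5: the classic ARUCO markers store their id in a 5×5 grid of cells, where each row carries 2 data bits protected by 3 parity bits, which gives the 2^10 = 1024 combinations mentioned above. Assuming the four valid row words below are those of the original implementation (an assumption worth double-checking against the ARUCO sources), identification can be sketched as:

```python
# The four valid 5-bit row words (assumed from the classic ArUco dictionary);
# each encodes 2 data bits with 3 parity bits. The minimum Hamming distance
# between any two words is 3, so one flipped bit per row can be corrected.
WORDS = ((1, 0, 0, 0, 0), (1, 0, 1, 1, 1), (0, 1, 0, 0, 1), (0, 1, 1, 1, 0))

def decode_rows(rows):
    """rows: five 5-bit tuples read from the marker interior.
    Returns (marker_id, n_corrected_bits), or None when any row is more
    than 1 bit away from every valid word (i.e. not a marker)."""
    marker_id, corrected = 0, 0
    for row in rows:
        dists = [sum(a != b for a, b in zip(row, w)) for w in WORDS]
        d = min(dists)
        if d > 1:               # uncorrectable row -> reject this candidate
            return None
        corrected += d
        marker_id = (marker_id << 2) | dists.index(d)  # append 2 data bits
    return marker_id, corrected
```

In practice this is run on the rectified (unwarped) marker image for each of the 4 possible rotations, keeping the one that decodes successfully.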
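For step 6, cv2.solvePnP in its default iterative mode minimizes the reprojection error over the 4 corners with a Levenberg–Marquardt optimization, close in spirit to the second option described above. A sketch (the corner ordering below is an assumption for illustration; it must match the order produced by your detection step):

```python
import cv2
import numpy as np

def estimate_pose(corners_2d, marker_size, camera_matrix, dist_coeffs):
    """Step 6: pose of a square marker of side `marker_size`
    (the translation comes out in the same unit as marker_size)."""
    s = marker_size / 2.0
    # 3D corners in the marker's own frame (z = 0 plane), in the same
    # order as corners_2d: top-left, top-right, bottom-right, bottom-left.
    obj_pts = np.array([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]],
                       dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj_pts,
                                  np.asarray(corners_2d, dtype=np.float64),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return ok, rvec, tvec
```

rvec/tvec express the marker-to-camera transform; cv2.Rodrigues converts rvec to a 3×3 rotation matrix for the rendering step.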

10 thoughts on “Black and White Marker Detection”

    • Ogre is fine for the rendering but it’s limited if you wish to handle the whole rendering workflow. However, this question raises many other questions and may be the subject for another topic soon ;)

  1. Pingback: ARUCO: a technical evaluation | ipl.Images

  2. Hi

    I need help regarding marker detection. I am using a checkerboard pattern for camera calibration with findChessboardCorners, then calibrateCamera to estimate the camera parameters. Now I want to use marker detection to find the point of origin from which to start building a 3D model, but with occlusion the camera calibration doesn’t detect any corners.

    So, given what you said, should I keep the checkerboard pattern, or use a rectangle with a black outline and white fill?

    But I don’t know how I would go about finding the camera parameters.

    plz help
    regards

    • Hi,

      I’m not sure about your question:
      * ARUCO uses the standard calibration process (OpenCV)
      * calibration is a plus; otherwise, you can use some default parameters

  3. Would this work? I use Mat images, easier to process.
    Step 1: find camera parameters using a checkerboard.
    Step 2: another calibration target, a small rectangle in a corner.
    Step 3: detect the edges of the rectangle (green and dark green), since I have to segment it from the foreground object, so I can binarize and then work with that.
    Step 4: convert to polygons, there should be 1 inner and 1 outer (cvFindContour).
    Step 5: I should only have one polygon in the corner of the sheet.

    After this I am lost:
    ■Project the current polygon into a square (projective reprojection).
    ■Verify that there is a black border around the center area of the marker.
    ■Try to identify the code contained inside the marker. ARUCO uses a hamming code so that errors can be detected.

    How would I do the above?

    And then for steps 6 and 7, I am completely lost.

    I need a camera calibration program that is robust to occlusions.

    thanks

    • Hi, are you using opencv on this?

      I am stuck on step 4. I have already converted to polygons using OpenCV findContour, but how can I find the outer polygons?

      • Imagine this.

        I have a paper on which I have made marks (a circle or a square) at each of the four corners. I want to use marker detection to find the location of those four corners and keep track of them at any angle and orientation of the paper, even with occlusion; it would be awesome if I could find a way to do this. I would then use cornerSubPix to get the 2D locations, and I would be confident about how to continue from there. The only problem is keeping track of the four marked corners on the paper: the corner detection method I am currently using keeps jumping around, detecting other points as well as the four marked corners.

        thanks

        • That’s not an easy question.

          To detect markers, you have to be discriminant enough (e.g. a pattern inside to identify them).

          Once you have identified the markers correctly, you can try to “follow” them recursively (refer to the Lucas–Kanade tracking approach to do that => OpenCV).

  4. I’m a new researcher in the field of augmented reality and visual tracking, and I wonder if you could help me answer this question.
    The ArUco augmented reality toolkit cannot detect markers when the markers’ board makes fast or sudden movements; however, the markers it does detect are located precisely, without any jittering error.
    I have applied another algorithm based on a mean-shift tracker and the FAST detector (which relies on tracking keypoints), and it succeeded in detecting all the markers, including those ArUco failed to detect.
    The only problem is that the markers which were not detected by ArUco are detected by my algorithm with jittering errors; the rest of the markers are detected with the same accuracy.
    My question is: can my algorithm be considered an enhancement of the ArUco toolkit, i.e. have I handled its motion-blur problem or not?
    In other words, is it better to detect a few markers without jittering errors, or all markers with jittering errors on the blurred ones?
    Many thanks in advance.
