Make Parametric Mesh is my friend?

Hi,

I’m trying to calibrate a camera and a projector together.
There are several applications that, using OpenCV primitives, return parameters defining a mesh that describes the projection area matching the camera view, also correcting lens distortion.
The data I can get from these applications have this form (I’ve also typed them into NumPy arrays in the sketch just after the listing):

Camera:

  • reprojection error: 0.210507

  • K:
    [2472.650464858401, 0, 1080.34400778601;
    0, 2478.277708298557, 592.8451712869836;
    0, 0, 1]

  • kc: [-0.07008445244701324, -0.4577870235611326, -0.01161241742799688, 0.001468694637194292, 0]

Projector:

  • reprojection error: 0.223887

  • K:
    [2340.992905452029, 0, 549.7710568026544;
    0, 2330.14648205205, 826.4022818412827;
    0, 0, 1]

  • kc: [-0.08960494425914402, 0.3183706281153378, -0.01535928252295888, -0.0009942819072223292, 0]

Stereo:

  • reprojection error: 0.43296

  • R:
    [0.9219375437963626, -0.01707792154636977, -0.3869618972642992;
    0.0445729663540128, 0.9970684541428851, 0.06219122465043889;
    0.3847654038620599, -0.07458446452695999, 0.9199960552318105]

  • T: [344.9443200939483; -263.7704818566896; 197.2384908148998]
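
In case it helps, here are the same values typed into NumPy arrays, the way I would pass them to OpenCV (the variable names are just mine, and treating T as millimeters is only my guess):

    import numpy as np

    # Camera intrinsics (K) and distortion coefficients (kc)
    K_cam = np.array([[2472.650464858401, 0.0, 1080.34400778601],
                      [0.0, 2478.277708298557, 592.8451712869836],
                      [0.0, 0.0, 1.0]])
    kc_cam = np.array([-0.07008445244701324, -0.4577870235611326,
                       -0.01161241742799688, 0.001468694637194292, 0.0])

    # Projector intrinsics and distortion coefficients
    K_proj = np.array([[2340.992905452029, 0.0, 549.7710568026544],
                       [0.0, 2330.14648205205, 826.4022818412827],
                       [0.0, 0.0, 1.0]])
    kc_proj = np.array([-0.08960494425914402, 0.3183706281153378,
                        -0.01535928252295888, -0.0009942819072223292, 0.0])

    # Stereo extrinsics: rotation and translation between the two devices
    # (units of T unknown; millimeters is an assumption)
    R = np.array([[0.9219375437963626, -0.01707792154636977, -0.3869618972642992],
                  [0.0445729663540128,  0.9970684541428851,  0.06219122465043889],
                  [0.3847654038620599, -0.07458446452695999, 0.9199960552318105]])
    T = np.array([344.9443200939483, -263.7704818566896, 197.2384908148998])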

I think the “stereo” data are the ones I really need.

Now, I wish to make a mesh described by these parameters and apply a texture to it. As you can guess, the texture is the camera’s video feed, so what I get out of the mesh is the matching image to project.

I see the “Make Parametric Mesh” node, and I suspect it could be the one I need to build this mesh.
I don’t know much about parametric math, but before digging into it, I’d like to know whether what I’m aiming for is actually possible with this node, and to get some hints if possible.

bye, thanks

michele

I just found “Warp Image with Projection Mesh”.

My hypothesis is that, if “Make Parametric Mesh” is the right way to compute the mesh point coordinates and I can find a way to write them to a file, then “Warp Image with Projection Mesh” is the right node to display a calibrated image.

Does it make sense?

@cremaschi, I’m sorry I’ve been slow to respond.

I’m trying to figure out what those numbers mean. Looking at this camera calibration derivation, it seems your “K” matrix contains the Focal Length, Principal Point, and Skew Coefficient values, and your “kc” vector contains the Radial and Tangential Distortion Coefficients. Together those form a complete set of Intrinsic Parameters for the Heikkilä/Silvén camera model.
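
In code terms, my reading of that model is roughly the following (a sketch of the usual OpenCV-style pinhole plus radial/tangential distortion projection, not anything specific to your calibration software):

    import numpy as np

    def project_point(point, K, kc):
        """Project a 3D point (in the camera's frame) to pixel coordinates."""
        X, Y, Z = point
        x, y = X / Z, Y / Z                            # normalized image coordinates
        k1, k2, p1, p2, k3 = kc                        # radial (k*) and tangential (p*) coefficients
        r2 = x*x + y*y
        radial = 1.0 + k1*r2 + k2*r2**2 + k3*r2**3
        xd = x*radial + 2*p1*x*y + p2*(r2 + 2*x*x)     # distorted x
        yd = y*radial + p1*(r2 + 2*y*y) + 2*p2*x*y     # distorted y
        fx, skew, cx = K[0, 0], K[0, 1], K[0, 2]       # focal length, skew, principal point
        fy, cy = K[1, 1], K[1, 2]
        return fx*xd + skew*yd + cx, fy*yd + cy

    # Example with the camera values above: a point on the optical axis
    # lands exactly on the principal point.
    K_cam = np.array([[2472.650464858401, 0, 1080.34400778601],
                      [0, 2478.277708298557, 592.8451712869836],
                      [0, 0, 1]])
    kc_cam = [-0.07008445244701324, -0.4577870235611326,
              -0.01161241742799688, 0.001468694637194292, 0]
    print(project_point((0.0, 0.0, 1.0), K_cam, kc_cam))   # -> (1080.344..., 592.845...)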

Your “R” matrix and “T” vector appear to be the Extrinsic Parameters — a rotation matrix and translation vector, respectively. I’m not sure why they’re labeled “Stereo”.

So, I think you need to make a 3D mesh based on the Intrinsic Parameters, and place it in 3D space using the Extrinsic Parameters.

First, the easy part, the Extrinsic Parameters — at the bottom of this page there’s a JavaScript widget to convert a 3x3 rotation matrix into a quaternion. Entering your matrix, I get quaternion [-0.034904, -0.196936, 0.015732, 0.979669] — you can enter this into Vuo’s Make Quaternion Transform node. Likewise with the “T” vector (but given how large those values are relative to Vuo’s -1 to +1 coordinate space, you’ll have to apply a scale transform, too).
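
If you’d rather script the conversion than use the web widget, SciPy can do it; its as_quat() returns quaternions in (x, y, z, w) order, which is how I’ve listed the values above. The scale factor for T below is my assumption (treating the units as millimeters):

    import numpy as np
    from scipy.spatial.transform import Rotation

    R = np.array([[0.9219375437963626, -0.01707792154636977, -0.3869618972642992],
                  [0.0445729663540128,  0.9970684541428851,  0.06219122465043889],
                  [0.3847654038620599, -0.07458446452695999, 0.9199960552318105]])

    # SciPy returns quaternions in (x, y, z, w) order.
    q = Rotation.from_matrix(R).as_quat()
    print(q)   # ~[-0.034904, -0.196936, 0.015732, 0.979669]

    # The translation is far outside Vuo's -1..+1 coordinate range,
    # so scale it down before using it.
    T = np.array([344.9443200939483, -263.7704818566896, 197.2384908148998])
    scale = 1.0 / 1000.0   # assumption: millimeters -> meters; adjust to taste
    print(T * scale)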

Next, how do we make a mesh based on the Intrinsic Parameters? If we can find parametric equations of the form (x,y,z) = f(u,v), we can use Vuo’s Make Parametric Mesh node to produce a mesh. I skimmed through the original 1997 Heikkilä/Silvén paper, but didn’t find anything appropriate, and I don’t understand enough of the math yet to devise my own set of equations. Any ideas? Do you know of any open-source software that can generate a mesh given the camera’s Intrinsic Parameters?
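
To make the question a bit more concrete, here’s the kind of (u,v) to vertex-position mapping I imagine such equations would produce: a grid of image points pushed through the lens model with OpenCV, with the original grid positions kept as texture coordinates. This is only a sketch of the idea; whether this particular warp (and its direction) is what your projector setup needs is exactly the part I don’t know. The image resolution below is a placeholder.

    import numpy as np
    import cv2

    K = np.array([[2472.650464858401, 0, 1080.34400778601],
                  [0, 2478.277708298557, 592.8451712869836],
                  [0, 0, 1]])
    kc = np.array([-0.07008445244701324, -0.4577870235611326,
                   -0.01161241742799688, 0.001468694637194292, 0.0])
    width, height = 1920, 1080   # assumption: substitute your camera's actual resolution
    rows, cols = 32, 32          # mesh resolution

    # Grid of pixel coordinates covering the image (these play the role of (u,v)).
    u, v = np.meshgrid(np.linspace(0, width - 1, cols),
                       np.linspace(0, height - 1, rows))
    pixels = np.stack([u.ravel(), v.ravel()], axis=-1).reshape(-1, 1, 2)

    # Undistort and normalize: each pixel becomes an ideal (x, y) on the z=1 plane.
    ideal = cv2.undistortPoints(pixels.astype(np.float64), K, kc).reshape(rows, cols, 2)

    # One possible mesh: vertex positions = undistorted positions,
    # texture coordinates = the original grid scaled to 0..1.
    positions = np.dstack([ideal, np.zeros((rows, cols, 1))])   # (x, y, 0) per vertex
    texcoords = np.dstack([u / (width - 1), v / (height - 1)])
    print(positions.shape, texcoords.shape)   # (32, 32, 3) (32, 32, 2)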

Hi smokris,

your contribution is awesome, thanks!!! This sounds like a great step forward to me.
I’ll study and test the solution you provided, and go back over the information I collected to see whether the remaining open question can find an answer.

keep in touch
michele

Dear all,

if I understand correctly, I think the first part of the answer to the question posted here could be the solution: http://stackoverflow.com/questions/22290017/how-do-i-re-project-points-in-a-camera-projector-system-after-calibration
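
If I read it right, the idea is roughly: undistort a camera pixel, back-project it to a 3D point at some assumed depth, move it into the projector’s frame with R and T, and project it with the projector’s intrinsics. My rough sketch of that chain, with OpenCV (the depth value and the direction of R/T are my assumptions):

    import numpy as np
    import cv2

    # Calibration values from my first post.
    K_cam = np.array([[2472.650464858401, 0, 1080.34400778601],
                      [0, 2478.277708298557, 592.8451712869836],
                      [0, 0, 1]])
    kc_cam = np.array([-0.07008445244701324, -0.4577870235611326,
                       -0.01161241742799688, 0.001468694637194292, 0.0])
    K_proj = np.array([[2340.992905452029, 0, 549.7710568026544],
                       [0, 2330.14648205205, 826.4022818412827],
                       [0, 0, 1]])
    kc_proj = np.array([-0.08960494425914402, 0.3183706281153378,
                        -0.01535928252295888, -0.0009942819072223292, 0.0])
    R = np.array([[0.9219375437963626, -0.01707792154636977, -0.3869618972642992],
                  [0.0445729663540128,  0.9970684541428851,  0.06219122465043889],
                  [0.3847654038620599, -0.07458446452695999, 0.9199960552318105]])
    T = np.array([344.9443200939483, -263.7704818566896, 197.2384908148998])

    def camera_pixel_to_projector(u, v, depth):
        """Map one camera pixel to projector pixel coordinates, assuming the
        point seen at (u, v) lies at the given depth (same units as T) along
        the camera's viewing ray. I'm assuming R and T take camera coordinates
        into projector coordinates; if not, they would need to be inverted."""
        # 1. Undistort and normalize the camera pixel.
        xy = cv2.undistortPoints(np.array([[[u, v]]], dtype=np.float64), K_cam, kc_cam)
        x, y = xy[0, 0]
        # 2. Back-project to a 3D point in the camera frame.
        P_cam = np.array([[x * depth, y * depth, depth]])
        # 3. Project into the projector image using the stereo extrinsics
        #    and the projector's intrinsics + distortion.
        rvec, _ = cv2.Rodrigues(R)
        uv_proj, _ = cv2.projectPoints(P_cam, rvec, T.reshape(3, 1), K_proj, kc_proj)
        return uv_proj[0, 0]

    print(camera_pixel_to_projector(1080.0, 592.0, 1500.0))   # depth is only a guess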

It seems to also take the extrinsics data into account, so: do you think that “Make Parametric Mesh” is, as I suspected, the node that, fed with the formula described there, can produce the result?

Or am I misunderstanding its behavior?

thanks
michele