Stable Interest Point Detection in 3D

I would love for someone to do this research. Maybe I will. Point me in the direction of anything that looks promising, and I will buy you a beer if it leads to the apprehension of the suspect. This is the most promising I’ve seen to date, but I e-mailed the guy and he’s no longer working on it.

Given: A series of stereo-calibrated smart camera pairs (nodes), fixed at unknown relative locations and orientations, ostensibly observing the same scene. You can assume either that the scene is static or that the nodes are in temporal sync, so that they all image the same stuff. You can see the original left and right images as well as a dense range image (a.k.a. disparity map), and you know the epipolar geometry (essential matrix, fundamental matrix, whatever you want).
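To make the setup concrete: with a rectified, calibrated pair, the disparity map already gives you 3D in each node's local frame via the standard pinhole model, Z = f·B/d. A minimal sketch (the function name and parameter names are mine, and this assumes rectified images with known focal length f, baseline B, and principal point (cx, cy)):

```python
import numpy as np

def backproject(u, v, d, f, B, cx, cy):
    """Lift pixel (u, v) with disparity d into the camera frame.

    Standard rectified-stereo model: depth Z = f * B / d,
    then X and Y follow from the pinhole projection equations.
    """
    Z = f * B / d            # depth from disparity
    X = (u - cx) * Z / f     # horizontal offset scaled by depth
    Y = (v - cy) * Z / f     # vertical offset scaled by depth
    return np.array([X, Y, Z])
```

For example, with f = 500 px, B = 0.1 m, a pixel at the principal point with disparity 10 px lands at (0, 0, 5 m). The open question is not this triangulation step but which pixels to pick so that every node picks the same ones.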

Find: The best way to detect and triangulate sets of interest points, roughly similar in size (say, 20 to 80 points, but better if scalable), at each node, such that there is as much overlap as possible between the point sets at the different nodes. Obviously, the differing fields of view and scene occlusions are going to arbitrarily affect performance. The idea is to make point detection repeatable (stable, robust) in 3D, as all current methods I am aware of are at best invariant to 2D image Euclidean or affine transformations. Ideally, from two substantially different 3D views, any points that are not occluded, lie within both fields of view, and are detected by one node would also be detected by the other.
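For reference, here is the naive baseline the paragraph above argues is insufficient: detect 2D interest points (a toy Harris detector, with greedy non-max suppression to cap the set size) in one image of the pair, then lift them to 3D through the disparity map. This is exactly the kind of 2D-invariant detection that breaks down between substantially different 3D viewpoints, but it is the thing a stable-in-3D detector has to beat. All names and parameters below are illustrative:

```python
import numpy as np

def _box(a, r):
    """Box-filter sum over a (2r+1)x(2r+1) window via an integral image."""
    p = np.pad(a, r, mode="edge")
    S = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    S[1:, 1:] = p.cumsum(0).cumsum(1)
    w = 2 * r + 1
    return S[w:, w:] - S[:-w, w:] - S[w:, :-w] + S[:-w, :-w]

def harris_points(img, n=20, k=0.05, r=2, suppress=5):
    """Return up to n (row, col) corner locations, strongest first."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx, Syy, Sxy = _box(Ix * Ix, r), _box(Iy * Iy, r), _box(Ix * Iy, r)
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2  # Harris response
    pts = []
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(R), R.shape)
        if R[i, j] <= 0:  # edges and flat regions score <= 0; stop there
            break
        pts.append((i, j))
        R[max(i - suppress, 0):i + suppress + 1,
          max(j - suppress, 0):j + suppress + 1] = -np.inf  # suppress neighbors
    return pts

def lift_to_3d(pts, disp, f, B, cx, cy):
    """Back-project 2D detections through the disparity map (rectified model)."""
    out = []
    for i, j in pts:
        d = disp[i, j]
        if d <= 0:  # no valid disparity here, so no 3D point
            continue
        Z = f * B / d
        out.append(((j - cx) * Z / f, (i - cy) * Z / f, Z))
    return out
```

Running this independently at two nodes gives two 3D point sets, and the overlap between them (after registering the frames) is precisely the repeatability score the problem statement asks to maximize.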

Bonus: This problem really ought to be tackled from a general perspective which would not necessarily involve smart camera nodes. However, in my case, they are smart camera nodes, and they can talk to each other! If there’s a better way that involves sharing information between nodes, feel free to make that assumption.

Nov 18th, 2008