Patrick Bader's Blog – a blog about software development


it’s about time…

I finally finished my Master's thesis on GPU-based image processing in the context of a multi-touch application. As I wrote in older posts, I have built a prototype of an LCD-based multi-touch screen. A PlayStation Eye camera tracks blobs and fiducials using IR light, and the camera images are processed on the GPU to extract positions, IDs, etc. For the implementation, which will possibly be published in a later post, I used OpenCL, which allows programming various heterogeneous hardware. The visualization is done solely with the OpenGL 3.2 Core Profile.

The thesis is in German and can be downloaded here. Feel free to read and comment on it.


Upgrading PS3 Eye Camera and Blob Detection

As noted in my last post, the images from the modified Eye camera were quite blurry. This happened because the IR filter was missing, which threw the camera out of focus. I wondered why, since the filter is not a lens at all. The reason is that glass has a different refractive index than air, so diagonal rays passing through it become laterally offset. I tried to visualize the effect in the following picture:


More on this effect can be found on Wikipedia.
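The offset can be quantified: a plane-parallel plate of thickness t shifts a ray entering at angle θ sideways by d = t · sin θ · (1 − cos θ / √(n² − sin² θ)), where n is the refractive index of the plate relative to air. Here is a small Python sketch of that formula; the refractive index of 1.5 and the 1 mm thickness are assumed typical values, not measurements from the camera:

```python
import math

def lateral_shift(thickness_mm, angle_deg, n=1.5):
    """Lateral displacement (in mm) of a ray crossing a plane-parallel
    glass plate of the given thickness at the given angle of incidence.

    Uses the standard plate-displacement formula:
        d = t * sin(theta) * (1 - cos(theta) / sqrt(n^2 - sin^2(theta)))
    """
    theta = math.radians(angle_deg)
    s, c = math.sin(theta), math.cos(theta)
    return thickness_mm * s * (1 - c / math.sqrt(n * n - s * s))

# For a 1 mm plate and a ray at 30 degrees:
# lateral_shift(1.0, 30.0) is roughly 0.19 mm
```

A shift of a fraction of a millimeter per ray is small, but it is enough to move the focal plane of a lens that was calibrated with the filter in place, which matches the blurriness I saw.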

I have finally managed to correct this issue by finding a replacement for the filter. It had to be glass or a similar material with the same thickness as the filter. It is quite tough to find a piece of glass or plastic about a millimeter thick. Surprisingly, the cap of a CD case did the job as a filter replacement. I first sawed a piece off and then filed it into shape.

The room was quite dark, but the results are better than without the replacement, I think:

PS3 Eye Capture with IR filter replacement

Now for the second part of this post: blob detection. Blob detection is about finding bright or dark spots in an image. I used a kind of global flood-fill algorithm to find the blobs. Sadly, I was not able to program this part in the pixel shader, so I implemented the algorithm on the CPU. On my laptop, CPU power is fairly limited, so it took me a few attempts to get blob detection running at 60 fps. The algorithm is a scanline algorithm that searches for horizontal runs of bright pixels. After the complete image has been scanned, one or more connected runs become a blob. The blob center is the barycenter of its pixels, at the moment without regard to their brightness. I still have to test this algorithm for robustness, but I can already give you a first impression:
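The scanline approach described above can be sketched in Python. This is only an illustrative sketch, not the project's actual code: the threshold value and all names are made up, and I group overlapping runs with a small union-find, which is one possible way to realize the "connected lines become a blob" step:

```python
def detect_blobs(image, threshold=128):
    """Scanline blob detection on a 2D list of grayscale values.

    Collects horizontal runs of pixels >= threshold, merges runs that
    overlap between adjacent rows into one blob, and returns the
    unweighted barycenter (cx, cy) of each blob.
    """
    # Pass 1: collect runs per row as (x_start, x_end), end exclusive.
    rows = []
    for row in image:
        runs, x, w = [], 0, len(row)
        while x < w:
            if row[x] >= threshold:
                start = x
                while x < w and row[x] >= threshold:
                    x += 1
                runs.append((start, x))
            else:
                x += 1
        rows.append(runs)

    # Pass 2: union-find over run indices; runs in adjacent rows that
    # overlap horizontally belong to the same blob.
    all_runs = [(y, s, e) for y, runs in enumerate(rows) for (s, e) in runs]
    parent = list(range(len(all_runs)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    offsets, idx = [], 0
    for runs in rows:
        offsets.append(idx)
        idx += len(runs)
    for y in range(1, len(rows)):
        for i, (s, e) in enumerate(rows[y]):
            for j, (ps, pe) in enumerate(rows[y - 1]):
                if ps < e and s < pe:  # horizontal overlap
                    parent[find(offsets[y] + i)] = find(offsets[y - 1] + j)

    # Pass 3: accumulate an unweighted barycenter per blob root.
    acc = {}
    for k, (y, s, e) in enumerate(all_runs):
        r = find(k)
        sx, sy, n = acc.get(r, (0, 0, 0))
        sx += (s + e - 1) * (e - s) // 2  # sum of x over [s, e)
        sy += y * (e - s)
        n += e - s
        acc[r] = (sx, sy, n)
    return [(sx / n, sy / n) for (sx, sy, n) in acc.values()]
```

Weighting each pixel by its brightness when accumulating the sums would be a straightforward extension, which is what I meant by taking brightness into account later.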

A first test of blob detection. The red dots are centers of detected blobs.

I hope I could help someone with the camera focus problem. Maybe I'll release the source code of the project once it is out of the experimental stage, or to put it plainly: when it's a bit less messy. 😉
