Kinect-based Medical Image Viewer


This is from one of my master's projects. My supervisor and I talked about how to leverage the Kinect in operating rooms, providing a hands-free (and thus more sterile) way for surgeons to view X-ray or MRI images. It was my first foray into OpenCV and OpenNI (an open source SDK for the Kinect).

In a nutshell, the application operates like this:

  • Use OpenNI to obtain skeleton and joint data from the Kinect, which gives me the positions of the wrists, shoulders, etc. (see the OpenNI sketch after this list).
  • Since I know the position of the wrist, I can roughly approximate where the palm is. So rather than trying to detect fingers in the whole image, I only look for the presence (or absence) of fingers in that region.
  • OpenCV is used to extract the hand region, approximate its outline with polylines, and check the convexity of that shape. From that I can figure out whether I am making a fist or an open-palm gesture (a contour-analysis sketch appears further down).
  • The images are shown using OpenGL (and, if I remember correctly, I used VTK / DICOM libraries to load the MRI images; a VTK sketch also follows this list).
  • I added some logic to emulate the finger gestures you'd use on a mobile phone, but with the palms instead. A closed palm is treated as if you're holding down a mouse button, and an open palm means letting the button go.
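To give a rough idea of how the skeleton data comes in, here is a minimal OpenNI 1.x sketch that reads the tracked hand joint of the first detected user and projects it into depth-image coordinates. This is a reconstruction from the standard OpenNI API rather than my original code, and the pose-detection / calibration callbacks that NITE needs before tracking actually starts are left out for brevity.

```cpp
#include <XnCppWrapper.h>

// Minimal OpenNI 1.x sketch: read the right-hand joint of the first tracked
// user and project it into depth-image pixel coordinates.
// Calibration/pose callbacks (required before tracking starts) are omitted.
int main() {
    xn::Context context;
    context.Init();

    xn::DepthGenerator depth;
    depth.Create(context);

    xn::UserGenerator users;
    users.Create(context);
    users.GetSkeletonCap().SetSkeletonProfile(XN_SKEL_PROFILE_UPPER);

    context.StartGeneratingAll();

    for (int frame = 0; frame < 300; ++frame) {
        context.WaitAndUpdateAll();

        XnUserID ids[4];
        XnUInt16 count = 4;
        users.GetUsers(ids, count);
        if (count == 0 || !users.GetSkeletonCap().IsTracking(ids[0]))
            continue;

        // Real-world hand position in millimetres.
        XnSkeletonJointPosition hand;
        users.GetSkeletonCap().GetSkeletonJointPosition(ids[0], XN_SKEL_RIGHT_HAND, hand);
        if (hand.fConfidence < 0.5f)
            continue;

        // Project to pixel coordinates so a region of interest can be cut
        // out of the depth map around the palm.
        XnPoint3D projected;
        depth.ConvertRealWorldToProjective(1, &hand.position, &projected);
        // projected.X / projected.Y now index into the 640x480 depth map.
    }

    context.Shutdown();
    return 0;
}
```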
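For the display side, loading a DICOM series with VTK looks roughly like the sketch below. The directory name is just a placeholder, and my original code rendered through OpenGL rather than vtkImageViewer2, so treat this as an illustrative stand-in.

```cpp
#include <vtkSmartPointer.h>
#include <vtkDICOMImageReader.h>
#include <vtkImageViewer2.h>
#include <vtkRenderWindowInteractor.h>

// Load a DICOM series from a directory and display the middle slice.
int main() {
    auto reader = vtkSmartPointer<vtkDICOMImageReader>::New();
    reader->SetDirectoryName("./mri_series");   // hypothetical path
    reader->Update();

    auto viewer = vtkSmartPointer<vtkImageViewer2>::New();
    viewer->SetInputConnection(reader->GetOutputPort());

    auto interactor = vtkSmartPointer<vtkRenderWindowInteractor>::New();
    viewer->SetupInteractor(interactor);
    viewer->SetSlice((viewer->GetSliceMin() + viewer->GetSliceMax()) / 2);
    viewer->Render();
    interactor->Start();
    return 0;
}
```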

Here’s the first prototype, showing that I could detect fingertips.
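The fingertip detection in that prototype boils down to contour analysis: find the hand outline, take its convex hull, and count the deep convexity defects (the valleys between extended fingers). Here is a minimal OpenCV sketch of the idea, assuming a binary mask of the hand region has already been cut out around the tracked hand position; the threshold values are illustrative, not taken from the original code.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Classify a hand as open palm or closed fist from a binary mask of the
// hand region. Counts deep convexity defects (the valleys between extended
// fingers); several deep valleys imply an open palm.
bool isOpenPalm(const cv::Mat& handMask) {
    cv::Mat mask = handMask.clone();
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty())
        return false;

    // Take the largest contour as the hand outline.
    auto hand = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });
    if (hand->size() < 4)
        return false;

    // Convex hull (as point indices) and its defects.
    std::vector<int> hull;
    cv::convexHull(*hand, hull, false, false);
    if (hull.size() < 3)
        return false;

    std::vector<cv::Vec4i> defects;
    cv::convexityDefects(*hand, hull, defects);

    // Count defects deeper than a threshold (in pixels).
    int fingerValleys = 0;
    for (const auto& d : defects) {
        float depth = d[3] / 256.0f;   // fixed-point depth -> pixels
        if (depth > 20.0f)
            ++fingerValleys;
    }
    return fingerValleys >= 3;   // an open palm shows several deep valleys
}
```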

It was then extended so that it could manipulate the images.
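The manipulation logic is essentially a tiny state machine: closing the palm “presses the button” and remembers where that happened, moving while closed drags the image, and opening the palm “releases the button”. Here is a rough sketch of that idea; the ImageView type and its pan() method are placeholders, not the real rendering code.

```cpp
#include <opencv2/core.hpp>

// Placeholder for the actual image renderer.
struct ImageView {
    void pan(float dx, float dy) { /* shift the rendered image */ }
};

// Emulates a mouse drag with the palm: closing the palm acts like pressing
// a button, moving while closed pans the image, opening releases it.
class PalmDragController {
public:
    explicit PalmDragController(ImageView& view) : view_(view) {}

    // Call once per frame with the palm position (pixels) and open/closed state.
    void update(const cv::Point2f& palm, bool palmOpen) {
        if (!palmOpen && !dragging_) {        // palm just closed: "button down"
            dragging_ = true;
            last_ = palm;
        } else if (!palmOpen && dragging_) {  // still closed: drag the image
            view_.pan(palm.x - last_.x, palm.y - last_.y);
            last_ = palm;
        } else {                              // palm open: "button up"
            dragging_ = false;
        }
    }

private:
    ImageView& view_;
    cv::Point2f last_{0.f, 0.f};
    bool dragging_ = false;
};
```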
