Added CMake / installation scripts to Three.cpp


Hurray! I just added CMake and some bash scripts to install Three.cpp. It only works on Mac OS X for now, though, since I don’t own a Windows PC.

Just run the script in build/; it should install Homebrew if needed, then install the necessary libraries (e.g. glm, glfw3, etc.), before running CMake and make.

When the installation is done successfully, it should run a test demo that shows something like this:

(screenshot: 2014-08-18 14_19_12)


I’ve also created a GitHub page for the project at: . Check it out, and thanks!

PS: I’m still looking for collaborators / contributors for this port. It has deviated somewhat from the original three.cs, and there are so many interesting things I want to add, but my 3D programming skills are not the greatest.

Ongoing port of Three.cpp


This is an ongoing process of porting Three.js to C++ / OpenGL. The main idea is not to create the fastest 3D engine out there, but to create a 3D engine that’s simple enough to instantiate, so that other fields of computing can use it rapidly for prototyping. This reduces the unnecessary hoops for OpenGL newcomers when it comes to things such as context creation, loading shaders, etc. Continue reading

Calculating angle and direction between two vectors



Normally, to find the angle between two vectors, say an up-vector and a direction vector, we would take the dot product between them. Since

a \cdot b = |a||b| \cos{\theta} and

\cos{\theta} = \frac{a \cdot b}{|a||b|}

However, since the calculated angle is the smallest angle between the two vectors, we can’t derive the (rotational) direction of one vector relative to the other. Thus in the following figure, it’s hard to tell whether the direction vector is on the left or the right of the up vector: the angle calculated using the dot product is always between 0° and 180°. Continue reading
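The rotational direction that the dot product throws away can be recovered from the cross product: in 2D, the sign of the z component of a × b tells you which side b lies on. Here is a small sketch in plain Java (the class and method names are mine, for illustration; note that in y-down screen coordinates the left/right convention flips):

```java
public class VectorAngle {
    /* Smallest angle between a and b via the dot product: always in [0, PI]. */
    static double angleBetween(double ax, double ay, double bx, double by) {
        double dot = ax * bx + ay * by;
        double len = Math.hypot(ax, ay) * Math.hypot(bx, by);
        return Math.acos(dot / len);
    }

    /* Sign of the z component of a x b: positive if b is counter-clockwise
       (to the left) of a, negative if clockwise (to the right), in y-up coords. */
    static double rotationalDirection(double ax, double ay, double bx, double by) {
        return Math.signum(ax * by - ay * bx);
    }

    public static void main(String[] args) {
        // up vector (0, 1) against a direction vector pointing up-right (1, 1)
        double angle = angleBetween(0, 1, 1, 1);
        double dir = rotationalDirection(0, 1, 1, 1);
        System.out.println(Math.toDegrees(angle)); // ~45 degrees
        System.out.println(dir);                   // -1.0, i.e. clockwise of up
    }
}
```

Combining both values (or equivalently using atan2 on the dot and cross) gives a full signed angle in the range −180° to 180°.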

Simple head tracking


This is very simple head tracking using OpenCV and only the Viola-Jones face detection framework.

Basically the idea is to use face / head tracking to create an immersive 3D experience. This is, of course, still a very early prototype.
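The parallax trick itself is just a small mapping: take where the detected face sits in the camera frame and turn it into an offset for the virtual camera, mirrored so the scene appears to stay fixed behind the screen. This is a hypothetical sketch, not the code from the video; the names and the linear mapping are illustrative:

```java
public class HeadToCamera {
    /* Map the detected face centre (pixels) to a virtual-camera offset.
       Moving the head left/right/up/down pans the camera the opposite way,
       which is what creates the looking-through-a-window illusion. */
    static double[] cameraOffset(double faceX, double faceY,
                                 double frameW, double frameH,
                                 double strength) {
        double nx = (faceX / frameW) - 0.5;   // -0.5 .. 0.5, 0 = centred
        double ny = (faceY / frameH) - 0.5;
        return new double[] { -nx * strength, -ny * strength };
    }

    public static void main(String[] args) {
        // face at the right edge of a 640x480 frame -> camera pans left
        double[] right = cameraOffset(640, 240, 640, 480, 100);
        System.out.println(right[0]); // -50.0
    }
}
```

In practice you would also smooth the face position over a few frames, since raw Viola-Jones detections jitter.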

I whipped this up in less than an hour, so it’s very dirty and lacks any optimization. You can make the whole thing faster using the CamShift algorithm and some cleanup of the OpenGL code.

Have fun! Continue reading

Augmented Reality


Here’s another of my small projects: a small Augmented Reality program that loads an OBJ file and shows the 3D object on a given marker using OpenCV.

Here’s the code, which has since been extended into a crappy ‘swat the mosquito’ game: . You can still salvage the code and see how the whole thing works.

Perfume Global #prfm_global_site


This is part of a promotion for the Japanese electro-pop group Perfume. Perfume’s marketing team offers motion capture data for their dance routine, along with a parser for the mocap data.

What I did was use Processing to parse the data and visualize it.

You can download my source files from here:

It requires the PeasyCam library for Processing, which implements a trackball rotation camera. Get PeasyCam here:

Have fun with it!

PerfumeGlobal: main class in Processing

import peasy.*;
import krister.Ess.*;

/* trackball camera */
PeasyCam cam;

/* Instances from ESS library to obtain the audio and perform Fast Fourier Transform on it */
AudioChannel audio_channel;
FFT fft;
color [] spectograph_colors = new color[32];

GlobalManager global = new GlobalManager();
int start;
boolean play = true;

public void setup() {
 size( 800, 600, P3D );

 /* Randomize colors for the equalizer (spectograph) running on the back */
 for ( int i = 0; i < 32; i++ )
  spectograph_colors[i] = color( random(255), random(255), random(255), 200 );

 textFont( createFont( "Arial", 24 ), 12 );

 /* init the trackball camera */
 cam = new PeasyCam( this, width / 2.0f, height / 2.0f - 100.0f, 0, 500 );

 /* Read in the mocap data */
 global.init();

 /* ESS must be started before any audio objects are created */
 Ess.start( this );
 audio_channel = new AudioChannel( "Perfume_globalsite_sound.wav" );
 fft = new FFT( 64 );
 audio_channel.play();

 start = millis();
 frameRate( 120 );
}


public void draw() {
 background( 0 );

 /* advance the clock only while playing, so the figures freeze when paused */
 if ( play )
  global.update( millis() - start );

 /* draw the background equalizer */
 fft.getSpectrum( audio_channel );
 translate( 10.0f, 0.0f, -380.0f );
 for ( int i = 0; i < 32; i++ ) {
  fill( spectograph_colors[i] );
  float temp = max( 0, 180 - fft.spectrum[i] * 175 );
  rect( i * 25, temp, 20, height / 2.0f - temp + 0.5 );
 }

 /* draw the ground */
 fill( 255, 255, 255, 200 );
 translate( width / 2.0f, height / 2.0f + 15, 0.0f );
 box( 800, 5, 800 );

 /* draw each dancing figure */
 translate( width/2, height/2-10, 0 );
 scale( -1, -1, -1 );
 global.draw( cam.getRotations() );
}

public void stop() {
 /* shut ESS down cleanly when the sketch exits */
 Ess.stop();
 super.stop();
}

void keyPressed() {
 /* spacebar toggles play / pause and resets the clock */
 if ( keyCode == 32 ) {
  play = !play;
  start = millis();

  if ( play )
   audio_channel.play();
 }
}

Skeleton: each Skeleton represents a stick figure

/* Each skeleton represents one stick figure */
public class Skeleton {
 private BvhParser parser;
 private color skeletonColor;
 private String id;

 public Skeleton( String id, color skeleton_color, String filename ) {
  /* Instantiate parser and parse the mocap data */
  parser = new BvhParser();
  parser.parse( loadStrings( filename ) );

  this.id = id;
  this.skeletonColor = skeleton_color;
 }

 private BvhParser getParser() {
  return parser;
 }

 /* Update the current motion of the skeleton based on the timing */
 public void update( int ms ) {
  parser.moveMsTo( ms );
 }

 public void draw( float [] rotations ) {
  fill( this.skeletonColor );
  stroke( this.skeletonColor );

  BvhBone prev = null;
  /* To determine the size of the circular shadow */
  float min_x = 9999;
  float min_z = 9999;
  float max_x = -9999;
  float max_z = -9999;
  for ( BvhBone bone : parser.getBones() ) {
   /* draw lines between joints */
   if ( prev != null ) {
    strokeWeight( 6 );
    line( bone.absPos.x, bone.absPos.y, bone.absPos.z,
          prev.absPos.x, prev.absPos.y, prev.absPos.z );
   }

   strokeWeight( 4 );
   pushMatrix();
   translate( bone.absPos.x, bone.absPos.y, bone.absPos.z );
   ellipse( 0, 0, 10, 10 );
   popMatrix();

   if ( !bone.hasChildren() ) {
    int bone_size = 15;

    /* if it's the head, draw it big enough and label it with the name */
    if ( "Head".equals( bone.getName() ) ) {
     bone_size = 30;

     pushMatrix();
     translate( bone.absEndPos.x + 20, bone.absEndPos.y + 20, bone.absEndPos.z );
     rotateX( rotations[0] );
     rotateY( rotations[1] );
     rotateZ( rotations[2] );
     scale( -1, -1, -1 );
     text( id, 0, 0, 0 );
     popMatrix();
    }

    /* draw the end joints as circles (including head) */
    pushMatrix();
    translate( bone.absEndPos.x, bone.absEndPos.y, bone.absEndPos.z );
    ellipse( 0, 0, bone_size, bone_size );
    popMatrix();

    prev = null;
   } else {
    prev = bone;
   }

   /* update the size and position of the shadow */
   if ( min_x > bone.absPos.x )
    min_x = bone.absPos.x;
   if ( min_z > bone.absPos.z )
    min_z = bone.absPos.z;

   if ( max_x < bone.absPos.x )
    max_x = bone.absPos.x;
   if ( max_z < bone.absPos.z )
    max_z = bone.absPos.z;
  }

  /* draw the shadow on the ground under the figure */
  pushMatrix();
  fill( 100, 100, 100, 200 );
  translate( (max_x + min_x) / 2.0f, -8, (max_z + min_z) / 2.0f );
  rotateX( radians(90) );
  ellipse( 0, 0, (max_x - min_x + 15), (max_z - min_z + 15) );
  popMatrix();
 }
}
GlobalManager: convenient class to manage all the Skeletons

import java.util.*;

/* Simple convenient class to encapsulate all the Skeletons */
public class GlobalManager {
 private List<Skeleton> skeletons = new ArrayList<Skeleton>();

 /* Init by reading all the mocap data */
 public void init() {
  skeletons.add( new Skeleton( "aachan",    color(153, 153, 255), "aachan.bvh" ) );
  skeletons.add( new Skeleton( "kashiyuka", color(153, 255, 153), "kashiyuka.bvh" ) );
  skeletons.add( new Skeleton( "nocchi",    color(255, 153, 153), "nocchi.bvh" ) );
 }

 /* Update each stick figure's motion based on the current time */
 public void update( int ms ) {
  for ( Skeleton skeleton : skeletons )
   skeleton.update( ms );
 }

 /* Draw each figure on screen */
 public void draw( float [] rotations ) {
  for ( Skeleton skeleton : skeletons )
   skeleton.draw( rotations );
 }
}
Kinect-based Medical Image Viewer


This is from one of my master’s projects. My supervisor and I talked about how to leverage the Kinect in operating rooms, providing a hands-free (and thus more sterile) solution for surgeons viewing X-ray or MRI images. It was my first foray into OpenCV and OpenNI (an open-source SDK for the Kinect).

In a nutshell, the application operates like this:

  • Use OpenNI to obtain skeleton and joint data from the Kinect, from which I know the positions of the wrists, shoulders, etc.
  • Since we know the positions of the wrists, I can roughly approximate where the palms are. So rather than searching the whole image, I detect the appearance (or absence) of fingers in that region.
  • OpenCV is used to extract the hand region, apply polylines around it, and check the convexity of that shape. From this I can figure out whether I am making a knuckle (closed fist) or an open palm gesture.
  • The images are shown using OpenGL (and if I remember correctly I used the VTK / DICOM libraries to extract the MRI images).
  • I added some logic to emulate finger gestures on mobile phones, but using the palms instead. A closed palm gesture is treated as holding down a mouse button, and an open palm means letting the button go.
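The last step is essentially a small state machine. Here’s a hypothetical sketch of it in plain Java (the names are mine, not from the project): a closed palm “presses the button”, movement while closed drags the image, and an open palm releases it.

```java
public class PalmDrag {
    private boolean dragging = false;
    private double lastX, lastY;
    public double imageOffsetX = 0, imageOffsetY = 0;

    /* Feed one frame of gesture data: palmClosed comes from the convexity
       test, (x, y) is the tracked palm position. A closed palm behaves like
       holding the mouse button down, so moving it drags the image. */
    public void onFrame(boolean palmClosed, double x, double y) {
        if (palmClosed && !dragging) {         // closed palm: "button pressed"
            dragging = true;
        } else if (palmClosed && dragging) {   // still closed: accumulate drag
            imageOffsetX += x - lastX;
            imageOffsetY += y - lastY;
        } else {                               // open palm: "button released"
            dragging = false;
        }
        lastX = x;
        lastY = y;
    }

    public static void main(String[] args) {
        PalmDrag drag = new PalmDrag();
        drag.onFrame(false, 100, 100); // open palm: nothing held
        drag.onFrame(true, 100, 100);  // closed palm: press
        drag.onFrame(true, 130, 110);  // move while closed: drags by (30, 10)
        drag.onFrame(false, 200, 200); // open palm: release, no movement
        System.out.println(drag.imageOffsetX + ", " + drag.imageOffsetY); // 30.0, 10.0
    }
}
```

A real version would also debounce the open/closed classification over a few frames, since the convexity test is noisy.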

Here’s the first prototype, showing that I could detect fingertips:

Then it was extended so that it could manipulate images: