
Window To A Different World

You have probably heard of Johnny Chung Lee (http://johnnylee.net/projects/wii/), the man who connected a Wiimote to his computer to use it as a "touch" device for a projector. Another of his projects used the Wiimote as a head-tracking device. He developed an application that tracked the position of his head and used the tracking information in a program showing a simple 3D scene. With this he achieved the illusion of looking through a window, as you are able to look around objects that are near the screen surface.

If you look at his YouTube video, you will notice that it looks quite impressive. It looks even more impressive if you use 3D glasses (red/green or red/cyan) or even a polarized 3D display to show your scene.

In this article I want to point out what has to be done to create the illusion of such a window effect. I will not cover all the programming details, but I will show the theory behind it, so you should be able to attach it to your own applications in a few lines of code.

Tracking the head

First of all, you will need some kind of head-tracking device. One of the cheapest options is simply using a Wiimote. If you want 3D tracking, which creates a much better illusion, you should use at least two Wiimotes to triangulate the position of the head within 3D space. I did not try to use a Wiimote, as I had the possibility to use an iotracker device. This is a commercial marker tracking system based on infrared cameras that allows tracking the position and rotation (6 degrees of freedom) of up to four markers within a range of a few meters.

I will probably show how to create a 3D picture for anaglyph glasses (red/green or red/cyan, without quad buffering) in another article in the next weeks, as soon as I find the time, and add an article about eye separation as well. I used the tracking together with an Infitec-based system to create a 3D projection on a quad-buffered NVIDIA card. I can tell you, the effect was quite impressive.

Give me a firm place to stand and I will move the Earth

Let's go ahead with some boring theory that might save you from some headaches. The first thing to do is to define what we really want to achieve and what our fixed point in the 3D projection is, as we will calculate everything around this point. In a normal (non head-tracking) environment the fixed position would be your camera. OpenGL developers know that the camera is emulated by moving the world around the camera in the reverse direction. Direct3D developers have a camera object that does mostly the same and is only a helper object. You might think: wait, if I play a first-person shooter, the camera is moved around. This is a correct objection, but it is more or less a matter of definition. If you are familiar with OpenGL development, you should know that moving the camera means moving the scene in the reverse direction, and all rendering is done from the position (0, 0, 0) in your virtual coordinate system.
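To make this concrete, here is a minimal sketch of that equivalence (camX, camY and camZ are hypothetical camera coordinates, not part of the code we build later):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// "Moving the camera" to (camX, camY, camZ) is implemented by translating
// the whole scene in the opposite direction; the eye itself stays at (0, 0, 0).
glTranslatef(-camX, -camY, -camZ);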

In the following, I will not use the camera as the fixed object but the "projection surface". The "projection surface" is your display within your real world's coordinate system and the window through which you look into your virtual world. So the projection surface is a kind of magical window into a virtual world behind it.

Now that we have head tracking, why not leave the camera as the central object that everything moves around? The answer to this question is: you can do this, but the theory is much easier to understand if we use the projection surface as our fixed point. Your projection surface is a fixed point on your desk or on the wall (if you use a projector). You will most likely not move your display around. If you used the camera as the fixed point, you would have to move the display relative to your head, and trying to write this behaviour down in a few lines of code might result in some headaches. Okay. We have a fixed projection surface and are moving the camera behind it.

Uniformity of the units

Before you start developing, you should think about which units to use in your virtual world. The easiest way is to use meters in both your real and your virtual world, so the point (1.0, 0.0, 0.0) is just 1 meter away from (0.0, 0.0, 0.0). The next thing is just a matter of definition: I will use a camera with the up-vector in the y-direction, so the projection surface has only (x, y)-coordinates and the depth is the z-coordinate. If we use these definitions, it is really easy to transform the position of the head into virtual coordinates, or better: it is really easy to put your head into the virtual world. If your head is 1.5 meters behind the projection surface, your head is 1.5 meters behind the window into the virtual world. So you just extend the virtual world into the space behind your display (which also makes sense if you want to allow objects to come out of your display and appear in front of it using 3D glasses).
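As a tiny worked example (the numbers are made up): a head tracked 0.3 meters to the right of the screen center, at screen-center height and 1.5 meters away from the screen surface maps directly to virtual coordinates, because both worlds use meters:

float headX = 0.3f; // 0.3 m to the right of the screen center
float headY = 0.0f; // at the height of the screen center
float headZ = 1.5f; // 1.5 m away from the screen surface

No scaling or conversion is needed; these values are already coordinates in the virtual world.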

Putting it all together

So what about our transformations?
[Figure: the idea behind tracking the head relative to the projection surface]
Take a look at the picture. It shows two different positions of the head behind the projection surface and the effect on the rendering you have to achieve. From the blue viewing angle and position, you are able to see the green and the yellow sphere. If you move to the red viewing position, you are only able to see the yellow and the blue sphere. All we have to do is simulate the behaviour you see in the picture. The word frustum might appear in your mind: we do not only have to fix the camera position but the frustum as well. So how do we achieve this? Let's define a struct containing the needed information in pseudocode (it might look like C++ code ;) ).

struct Vector3f {
    float x;
    float y;
    float z;
};

Now, our tracking system returns the position of the head relative to the center of the screen, which we define as (0.0, 0.0, 0.0). Whether you get the position in the form I will use in the following code snippets depends on your tracking system. In my case, using iotracker, I just had to subtract the origin of my tracking system from the tracked position. You might also have to perform some rotations or similar transformations, depending on your hardware.

So, let's read our current tracking position:

Vector3f position;

position = TrackingSystem.getPosition();
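If your tracking system reports positions in its own coordinate system instead of relative to the screen center, you have to convert first. Here is a rough sketch of the origin subtraction mentioned above; getRawPosition() and getOrigin() are hypothetical helpers of the pseudo TrackingSystem object:

Vector3f raw = TrackingSystem.getRawPosition(); // head position in tracker coordinates
Vector3f origin = TrackingSystem.getOrigin();   // screen center in tracker coordinates

position.x = raw.x - origin.x;
position.y = raw.y - origin.y;
position.z = raw.z - origin.z;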

To calculate the frustum, we need the width and height of the screen. As (0.0, 0.0, 0.0) is in the center of the screen, the calculation of the frustum size is very easy:

float screenWidth = Screen.getWidth();   // physical width of the screen in meters
float screenHeight = Screen.getHeight(); // physical height of the screen in meters

float right = (screenWidth / 2.0);
float left = -right;
float up = (screenHeight / 2.0);
float down = -up;

All values describe the settings of a default frustum for a centered head. Now we have to move the frustum depending on our head's position:

right -= position.x;
left -= position.x;
up -= position.y;
down -= position.y;

The z-coordinate is used as the distance of our near clipping plane, which is the distance of the head from the projection surface. This means the near plane lies exactly in the screen plane, so the shifted frustum values from above can be passed directly to glFrustum:

float zFar = Screen.getZFar();
float zNear = position.z;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(left, right, down, up, zNear, zFar);

Next, we have to set our new camera position. I will do this by performing a translation before applying all the other camera transformations you might use:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(-position.x, -position.y, -position.z);
// stuff like gluLookAt or whatever

This is all the math that has to be done to use head tracking. Keep in mind that the objects above, like TrackingSystem and Screen, are pseudo-objects in pseudocode. They would work in C++ if you defined a tracking class and a screen class like the ones used above, but it will not work out of the box ;)
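To sum up, here is a rough per-frame sketch that puts all the snippets above together in the same pseudo-C++ style (TrackingSystem and Screen remain the hypothetical pseudo-objects, so this will not compile without your own implementations):

void updateHeadTrackedView()
{
    // Head position in meters, relative to the center of the screen.
    Vector3f position = TrackingSystem.getPosition();

    // Asymmetric frustum; the near plane coincides with the screen plane.
    float right =  (Screen.getWidth() / 2.0f) - position.x;
    float left  = -(Screen.getWidth() / 2.0f) - position.x;
    float up    =  (Screen.getHeight() / 2.0f) - position.y;
    float down  = -(Screen.getHeight() / 2.0f) - position.y;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left, right, down, up, position.z, Screen.getZFar());

    // Move the world so that the eye ends up at the tracked head position.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-position.x, -position.y, -position.z);
    // stuff like gluLookAt or further scene transformations goes here
}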

So, have fun with it...