
Some stereo separation theory (Part I)

Well, after discussing how to combine your headtracking setup with OpenGL, I will now show you how to generate a 3D stereo mode for your OpenGL applications. The eye separation works the same way in Direct3D, but I tend to use OpenGL as it is a portable graphics library and works great under Linux. As not everyone of you might own an NVIDIA Quadro card and a stereo display, I will first show how to create anaglyph 3D pictures and extend the technique to quadbuffered cards afterwards.

I will also show you how to achieve the effect of objects appearing in front of your display. This effect is very difficult to achieve, as there are many things that can destroy the illusion.



Anaglyph and quadbuffer

Anaglyph glasses are these well-known red/green, red/cyan or red/blue glasses. There are many ways to get some of these cheap glasses: I found a pair in a magazine some weeks ago, for example. You might also buy them on the internet or find an optician who gives you some for free.

The idea behind rendering the scene for anaglyph glasses or on a quadbuffered video card is mostly the same:

  1. Initialize the buffer or color mask for the first eye
  2. Render the scene from the first eye's point of view
  3. Switch the buffer or color mask for the second eye
  4. Render the scene from the second eye's point of view

The difference is that on a Quadro card you have two backbuffers (a left and a right one) that you switch between for each eye, while for anaglyph stereo you switch a color mask and draw both images into the only backbuffer your card has.

The eye's point of view (or: what is stereo viewing?)

One important thing you should understand before we discuss how to perform the stereo separation is why we are able to see the world in "stereo" at all. The only reason is that we have two eyes that look mostly in the same direction and are separated by approximately 2 to 3 inches. Each eye sees a slightly different image, and our brain combines both into a three-dimensional one. This allows us to recognize different distances.

If you close one of your eyes, you will feel that nothing really changes, as your brain keeps up the illusion of three-dimensional viewing. But if you now turn around and try to grab something off a table behind your back, without knowing the actual size of the object and without having touched it before, you might well fail: you have to guess distances your brain does not know yet. And this is all the magic behind stereo visualisation.

We will use the anaglyph glasses to create the illusion of eye separation, which allows us to play a trick on our brain: it thinks it is seeing something three-dimensional that is not really there. For this, we have to move our camera to the left or the right by a few inches. This works best if you know the eye distance of the person looking at the screen, which you usually do not. I have achieved good results using a value of 0.066 meters, but you might play around with this value a bit. One very nice property is that it does not matter how far away the viewer actually is from the screen, as the eye separation is the same for every distance. This does not apply to the so-called "fusion point" we will discuss later on, which is the reason why the scene in a 3D cinema does not look the same from every seat.

So, how is the magic done?

Let's start with the basics. We will use parallel eye separation as a first step.

[Image: the basic idea of parallel eye separation.]

Take a look at the picture above. The blue lines show the field of view of the left eye and the red lines that of the right eye. The difference between the two can be described by a simple translation. Let d be the eye distance and the coordinate system be defined as follows: the x-axis goes from left to right, the y-axis from bottom to top, and the z-axis points along the viewing direction. The translations can then be defined as (-d/2, 0, 0) for the left eye and (d/2, 0, 0) for the right eye. So all we do is apply a translation to the camera before rendering each eye. This is the basic idea behind stereo separation.

Basically this would mean that we have to perform our usual camera transformation using gluLookAt and then apply our eye translation. The problem is that we do not know which direction is "to the right" of the camera after it has been rotated in our universe. This means we would have to apply the same transformations we used on the camera to our translation vector. But there is an easier way to get the same result:

As the translation is a linear operation, we can apply it before using gluLookAt to position our camera. In pseudo-code this looks something like this:

bool left = false;	// true for the left eye, false for the right
float d = 0.033;	// half the eye distance (0.066 m / 2)

// glTranslatef moves the scene, so the camera moves the opposite way:
// translating by +d shifts the camera to the left by d
glTranslatef( (left ? d : -d), 0.0f, 0.0f);
gluLookAt(2, 3, 4, 0, 0, 0, 0, 1, 0);

// render your stuff

As you can see, the eye separation is nothing to be scared about.

How to do this eye switching stuff?

The above example shows your scene rendered from one eye's perspective. To get a stereo view you have to present both images at the same time (except with shutter glasses, which alternate quickly). I will start by showing how it works on quadbuffered cards, as it is really straightforward, and then show the same using color masks for red/green glasses.

To use quadbuffering you will need a quadbuffered video card (an NVIDIA Quadro) configured properly (under Linux, using the stereo option in your xorg.conf in the proper mode for your display), and you have to enable quadbuffering in your application. How you enable it depends on the framework you use as drawing canvas; most frameworks have a special switch for stereo mode. If you use GLUT, just add GLUT_STEREO to the display modes you use. As an example this might look like this:

glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO);

Window creation will then fail and terminate with an error message if your setup does not support quadbuffering.

After enabling the quadbuffer you can render both eyes one after the other like this:

// select the left backbuffer and clear it
glDrawBuffer(GL_BACK_LEFT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// render the left eye with the code above

// now select the right backbuffer and clear it
glDrawBuffer(GL_BACK_RIGHT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// render the right eye with the code above


Anaglyph rendering works the same way without all the quadbuffer stuff. Just leave out the GLUT_STEREO flag and set a color mask before rendering each eye:

// first of all, clear all buffers
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// set the color mask for the left eye (red)
glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_TRUE);

// render the left eye here

// clear the depth buffer again, or the first image occludes the second
glClear(GL_DEPTH_BUFFER_BIT);

// set the color mask for the right eye (green)
glColorMask(GL_FALSE, GL_TRUE, GL_FALSE, GL_TRUE);

// render the right eye here

// restore the mask and swap buffers
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glutSwapBuffers();

What is missing?

In the next part I will describe how you can achieve much better effects using an eye fusion point. The problem with the parallel translation is that there is no fusion point at all, which is the same as looking at an object at infinite distance. This creates some limitations:

  1. Our brain thinks we are looking at objects that are very far away, so it tries to adjust the eyes' lenses to get a sharp view at that distance, while the projection surface is only a few meters away. This will give you a headache if you use the projection for more than a few minutes.
  2. It is impossible to make objects come out of the surface. If you try, the objects will stay behind the projection surface and you will just get some more headaches.
  3. It is impossible to feel the distance of objects in your world. Everything feels somewhat three-dimensional, but you cannot say how far an object really is from your position.

We will deal with these problems in the second article.