RuTh's
RuThLEss
HomEpAgE


3D Object Projection

Analytic Geometry?! Don't Panic

The first thing you want to be able to do in a 3D engine is to define and display 3D entities such as the characters, buildings, weapons and treasures that populate your game. (The second thing is to transform and animate them, which will be discussed subsequently.) Well, if maths teachers had thought of telling us that analytic geometry is what computer games are made of, we surely would have listened more closely. Anyway, it's about time to make good use of it. :) I'm not going to go into detail about how vectors, vertices and the coordinate system work. For now, think of vertices as the corner points of your entities in 3D space. Each corner or vertex is defined by a vector — three floating point numbers (x|y|z) that describe a fixed position in the 3D coordinate system. You can connect three vertices to form a triangle, four to form a square, or any number of them to form any polygon. You can then put together several of those polygons to form a 3D entity: The simplest example is to construct a cube out of 6 square polygons; other cases such as a house or a person are more complex but possible. One more thing you need to keep in mind is that with the approach I am taking in this 3D engine, there are small limitations: Polygons must always be a) flat ("coplanar"), b) convex, and c) their vertices must be defined counterclockwise. This simplification ensures that we don't get too many case differentiations, so the calculations will not get too time-consuming. It does not limit the kinds of shapes you can create, since you can substitute any concave shape with two or more convex shapes!
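To get an intuition for winding order, it helps to look at the 2D case first: the shoelace formula gives a polygon's signed area, and the sign tells you whether the vertices run counterclockwise. This is a plain-C sketch for illustration (the function name is my own, not part of the engine):

```c
/* Signed area via the shoelace formula: positive when the 2D
 * vertices are listed counterclockwise, negative when clockwise. */
float signedArea(const float *xs, const float *ys, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; i++) {
        int j = (i + 1) % n;                 /* next vertex, wrapping around */
        sum += xs[i] * ys[j] - xs[j] * ys[i];
    }
    return 0.5f * sum;
}
```

A unit square listed as (0|0), (1|0), (1|1), (0|1) yields +1.0; listing the same corners in the opposite order yields -1.0. The backface culling test later in this chapter applies the same sign idea to 3D polygons.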
Projection

Projection means displaying 3D images on a 2D screen — that's what your eyes do all day long on your retina, and it's essentially what we need 3D engines for. Projection requires three preparatory steps: First you have to define the 3D entities, of course; next you might want to apply some transformations to your entities as they walk or move; then you place them into your 3D game world. Finally, you project the coordinates to the 2D screen. In more detail:
The Projection Formula

The perspective projection maps a vertex's world coordinates (wx|wy|wz) to screen coordinates: sx = wx*c/wz and sy = wy*c/wz. The constant c scales the image (it corresponds to the distance between the viewer and the projection plane). Dividing by the depth wz is what makes distant objects appear smaller.
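The formula sx = wx*c/wz, sy = wy*c/wz can be tried out with a minimal plain-C sketch; the value c = 256 is an arbitrary choice for illustration, not a value prescribed by this engine:

```c
/* Perspective projection sketch. C is an arbitrary illustrative
 * projection constant (assumption, not taken from the article). */
static const float C = 256.0f;

void project(float wx, float wy, float wz, float *sx, float *sy) {
    *sx = wx * C / wz;   /* sx = wx*c/wz */
    *sy = wy * C / wz;   /* sy = wy*c/wz */
}
```

Doubling wz halves both screen coordinates — exactly the "farther away looks smaller" effect the formula is meant to produce.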
Implementation in Objective-C (Apple Cocoa)

Now on to the implementation. Basically, you will need (at least) three objects to describe 3D entities: a vertex object, a polygon object, and an entity object. Each entity consists of a list (array) of polygons; each polygon consists of a list of vertices and has a color; each vertex has three floating point coordinates.

The MYVertex Object

We heard about the four stages of projection — there are local, transformed, world and screen coordinates. The definition of the coordinates happens in the vertex object. Let's call the vertex object MYVertex. MYVertex needs the following float instance variables:

lx, ly, lz — the local coordinates
tx, ty, tz, tt — the transformation variables (tt is the fourth component needed for the matrix transformations later)
wx, wy, wz — the world coordinates
sx, sy — the screen coordinates
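As a plain-C sketch (not the actual Objective-C class — the struct and function names are made up for illustration), the four coordinate sets and the initializer rule look like this:

```c
/* One vertex carries all four coordinate sets of the projection pipeline. */
typedef struct {
    float lx, ly, lz;        /* local coordinates (set at init time) */
    float tx, ty, tz, tt;    /* transformation variables */
    float wx, wy, wz;        /* world coordinates */
    float sx, sy;            /* screen coordinates */
} Vertex;

/* Initializer rule: locals from the arguments, the four transformation
 * variables set to 1.0, everything else to 0.0. */
Vertex makeVertex(float x, float y, float z) {
    Vertex v = {0};          /* all fields start at 0.0 */
    v.lx = x; v.ly = y; v.lz = z;
    v.tx = v.ty = v.tz = v.tt = 1.0f;
    return v;
}
```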
Of course I also write the accessor methods for the vertex object: methods to get and set the instance variables, and (for convenience) methods for adding a value to them or multiplying them by a value, respectively. The vertex initializer method sets the three local coordinates to the init method's arguments, the four transformation variables to 1.0, and all the other variables to 0.0. No dealloc method is necessary, since the data is all just primitive floats.

The MYPolygon Object

The next object to be implemented is the polygon object; let's call it MYPolygon. The polygon object has an array of vertices (given in counterclockwise order) and a color, which I represent by one of Cocoa's NSColor objects. Additionally, a polygon has a position in 3D space, the so-called origin, which is given by three floats ( oriX | oriY | oriZ ) and their accessor methods. MYPolygon also has a draw method that loops through the screen (!) coordinates of each of the vertices in the array and draws a closed and filled NSBezierPath object. Don't forget the dealloc method and the initializer, which sets the initial color and the vertex list.

- (void) draw:(NSRect)cliprect
{
    NSBezierPath *polygon = [NSBezierPath bezierPath];
    int v = 0;
    [[self color] set];
    /* Define start point */
    [polygon moveToPoint:[self screenCoordOfVertexAtIndex:0]];
    /* loop: Connect all points */
    for (v = 1; v < [self numOfVertices]; v++) {
        [polygon lineToPoint:[self screenCoordOfVertexAtIndex:v]];
    }
    /* draw and fill the polygon */
    [polygon closePath];
    [polygon fill];    /* or for wireframe use [polygon stroke]; */
    [polygon removeAllPoints];
}

The MYEntity Object

Last we take care of the entity object, which will be named MYEntity. MYEntity has a list of polygons that constitute the entity, and also its own position, that is, its origin point ( oriX | oriY | oriZ ). It has a draw method that loops over all the polygons in the polygon array and calls their draw method, an obvious init and dealloc method, and a couple of necessary accessors.
But that's not all — MYEntity is where the calculation of the transformed coordinates, the world coordinates and the projected screen coordinates is initiated. Therefore there are three special methods: transformation, toWorldCoordinates and projection. Transformation is a complex issue, since I need to introduce matrices first. I will explain the real transformation later; for now I only give you a temporary fake transformation method that does nothing but copy the local coordinates to the transformed coordinates. Nothing happens here...

- (void) transformation
{
    /* Does nothing yet, just copies the local coordinates to the
     * vertices' transformation variables tx, ty, tz, tt.
     * The real transformation will be handed in later. */
    int p, v;
    MYVertex *theVertex;
    MYPolygon *thePolygon;
    int numOfPolygons = [self size];
    for (p = 0; p < numOfPolygons; p++) {
        thePolygon = [self polygon:p];
        int numOfVertices = [thePolygon size];
        for (v = 0; v < numOfVertices; v++) {
            theVertex = [thePolygon vertex:v];
            [theVertex setTX:[theVertex lx]];  // only fake!
            [theVertex setTY:[theVertex ly]];  // only fake!
            [theVertex setTZ:[theVertex lz]];  // only fake!
            [theVertex setTT:1.0];             // only fake!
        }
    }
}

The conversion to world coordinates places the object at its position in the 3D game world. The calculations are easy: Just add the entity's origin coordinates ([self oriX], [self oriY], [self oriZ]) to the vertices' transformed coordinates (tx, ty, tz) and store the result in the world coordinate variables (wx, wy, wz).

- (void) toWorldCoordinates
{
    /* Converts the transformed coordinates to world coordinates.
     * Stores the results in wx, wy, wz. */
    int p, v;
    int numOfPolygons = [self size];
    for (p = 0; p < numOfPolygons; p++) {
        MYPolygon *thePolygon = [self polygon:p];
        int numOfVertices = [thePolygon size];
        for (v = 0; v < numOfVertices; v++) {
            MYVertex *theVertex = [thePolygon vertex:v];
            [theVertex setWX:([theVertex tx] + [self oriX])];
            [theVertex setWY:([theVertex ty] + [self oriY])];
            [theVertex setWZ:([theVertex tz] + [self oriZ])];
        }
    }
}

Now the blowoff, the projection. This method implements the improved projection formula shown above (sx = wx*c/wz, sy = wy*c/wz); c is the projection constant from that formula. After the division, half the view's width and height are added so that the origin ends up in the center of the screen.

- (void) projection:(NSRect)rect
{
    /* Converts 3D world coordinates to 2D screen coordinates.
     * Stores the results in the vertices' variables sx, sy. */
    int p, v;
    float w = rect.size.width * 0.5;
    float h = rect.size.height * 0.5;
    int numOfPolygons = [self size];
    for (p = 0; p < numOfPolygons; p++) {
        MYPolygon *thePolygon = [self polygon:p];
        int numOfVertices = [thePolygon size];
        for (v = 0; v < numOfVertices; v++) {
            MYVertex *theVertex = [thePolygon vertex:v];
            float depth = [theVertex wz];
            /* projection */
            if (depth == 0.0) depth = 0.0000000001; /* don't divide by zero! */
            [theVertex setSX:(([theVertex wx] * c) / depth)];
            [theVertex setSY:(([theVertex wy] * c) / depth)];
            /* center */
            [theVertex addToSX:w];
            [theVertex addToSY:h];
        }
    }
}

Culling of Backfacing Polygons

That's almost it. If you defined a test entity now and drew it to the screen, you'd get a weird result: The back of the entity would be visible in the front. Why? Well, nobody told the drawing methods not to draw the backside of entities, right? What we need is one more optimization step, which is called culling of backfacing polygons. The following method goes into the MYPolygon object: It looks at the first three vertices' world coordinates, calculates their cross product and dot product and thus determines whether the polygon is facing away from the viewer. It is assumed that the viewer stands in the world's origin (0|0|0) and looks down the z-axis.
(If you don't know what a dot product or a cross product is — just trust Tieskoetter and Descartes.)

- (BOOL) isBackfacing
{
    float cullMe, x1, x2, x3, y1, y2, y3, z1, z2, z3;
    MYVertex *v0, *v1, *v2;
    v0 = [self vertex:0];
    v1 = [self vertex:1];
    v2 = [self vertex:2];
    x1 = [v0 wx]; x2 = [v1 wx]; x3 = [v2 wx];
    y1 = [v0 wy]; y2 = [v1 wy]; y3 = [v2 wy];
    z1 = [v0 wz]; z2 = [v1 wz]; z3 = [v2 wz];
    cullMe = x3 * ((z1*y2) - (y1*z2))
           + y3 * ((x1*z2) - (z1*x2))
           + z3 * ((y1*x2) - (x1*y2));
    return (cullMe < 0.0);
}

Now adapt the draw method of MYEntity to test each polygon before drawing it; MYEntity has to skip polygons which face away from the viewer and therefore are not visible at all. That's it! Define a test entity (for instance a cube), transform, convert and project it, then draw it to the screen from your custom NSView's drawRect: method. Here is sample code for how to create a cube as a test entity.

Object Creation Sample Code

typedef struct _MYPoint {
    float x;
    float y;
    float z;
} MYPoint;

+ (MYEntity*) createCubeAt:(MYPoint)loc center:(MYPoint)j x:(float)x y:(float)y z:(float)z
{
    // Create the eight corner vertices of the cube
    MYVertex *a = [[MYVertex alloc] initWithX:0-j.x y:y-j.y z:0-j.z];
    MYVertex *b = [[MYVertex alloc] initWithX:x-j.x y:y-j.y z:0-j.z];
    MYVertex *c = [[MYVertex alloc] initWithX:x-j.x y:0-j.y z:0-j.z];
    MYVertex *d = [[MYVertex alloc] initWithX:0-j.x y:0-j.y z:0-j.z];
    MYVertex *e = [[MYVertex alloc] initWithX:0-j.x y:y-j.y z:z-j.z];
    MYVertex *f = [[MYVertex alloc] initWithX:x-j.x y:y-j.y z:z-j.z];
    MYVertex *g = [[MYVertex alloc] initWithX:x-j.x y:0-j.y z:z-j.z];
    MYVertex *h = [[MYVertex alloc] initWithX:0-j.x y:0-j.y z:z-j.z];
    // initialize six lists with those vertices (counterclockwise)
    NSArray *vlist1 = [NSArray arrayWithObjects:a,e,f,b,nil];
    NSArray *vlist2 = [NSArray arrayWithObjects:f,g,c,b,nil];
    NSArray *vlist3 = [NSArray arrayWithObjects:d,c,g,h,nil];
    NSArray *vlist4 = [NSArray arrayWithObjects:a,d,h,e,nil];
    NSArray *vlist5 = [NSArray arrayWithObjects:b,c,d,a,nil];
    NSArray *vlist6 = [NSArray arrayWithObjects:e,h,g,f,nil];
    // construct six squares from those vertex lists
    MYPolygon *square1 = [[MYPolygon alloc] initWithVertexList:vlist1 color:[NSColor greenColor]];
    MYPolygon *square2 = [[MYPolygon alloc] initWithVertexList:vlist2 color:[NSColor yellowColor]];
    MYPolygon *square3 = [[MYPolygon alloc] initWithVertexList:vlist3 color:[NSColor orangeColor]];
    MYPolygon *square4 = [[MYPolygon alloc] initWithVertexList:vlist4 color:[NSColor redColor]];
    MYPolygon *square5 = [[MYPolygon alloc] initWithVertexList:vlist5 color:[NSColor magentaColor]];
    MYPolygon *square6 = [[MYPolygon alloc] initWithVertexList:vlist6 color:[NSColor blueColor]];
    // initialize a list with those six squares
    NSArray *plist = [NSArray arrayWithObjects:
        square1,square2,square3,square4,square5,square6,nil];
    // construct a cube entity from this list
    MYEntity *cube = [[MYEntity alloc] initWithPolygonList:plist];
    [a retain]; [b retain]; [c retain]; [d retain];
    [e retain]; [f retain]; [g retain]; [h retain];
    [square1 retain]; [square2 retain]; [square3 retain];
    [square4 retain]; [square5 retain]; [square6 retain];
    [vlist1 retain]; [vlist2 retain]; [vlist3 retain];
    [vlist4 retain]; [vlist5 retain]; [vlist6 retain];
    [plist retain]; [cube retain];
    [cube setOriX:loc.x];
    [cube setOriY:loc.y];
    [cube setOriZ:loc.z];
    return cube;
}

The projection you just implemented displays static 3D entities on the screen. Next, read how to transform entities before projection.
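The determinant computed in isBackfacing above is a scalar triple product of the first three vertices' world-coordinate vectors, and reversing the winding order flips its sign — that is the property the culling test relies on. A plain-C sketch of just the arithmetic (function name made up for illustration):

```c
/* The culling determinant from isBackfacing: a scalar triple product
 * of the first three vertices' world coordinates. Reversing the
 * vertex order flips the sign, so one sign means "backfacing". */
float cullingDeterminant(const float v0[3], const float v1[3], const float v2[3]) {
    float x1 = v0[0], y1 = v0[1], z1 = v0[2];
    float x2 = v1[0], y2 = v1[1], z2 = v1[2];
    float x3 = v2[0], y3 = v2[1], z3 = v2[2];
    return x3 * (z1*y2 - y1*z2)
         + y3 * (x1*z2 - z1*x2)
         + z3 * (y1*x2 - x1*y2);
}
```

For the triangle (0|0|10), (1|0|10), (1|1|10) the determinant is -10; swap the first two vertices and it becomes +10. Feeding a polygon's vertices in one fixed winding order is therefore what makes the `cullMe < 0.0` test meaningful.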

http://www.ruthless.zathras.de/ 