It would take many books to completely describe the magick of thought; hence, we're going to cover only the most basic of concepts as they apply to video games. However, using the simple techniques and algorithms within these pages, we can generate very complex "thinking models" that will be formidable adversaries.
For example, we could build a very complex knowledge base and then attach a natural language parser to it. If a user were to ask the knowledge base a question in its area of expertise, the knowledge base would seem almost human in its responses. However, the knowledge base is nothing more than text strings, rules, and algorithms. It isn't thinking, it's only responding.

We are interested in the nature of thought. For example, if we study a human's mind or the mind of a bee, both have a similar structure; that is, a collection of neurons in an interconnecting network. Both function by impulses transmitted (and inhibited) from neuron to neuron by means of electrochemical reactions induced by charge pumps. In any case, the results of this process are autonomous beings that can work, use tools, and have a sense of existence (in one way or another). This brings us to the question: is the bee aware of its own existence? Does it think like we do, but on a smaller scale? Maybe, maybe not. However, that's irrelevant. The mind of a bee, spider, ant, or any small animal still performs computations that we don't yet fully understand. But if we can model a portion of these simple brains found in insects and animals, we can use these wetware models to implement simple thinking in our games.
In the near future, games are going to use more and more complex intelligence models. The models are going to diverge from the standard rule-based AI systems, which are boring and predictable, to models that use software to implement neural networks and small cell-based thinking structures that use simple laws to create very complex systems. We won't have time to go into this kind of modeling in detail, but we will learn how to model certain types of responses and behaviors that simple creatures have. Then we can control the creatures in our games with these models and produce very complex ecosystems.

Before we begin discussing all the interesting concepts and techniques used to implement thinking machines, let me preface them with a warning. This book is primarily designed to teach the 3D polygon graphics techniques used in creating 3D games. However, using 3D graphics to illustrate many of these algorithms wouldn't be wise, since in most cases we want a bird's-eye view. Hence, all the demos will be 2D so we can see what's going on. At the end of the chapter, we will discuss how to extend the applicable techniques into three dimensions.
The question is, how do we move the creature toward the player? The answer can be found by breaking the problem down into its component dimensions; that is, we process the X- and Y-axis aspects of the problem separately. If the player and creature are at any given points on the screen or 2D plane, then the player is either to the right or left of the creature in the X axis and above or below the creature in the Y axis. If we create a conditional logic sequence that tests for these cases and then translate the creature based on the results, then, as a function of time (each frame), the creature will move toward the player no matter what the player does. Moreover, if we move the creature at a velocity (in units of pixels per frame) greater than that of the player, the player will be doomed at some point unless he can otherwise dispatch the creature! Algorithm 7-1 implements chasing with the model we have been discussing, where (dx,dy) are the rates at which we wish the creature to chase the player.
// test X axis position
if (player_x > creature_x)
    creature_x+=dx;
else
if (player_x < creature_x)
    creature_x-=dx;

// test Y axis position
if (player_y > creature_y)
    creature_y+=dy;
else
if (player_y < creature_y)
    creature_y-=dy;
As you can see, the algorithm isn't something you would find on the Putnam Mathematics test, but nevertheless, it results in a creature that behaves much like the Terminator. It won't stop until it terminates its target! The only question that comes to mind is, what happens when the creature chases down and catches the player? Well, if you look at the logic, the case of equality isn't tested; hence, if the coordinates of the creature and player are equal, the creature will sit on top of the player (which is what we want).
The converse of chasing is of course evading (something we all wish we could do with the IRS!), which is implemented by inverting the conditional statements in Algorithm 7-1. Let's see how.
// test X axis position
if (player_x < creature_x)
    creature_x+=dx;
else
if (player_x > creature_x)
    creature_x-=dx;

// test Y axis position
if (player_y < creature_y)
    creature_y+=dy;
else
if (player_y > creature_y)
    creature_y-=dy;

You will notice that the only difference between Algorithms 7-1 and 7-2 is that the conditional comparison signs are reversed.
As an example of the algorithms in action, I have created a program called LOCKON.EXE in which you use the arrow keys to move a little green alien around. However, the alien isn't alone; there's another, bigger, uglier alien in the same room. The second alien will either chase or run away from your alien depending on its internal logic state, which can be toggled by pressing SPACEBAR. The source for the program is called LOCKON.C, and it uses the library modules we have created thus far. You can review the source code in Listing 7-1.
// LOCKON.C - A demo of tracking and evasion algorithms

// I N C L U D E S ////////////////////////////////

#include <io.h>
#include <conio.h>
#include <stdio.h>
#include <stdlib.h>
#include <dos.h>
#include <bios.h>
#include <fcntl.h>
#include <memory.h>
#include <malloc.h>
#include <math.h>
#include <string.h>

#include "black3.h"
#include "black4.h"
#include "black5.h"

// D E F I N E S //////////////////////////////////

// these are the animation cell indices for the alien
// walking in all the directions

#define ALIEN_START_RIGHT 0
#define ALIEN_START_LEFT  4
#define ALIEN_START_UP    8
#define ALIEN_START_DOWN  12

#define ALIEN_END_RIGHT   3
#define ALIEN_END_LEFT    7
#define ALIEN_END_UP      11
#define ALIEN_END_DOWN    15

// these are the directions the alien can move in

#define ALIEN_RIGHT       0
#define ALIEN_LEFT        1
#define ALIEN_UP          2
#define ALIEN_DOWN        3

// G L O B A L S //////////////////////////////////

pcx_picture image_pcx;  // general PCX image used to load background and imagery

sprite alien, creature; // the player's alien and the creature

// M A I N ////////////////////////////////////////

void main(int argc, char **argv)
{

int index,   // loop variable
    chase=1, // used to select creature's mode of operation, 1=chase, 0=evade
    done=0;  // event loop exit flag

char buffer[64]; // used to print strings

// set the graphics mode to mode 13h
Set_Graphics_Mode(GRAPHICS_MODE13);

// create the double buffer
Create_Double_Buffer(200);

// load the imagery for the player's alien
PCX_Init((pcx_picture_ptr)&image_pcx);
PCX_Load("lockaln.pcx", (pcx_picture_ptr)&image_pcx,1);

// initialize the alien sprite
Sprite_Init((sprite_ptr)&alien,32,164,8,12,0,0,0,0,0,0);

// start the alien walking down
alien.state      = ALIEN_DOWN;
alien.curr_frame = ALIEN_START_DOWN;

// extract the bitmaps for the alien, there are 16
// animation cells
for (index=0; index<16; index++)
    PCX_Get_Sprite((pcx_picture_ptr)&image_pcx,(sprite_ptr)&alien,index,index,0);

// done with this PCX file so delete memory associated with it
PCX_Delete((pcx_picture_ptr)&image_pcx);

// load the imagery for creature
PCX_Init((pcx_picture_ptr)&image_pcx);
PCX_Load("lockcrt.pcx", (pcx_picture_ptr)&image_pcx,1);

// initialize the creature sprite
Sprite_Init((sprite_ptr)&creature,160,100,24,12,0,0,0,0,0,0);

// extract the bitmaps for the creature, there are
// 4 animation cells
for (index=0; index<4; index++)
    PCX_Get_Sprite((pcx_picture_ptr)&image_pcx,(sprite_ptr)&creature,index,index,0);

// done with this PCX file so delete memory associated with it
PCX_Delete((pcx_picture_ptr)&image_pcx);

// now load the background image
PCX_Init((pcx_picture_ptr)&image_pcx);
PCX_Load("lockbak.pcx",(pcx_picture_ptr)&image_pcx,1);

// copy PCX image to double buffer
PCX_Copy_To_Buffer((pcx_picture_ptr)&image_pcx,double_buffer);
PCX_Delete((pcx_picture_ptr)&image_pcx);

// scan under alien and creature before entering the event loop, this must be
// done or else on the first cycle the "erase" function will draw garbage
Sprite_Under_Clip((sprite_ptr)&alien,double_buffer);
Sprite_Under_Clip((sprite_ptr)&creature,double_buffer);

// install new keyboard driver
Keyboard_Install_Driver();

// put up exit instructions
Print_String_DB(96,2,9,"Press Q to exit",0);

// main event loop, process until keyboard hit
while(!done)
     {
     // do animation cycle, erase, move draw...

     // erase all objects by replacing what was under them
     Sprite_Erase_Clip((sprite_ptr)&alien,double_buffer);
     Sprite_Erase_Clip((sprite_ptr)&creature,double_buffer);

     // move player

     // test for right motion
     if (keyboard_state[MAKE_RIGHT])
        {
        // move alien
        if ((alien.x+=2)>320)
            alien.x=-8;

        // first test if alien was already moving right
        if (alien.state==ALIEN_RIGHT)
           {
           // animate and test for end of sequence
           if (++alien.curr_frame==ALIEN_END_RIGHT)
              alien.curr_frame = ALIEN_START_RIGHT;
           }
        else
           {
           // set state and current frame to right
           alien.state      = ALIEN_RIGHT;
           alien.curr_frame = ALIEN_START_RIGHT;
           } // end else

        } // end if right
     // test for left motion
     else
     if (keyboard_state[MAKE_LEFT])
        {
        // move alien
        if ((alien.x-=2)<-8)
            alien.x=320;

        // first test if alien was already moving left
        if (alien.state==ALIEN_LEFT)
           {
           // animate and test for end of sequence
           if (++alien.curr_frame==ALIEN_END_LEFT)
              alien.curr_frame = ALIEN_START_LEFT;
           }
        else
           {
           // set state and current frame to left
           alien.state      = ALIEN_LEFT;
           alien.curr_frame = ALIEN_START_LEFT;
           } // end else

        } // end if left
     // test for upward motion
     else
     if (keyboard_state[MAKE_UP])
        {
        // move alien
        if ((alien.y-=2) < -12)
            alien.y=200;
However, one of the most prevalent uses of random numbers is to select the direction and speed of a game object in shoot-'em-up or combat-based games. Given that a creature is at a position (creature_x, creature_y), as shown in Figure 7-2, we can create a direction vector and speed to move the creature for a short period of time. Furthermore, the actual amount of time may be fixed, or it may be another random value. For example, if we wanted to compute a random direction for the creature in Figure 7-2, we could write something like this,
creature_x_velocity = -max_speed + rand() % (2*max_speed+1);
creature_y_velocity = -max_speed + rand() % (2*max_speed+1);

where max_speed is the maximum positive or negative velocity that the creature will have in either the X or Y axis.
Then using the above values, we could translate the creature every game cycle with something like this:
creature_x+=creature_x_velocity;
creature_y+=creature_y_velocity;
Of course, moving an object around based solely on random decisions wouldn't be very interesting. The objects may be hard to hit, but they show no signs of intelligence. This is because the objects are not basing their decision making on past information or any logical premises at all. Nevertheless, random motion or logic does have its uses and is another spell to add to our bag of tricks. As an example of using random logic to control the motion of an object, I have created a program called LOSTNSPC.EXE. The demo is set in space and features a spaceship that moves around based on a random variable. Who's to say that the ship doesn't have some sort of primitive intelligence? The ship moves on its own and it isn't predictable. However, it's also oblivious to the world around it, but then again, most life forms are! The source code for the program is LOSTNSPC.C, which is shown in Listing 7-2 for review.
// LOSTNSPC.C - A demo of random motion

// I N C L U D E S //////////////////////////////////////

#include <io.h>
#include <conio.h>
#include <stdio.h>
#include <stdlib.h>
#include <dos.h>
#include <bios.h>
#include <fcntl.h>
#include <memory.h>
#include <malloc.h>
#include <math.h>
#include <string.h>

#include "black3.h"
#include "black4.h"

// G L O B A L S /////////////////////////////////

pcx_picture image_pcx; // general PCX image used to load background and imagery

sprite ship;           // the alien ship

// M A I N ///////////////////////////////////////

void main(int argc, char **argv)
{

int index,        // loop variable
    velocity_x=0, // used to control velocity of ship
    velocity_y=0;

// set the graphics mode to mode 13h
Set_Graphics_Mode(GRAPHICS_MODE13);

// create the double buffer
Create_Double_Buffer(200);

// load the imagery for the ship
PCX_Init((pcx_picture_ptr)&image_pcx);
PCX_Load("lostship.pcx", (pcx_picture_ptr)&image_pcx,1);

// initialize the ship
Sprite_Init((sprite_ptr)&ship,160,100,18,16,0,0,0,0,0,0);

// make ship sit for the first 1.5 seconds
ship.counter_1   = 0;
ship.threshold_1 = 25;

// extract the bitmaps for the ship, there are 2 animation cells
for (index=0; index<2; index++)
    PCX_Get_Sprite((pcx_picture_ptr)&image_pcx, (sprite_ptr)&ship,index,index,0);

// done with this PCX file so delete memory associated with it
PCX_Delete((pcx_picture_ptr)&image_pcx);

// now load the background image
PCX_Init((pcx_picture_ptr)&image_pcx);
PCX_Load("lostback.pcx",(pcx_picture_ptr)&image_pcx,1);

// copy PCX image to double buffer
PCX_Copy_To_Buffer((pcx_picture_ptr)&image_pcx,double_buffer);
PCX_Delete((pcx_picture_ptr)&image_pcx);

// scan background before entering event loop
Sprite_Under_Clip((sprite_ptr)&ship,double_buffer);

// put up exit instructions
Print_String_DB(80,2,9,"Hit any key to exit",1);

// main event loop, process until keyboard hit
while(!kbhit())
     {
     // do animation cycle, erase, move draw...

     // erase all objects by replacing what was under them
     Sprite_Erase_Clip((sprite_ptr)&ship,double_buffer);

     // BEGIN RANDOM MOTION LOGIC /////////////////////////////////////////////////

     // test if ship is complete with current trajectory and
     // needs a new one selected
     if (++ship.counter_1 > ship.threshold_1)
        {
        // select new direction vector
        velocity_x = -5 + rand()%10;
        velocity_y = -5 + rand()%10;

        // select a random number of frames to stay on new heading
        ship.threshold_1 = 5 + rand()%50;

        // reset counter
        ship.counter_1 = 0;

        } // end if

     // move ship
     ship.x+=velocity_x;
     ship.y+=velocity_y;

     // test if ship went beyond screen edges
     if (ship.x > 320)
         ship.x = -18;
     else
     if (ship.x < -18)
         ship.x = 320;

     if (ship.y > 200)
         ship.y = -16;
     else
     if (ship.y < -16)
         ship.y = 200;

     // END RANDOM MOTION LOGIC ///////////////////////////////////////////////////

     // animate ship
     if (++ship.curr_frame == 2)
         ship.curr_frame = 0;

     // add some special effects via a vapor trail
     if (rand()%10==1)
         Write_Pixel_DB(ship.x+rand()%20,ship.y+12+rand()%4,24+rand()%4);

     // ready to draw ship, but first scan background under it
     Sprite_Under_Clip((sprite_ptr)&ship,double_buffer);
     Sprite_Draw_Clip((sprite_ptr)&ship,double_buffer,1);

     // display double buffer
     Display_Double_Buffer(double_buffer,0);

     // lock onto 18 frames per second max
     Time_Delay(1);

     } // end while

// exit in a very cool way
Screen_Transition(SCREEN_DARKNESS);

// free up all resources
Sprite_Delete((sprite_ptr)&ship);
Delete_Double_Buffer();
Set_Graphics_Mode(TEXT_MODE);

} // end main
This program is a bit simpler than the previous one since there is no player interaction. As usual the double buffer is created, and the PCX files containing the spaceship and star background are loaded. The program initializes the ship sprite and enters the main event loop. Once the ship has been erased, the main portion of the control logic is performed and moves the ship in a random direction. However, the movement will only last for a finite period of time related to the values in counter_1 and threshold_1.
If you recall, the sprite structure we originally came up with had some extra fields to be used as counters and so forth. This program is a perfect example of the use of the counter fields. In this case, counter_1 and threshold_1 are used to count the number of frames that the ship will travel in a single direction. When the counter reaches its threshold (that is, when counter_1 is greater than threshold_1), a new trajectory is selected and the counter and threshold are reset to new values.
The counter allows the ship to move in random directions for random amounts of time instead of a fixed number of cycles. Moving on in the code, after the main portion of the logic executes, the standard screen boundary collision tests are performed and the ship is "warped" from edge to edge if it moves off the playing field. Finally, there is a vapor trail effect (cool!) implemented with another random variable. The following code fragment generates a vapor trail during run time:
if (rand()%10==1)
Write_Pixel_DB(ship.x+rand()%20,ship.y+12+rand()%4,24+rand()%4);
The fragment directs the system to draw a pixel within a small region (centered about the exhaust nozzles) with a probability of 10 percent. Therefore, on average, the ship expels a single vapor pixel every 10 frames. As an experiment, try decreasing the 10 in the if statement; this will increase the density of the vapor trail. Random variables by themselves are useful, but when coupled with state machines they result in very complex thinking models. But we'll get to that later. At this point, let's see how to implement large-scale attacks (and retreats).
In many games we may want the enemies to perform actions that are somehow related to the goal of converging or diverging from a single point or area of interest. For example, Figure 7-3 shows a top view of a battleground with a collection of aliens and a single player. If the aliens were to group together, they might have a better chance of defeating the player. So, how can we tell them to do this?
If the mechs are using a complex intelligence model, they may all decide to do their own thing and stray too far from each other. However, by coupling some convergence logic with their standard logic every now and then, the mechs can be kept in close formation. This may be desirable when pitting them against a far superior human mind. The question is, how do we compute headings for the mechs or objects so they move toward a central meeting place?
Well, a bit of vector algebra is needed, but not too much. We can compute a direction vector by subtracting the initial point (the center of the object) from the terminal point (the convergence position). Referring to Figure 7-3, we see that each object or creature to be moved can be thought of as an initial point, and the convergence point can be thought of as the terminal point. Then, to compute the trajectory vectors, all we need to do is subtract the initial points from the terminal points one by one and come up with a set of direction vectors. Hence, if an object is at (object_x, object_y) and the point we wish the object to move toward is at (center_x, center_y), the direction vector is computed by:
heading = <center_x - object_x, center_y - object_y>
The result of the computation is a vector that originates from the object, points in the direction of the center point, and has length equal to the distance between the object and center point. Figure 7-3 shows this graphically. If we were to use the heading vector to translate the object, it would work, but it would only take one frame for the object to reach the center point! To rectify this, we must normalize the heading vector to a length of 1.0 and then translate the object toward the center point by a multiple of the normalized heading. A vector is normalized by scaling its magnitude to 1.0, where magnitude means its overall length. Take a look at Figure 7-4 to see these concepts graphically.
Somehow we must shrink the heading vector so that when we compute its magnitude the result is 1. This can be accomplished by dividing each component of the heading vector by the length of the original heading vector. Let's take a look at the whole process. First, we compute the heading vector between each object and the center point at which we want all the objects to converge. This is done by subtracting each object's position from the center position one coordinate at a time. The result of this computation is a direction vector, which we'll call V = <x, y>. The magnitude of V is then computed like this:
magnitude of V = sqrt(x^2 + y^2) = |V|
The vertical bars around V mean "magnitude."
Then we divide V by its own length, like this:
unit vector of V = V / |V| = v
Now if we were to compute the length of v with the length equation, we would find that it has a length of 1. Thus, if we use v to translate the object toward the center point, it will move one pixel per frame. Therefore, to speed this up, we can multiply v by a constant scaling factor, such as (speed*v). If speed were equal to 5, then v would be scaled to a length of 5, but it would still point in the direction of the center point. Figure 7-5 shows the relationship between V, v, and 5*v.
The selection of a center point can be anything we wish, such as the player, the center of mass of all the enemies, a fuel dump, a launch pad, or whatever. Moreover, if the center point moves as a function of time, the heading vectors will have to be recomputed each frame (which can be time consuming). The final caveat on computing heading vectors, and vector math in general, concerns the precision needed to accomplish it. The demo that follows shortly uses floating point computations. Unfortunately, floating point calculations are slow on the PC even with a math coprocessor, 486, or 586. Thus, when we finally delve into 3D graphics and write a high-speed library, we'll cover fixed point math. But for now, just keep in mind that floating point calculations should be avoided during run time.
As an example of convergence (otherwise known as flocking), the program named CRITTERS.EXE forces a group of green dots (critters) to converge about a single point. The point they converge upon is their own centroid, but it could just as easily be any point we wish. If you're not familiar with the term centroid, it is simply the average position. It can be computed by summing all the X components of the critters and calling that center_x, and doing the same for the Y components and calling that center_y. Mathematically, given that a set of N objects is defined by position vectors Pi, the centroid position is

Cp = (1/N) * Σ Pi,  for i = 1 to N
which simply says, "add up all the points and divide the result by the number of points."
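In C, the centroid computation is just a pair of running sums. The following sketch uses parallel coordinate arrays for simplicity; the critters in the actual listing are structures, so the function name and data layout here are purely illustrative:

```c
// compute the centroid (average position) of num_points points
// stored as parallel arrays of X and Y coordinates
void Compute_Centroid(float *xs, float *ys, int num_points,
                      float *center_x, float *center_y)
{
    float sum_x = 0.0f, sum_y = 0.0f;
    int index;

    // add up all the points...
    for (index = 0; index < num_points; index++)
        {
        sum_x += xs[index];
        sum_y += ys[index];
        } // end for

    // ...and divide the result by the number of points
    *center_x = sum_x / num_points;
    *center_y = sum_y / num_points;

} // end Compute_Centroid
```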
Before we fall off the deep end with a bunch of math, let's take a look at the program. It's much easier to understand. The source is named CRITTERS.C and is shown in Listing 7-3.
Notice that both the position and velocity fields of the critter structure are floating point. This is necessary for accuracy. However, we could use other techniques, such as fixed point math, if this were a real game; for instructional purposes, floating point will suffice. Finally, the main event loop is entered and the convergence begins. The interesting thing to note about this program is that it is the first one that supports replication; in other words, there are many copies of the same object. This is implemented with multiple data sets that use the same logic. This is a very important concept to grasp. When we write a game that has 100 aliens in it, we don't create separate code for each alien, but a set of functions that work on a set of data (usually elements in an array) representing the 100 aliens. In the case of CRITTERS.EXE, the array critters[] contains all the critters, and for loops are used to process each critter in the same way.
The critters are first erased by replacing the pixels previously under them. Then the convergence logic is executed, which moves each critter toward the centroid. The motion of each critter stops once it has moved within a specific radius of the centroid (so they don't bunch up too much). However, instead of using the standard distance calculation, sqrt(x^2 + y^2), which is slow, the Manhattan distance is used, which is also known as the 1-norm. It's calculated as the sum of the absolute values of the components (|x| + |y|) instead of the square root of the sum of the squares. The results, of course, are incorrect, but they will be close enough for our purposes. The interesting thing about Manhattan distance is the shape of the boundary generated by the tests. Figure 7-6 shows the results of using the standard distance equation and those of the Manhattan distance. Notice that the shape of the Manhattan distance bounding region is a diamond, or a square on its side! Manhattan distance can be used to speed up collision detection if you're willing to lose the accuracy of the standard 2-norm, sqrt(x^2 + y^2), calculation.
The next phase of the program scans under the critters, making use of the Read_Pixel_DB() function, and then draws the critters as green dots. The program executes until a key is pressed. Pay close attention to the final resting place of each critter. They will actually outline the region defined by an (x+y) distance decision variable (very cool!).
Before moving on to the next topic, patterns, we should at least touch upon the opposite of convergence, which is divergence. Similar to evasion, divergence is accomplished by selecting a point of divergence and then computing the negative heading vectors and using these to move the objects. Therefore, all the calculations are performed as before except that the vector V is negated with (-1*V) to arrive at the opposite direction vector. Then the vector is normalized, and so forth, as before.
The beauty of convergence and divergence is that once the heading vectors are computed, the objects can be made to converge or diverge during run time simply by multiplying the heading vectors by -1 to toggle the direction. As an exercise, try adding this ability to the CRITTERS.C program, but be careful to test for each critter moving off the screen since there is no such test as the program stands. Otherwise, the critters might get into memory!
We have seen how random variables and conditional logic can be used to drive the motion of a game object; however, there is another classic technique that was and still is very popular in action games. This technique is based on the use of prerecorded patterns. Patterns are a way of mimicking complex strategies without any underlying logic. For example, when two foes are engaging each other and one of them starts to take the upper hand, the defensive foe might back off in a zigzag pattern to minimize the number of hits he takes during his retreat. Figure 7-7 shows this scenario graphically.
Conditional logic and probability could be used to generate movements with much the same effect, but using patterns is easier to implement.
As another example, take a look at Figure 7-8.
Here we see four patterns, each of differing length and complexity. If we encode these patterns into some data structure (possibly an array or linked list), we can select one of the patterns based on the state of the game, a random variable, or a combination of the two. Then the data in the pattern is used to control the motion or other aspects of an object in the game.
The demo that we will see shortly uses a very simple encoding system. A pattern is defined as a string of characters that consists of the language r, l, u, d, . (period), and x. Table 7-1 defines the meaning of each character.
As you can see, by concatenating strings of the pattern commands, complete sequences can be made. For example, if we wanted to encode a pattern of a square, the following string would work:
"rrrrrrrrrrrrrruuuuuuuuuuuuuulllllllllllllldddddddddddddd."
Notice the "." at the end of the pattern to signify that pattern data is complete. Other methods can be used to terminate the sequence, but an end-of-pattern character is fine for our discussions. So how would we implement the use of patterns in a game? Well, we could store a set of patterns in arrays, then a pattern could be selected by a random variable, and an object would then use the pattern data to control its movement. Figure 7-9 shows this process schematically.
A random number selects one of four patterns, then the selected pattern is fed into the motion control logic of an object (a ship in this case), and the object moves as commanded by the pattern.
When the pattern data is exhausted, a new pattern can be selected, or possibly the object can use a different AI logic such as random motion or something else. In general, you may want to mix techniques to create an overall thinking model.
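As a concrete sketch, an interpreter for the movement language can be only a few lines of C. This is my own minimal version, not the code from JUMPER.C; I'm assuming here that unrecognized characters (such as 'x') are simply ignored:

```c
// interpret one character of a movement pattern, translating the
// object's position; returns 1 while the pattern is still running,
// 0 when the end-of-pattern character '.' is hit
int Do_Pattern_Step(char command, int *x, int *y)
{
    switch (command)
          {
          case 'r': (*x)++; break;  // move right
          case 'l': (*x)--; break;  // move left
          case 'u': (*y)--; break;  // move up (screen Y grows downward)
          case 'd': (*y)++; break;  // move down
          case '.': return 0;       // pattern complete
          default:  break;          // ignore anything else
          } // end switch

    return 1;

} // end Do_Pattern_Step
```

Each frame, the object would feed the next character of the active pattern string through a function like this and advance its pattern index until the end-of-pattern character is reached.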
As an example of using patterns to control a game object, the following program, JUMPER.EXE, uses patterns to control the motion of a juicy spider. The spider will randomly select a pattern and then execute it until the pattern is complete. At this time a new pattern is selected and the process repeats. The program also displays a small information area at the bottom indicating the current pattern and data being interpreted. The source for the program is called JUMPER.C and is shown in Listing 7-4 for your review.
The program does all the usual stuff: allocate the double buffer, load the imagery, initialize the sprite, and so forth. The action begins in the main event loop with the test to see if the current pattern is complete by means of a "." character being read. If this is true, then one of the six patterns is randomly selected and started up. Notice the use of the sprite elements counter_1 and state. The state is used to hold the index of the currently active pattern, and counter_1 is used to hold the current element being processed of the current pattern. Again, we see that the extra elements of the sprite structure come in handy for tracking and counting. As a matter of fact, I wish I had added a couple more variables such as velocity and a scratch pad area, but, oh well!
Moving right along in the code, the next section following the pattern logic is the animation code, which simply increments the current frame and tests whether all the frames have been displayed. If so, the current frame number is reset and the animation cycle repeats. Finally, the spider is drawn and the double buffer is displayed. I hope you are becoming very comfortable with the animation cycle. By the end of this chapter, you will have the necessary tools to write simple 2D games, and by the end of the next chapter, I bet you could start your own shareware enterprise! We are definitely making a lot of progress. Next, let's talk about the techniques used to move objects in unison.
Synchronous motion simply means a group of objects moving together as a single entity. This is very simple to accomplish if we look at the way we implement multiple objects in a game. As mentioned before, multiple objects are nothing more than multiple data structures that are each processed by the same logic (see Figure 7-10). Each object may move in a different way depending on its position and current state, but we can move all the objects together if the situation calls for it.
There is no reason why we can't select one object as the controller. This object would do its own thing based on its position and state, but the other objects wouldn't perform any of their own calculations, they would simply mimic the leader's motions for a period of time and break off after some premise was met. This technique is very common in 3D combat games: a group of enemies makes an attack run in formation and then at some point, each breaks away from the pattern and attacks on its own.
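A minimal sketch of the idea in C (the arrays and function here are hypothetical, not from the library): compute the leader's translation for this frame, then apply the identical translation to every member of the group.

```c
#define NUM_SHIPS 4

// positions of the squadron; think of element 0 as the leader
int ship_x[NUM_SHIPS], ship_y[NUM_SHIPS];

// apply the leader's translation to every member of the group so
// the whole formation moves as a single entity
void Move_Formation(int dx, int dy)
{
    int index;

    for (index = 0; index < NUM_SHIPS; index++)
        {
        ship_x[index] += dx;
        ship_y[index] += dy;
        } // end for

} // end Move_Formation
```

When a break-off condition is met, a follower would simply stop being fed the leader's (dx,dy) and start running its own logic.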
Thus far the thinking models we have covered lack two main attributes: memory and rules. To create a robust thinking model, the model must have some kind of memory to remember its past, along with a set of rules or premises to guide its logic. The tracking algorithms, random motion, and patterns can be thought of as the resulting action that a high-level thought process might select, but how was the selection made in the first place? In other words, the final motion or action of a game object is a "motor" operation. We are interested in modeling the thought process that selected the motor operation in the first place.
State machines can bring us closer to the goal of an autonomous thinking model. State machines are machines (software, hardware, or biological) that have a set of states and a set of transition arcs. Take a look at Figure 7-11.
Here we see a general state machine diagram. There are five states, labeled S0 through S4, along with transition arcs from one state to another. These transition arcs are the events that must occur for the state machine to move into the state at the terminal end of the arc. For example, we see that for the system to move from S0 to S4, attack must equal true. What that means depends on the context of the game the state machine runs in, but it is nevertheless a rule that must be followed.
State machines are used widely in the control of synthetic game creatures to make them perform in certain ways based on the environment of the game, state of other game objects, and the current state of the machine. As an example, imagine that we create a monster that can be in one of the states listed in Table 7-2.
The transition arcs and associated rules are only for illustrative purposes, but you get the picture. The notion of "state" can easily be implemented in software. For example, the state field of the sprite structure exists for just such a purpose. To create a software state machine that drives a sprite, we might place one of the possible state constants in the state field of a sprite. Then we would use the counter variables along with the threshold variables to control how long the state machine remains in a specific state. We could further embellish this with extra conditional logic that takes other aspects of the game into consideration.
The transition arcs are implemented as conditional statements in software. For example, if the creature is in any of the states and is hit by a torpedo, it moves into the DYING state. The DYING state will execute for a number of cycles and then move into the DEAD state. The fragment of code that implements this follows:
As you can see, the software implementation of a state machine consists of a series of if statements (usually one for each state) coupled with other supporting logic within each state that determines if a state transition is to be made. There are other structures that can be used to implement state machines, such as switch() statements and look-up tables, but the if version is the clearest. Most of the logic is missing (since it would take pages to complete), but we do see some of the implementation of the dying test sequence. The test is made to see if the creature has been hit; if so, the state of the creature is set to DYING, and then the code continues. Next time around the main event loop, the state machine will fall into the DYING state, which will continue until the dying sequence is complete, at which point the state of the creature is set to DEAD, and the creature is no longer processed.
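As a sketch of the switch() alternative just mentioned (the state constants follow Table 7-2; the two flags standing in for the real hit test and death sequence are hypothetical):

```c
// state constants following Table 7-2
#define MORPHING 0
#define EATING   1
#define HUNTING  2
#define DYING    5
#define DEAD     6

// run one iteration of the state machine and return the new state
int Creature_Step(int state, int was_hit, int death_sequence_complete)
{
    // a hit sends any living creature into the DYING state
    if (state != DYING && state != DEAD && was_hit)
        return DYING;

    switch (state)
    {
        case MORPHING:
            // perform next iteration of morphing logic
            break;
        case EATING:
            // perform next iteration of eating logic
            break;
        case HUNTING:
            // perform next iteration of hunting logic
            break;
        case DYING:
            // run the death sequence until it completes
            if (death_sequence_complete)
                return DEAD;
            break;
        case DEAD:
            // a dead creature is no longer processed
            break;
    }

    return state;
}
```

Calling Creature_Step() once per frame with the creature's current state gives exactly the same behavior as the if chain, and the switch makes the set of states easy to see at a glance.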
Once we have designed the possible states that a creature or game object may take on, we must decide how the states are to be selected. We can use a strict decision system that is deterministic, but a better way would be to allow some probability into the equation so that the creatures aren't predictable.
Take a look at Figure 7-13.
Here we see a state machine that is driven by a probability density function. The probability density function directs the control logic to select a new state based on a set of probabilities. The sum of the probabilities should equal 1.0 so that the probability selection makes sense. For example, Table 7-3 lists the four possible states.
ATTACK S0
RETREAT S1
RANDOM S2
PATTERN S3
Of course, there may be other states, such as DYING and DEAD, but these are the only states that are going to be selected by random probability. Then we could assign a probability to each state, as shown in Table 7-4.
ATTACK .5
RETREAT .2
RANDOM .2
PATTERN .1
Sum = .5 + .2 + .2 + .1 = 1.0
Now if we write some logic that takes these probabilities into consideration to select a new action state, the results would be a game object or creature that has a "personality." This personality is defined by the probability of each state occurring. The implementation of selection is as simple as:
// this table has elements which are the states themselves
int state_table[] = {0,0,0,0,0,1,1,2,2,3};
new_state = state_table[rand()%10];
Since each state appears in state_table[] a number of times equal to its probability multiplied by 10, each state will be selected, on average, the proper fraction of the time. For example, the ATTACK state is supposed to have a .5 probability. Since 0 is in state_table[] five times and there are a total of ten elements, when the following line is executed,
new_state = state_table[rand()%10];
the ATTACK state will be selected 50 percent of the time. The other states are selected in the same way. Hence, we use a look-up table to convert probabilities into final states. There are other ways to do this, but this way is very clean and works well. Moreover, it avoids floating point calculations, which are an absolute no-no in AI algorithms. However, we can't select new states at random all the time; that defeats the whole purpose of a state machine. Sometimes we want to select the next state based on some conditions. Let's talk about that now.
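One of those other ways is a cumulative distribution walk, which also stays in integer math. A minimal sketch, using the state numbering from Table 7-3 and percentages in place of the fractional probabilities:

```c
// percent probabilities for ATTACK, RETREAT, RANDOM, PATTERN;
// the entries must sum to 100
static int state_prob[4] = {50, 20, 20, 10};

// walk the cumulative distribution until the roll falls inside a
// state's slice; roll should be 0..99, e.g. rand()%100
int Select_State(int roll)
{
    int state, total = 0;

    for (state = 0; state < 4; state++)
    {
        total += state_prob[state];
        if (roll < total)
            return state;
    }

    return 3;   // unreachable if the table sums to 100
}
```

The advantage over the look-up table is that fine-grained probabilities (say, 37 percent) don't force a 100-element table.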
The whole idea behind state machines is to make state transitions based on the current state and a set of rules or conditions that are met. However, we have learned that sometimes an initial state can be selected at random using probability. In any case, what rules should we use to make state transitions and how do we derive them? This is a complex question and there is no single correct answer. The answer is - shall we say - fuzzy. In general, use common sense when you model your state machines that drive the game objects. For example, if a state machine is controlling the flight path of a ship, the determination of the next state should use relevant information such as the environment, what the other ships are doing, and what the current objective of the ship is.
If we look at real ecosystems and study the results of their interactions without trying to understand the mechanisms behind them, we can come up with a lot of useful information to help model state machines in games. For instance, instinctive rules and reactions are very easy to model. If a creature in a game is being shot at, it should simply transition into the retreat state; on the other hand, if a creature is low on fuel, it might transition into the fuel-searching state (if there are no higher priority tasks at the moment). Even so, state transition rules themselves should have an element of randomness in them.
A perfect example of randomness in human decision making is the following: you are walking and an obstacle is directly in front of you. You can avoid it by altering your course either to the left or to the right (see Figure 7-14). So which one do you pick, and why? The answer is, you will pick the one that is most convenient. However, if both are equally convenient, then the decision is usually random. Of course, we should temper this statement by noting that if you had a previous experience of avoiding the obstacle to the right, then there is a high probability that you will use the same tactic, since you have learned and performed it once before with success.
This brings us to the final aspect of state machine design: local memory systems that update state transition probability tables.
We have seen how probability tables can be used to select the next state of a state machine, and how random variables can be used to select directions of motion, and so forth. However, the probabilities are always fixed. Agreed, they are random, but the distributions themselves never change. Hence, there is an opportunity for learning. What if we were to record a specific subset of events that occur as the game runs? In essence, we would keep track of the environment of the game in some way as a function of time.
Take a look at Figure 7-15.
Here we see probability tables being used as driving inputs into a state machine. However, there is an added property: the probability tables themselves are driven by a set of game variables that are derived from the game itself. We can think of this as an interpretation of environment. Just as humans may change their actions and behaviors based on environmental conditions, we can use the environment of a game to alter the probability distributions and decision making done by the state machines controlling the game objects. As a solid example of this, imagine that a game uses a random variable to decide whether to attack or retreat from the player with the following probability distribution:
ATTACK 60%
RETREAT 40%
However, the state machine logic will never "look" at the state of the game and decide whether this probability distribution should be altered. A simple way of altering this static probability distribution would be to use the current health of the creature versus that of the player. If the creature is very strong with little damage and the player is weak and ready to fall, the probability of attacking should be increased. On the other hand, if the creature is weak, it should retreat more often. This type of information can easily be modeled into the system by altering the probability distributions.
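A minimal sketch of this idea follows. The PROB_TABLE structure, the field names, and the divide-by-2 scaling are all assumptions made for illustration; health values are taken to run from 0 to 100:

```c
// hypothetical probability table for the attack/retreat decision;
// the two percentages always sum to 100
typedef struct
{
    int attack_prob;    // percent chance of selecting ATTACK
    int retreat_prob;   // percent chance of selecting RETREAT
} PROB_TABLE;

void Update_Probabilities(PROB_TABLE *table,
                          int creature_health, int player_health)
{
    // start from the static 60/40 split and shift it by the health
    // difference, scaled down so the change is gradual
    int attack = 60 + (creature_health - player_health) / 2;

    // clamp so both states always have some chance of occurring
    if (attack > 90) attack = 90;
    if (attack < 10) attack = 10;

    table->attack_prob  = attack;
    table->retreat_prob = 100 - attack;
}
```

Calling this once per frame (or whenever health changes) keeps the distribution in step with the environment, while the clamp guarantees the creature never becomes completely predictable.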
The next topic of discussion, although a bit off the subject of pure intelligence modeling, is that of terrain following. If we're going to write games that have objects flying around in the air and driving on the ground, then some means of control must be thought up so the objects don't crash! Since we're primarily still discussing 2D implementations at this point, we will use terrain following as the illustrative example instead of the more general 3D case of flying through a collection of objects, but the fundamental concepts are the same. We must derive a technique to keep the game object on the game surface and away from collisions.
Although terrain following can be done using AI or synthetic intelligence, a much simpler implementation can be achieved by looking at the geometry of the problem in two dimensions. Take a look at Figure 7-16.
Here we see a mountain range and a ship. The idea of the game is to fly the ship as close as possible to the terrain without hitting it. Many solutions come to mind. We could have a database of the heights of each vertical strip of the terrain and then, depending on the position of the ship, we could index into a table and modify the vertical position of the ship as a function of time. Another solution would be to maintain a rough vector map of the terrain that has the main topology in it without all the detail, as shown in Figure 7-17. We could then compute the normal vectors or perpendicular vector to each segment and make sure that the ship flies in a way that is parallel to the normal vectors of each segment while staying above the segments themselves.
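The perpendicular itself is easy to compute: rotating a segment's direction vector (dx,dy) by 90 degrees gives (-dy,dx). A minimal sketch (the SEGMENT structure is hypothetical):

```c
// a 2D terrain segment with room for its normal vector
typedef struct
{
    float x1, y1;   // start point
    float x2, y2;   // end point
    float nx, ny;   // perpendicular (normal) vector, unnormalized
} SEGMENT;

// rotate the segment's direction vector 90 degrees to get a
// perpendicular; negate both components for the opposite side
void Compute_Normal(SEGMENT *seg)
{
    float dx = seg->x2 - seg->x1;
    float dy = seg->y2 - seg->y1;

    seg->nx = -dy;
    seg->ny =  dx;
}
```

The ship then stays "parallel to the normals" by aligning its up direction with the normal of the segment currently beneath it.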
The method of using segments and normals is very important in 3D systems and is usually what's used for terrain following. But as an example of simple tactics, we are going to implement a terrain follower using image space techniques. In other words, we're going to make the ship "look down" at the pixels under it to see if the mountaintop is too close; if so, the ship will gain altitude, otherwise it will descend. To accomplish this crude vision, we will simply scan the pixels under the midsection of the ship using the Read_Pixel() function. If the pixel read isn't black, there must be land there; hence, the ship should remain at its current altitude. However, if the scanning probe doesn't report any solid pixels, the altitude can be dropped. By performing this process each frame, the ship will track the terrain, but its orientation will never change. To accomplish that, we would need the segment-and-normal method.
Unfortunately, image space algorithms-that is, algorithms that function in the frame buffer itself-don't work very well in 3D. Most of the time 3D algorithms must work in object space, or in other words, the 3D mathematical space that the objects exist in. However, as a simple example of 2D image-space terrain following, the program below, named FLOATER.EXE, shows how it's done. The program begins by querying you for the roughness of the terrain and the tracking distance the ship will fly above the terrain. The roughness ranges from 1 to 10, where 10 is the roughest. The minimum height at which the ship can track is 1 pixel, but I suggest you don't go lower than 2. Anyway, the name of the source is FLOATER.C and it's shown in Listing 7-5.
The floater program has a couple of neat effects other than the terrain following itself. The program draws a fractal 2D terrain using a random variable: the variable selects whether the next pixel of the terrain should be above or below the previous pixel, and the number of these changes as a function of horizontal position is related to the "jaggedness" of the mountain. Also, note how the random number generator is seeded by the following line, which reads the BIOS timer tick count (a value kept in low memory that is incremented about 18.2 times per second) and uses it as the current time:
srand(*(int far *)0x0000046CL);
Since the time will always be different, it is a good number to use as a seed for the random number generator so that the random sequences will be different every time the program is run. Remember, the random number generator does not produce random numbers. It only produces a sequence of numbers that seem to be random and exhibit the proper distribution and noncorrelation that random numbers must have. If the random number generator isn't seeded with a different number at program startup, the sequence of random numbers will always be the same. Thus, by seeding the random number generator with the time (as one example), a new set of random numbers will be generated each time the program is run. This is a very important fact to know when writing games. If you use random numbers, seed the random number generator at the beginning of the game, otherwise, the computer will select the same moves every game!
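The far-pointer peek at 0040:006C is DOS-specific. On a standard C compiler, the portable way to get the same effect is to seed from the C library's time() function; a minimal sketch:

```c
#include <stdlib.h>
#include <time.h>

// call once at program startup, before the main event loop
void Seed_Randomizer(void)
{
    // the calendar time changes every second, so each run of the
    // game starts a different random sequence
    srand((unsigned int)time(NULL));
}
```

Conversely, seeding with a fixed constant reproduces the same sequence every run, which is handy for replaying or debugging a game.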
Anyway, after the terrain is drawn on the screen, the main event loop is entered and the ship sprite appears at the right edge of the screen. It will then start descending until it "sees" the mountain terrain under it via its vision probe, which is nothing more than this single line:
if (Read_Pixel_DB(speeder.x+4,speeder.y+12+hover_height))
   speeder.y-=2;
Depending on the results of the vision probe, the ship will be translated either up or down to follow the contour of the terrain. After this, the sprite is drawn and the cycle repeats. As an addition to the program, try creating two more animation frames depicting the ship climbing and falling, as in Figure 7-18.
Then rewrite the program so the ship logic scans forward and if an ascent is going to occur, the ship tilts upward, and if a descent is inevitable, then the ship tilts downward.
Once we have designed a rudimentary thinking model that will drive some object or creature in a game, another enhancement to the model can be realized using simulated sensory organs. The creatures in a video game can't really see because they are only projections. However, since we know the position and state of every object in a game, we can simulate all the senses if we wish. Vision, for example, is the most important of the senses, so let's concentrate on how we can implement simulated vision.
Take a look at Figure 7-19, where we see a top view of a battleground.
There are a few tanks and the player. The tanks are all trying to find the player. Obviously, we know the position of the player and the tanks and we can simply use one of the tracking algorithms to lock the tanks onto the player. But wait a minute, is this really fair? The answer is no. Paying close attention to the position of the tanks and that of the player, we see that if the tanks had eyes they wouldn't be able to see the player and therefore, they shouldn't be able to "cheat" by looking at his coordinates.
A more realistic solution to controlling the tank logic would be to implement a vision probe or scan from the viewpoint of a tank. The probe would scan out a certain distance and a certain angle, as shown in Figure 7-20.
During the scan, if any objects were detected, they would be marked, inserted into a list, and processed as "visible" by the tank logic. Using this technique, the tanks will act in a more realistic manner, since they will use vision as feedback into their logic controls instead of cheating. Admittedly, the player has a big advantage since he can see the tanks when they can't see him because of the 2D aspect of this example. However, if the game were 3D and the viewpoint of the player were first person, then the player couldn't see the tanks through a wall, so the tanks shouldn't be able to see the player through a wall either!
The question is, how do we implement this scanning process? Well, it depends on the situation. In a top-down 2D game, we can divide the game grid into squares, as shown in Figure 7-21.
Then for each tank, a small pyramid shape of squares would be scanned, and any objects in the squares would be processed. In essence, we are only scanning the field of view of the object. On the other hand, a 3D game would need a more complex scanning method. Maybe instead of using a 2D scan of squares, 3D cubes would be the scanning primitive, as shown in Figure 7-22.
Or rays might be cast out from the object's viewpoint. The technique is up to you, but the main idea is to simulate vision in some way (as inaccurate as it may be) and use the data for feedback into the thinking logic of the game objects.
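Going back to the 2D square-grid version, the scan for a tank facing right might be sketched as follows; the grid layout, the object-id encoding, and Scan_Field_Of_View() are all hypothetical:

```c
#define GRID_W 40
#define GRID_H 25

// scan a pyramid of squares in front of a tank facing right: the
// scan widens by one square on each side per column, out to 'range'
// columns. Grid cells hold 0 for empty or a nonzero object id.
// Returns the id of the nearest visible object, or 0 if the field
// of view is clear.
int Scan_Field_Of_View(int grid[GRID_H][GRID_W],
                       int tank_x, int tank_y, int range)
{
    int col, row;

    for (col = 1; col <= range; col++)
        for (row = tank_y - col; row <= tank_y + col; row++)
        {
            // stay inside the grid
            if (tank_x + col >= GRID_W || row < 0 || row >= GRID_H)
                continue;

            if (grid[row][tank_x + col] != 0)
                return grid[row][tank_x + col];
        }

    return 0;   // nothing visible
}
```

A full version would scan in the tank's actual facing direction and collect every visible object into a list rather than stopping at the first one found.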
Vision systems are very important when a game object is trying to navigate in an area that has a lot of solid objects and obstacles. In this case, having an idea of what's in front of the moving game object can be used to make course changes before it's too late.
We now have all the elements needed to create reasonably complex thinking models for our games. The question is, what do we use and when? Every game has creatures and objects that are of varying intelligence. I suggest that objects and creatures that aren't the main attraction in a game use simple methods to implement their intelligence such as tracking algorithms, random variables, and patterns. Game elements that the player will be interacting with that are supposed to be challenging should use state machines with probabilistic state selection and maybe even environmental feedback via global indicators.
The main objects in the game that really are supposed to have brains should add vision and memory systems. The memory systems should record simple indicators of the player's actions and reactions to try to learn strategies to beat the player. In general, building thinking models is an art form and it will take time for you to get the hang of it, but by using the techniques outlined here, you have more than enough information to create game opponents that can match or beat their human counterparts!
Although we have focused mainly on 2D representations in this chapter, all the concepts translate to 3D, in most cases simply by adding another variable-the Z coordinate. For example, the chasing algorithm we covered can be altered to function in 3D with the addition of a couple of lines:
The material on state machines is dimensionally invariant, so it needs few changes. In general, once we have covered all the aspects of 3D space, math, and transformations, all the topics discussed here will be easily convertible to their 3D counterparts, in most cases with a couple of lines of code.
Within this chapter we have discussed and detailed the main techniques used to create thinking models for video games. We have in fact learned to bring the images on the screen to life to do as we command. Remember, the goal of a Cybersorcerer is to create a real universe within the computer, not just to mimic one. Thus, the algorithms and techniques we have learned, such as tracking algorithms, state machines, probability distributions, and sensory simulation, are only the starting point for more complex research. Who knows, maybe a video game programmer will create the world's first living machine. Hmmm, while you're thinking about that, get ready for the next step in our training, and that's possession.
Copyright © Waite Group Press, 1994, 1995, 1996 (All Rights Reserved).
Listing 7-3 Demo of convergence
// CRITTERS.C - A demo of convergence or flocking
// I N C L U D E S ///////////////////////////////////////////////////////////
#include
The program begins as usual by creating a double buffer and loading the graphics. However, in this case there is only a background, since all the critters are drawn with green dots instead of sprites. Then the critters[] array is generated by randomly selecting a position on the screen for each critter. Once all the critters have been randomly positioned, their centroid is computed by averaging all their positions. At this point, the centroid is used as the center meeting point to compute all of their heading vectors. The heading vectors are all computed and then placed in the critter structure fields xv and yv, which are the velocity or translation factors of the critter.
Divergence
Moving in Patterns
ASCII Character Meaning
r Move right
l Move left
u Move up
d Move down
x Do nothing
. End of pattern
Listing 7-4 Demo of using patterns to control motion
// JUMPER.C - A demo of spider jumping around using patterns
// I N C L U D E S ///////////////////////////////////////////////////////////
#include
Synchronous Motion
State Machines
Name of State Numeric Representation
MORPHING S0
EATING S1
HUNTING S2
MOVING_RANDOMLY S3
CONVERGING S4
DYING S5
DEAD S6
Using the state table as a foundation, a state diagram can be generated, as seen in Figure 7-12.
// make sure the creature is alive
if (creature.state!=DEAD)
   {
   // process each possible state of the creature
   if (creature.state==MORPHING)
      {
      // perform next iteration of morphing logic
      } // end if in morphing state
   else
   if (creature.state==EATING)
      {
      // perform next iteration of eating logic
      } // end if in eating state
   else
   if (creature.state==HUNTING)
      {
      // perform next iteration of hunting logic
      } // end if in hunting state
   else
   if (creature.state==MOVING_RANDOMLY)
      {
      // perform next iteration of random logic
      } // end if in random state
   else
   if (creature.state==CONVERGING)
      {
      // perform next iteration of convergence sequence
      } // end if in convergence state

   // test if creature is hit
   if (creature.state!=DYING && creature.state!=DEAD && Creature_Hit())
      {
      // set state of creature to dying
      creature.state = DYING;

      // set up all other auxiliary variables
      //....

      } // end if hit

   if (creature.state==DYING)
      {
      // perform next iteration of death sequence
      if (death_sequence_complete)
         creature.state=DEAD;

      } // end if in dying state

   } // end if creature is not dead
Probabilistic State Selection
State Name Value
State Name Probability of Occurrence
Rule-Based State Selection
State Machine Probability Tables
Terrain Following
Listing 7-5 Demonstration of 2D image-space terrain following
// FLOATER.C - A demo of 2D terrain following
// I N C L U D E S ///////////////////////////////////////////////////////////
#include
Implementing Vision
Building the Perfect Predator
Extending to the Third Dimension
// test X axis position
if (player_x > creature_x)
   creature_x+=dx;
else
if (player_x < creature_x)
   creature_x-=dx;

// test Y axis position
if (player_y > creature_y)
   creature_y+=dy;
else
if (player_y < creature_y)
   creature_y-=dy;

// added code!

// test Z axis position
if (player_z > creature_z)
   creature_z+=dz;
else
if (player_z < creature_z)
   creature_z-=dz;
Summary