Binary Space Partitioning Tutorial Part II
for Mr-GameMaker.com


Creating a Solid Leaf Tree, Portal Generation, Precalculating a PVS and Adding Frustum Rejection

written by Gary Simmons
Code by Gary Simmons & Adam Hoult


Special Thanks to: Adam Hoult, Matthias Liertzer & Klaus Hartmanm

API : DirectX 8


Download Bsp_PVS.zip


VERY IMPORTANT

This tutorial is Part II of the Mr-Gamemaker BSP tutorial series and assumes that you have read and fully understood Part 1. If you have not read Part 1 then I STRONGLY suggest you go there now and read that tutorial first, because much of the code has been reused (with small alterations) and that code will NOT be explained again in detail here. This tutorial assumes that you have read, used and fully understood the code from Part 1 and will often make reference to it. If you have not read the first tutorial, or do not at least have a VERY strong grasp of Solid/Empty BSP trees, then this tutorial will make little sense to you, as it builds on from "Part 1: Creating a Solid Node BSP Tree".

Introduction

Wow, this tutorial has taken me forever to write. That is not because I am lazy, but because it took two months just to get the code working. In fact this tutorial (as you can probably tell from the title) is really four tutorials in one. The reason I have covered all these topics in one tutorial is that they are all related to each other and are all needed to create a fast BSP engine. We will start by looking at the problems we are trying to solve and exactly how the different sub-topics of this tutorial relate. For example, we need to know how the PVS data will be stored in order to render the BSP tree, but we cannot do that until we know what a PVS actually is. So first of all we will look at the different tasks we need to solve (building a leaf tree, generating portals for the tree and calculating the PVS for the tree) in theory, and then we will go back over each one showing exactly how to implement it in code.

What is a LEAF BSP Tree?

You cannot have been into 3D graphics very long if you have never heard people talking about leaf BSP trees. They were used in games such as Quake and Unreal, and yet virtually every BSP resource on the internet (including my Part 1 BSP tutorial) only covers NODE BSP trees in detail. If you understand how to create a node compiler (I hope you do) then you will have no difficulty understanding leaf trees. So what's the difference?

You may remember that when we created our node based compiler in Part 1, we fed a large list of polygons into the compiler function and the compiler subdivided the level until every single polygon had been assigned to a node. This meant that every polygon had also been used as a splitting plane. A leaf tree compiler takes advantage of the fact that certain batches of polygons within a level will not obscure one another in any way (assuming back face culling is being used), so there is no need to subdivide further. The way it works is like this:-

1). First we choose a splitter from the polygon list as we did before, BUT instead of storing the polygon itself at the node we only store the polygon's plane. The polygon is marked as having been used as a splitter but, unlike the node based compiler, we do NOT remove the polygon from the list. We send it down its own front list, and this polygon may still be split by other polygons' planes further down the tree. If a polygon that has already been used as a plane is split into two further down the tree, then the polygon is removed as before and two new polygons are created, but it is very important to realise that these splits inherit from the parent whether or not it has been used as a splitter. In other words, if "Polygon A" is used as a splitter we set a variable in that polygon specifying that it has been used as a splitter and should not be selected again; the polygon is still processed down the tree, however, and can still be split by planes further down the tree. Now if "Polygon A" then gets split into Polygon B and Polygon C, Polygon A is deleted, and Polygon B and Polygon C inherit the fact that Polygon A has already been used as a splitting plane. This means that although Polygons B and C are further processed down the tree, they can never be selected as a splitter.

2). At each stage through the compiler loop we check the front list and the back list, and if the polygons in either list are ALL in front of one another then that list forms a convex hull and no longer has to be subdivided, because if they are all in front of one another then none of them can be obscuring each other. That list (front or back) then becomes a leaf. In other words, all the polygons in the list, whether they have all been used as splitters or not, are written to a leaf structure and attached to the current node's front or back pointer. When we traverse the tree we know that when we hit a leaf we can just render the polygons in that leaf in any order.
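
To make that "all in front of one another" test a little more concrete, here is a minimal sketch of how such a convexity check might look, using the POLYGON structure from the demo code (shown later in this tutorial). The function name is my own and it is illustrative only; the actual compiler we write later never needs an explicit test like this.

// A minimal sketch of the convexity test described above (hypothetical helper,
// not taken from the demo code). A list of polygons is convex if every polygon
// lies fully on or in front of every other polygon's plane.
bool IsConvexSet(POLYGON *List)
{
    for (POLYGON *PlanePoly = List; PlanePoly != NULL; PlanePoly = PlanePoly->Next)
    {
        for (POLYGON *TestPoly = List; TestPoly != NULL; TestPoly = TestPoly->Next)
        {
            if (TestPoly == PlanePoly) continue;

            // Classify every vertex of TestPoly against PlanePoly's plane.
            for (int v = 0; v < TestPoly->NumberOfVertices; v++)
            {
                D3DXVECTOR3 *Point   = (D3DXVECTOR3 *)&TestPoly->VertexList[v];
                D3DXVECTOR3 ToPoint  = *Point - *(D3DXVECTOR3 *)&PlanePoly->VertexList[0];
                float       Side     = D3DXVec3Dot(&PlanePoly->Normal, &ToPoint);

                // A vertex clearly behind any plane means the set is not convex.
                if (Side < -0.001f) return false;
            }
        }
    }
    return true;
}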

If your head is already starting to hurt, let's have a look at the two diagrams below. Figure "A" shows an "L" shaped room compiled into a BSP level using a node based compiler (the compiler we created in Part 1), while Figure "B" shows the same geometry being compiled using a leaf compiler. Look how much smaller the tree compiled using the leaf based compiler is. In fact it only has one node and two leaves. It is important to realize also that in the node compiler the polygons themselves were stored at each node, whereas in the leaf compiler only the PLANE of the selected polygon is stored in the node and the polygon is then passed down its own front list.



Can you see how the leaf compiler works? It chooses Polygon B as a splitting plane and then classifies all polygons as either being in front of or behind this plane. Polygon E straddles this plane so it is split into E1 and E2. The actual Polygon B is sent down its own front list. First we check the polygons in Node B's front list (A, B, E1, F) and discover that the polygons in this list are ALL facing each other. This means it is a convex hull and can be rendered in any order as long as back face culling is enabled. We simply write all these polygons to the leaf structure (which is just a structure that holds a list of polygons for that leaf) and attach it to Node B's front. We then check Node B's back list and find the same thing. Polygons C, D and E2 are all in front of one another, making another convex hull, so the polygons are all written to another leaf structure and attached to Node B's back.

You can hopefully see how much quicker traversing and rendering the leaf tree would be. In the above example, to render the leaf tree we just say:-

Are we in front of Node B?

If yes, render the back leaf polygons and then render the front leaf polygons,
else
render the front leaf polygons and then render the back leaf polygons.
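
To make that back-to-front logic a little more concrete, here is a minimal sketch of how a recursive render of such a plain leaf tree might look. The LEAFNODE structure and the drawing step are hypothetical and exist only for this sketch; as you will see shortly, the tree we actually build in this tutorial is array based and is rendered through the PVS rather than by recursive traversal.

// Hypothetical pointer-based structure for this sketch only.
struct LEAFNODE
{
    bool        IsLeaf;
    D3DXVECTOR3 Normal;         // splitting plane normal (ignored when IsLeaf)
    D3DXVECTOR3 PointOnPlane;   // a point on the splitting plane
    POLYGON    *Polygons;       // the leaf's polygon list (used only when IsLeaf)
    LEAFNODE   *Front;
    LEAFNODE   *Back;
};

void RenderLeafTree(LEAFNODE *Node, const D3DXVECTOR3 &CameraPos)
{
    if (Node == NULL) return;

    if (Node->IsLeaf)
    {
        // Render Node->Polygons here in any order (back face culling enabled).
        return;
    }

    // Which side of this node's plane is the camera on?
    D3DXVECTOR3 ToCamera = CameraPos - Node->PointOnPlane;
    float Side = D3DXVec3Dot(&Node->Normal, &ToCamera);

    if (Side >= 0)
    {
        // Camera is in front of the plane: draw the far (back) side first.
        RenderLeafTree(Node->Back,  CameraPos);
        RenderLeafTree(Node->Front, CameraPos);
    }
    else
    {
        RenderLeafTree(Node->Front, CameraPos);
        RenderLeafTree(Node->Back,  CameraPos);
    }
}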

Now I am not going to go into too much detail about the normal leaf tree shown above because we are not going to use one. WHAT? Yep, you heard right. Looking at the above example there is one big disadvantage with the normal leaf tree: it has no solid/empty information. You can see that by traversing the node tree in the above example we either end up in SOLID space, marked with a "-" (in a wall or something), or in empty space, marked with an "E", but the leaf tree has no such information. This is a huge bummer for me because I loved the way the solid node compiler could be used for simple and fast line-of-sight determinations and collision detection without having to test a single polygon.

Now, I find the solid node tree a lot more useful, so surely I should just use that, but there is a big problem. We MUST use a leaf tree in order to precalculate a PVS (Potential Visibility Set, more on this in a moment). As calculating a PVS to make our engine fast is the whole point of this tutorial, it seems like losing the solid/empty information is a sacrifice we must make. Not so!!! Enter the "Solid Leaf BSP Tree".

What is a "SOLID LEAF BSP Tree"?


The Solid Leaf Tree is really just a mix of the node tree and the leaf tree. We will look in detail at how to create one later, but for now just bear in mind that this is simply an overview.
The ONLY way to accurately divide the world into solid and empty space is to do what our node compiler did in the previous tutorial and use EVERY polygon in the level as a split plane. However, we also want to store only the polygons' planes in the nodes, like the leaf tree, and end up with all the polygons batched together in convex leaves. This is actually fairly easy to do, so let's first have a look at the steps we must take and then once again look at what the "L" shaped room used in the above examples would look like if compiled by a Solid Leaf BSP compiler.

The Compiler Works Like this:-

1.) At each recursive call of the compiler function, the function is passed a linked list of polygons and a newly created empty node structure. The compiler's task is to choose a polygon from the list to use as the splitter. A polygon is selected from the list, and a plane is created from the polygon's info and stored in the newly created node. This plane is now the splitter for the current node. We also mark the polygon that was selected (and from which the plane was created) as having already been used as a splitter, so that it cannot get chosen again.

2.) Our next step is to loop through every polygon in the list (including the one that was used as the splitter) and classify the polygon against the splitting plane. Just as before, if the polygon is behind the plane it gets added to the back list, and if it is in front of the plane it gets added to the front list. If the polygon is ON (sharing) the plane, then the normal of the polygon is compared to the normal of the split plane. If the polygon's normal is facing the same way as the split plane, the polygon is added to the front list. If the polygon's normal is facing in the opposite direction to the split plane, the polygon is added to the back list. There should be nothing really new here for you; this is very similar to the node tree, except that in the node compiler we always sent ON PLANE polygons down the front list regardless of which way the polygon's normal was facing. This time, as I say above, we test the polygon's normal and assign it to a list depending on the way it is facing (a small sketch of this test is shown after these numbered steps).

3.) At this point we now have our front and back lists successfully built for this call to the compiler function, but now comes a major difference. We check each polygon in the front list, and if ALL the polygons have already been used as a split plane then this is a convex leaf, and the polygons can be written to a leaf structure and attached to the current node's front. If there are still polygons in the front list that have not yet been used as splitters, then they must be used in order to accurately subdivide space into solid and empty areas. So if there is at least one polygon in the front list which has not been used as a splitter, we create a new node, attach it to this node's front, and the function recursively calls itself with this new node and the front list of polygons.

4.) Just like our node compiler in Part 1, this compiler also works by never having back leaves. In other words, a leaf will always be attached to a node's front. We will see this in action in a moment. We now check the back list; if there are NO polygons left in the back list then we simply store a -1 in the node's back pointer. This means that if we ever try to go down the back of this node, the -1 tells us we cannot because it is solid space. If however there are any polygons left in the back list, we create a new node and the function recursively calls itself, passing in this new node as the current node and passing in the back list as the list of polygons.
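
To make the on-plane decision from step 2 concrete, here is a small sketch of how it might look in code. The function name and the list-building style are my own for illustration; the PLANE structure (a point and a normal) matches the one defined later in this tutorial, and the real compiler may handle its lists differently.

// A small sketch of the on-plane test from step 2 (illustrative only). If the
// polygon's normal faces the same way as the split plane's normal, the dot
// product is positive (or zero) and the polygon joins the front list;
// otherwise it joins the back list.
void AssignOnPlanePolygon(POLYGON *Poly, PLANE *SplitPlane,
                          POLYGON **FrontList, POLYGON **BackList)
{
    float Result = D3DXVec3Dot(&Poly->Normal, &SplitPlane->Normal);

    if (Result >= 0)
    {
        Poly->Next = *FrontList;   // facing the same way: front list
        *FrontList = Poly;
    }
    else
    {
        Poly->Next = *BackList;    // facing the opposite way: back list
        *BackList  = Poly;
    }
}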

That's it, that's how it's done. It is virtually identical to the solid node tree, except that we have managed to pass the polygons down the tree after they have been used as splitter planes, so that they all end up in convex batches. The test that decides when a front list has become a leaf (step 3) is sketched below.
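
Here is a tiny sketch of that "has every polygon been used as a splitter?" test, using the POLYGON structure shown later in this tutorial. The function name is mine; the real compiler performs the same check when deciding whether a front list has become a leaf.

// A minimal sketch of the leaf test from step 3: a front list becomes a leaf
// once every polygon in it has already been used as a splitting plane.
bool AllUsedAsSplitters(POLYGON *List)
{
    for (POLYGON *Poly = List; Poly != NULL; Poly = Poly->Next)
    {
        if (!Poly->BeenUsedAsSplitter)
            return false;   // at least one polygon can still act as a splitter
    }
    return true;            // every polygon used: this list bounds a convex leaf
}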

Let's now have a look at this working with the "L" shaped room used in earlier examples:-

Can you see how it works? It looks virtually identical to the solid node example in Figure "A", except that because we do not remove the polygon that has been used as a splitting plane, and instead pass it down the tree, you can see that we are pumping the polygons further down the tree until they end up in leaves. It is also nice that you do not have to do any explicit test to see whether the remaining polygons are convex or not, because using our method we will always end up with a convex leaf when all the polygons in a FRONT list have been used as a splitting plane.

In order for you to better understand, I will now show another diagram of some slightly more complex geometry that causes extra splits, so that you can see how polygons already used as split planes can still get split.

OK then, here you can see that we choose Polygon F's plane as the splitter first, which splits Polygon D into D1 and D2. F's front list contains polygons F, E and D1. Let's go down the front list first. The function creates a new node and calls itself with this new list of polygons. Next we choose Polygon E. E's front list becomes F, E and D1, and because there are NO polygons behind E's plane, Node E's back pointer gets set to -1, indicating that if we travel down the back of E we will be in solid space. The function then calls itself again with a newly created node attached to Node E's front pointer, and once again a polygon is selected as a split plane. Note at this point that both F and E have been marked as having already been used as split planes, so the only polygon in the list left to be used as a split plane is D1. D1 is then chosen. There are no polygons behind D1, so Node D1's back pointer is set to -1. Node D1's front list at this time is still F, E and D1. However, all the polygons in this list have now been used as splitters, which means that the polygons in this list form a convex hull that bounds empty space. This means Polygons E, F and D1 are written into a leaf and attached to Node D1's front. Study the tree in the above diagram until you understand completely what I am saying.
Studying the back list of F is perhaps more revealing as to how the whole process works. In F's initial back list are polygons A, B, C and D2. Polygon C is chosen as the next splitting plane. There are no polygons behind C, so this is marked as being solid space. C's front list is A, B, C and D2. Remember that C goes down its own front list but is marked as having been used already, so it can never be used as a splitter again. The next splitter chosen from C's front list is Polygon A. Even though C has already been used as a split plane, Polygon C is still split by Polygon A to create C1 and C2, BUT polygons C1 and C2 inherit from the parent Polygon C the fact that it has been used as a splitter, so C1 and C2 can never be selected as splitters. Here is Figure D again so you can keep it on screen as you read.


Polygon A's front list now consists of polygons A, C2 and D2, and the back list contains B and C1. Next, in A's front list we choose D2 because it is the ONLY one left in the list which has not been chosen as a split plane yet (remembering that the parent Polygon D was never used as a split plane, so D1 and D2 inherited its status). D2 has no polygons in its back list, so this is marked as being solid space, and D2 has polygons D2, C2 and A in its front list. However, these have all been used as splitters, which means we are now at a leaf and these polygons must form a convex hull. The polygons in the list are written to the leaf structure and attached to D2's front. You can see in the diagram that this leaf represents the space bounded by polygons A, C2 and D2, so if we ever traverse the tree and end up here we are somewhere in the empty space between these polygons.
That only leaves Node A's back list, which we still have not dealt with. This consists of polygons B and C1. We select a new splitter, which must be Polygon B because C1 has already been used as a split plane (when it was the parent Polygon C), and B has no polygons in its back list, so Node B's back pointer is set to -1 to represent solid space. Node B's front list now consists of polygons B and C1, but both have been used as splitters already, which means this is another leaf. These two polygons are written to a leaf structure and attached to Node B's front.

PHEW, you still with me? For those of you that are still awake, you may be thinking "Isn't this tree going to be just as large as the node tree?" and the answer is yes. We have to use every polygon as a splitter in order to divide the space up into solid and empty areas, but this is not such a big deal, and here is why. Usually, even when using a normal leaf tree, many programmers will compile a secondary BSP tree for use with collision detection; we do not have to, because we have all our information in our main tree. The other big question you must be asking is: won't this solid leaf tree be much slower to render than a normal leaf tree, because we have to traverse the whole tree to render it? The answer will be quite surprising. Get this: "We are no longer going to traverse the tree to render it" and "we will no longer be using the BSP tree for back to front ordering". You are probably very confused, so let's talk about this some more.

Rendering the Tree with a PVS and Z-Buffer

It is time we talked about exactly what a PVS is. It stands for "Potential Visibility Set" and is an ARRAY stored for EACH leaf which is used to hold the visibility information of all other leaves from that leaf. We will look at how the PVS data is stored shortly, and will also look later at how to generate PVS data for a BSP level, but for now just understand what it is. I'm sorry to have to get into this now instead of continuing with our solid leaf tree implementation, but the two are quite tightly linked. We do not yet need to know how to calculate the PVS in order to write our solid leaf tree, but we DO need to know how the PVS will be stored in memory, because our solid leaf tree will use this data to render the tree.

For simplicity, at the moment just imagine that each leaf in the tree has an array of bytes equal to the number of leaves in the tree. Imagine for now that we are examining the PVS array for one of those leaves, which we shall call the current leaf. Each byte in this array is set to either 1 or 0 for every other leaf in the tree and determines whether that leaf is visible from the current leaf. Byte 0 represents leaf 0, byte 1 represents leaf 1, etc. For example, imagine we are in leaf 10. We loop through leaf 10's PVS data and first of all check byte 0; if this is set to 1 then leaf 0 is visible and should be drawn, otherwise it should be skipped. I hope you see where this is starting to lead. In our previous tutorial we used the BSP tree for back to front ordering, and however large the game world was, the ENTIRE world was drawn each frame, leading to loads of overdraw and a poor frame rate. By precalculating a PVS for each leaf ahead of time, not only do we not have to render the unseen polygons, but we don't even have to process them in ANY way. What I mean is that we do not even have to do costly checks to see if polygons are within the view frustum or not, because if a leaf is not in the current leaf's PVS then that leaf is simply skipped over and the polygons in that leaf are never processed.

So the general rendering algorithm works like this. We traverse the BSP tree each frame just to find out which leaf the camera is currently in, which will be very quick because we will only be traversing a fraction of the tree. Then we loop through that leaf's PVS data. Any byte in the array that is set to one means we render the leaf (the polygons in the leaf) with the same number. In other words, if byte 23 is set to one, we render the polygons in leaf 23. Do not worry for the moment about how we access leaf 23 or how it is stored in memory; let me just say that all our leaves will be in an array, so accessing leaf 23 will be as simple as Leaf[23], but we will talk about this in a lot more detail later when we actually start to build the tree.
I hope you are starting to see how good this is looking for us. Instead of having to step through the entire level, we just go to the leaf the camera is in and simply do a for/next loop through that leaf's PVS array. The leaves will be rendered in whatever order they are encountered, which means we lose our depth sorting information, so we will simply enable the Z-Buffer. We will most likely need the Z-Buffer enabled for the other dynamic and moving objects in our scene anyway. So then our typical rendering algorithm will look like this. This is not the actual code, but is just to give an idea of how it works:-

void DrawTree (long CameraLeaf)
{
    // Each leaf's PVS occupies NumberOfLeafs bytes in the master array.
    long offset = CameraLeaf * NumberOfLeafs;

    for (int i = 0; i < NumberOfLeafs; i++)
    {
        if (PVSData[offset + i] == 1)
        {
            RenderLeaf(i);
        }
    }
}



In the above example, all the PVS data for all the leaves is stored in one master PVSData array. This means we simply pass in the number of the leaf the camera is currently in and find the offset into the PVSData array by multiplying NumberOfLeafs by the leaf we are currently in, because each leaf in the PVSData array has NumberOfLeafs bytes allocated for it.

An important thing to remember is that the PVSData array is view independent. When we calculate the PVS, it will contain all the leaves that are visible from ANYWHERE inside the current leaf. If you look below at Figure E, you can see that leaves 2, 3, 5 and 6 would all be potentially visible from leaf 1. This does not mean, however, that during our game the camera can see into all those leaves just because it is in that leaf. You can see below that the camera might in fact not be able to see into any of those leaves, for example if it was stood at the south wall of the leaf facing south, with its back to all the doorways. I hope you realize what I am saying here, as it is quite important. In other words, the PVS determines which leaves are visible from EVERY possible point inside the current leaf, and this is not the same as what the camera can currently see, because the camera could be anywhere in that leaf.

The PVS reduces the number of polygons in the level down to an amount that can be managed, that is, the number of polygons that could possibly be seen from the current leaf. We can then perform frustum rejection on the PVS set during rendering to determine exactly what the camera cannot see. This means that even if our game level contained 10,000 leaves (that's big!) we would still only have to process and render a handful of leaves. So the frame rate of our level will no longer be tied to the size of our level, as it was in Part 1 of this tutorial. Even if, like I said, there were 10,000 leaves in your level, in some frames (depending on your geometry) you might only render 3 or 4 leaves. This means your polygon count has just fallen by orders of magnitude. Now that's really cool.

If you do not understand much of what I just said, then I suggest you download the demo that accompanies this tutorial. It allows you to see the PVS working on a top down view of a game level. You can see that the PVS ignores ALL polygons except those in the immediate area that could possibly be seen from the current leaf. If you then enable frustum rejection by pressing F2, you can see how this further refines the PVS set down to only the leaves that are in the view frustum.

There is a bit of a problem though: the PVSData array would be HUGE if we used the method described above. If our game world had 5000 leaves, the total memory needed for just the PVS data alone would be:-

5000 x 5000 Bytes = 25,000,000 Bytes = 24 MB (approx)


WOW, that's way too big, and that's before we even talk about allocating memory for textures and polygons, or even storing the BSP tree itself. So what can we do? Well, first off, because a leaf is either visible or not, we do not need a whole byte per leaf; that is just a waste of bits. Instead we will store a BIT per leaf in the PVS set, which knocks the PVS array down to roughly 3 MB, because each byte now stores the visibility information for 8 leaves instead of one. But it is still a bit large if you ask me.

Now, there is a way that we can not only save bucket loads of space but also speed up the rendering loop a little more as well. In the above code example we had to loop through the leaf's PVS array. This means that if there were 5000 leaves, although we would no longer have to process, test or render them all, we would still have to do 5000 checks of the PVS data array. OK, that is not a huge deal considering how much time we have already saved ourselves, but what's great is that we can compress the PVS data using "Zero Run Length Encoding", and this will actually speed up the rendering loop by quite a large amount as well as free up loads of memory.

Zero Run Length What ?


OK, just try to imagine a game world in your head, as if you were looking down on a large installation of some kind with the roof taken off. Imagine this complex had thousands of rooms, and imagine each room is a leaf. It should be obvious that each leaf will only ever be able to see a handful of other leaves at most. In other words, if there were 1000 rooms in your installation and we were in room 0, we could probably at most see into rooms 1 and 2, for example. OK, this is a bit of a simple example, as doors may be open which would allow you to see into other rooms adjoining 1 and 2, but the point is: for each leaf in the tree, MOST leaves will not be visible from it. This means the PVS data for each leaf will be mostly set to zero, which is a huge waste.

Now imagine that our level has 1000 rooms, we are in leaf 0, and leaf 0 can only see into rooms 1, 2, 3, 20 and 35. The PVS data for this leaf would have to be large enough to hold a bit per leaf, so (1000+7)>>3 = 125 bytes. (We add 7 and then divide by 8 to deal with the truncation that happens with integer division. For example, if we had 9 leaves we know we need two bytes to hold this information: byte 1 would hold the first 8 bits and byte 2 would hold bit 9. It is a common mistake to assume that to calculate the space needed we can simply do 9/8 = 1. This would only give us 1 byte, which is incorrect. However, if we add 7 and divide by 8 we get (9+7)/8 = 16/8 = 2. This will always make sure that we round UP to the nearest whole number and not down.) Anyway, back to what I was saying: what a waste to use 125 bytes for this leaf's PVS set when it can only see into 5 rooms. Remember also that we are now using a BIT per leaf in each leaf's PVS array, so the visibility information for leaves 1, 2 and 3 would all be in byte 0 of the PVS array. The PVS data for leaf 0 without Zero Run Length Encoding would look like Diagram F in memory.



As long as you know your binary (and I hope you do) you should have no difficulty understanding the above diagram. In this example the leaf can see leaves 0 (itself), 1, 2 and 3, and this information is all contained in the first byte (Byte[0]). So we set bits 1, 2, 3 and 4 to 1, which means that byte now holds a value of 15. We cannot see any of the leaves ranging from 8 to 15, so all the bits in Byte[1] are left at zero, giving us a zero value. The next byte in the array (Byte[2]) represents leaves 16-23, and we can see one of these leaves, leaf 20. Leaf 20's visibility bit is bit 5 in Byte[2] (16 = bit 1, 17 = bit 2, 18 = bit 3, ... etc.), so we set the 5th bit in Byte[2] to give it a value of 16. The next byte in the array represents leaves 24-31, and none of these leaves are visible from the current leaf, so we leave this byte set to zero. Byte[4], however, represents leaves 32-39, and we can see leaf 35, so the appropriate bit in Byte[4] is set (bit 4), giving it a value of 8. From this point on, all the other bytes in the PVS data array would be zero because no other leaves are visible. On a very large level this would be a tremendous waste of memory, and it would also be a waste during the rendering of our level to check all the bytes when we know beforehand that the rest of the array is set to zero.
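
To tie the bit arithmetic together, here is a tiny sketch of how a leaf's bit might be set and tested in a raw (uncompressed) PVS byte array. These helpers are hypothetical and not functions from the demo code. Note that leaf 20 lands in byte 20>>3 = 2 and bit position 20&7 = 4 (the fifth bit), giving that byte the value 16, exactly as in the diagram above.

// Hypothetical helpers for an uncompressed PVS byte array (one bit per leaf).
void SetLeafVisible(BYTE *PVS, long Leaf)
{
    PVS[Leaf >> 3] |= (BYTE)(1 << (Leaf & 7));    // byte = leaf/8, bit = leaf%8
}

bool IsLeafVisible(const BYTE *PVS, long Leaf)
{
    return (PVS[Leaf >> 3] & (1 << (Leaf & 7))) != 0;
}

// Bytes needed for one leaf's uncompressed set: (NumberOfLeafs + 7) >> 3,
// e.g. (1000 + 7) >> 3 = 125 bytes.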

Look below and you will see the same example being represented using Zero Run Length Encoding (ZRLE), which compresses this very simple data set from 125 bytes right down to 9 BYTES. Now, this is a very simple level, but just imagine how much memory could be saved on large levels, and also imagine how much quicker it would be to loop through 9 bytes at every leaf instead of 125. Magnify that with a large data set and you can see the real advantage. Take a look and see if you can work out what is happening.

15 , 0 , 1 , 16 , 0 , 1 , 8 , 0 , 120


When a byte does not equal zero we simply write it to the array as normal. You can see that byte zero, which was equal to 15, does not get compressed in any way and is instead just written to the data array. However, when a zero is encountered, instead of just writing it to the array we count how many zeros are in the run. In our example there is only a single zero at byte 1, so we write the zero byte followed by an additional byte which describes how many zeros are in the run. You can see the saving more clearly at the end, where a run of 120 zero bytes is represented by just two bytes (0 and 120). On very large data sets you would get many runs throughout the data, and a single run of zeros can be at most 255 bytes long because that is the largest count a single byte can hold.
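
To make the scheme concrete, here is a minimal sketch of what the compression side might look like. This is a hypothetical helper written just for illustration; the compressor used when we actually generate the PVS later may differ in detail.

// A minimal sketch of Zero Run Length Encoding as described above. Non-zero
// bytes are copied straight through, and each run of zeros is written as a 0
// followed by the run length (capped at 255 because the count is one byte).
long CompressZRLE(const BYTE *Source, long SourceSize, BYTE *Dest)
{
    long ReadPos = 0, WritePos = 0;

    while (ReadPos < SourceSize)
    {
        if (Source[ReadPos] != 0)
        {
            // Non-zero byte: write it out unchanged.
            Dest[WritePos++] = Source[ReadPos++];
        }
        else
        {
            // Zero byte: count how long the run of zeros is (255 max per pair).
            BYTE RunLength = 0;
            while (ReadPos < SourceSize && Source[ReadPos] == 0 && RunLength < 255)
            {
                RunLength++;
                ReadPos++;
            }
            Dest[WritePos++] = 0;
            Dest[WritePos++] = RunLength;
        }
    }
    return WritePos;   // number of bytes written to the compressed buffer
}

Running this over the 125-byte example above would produce exactly the 9 bytes shown: 15, 0, 1, 16, 0, 1, 8, 0, 120.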

Now, with our PVS data represented like this, we can speed up rendering using something like the code below. DO NOT panic; you are not supposed to understand the following code yet, as we have not even looked at the code to build the solid leaf tree. The first branch shows what happens in the render loop when the current PVS byte is NON zero, and the else branch shows how runs of zero bytes can be quickly skipped past during the rendering process. Do not worry, all this will make sense later on:-

void DrawTree(long leaf)
{
    int i;
    long PVSOFFSET = LeafArray[leaf].PVSIndex;
    BYTE *PVSPointer = PVSData;
    PVSPointer += PVSOFFSET;
    long currentleaf = 0;

    while (currentleaf < NumberOfLeafs)
    {
        if (*PVSPointer != 0)
        {
            // Non-zero byte: test each of its 8 bits in turn.
            for (i = 0; i < 8; i++)
            {
                BYTE mask = 1 << i;
                BYTE pvs  = *PVSPointer;
                if (pvs & mask)
                {
                    RenderLeaf(currentleaf);   // render the polygons in this leaf
                } // end if bit set
                currentleaf++;
            } // end for i
            PVSPointer++;
        }
        else
        {
            // We have hit a zero, so read in the next byte to see how long the run of zeros is.
            PVSPointer++;
            BYTE RunLength = *PVSPointer;
            PVSPointer++;
            currentleaf += RunLength * 8;
        }
    } // end while
}

In the above code, just know for now that each leaf has a PVSIndex variable that describes how far into the master PVSData array its own PVS set begins. Remember that each leaf has a PVS set all of its own, but all the leaves store their PVS sets together in one master array. It is this master array which is commonly known as the PVS, and it describes the visibility of all leaves from any other leaf.

Also in the above code, if a non-zero byte is encountered (the first branch), we loop through all 8 bits in that byte, rendering the polygons in any leaf which has its bit set to one. Remember that each bit in that byte represents a leaf. However, if a zero is encountered in the PVSData array, we simply adjust the currentleaf variable to account for how many leaves would have been represented by the bits in the zero run. This speeds us past all the zeros and therefore helps speed up the main rendering loop, which exits once currentleaf reaches NumberOfLeafs.

Anyway, enough of all this for now. It was important for you to realize how the PVS data will be stored, because we are now going to go back and start to create the solid leaf tree, and we have to consider this during tree creation.

We will look at how to create the PVS data later on, after we have created the solid leaf tree, but for now let's not worry about it and just get stuck into the task at hand: the creation of a Solid Leaf BSP Tree compiler.

Creating a Solid Leaf BSP Compiler


For those of you that have either implemented a BSP compiler and renderer based on my code in Part 1 of the tutorial, or have created BSP compilers based on linked lists yourself, you are going to have to make a leap of faith from this point on. In the previous tutorial, the tree we created consisted of nodes and polygons all linked together in linked list form, and we simply traversed the tree by jumping through the front and back pointers. Although this worked fairly well, there were a few disadvantages.

First of all, with our PVS we will need to be able to get to a leaf directly in order to render the polygons in that leaf, and the linked list system was not too good for that. Unless you stored pointers for every leaf in every other leaf (that's a no-no), the only other way would be to store an ID in each leaf and traverse the tree until you found it. That is pretty silly, because if we know that leaf 10 is visible we want to access leaf 10 straight away and render it.

Also, using a linked list system makes the BSP tree and all its related information a little tricky to save out to disk. I have had several emails from people who wanted to know how to save the BSP tree from our previous tutorial, and it isn't that simple. The real pain is that each node was linked to every other node using pointers to memory blocks, but when the file was saved to disk these pointers were no longer useful because they pointed to absolute memory addresses. When you loaded the tree back in, the nodes would never be stored in exactly the same locations, so you had to implement some sort of scheme where the tree could be reconstructed (not recompiled, just rebuilt) at load time. Although this was not really that hard to do, it was a little messy, especially with all the new information we are going to need in this tutorial.

So now we are going to use arrays to store the nodes, planes, leaves and polygons in linear memory blocks, so that every node in the tree, for example, can be saved out to disk with just one call to a write command. The same will be true for the leaves, which will be stored in a leaf array, and likewise for the polygons and planes. Every leaf in the tree can be written to disk with just one call. There is another big advantage to this as well: if we are rendering and we find that leaf 10 is visible, we can access the leaf by doing something as simple and direct as Leaf[10] instead of having to traverse the tree.

In other words, our BSP tree will actually just be a set of arrays: a node array which will hold all the nodes in the BSP tree, a leaf array which will hold all the leaves in the tree, and so on. This means that instead of a node pointing to another node in memory (as we did in Part 1), a node will now link to another node using an array index instead. For example, if Node 1 in the array linked to Node 5 with its front pointer (pointer being used loosely here), then instead of having a pointer to Node 5's memory (as we did before), Node 1 would simply have a value of 5 in its Front index value. What I am saying here is that nodes are no longer linked by pointers but now use indices into arrays instead. We will see some diagrams of this in a moment.

Memory Allocation

Let's start off by looking at what the NODE structure for our compiler looks like. It is shown below:-

struct NODE
{
unsigned char IsLeaf;
unsigned long Plane;
unsigned long Front;
signed long Back;
BOUNDINGBOX BoundingBox;
};

Looking at the above structure, you can see that there are no front and back pointers; instead there are Front and Back "long" variables. Rather than having pointers to the memory addresses that hold the front and back nodes, we simply store a long value that is the index of another node in the node array. If Back is set to -1 then it does not point to another node in the array, but instead indicates that there is solid space behind this node (remembering that we never have back leaves). If IsLeaf = 0 then the Front value is the index of another node in the node array; however, if IsLeaf = 1 then this means that in front of this node is a leaf. In this case Front does not hold the index of a node in the node array, but instead holds the index of a leaf in the leaf array. Also remember that we no longer use the polygons themselves as splitters; instead, just the planes of the polygons are stored. Therefore the Plane variable in the NODE structure is an index into the plane array, which is an array where all the nodes' planes are stored. Please do not worry about what the BoundingBox variable is for; we will look at this in a moment. The PLANE structure looks like this:-

struct PLANE
{
D3DXVECTOR3 PointOnPlane;
D3DXVECTOR3 Normal;
} ;

The PLANE structure just contains two vectors that describe the plane for the node. Node 1 in the node array will use Plane 1 in the plane array, Node 2 will use Plane 2, and so on. Figure G below better describes the memory organization we will be employing.

In this example, Node[0].Front holds a value of 1 because it points to Node[1] in the array. Node[0].Back points to Node[3], or rather holds the value 3, meaning that if we go down the back of this node we are at Node 3. This is really no different from using the linked list, except that we are now using linked arrays and storing indices instead of pointers. Each node is a split plane, obviously, so its Plane variable holds the index of a plane in the plane array. Above, Node[0].Plane = 0 because its plane is stored in the plane array at position Plane[0]. You can see that several nodes in the node array have -1 as the value of their Back variables, which means there are no more polygons behind them, so it is solid space. Node 7 is interesting: if a node has its IsLeaf variable set to 1, then the Front variable holds the index of a leaf in the leaf array. If you went down the front of Node 7 you would be in empty space, and you would also be in leaf 0.
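
As an illustration of how these indices are followed, here is a sketch of walking the array-based tree to find the leaf containing a point (the camera position, for example). This is illustrative only and the demo's own routine may differ; a return value of -1 means the point ended up behind a node with no back tree, i.e. in solid space.

// A sketch of locating the leaf that contains a point in the array-based tree.
long FindLeaf(const D3DXVECTOR3 &Position)
{
    long Node = 0;   // index of the root node in NodeArray

    while (true)
    {
        PLANE *Plane = &PlaneArray[NodeArray[Node].Plane];
        D3DXVECTOR3 ToPoint = Position - Plane->PointOnPlane;
        float Side = D3DXVec3Dot(&Plane->Normal, &ToPoint);

        if (Side >= 0)
        {
            // In front of this node's plane.
            if (NodeArray[Node].IsLeaf)
                return (long)NodeArray[Node].Front;   // Front is a leaf index here
            Node = (long)NodeArray[Node].Front;       // otherwise it is a node index
        }
        else
        {
            // Behind this node's plane.
            if (NodeArray[Node].Back == -1)
                return -1;                            // solid space
            Node = NodeArray[Node].Back;
        }
    }
}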

Let's have a look at what the LEAF structure looks like:-
struct LEAF
{
long StartPolygon;
long EndPolygon;
long PortalIndexList[50];
long NumberOfPortals;
long PVSIndex;
BOUNDINGBOX BoundingBox;
};

There are a few variables in there that you do not understand yet, such as NumberOfPortals and PortalIndexList, but we do not need those for tree creation, so we will ignore them for now. They will be used later to calculate the PVS. PVSIndex will eventually hold the position within the master PVS array where this leaf's visibility information begins. That will all become clearer later. For now we are only interested in StartPolygon and EndPolygon. As with the node based compiler we developed in Part 1, we will pass our polygon data into the compiler as a linked list of polygons, just like before. The difference is that once we find we have a list of polygons that is convex (they have all been used as splitters), we can delete them from the linked list and copy their memory into a master polygon array. The great thing about this is that, because a node's front list only gets copied into the polygon array when it forms a leaf, the polygons get written into the array a leaf at a time. This means Leaf[0]'s polygons all follow each other in the array, followed by Leaf[1]'s polygons, and so on. So all we have to store in the LEAF structure is where in the array its polygons start and end. Actually, EndPolygon, as you can see from the diagram, is the first polygon of the NEXT leaf. This is because I render them in a loop using "for (poly = Leaf[n].StartPolygon; poly < Leaf[n].EndPolygon; poly++)". In other words, we render from Start to End-1, but that's just standard C++ loop stuff.
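
Here is a sketch of how StartPolygon and EndPolygon might be used to walk a leaf's polygons. The actual drawing calls (setting the texture, drawing the indexed triangle list) are left out, and the demo's real rendering function may look different; this just shows the start-to-end-minus-one loop described above.

// A sketch of rendering one leaf using its StartPolygon/EndPolygon range.
void RenderLeaf(long LeafIndex)
{
    // Walk from StartPolygon up to (but not including) EndPolygon.
    for (long p = LeafArray[LeafIndex].StartPolygon;
              p < LeafArray[LeafIndex].EndPolygon; p++)
    {
        POLYGON *Poly = &PolygonArray[p];

        // Set lpTextureSurface[Poly->TextureIndex] on stage 0 and draw
        // Poly->VertexList / Poly->Indices here (details omitted).
    }
}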

The BoundingBox variable above is simply a structure that holds two vectors describing, in world space, the dimensions of a bounding box that encompasses all the polygons in the leaf. We will need this information during rendering to test whether a leaf's bounding box is within the frustum. Checking an entire leaf's bounding box is much quicker than checking each polygon in the leaf, and the test can be done very quickly. This bounding box will be calculated in the main compiler function. You may have noticed above that the NODE structure also has a BoundingBox variable. This will hold the bounding box needed to encompass all the leaves in its front and back trees. Unlike the leaf's, the node's bounding box information will not be needed after the tree has been compiled; in other words it is not needed at runtime, but instead is used to compute the PVS data, as we will see later. This bounding box will also be calculated during BSP compilation, but once the PVS array is built there is no need for it any more.
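
As an aside, here is a sketch of the kind of quick bounding-box-versus-frustum test meant above. It assumes the six frustum planes have already been extracted (for example from the combined view and projection matrices) into D3DXPLANE values with their normals pointing into the frustum; that extraction is not shown, and the demo's own frustum code may differ.

// A sketch of a fast axis-aligned box vs frustum test. The box is rejected if
// it lies completely behind any one of the six frustum planes.
bool BoxInFrustum(const D3DXPLANE Frustum[6], const D3DXVECTOR3 &Min, const D3DXVECTOR3 &Max)
{
    for (int p = 0; p < 6; p++)
    {
        // Pick the corner of the box furthest along the plane normal.
        D3DXVECTOR3 Corner(Frustum[p].a >= 0 ? Max.x : Min.x,
                           Frustum[p].b >= 0 ? Max.y : Min.y,
                           Frustum[p].c >= 0 ? Max.z : Min.z);

        // If even that corner is behind the plane, the whole box is outside.
        if (D3DXPlaneDotCoord(&Frustum[p], &Corner) < 0)
            return false;
    }
    return true;
}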

Look at the diagram below. The green box shows the bounding box for Node E. Notice that it has to be big enough to encompass all polygons in its front and back trees (although Node E has no back tree), in other words large enough to contain polygons F, E and D1, because these polygons are in a leaf that is in Node E's front tree.



The blue bounding box above is Node B's bounding box. Notice that it only has to be big enough to hold polygons B and C1, because there is only one leaf in Node B's front tree and it contains those polygons. Also remember that I said we will want to store a bounding box for each leaf as well as each node. If you look at the tree above, you can think of a leaf's bounding box as the bounding box of the parent node's front list, and of a node's bounding box as the box large enough to contain the node's front and back lists. We will not be needing these bounding boxes yet, but it is important that you realise exactly what a bounding box around a leaf and a node looks like, because although we will not be using them until later, we will have to calculate the bounding boxes at the same time as we compile the BSP tree.
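
Here is a sketch of how such a bounding box might be built from a linked list of polygons. The layout (a min and a max vector) matches the description above, but the structure and field names used here are my own assumption rather than the demo's exact BOUNDINGBOX definition.

// A sketch of building an axis-aligned bounding box around a polygon list.
struct BOUNDINGBOX_SKETCH
{
    D3DXVECTOR3 Min, Max;
};

void CalculateBoundingBox(BOUNDINGBOX_SKETCH *Box, POLYGON *PolyList)
{
    // Start with an inverted box so the first vertex initializes both extents.
    Box->Min = D3DXVECTOR3( 1e30f,  1e30f,  1e30f);
    Box->Max = D3DXVECTOR3(-1e30f, -1e30f, -1e30f);

    for (POLYGON *Poly = PolyList; Poly != NULL; Poly = Poly->Next)
    {
        for (WORD v = 0; v < Poly->NumberOfVertices; v++)
        {
            D3DLVERTEX *Vert = &Poly->VertexList[v];
            if (Vert->x < Box->Min.x) Box->Min.x = Vert->x;
            if (Vert->y < Box->Min.y) Box->Min.y = Vert->y;
            if (Vert->z < Box->Min.z) Box->Min.z = Vert->z;
            if (Vert->x > Box->Max.x) Box->Max.x = Vert->x;
            if (Vert->y > Box->Max.y) Box->Max.y = Vert->y;
            if (Vert->z > Box->Max.z) Box->Max.z = Vert->z;
        }
    }
}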

Memory and Speed Considerations


Now, something you may be wondering about with this whole array approach is how we know how much space to allocate for each of the node, leaf, polygon and plane arrays. We certainly do not want to statically allocate the memory to some huge number, because this is very wasteful and risky, just hoping we have allocated enough. Instead, we will dynamically allocate the memory for these arrays at the start up to some threshold, and when the number of elements in an array goes past this threshold we can just reallocate (resize) the memory block. You could, for example, allocate an initial node array with room for just one node, and then when a second node is needed resize the array to contain 2 elements instead of one. We will be using the C function 'malloc' to dynamically create each array, which gives us a nice way to reallocate it using 'realloc' if the number of elements needed exceeds the size of the array. The problem with resizing the array every time a new element is added is that this can be very slow. If the 'realloc' function cannot resize the array in its current position in memory, it will have to create a new memory block of the desired size and copy all the data over from the old position to the new position. Doing this every time a new node, polygon, leaf or plane is needed is going to be dog slow. Although this is a development process and not a runtime process, so speed is not so critical, it can still be a pain waiting for a large level to compile.

So what we will do is set a threshold. I have allocated enough memory initially to hold 100 elements in each array and set each array's threshold to 100. Every time this threshold is reached, I resize the array by another 100 elements and add 100 to the threshold, so the threshold becomes 200. Then, when we try to add the 201st element to the array, it will be resized again, this time adding enough room for another 100 elements on the end, and once again the threshold will be increased by another 100 to 300. Once the BSP tree is compiled, I resize the arrays one final time down to the exact size needed, because only at the end of the compilation process will we know exactly how many leaves, nodes and polygons we have.
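
This is roughly what the grow-on-demand scheme looks like for, say, the node array. The function name here is my own and error checking is omitted for brevity; the demo's memory allocation module contains its own equivalents of this for each array.

// A sketch of the threshold/realloc scheme described above, called before
// adding a new node to the array. (No realloc failure checking for brevity.)
void EnsureRoomForNode(void)
{
    if (NumberOfNodes < MAXNUMBEROFNODES)
        return;   // still room in the current block

    // Threshold reached: grow the array by another 100 elements.
    MAXNUMBEROFNODES += 100;
    NodeArray = (NODE *)realloc(NodeArray, MAXNUMBEROFNODES * sizeof(NODE));

    // Zero only the newly added elements.
    ZeroMemory(&NodeArray[MAXNUMBEROFNODES - 100], 100 * sizeof(NODE));
}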

It's now time, at last, to examine the code in the demo. All the "Solid Leaf BSP Tree" code is in the module 'BSPTree.CPP' and the memory allocation functions are in the module 'MemoryAlloc.CPP'. These are the only two modules we need to concentrate on for now, apart from 'main.cpp', which is just the module that loads in the polygons, sets up the D3D environment and controls the camera in the demo. This tutorial is not going to cover the camera movement code or the D3D setup code, as these are covered by other tutorials elsewhere on this site. Anyway, enough chit chat, let's write some code.

Coding the Solid Leaf BSP Tree Compiler


The first function of any interest to be executed is 'InitPolygons' in 'Main.CPP'. Let's have a look at it:-

void InitPolygons(void)
{
ReserveInitialMemoryForArrays();

PolygonList=NULL;
PolygonList=LoadMWM("demolevel.mwm");
LoadTextures();

BuildBspTree(0,PolygonList);
...
...
...
}

The InitPolygons function is called just after the D3D environment has been set up. The first thing it does is call the 'ReserveInitialMemoryForArrays' function, which is in MemAlloc.Cpp. We will see this function in a moment, after we have had a look at the next couple of lines. The next thing 'InitPolygons' does is set the PolygonList pointer to NULL for safety. The PolygonList pointer is a global pointer that will point to the head of a linked list of polygons that will be fed into the compiler. This is virtually the same as in the previous tutorial, except the POLYGON structure is a little different. Here is the POLYGON structure.

struct POLYGON
{
D3DLVERTEX *VertexList;
D3DXVECTOR3 Normal;
WORD NumberOfVertices;
WORD NumberOfIndices;
WORD *Indices;
POLYGON * Next;
bool BeenUsedAsSplitter;
long TextureIndex;
};

This is almost identical to the last tutorial. The polygons will be linked together by the 'Next' pointer, and the polygon itself will be made from an indexed triangle list. The 'AddPolygon' function, which we will look at in a moment, is responsible for taking an n-sided polygon and breaking it down into indexed triangles. This was covered in detail in the last tutorial. Each polygon also has a texture index. In the demo I load 25 textures into an array of IDirect3DTexture8 objects, so this is basically just an index into that array describing which texture the polygon uses. Also note the 'BeenUsedAsSplitter' flag, which is set during compilation of the BSP tree to signify that this polygon has already been used and so cannot be used as a split plane again.
Unlike the last tutorial, where we generated some basic polygons within the code itself, this time I load the geometry in from an external file. I made the level with a simple polygon builder I knocked up in an evening, and the polygons are just saved in the file in a very simple file format. Basically, they are just lists of D3DLVERTEX structures.

API Notice for DX8:
With DX8 the D3DLVERTEX built-in vertex structure is no longer present. You now specify vertex types using the flexible vertex format. I have named my custom vertex structure 'D3DLVERTEX' after the structure that it emulates from DX7. It is defined in 'pvs.h' as follows:-

struct D3DLVERTEX
{
float x;
float y;
float z;
D3DCOLOR color;
D3DCOLOR specular;
float tu;
float tv;
};

The Flexible Vertex Format Flags for this vertex are as follows:-

#define D3DFVF_LVERTEX ( D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_SPECULAR | D3DFVF_TEX1 )


The next line in the 'InitPolygons' function is:-

PolygonList=LoadMWM("demolevel.mwm");

This calls the LoadMWM function, which loads each polygon from the file and returns them all, nicely broken down into triangle lists and linked together in a linked list. The function returns a pointer to the first polygon in the list. All we have to do then is pass this pointer into our BuildBspTree function.

Before we get ahead of ourselves, let's just backtrack a few lines and have a look at the 'ReserveInitialMemoryForArrays' function (memalloc.h) and some global variables that are used to control and handle the arrays.

GLOBALS

long MAXNUMBEROFNODES =100;
long MAXNUMBEROFPOLYGONS=100;
long MAXNUMBEROFPLANES =100;
long MAXNUMBEROFLEAFS =100;
long MAXNUMBEROFPORTALS =100;


These are the thresholds we talked about earlier. Because each array will initially be allocated with 100 elements, we set these to 100 as well. When the number of nodes reaches MAXNUMBEROFNODES, we know it is time to resize the array and then add another 100 onto MAXNUMBEROFNODES, taking it up to 200, which is the next threshold at which the array will need to be resized. In other words, these variables keep track of the current capacity of each array.

Next up are the global pointers for each array. These will store all the information for our BSP tree. Once the memory has been allocated for them, we will access them just like normal arrays (i.e. NodeArray[1], PlaneArray[2], etc.).

POLYGON *PolygonArray;
NODE *NodeArray;
LEAF *LeafArray;
PLANE *PlaneArray;
PORTAL **PortalArray;
BYTE *PVSData;

Do not worry about the PortalArray for now; this is something we will discuss much later in the tutorial. The last few globals declared in memalloc.cpp are the variables that keep track of how many planes, nodes, polygons, etc. we actually have in each array. Remember that the number of nodes in the node array is not the same as MAXNUMBEROFNODES, which holds how many nodes we can store in the array before we need to resize it. In other words, if we have just added our 101st node to the node array, then NumberOfNodes will be 101 but MAXNUMBEROFNODES will be 200, as the array is resized on the 100 boundary.

long NumberOfPolygons=0;
long NumberOfNodes=0;
long NumberOfLeafs=0;
long NumberOfPlanes=0;
long NumberOfPortals=0;

Once again, do not worry about NumberOfPortals yet. OK, now we know what the global arrays look like, let's have a look at that 'ReserveInitialMemoryForArrays' function.

void ReserveInitialMemoryForArrays()
{
NodeArray= (NODE *) malloc (MAXNUMBEROFNODES *sizeof(NODE));
PolygonArray=(POLYGON *)malloc (MAXNUMBEROFPOLYGONS *sizeof(POLYGON));
PlaneArray =(PLANE *) malloc (MAXNUMBEROFPLANES *sizeof(PLANE));
LeafArray =(LEAF *) malloc (MAXNUMBEROFLEAFS *sizeof(LEAF));
PortalArray =(PORTAL**) malloc (MAXNUMBEROFPORTALS *sizeof(PORTAL *));

ZeroMemory(NodeArray, sizeof(NODE) *MAXNUMBEROFNODES);
ZeroMemory(LeafArray, sizeof(LEAF) *MAXNUMBEROFLEAFS);
ZeroMemory(PlaneArray, sizeof(PLANE) *MAXNUMBEROFPLANES);
ZeroMemory(PolygonArray,sizeof(POLYGON) *MAXNUMBEROFPOLYGONS);
ZeroMemory(PortalArray, sizeof(PORTAL*) *MAXNUMBEROFPORTALS);
}

There is not much to explain here. Each array is dynamically allocated to be large enough to hold 100 elements of its type, and then the arrays are initialized to zero for safety. So we now have a node array, a plane array, a leaf array and a polygon array, each large enough to hold 100 elements, but for the moment they are empty.

So, after this function is called by 'InitPolygons', the next function to be called is 'LoadMWM'.
Note: PolygonList is a Global of type POLYGON *

PolygonList=LoadMWM("demolevel.mwm");

OK, well, we already know what this function does. It returns a linked list of all the polygons in the file 'demolevel.mwm'. This is exactly what we need to feed into our BSP compiler. I know you will all have your favourite world modellers, but in the interest of being complete let me just explain the very simple file format I have used to hold the geometry. It is basically just a list of polygons. Here is the file template:-

WORD          Number of Polygons in File
For Every Polygon N
    WORD          Number of Vertices in Polygon N
    For Every Vertex M in Polygon N
        D3DLVERTEX    Vertex Number M
    WORD          Texture Index for Polygon N


That's it. Just the number of polygons in the file, followed by the vertex count of the first polygon, followed by the list of vertices for that polygon, followed by the texture index that identifies which texture the polygon uses. Then the vertex count for the second polygon in the file, followed by the vertex list for the second polygon, followed by the second polygon's texture index, and so on. Note that the polygons in the file have not been triangulated and can be convex polygons with any number of sides. We will break the polygons down into triangles at load time.

OK, let's have a look at the code of the LoadMWM function. MWM stands for Mr World Maker, which is what I called my little home made world editor as a joke. It is an extremely bad world editor, but I did make it in one night just to quickly let me whip up some polygons for this tutorial. The polygons are stored in a file called 'demolevel.mwm'. The loading code looks like this.

POLYGON * LoadMWM(char *filename)
{
FILE *stream;
POLYGON *Root=NULL;
POLYGON * Child=NULL;
WORD PolyCount,numofverts;
int i,b;
D3DLVERTEX xyzBuffer[50];
WORD TextureIndex;
DWORD uselessinfo;

stream = fopen(filename, "rb");
fread(&PolyCount, sizeof( WORD ), 1, stream );
for (i=0;i<PolyCount;i++)
{
fread(&numofverts,sizeof(WORD),1,stream);
for (b=0;b<numofverts;b++)
{

fread(&xyzBuffer[b].x, sizeof(float),1,stream);
fread(&xyzBuffer[b].y, sizeof(float),1,stream);
fread(&xyzBuffer[b].z, sizeof(float),1,stream);
fread(&uselessinfo, sizeof(DWORD),1,stream );
fread(&xyzBuffer[b].color, sizeof(D3DCOLOR),1,stream);
fread(&xyzBuffer[b].specular,sizeof(D3DCOLOR),1,stream);
fread(&xyzBuffer[b].tu, sizeof(float),1,stream);
fread(&xyzBuffer[b].tv, sizeof(float),1,stream);
}
fread(&TextureIndex,sizeof(WORD),1,stream);

if (i==0)
{
Root=AddPolygon(NULL,&xyzBuffer[0],numofverts);
Child=Root;
}
else
{
Child=AddPolygon(Child,&xyzBuffer[0],numofverts);
}
Child->TextureIndex=TextureIndex;
}
fclose (stream);

return Root;
}

As you can see, there is nothing fancy about the file extraction. We simply read in the number of polygons in the file and then loop over each polygon. For each polygon, we read in its number of vertices and then the vertex list, which is just a bunch of D3DLVERTEX structures. Notice how I read in one DWORD of useless information. This is because my polygon editor was written with DX7, where the built-in D3DLVERTEX structure had a reserved DWORD value after the first vector. D3DLVERTEX is no longer built into DX8, so I have emulated it using the flexible vertex format. Our custom D3DLVERTEX structure, though, does not have that reserved DWORD, so we simply read it in and discard it as a simple means of moving the file read position on to the next useful bit of information.

The last thing we read in for each polygon is the index of the texture it uses. This texture index corresponds to the order in which we will load our textures in a moment.

As with Part 1 of this tutorial, the real guts happen in the "AddPolygon" function. This function takes an array of D3DLVERTEX structures and a WORD value describing the number of vertices in the array, and breaks the polygon down into an indexed triangle list. The first parameter to this function is of type POLYGON *, and this is the parent polygon to which the newly created polygon will be attached. In other words, AddPolygon also builds the linked list. Because we always pass in the previously created polygon as the parent and get back a pointer to the newly created polygon, which is then used as the parent in the next call, you can see that every new polygon created is added to the end of the linked list via the parent's Next pointer.
Notice in the above code, though, that if this is the first polygon in the list (i == 0) then there is no parent, and NULL is passed as the parent, informing our "AddPolygon" function not to try to add it to the end of another polygon. We also make a backup of the pointer returned when creating the first polygon, because this is the root polygon in the list and therefore the pointer our loading function will return. This is also the pointer we will feed into our 'BuildBspTree' function.

Now, I might be starting to sound like a broken record here, but I am not going to describe the 'AddPolygon' function in any detail, because it is nearly identical to the 'AddPolygon' function in Part 1 of this tutorial, where we described in detail how to break an n-sided convex polygon into an indexed triangle list. I am not going to go over all that again here, so instead I will just show you the code so you can see any differences.

POLYGON * AddPolygon(POLYGON* Parent,D3DLVERTEX *Vertices,WORD NOV)
{
int loop;
POLYGON * Child=new POLYGON;
Child->NumberOfVertices=NOV;
Child->NumberOfIndices=(NOV-2)*3;
Child->Next=NULL;
Child->BeenUsedAsSplitter=false;
// Reserve space for Vertex and Index Lists
Child->VertexList =new D3DLVERTEX [Child->NumberOfVertices];
Child->Indices =new WORD [Child->NumberOfIndices];

for (loop=0;loop<NOV;loop++)
{
Child->VertexList[loop]=Vertices[loop];
}

//calculate indices
WORD v0,v1,v2;
for (loop=0;loop<Child->NumberOfIndices/3;loop++)
{
if (loop==0)
{
v0=0;
v1=1;
v2=2;
}
else
{
v1=v2;
v2++;
}

Child->Indices[loop*3]=v0;
Child->Indices[(loop*3)+1]=v1;
Child->Indices[(loop*3)+2]=v2;
}

// generate polygon normal
D3DXVECTOR3 * vec0=(D3DXVECTOR3 *) &Child->VertexList[0];
D3DXVECTOR3 * vec1=(D3DXVECTOR3 *) &Child->VertexList[1];
D3DXVECTOR3 * vec2=(D3DXVECTOR3 *) &Child->VertexList[Child->NumberOfVertices-1];// the last vert

D3DXVECTOR3 edge1=(*vec1)-(*vec0);
D3DXVECTOR3 edge2=(*vec2)-(*vec0);
D3DXVec3Cross(&Child->Normal,&edge1,&edge2);// child->normal=CrossProduct of edge1/edge2
D3DXVec3Normalize(&Child->Normal,&Child->Normal);// and then Normalize

if (Parent!=NULL)
{
Parent->Next=Child;
}

return Child;
}

If you have read the previous tutorial (and I really hope you have) then this code will be instantly familiar to you. The only difference here is that I now dynamically allocate the memory for both the Vertex and Index lists. Like I said before this code was explained in detail in Part 1 of this tutorial.

API Notice for DX8/D3DX:
For those of you not familiar with the D3DX functions that have now become almost an integral part of DX8 (replacing most of the old framework), some of the above functions may look a little strange to you. I have used them throughout this tutorial to calculate the cross product, the dot product and to normalize vectors. The functions to do this are as follows:-

CrossProduct(v1,v2)=D3DXVec3Cross(&result,&v1,&v2);

DotProduct(v1,v2)=D3DXVec3Dot(&v1,&v2);

Normalize(v1)=D3DXVec3Normalize(&result,&v1);



So, taking another look at a snippet of code from the 'InitPolygons' function, we can see that at this point we have allocated the memory for the arrays (100 elements each for the moment) and have loaded in the geometry such that every polygon in the scene is now stored in a linked list (each polygon pointing to the next one in the list via its 'Next' pointer). Here is a reminder of exactly where we are in terms of program execution. You can see below that when we call the 'LoadMWM' function the root (first polygon) of the linked list is returned and stored in the global POLYGON * PolygonList. This is the list that we will now pass into our BSP compiler.

void InitPolygons(void)
{
ReserveInitialMemoryForArrays();

PolygonList=NULL;
PolygonList=LoadMWM("demolevel.mwm");
LoadTextures();

BuildBspTree(0,PolygonList);
...
...
...
}

You can see that the next line of code to be executed is the call to LoadTextures, and there are no prizes for guessing what this function does. We have a global array of Direct3D texture surface pointers defined like so:-

LPDIRECT3DTEXTURE8 lpTextureSurface[25];

Now, you may remember that each polygon also has a texture index. In my demo this index will be between 0 and 24 because there are 25 textures in the level. Therefore if a polygon has a texture index of 5 then we know that before we render that polygon we must set lpTextureSurface[5] as the current texture in stage 0.
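
For example, at render time binding a polygon's texture before drawing it is nothing more than a call like the one below. This is just a minimal sketch for clarity ('poly' is a hypothetical pointer to the POLYGON about to be drawn, it is not a variable from the tutorial code):

// select this polygon's texture into texture stage 0 before it is drawn
lpDevice->SetTexture(0,lpTextureSurface[poly->TextureIndex]);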
We use the 'D3DXCreateTextureFromFileA' function, which is one of the new and very exciting ways that DirectX has been made easier to use. This single call will load an image (bmp, png, jpg, tga and many more formats are supported), create a texture in the closest match to the image format (or perform color conversion if needed), and will automatically create a MipMap chain of textures, resampling the image to each MipMap level. The function returns a pointer to a texture all ready to be rendered, with the MipMap chain created for you. COOL. Here is the function that loads the textures into the array.

void LoadTextures(void)
{
D3DXCreateTextureFromFileA(lpDevice,"checkered_floor1.jpg",&lpTextureSurface[0]);
D3DXCreateTextureFromFileA(lpDevice,"brick2.jpg", &lpTextureSurface[1]);
D3DXCreateTextureFromFileA(lpDevice,"metalrustyfloor1.jpg",&lpTextureSurface[2]);
D3DXCreateTextureFromFileA(lpDevice,"brick3.jpg", &lpTextureSurface[3]);
D3DXCreateTextureFromFileA(lpDevice,"curvyfloor.jpg", &lpTextureSurface[4]);
D3DXCreateTextureFromFileA(lpDevice,"doomfloor.jpg", &lpTextureSurface[5]);
D3DXCreateTextureFromFileA(lpDevice,"crate.jpg", &lpTextureSurface[6]);
D3DXCreateTextureFromFileA(lpDevice,"stones1.jpg", &lpTextureSurface[7]);
D3DXCreateTextureFromFileA(lpDevice,"wood1.jpg", &lpTextureSurface[8]);
D3DXCreateTextureFromFileA(lpDevice,"wood2.jpg", &lpTextureSurface[9]);
D3DXCreateTextureFromFileA(lpDevice,"celtic.jpg", &lpTextureSurface[10]);
D3DXCreateTextureFromFileA(lpDevice,"celtic1.jpg", &lpTextureSurface[11]);
D3DXCreateTextureFromFileA(lpDevice,"rock1.jpg", &lpTextureSurface[12]);
D3DXCreateTextureFromFileA(lpDevice,"oldmetalriveted.jpg", &lpTextureSurface[13]);
D3DXCreateTextureFromFileA(lpDevice,"stone2.jpg", &lpTextureSurface[14]);
D3DXCreateTextureFromFileA(lpDevice,"brick1.jpg", &lpTextureSurface[15]);
D3DXCreateTextureFromFileA(lpDevice,"concrete1.jpg", &lpTextureSurface[16]);
D3DXCreateTextureFromFileA(lpDevice,"brickz2.jpg", &lpTextureSurface[17]);
D3DXCreateTextureFromFileA(lpDevice,"construct2.jpg", &lpTextureSurface[18]);
D3DXCreateTextureFromFileA(lpDevice,"construct2c.jpg", &lpTextureSurface[19]);
D3DXCreateTextureFromFileA(lpDevice,"doomgrey1.jpg", &lpTextureSurface[20]);
D3DXCreateTextureFromFileA(lpDevice,"doomgrey2.jpg", &lpTextureSurface[21]);
D3DXCreateTextureFromFileA(lpDevice,"granitefloor.jpg", &lpTextureSurface[22]);
D3DXCreateTextureFromFileA(lpDevice,"stained_glass1.jpg", &lpTextureSurface[23]);
D3DXCreateTextureFromFileA(lpDevice,"stained_glass2.jpg", &lpTextureSurface[24]);
}


Now we have everything set up: textures loaded, memory allocated for the initial arrays, and a POLYGON * (PolygonList) that points to the root of a linked polygon list containing every polygon in our level.

I would just like to say something here that may be confusing you. We have a linked list of POLYGON structures and we also have an array of 100 POLYGON structures. At the moment the array is totally blank. When our BSP compiler finds that a front list is a convex leaf it will copy those polygons into the PolygonArray (resizing the array if needed) and will then delete the original polygons from the linked list to free up their memory because they are no longer needed. You can think of the linked list as the unsorted data: as our BSP compiler finds the polygons that make up convex leafs they are removed from the list and copied over to the PolygonArray, ordered by the leaf they are in. In other words we never have two duplicate sets of polygons in memory at the same time. Polygons are added to the array as they are removed from the linked list.


OK then, looking at our 'InitPolygons' function again we can see that the next function call is to the main BSP compiler itself, 'BuildBSPTree' (AT LAST). When this function returns, the tree will be completely built and stored in the Plane, Node, Leaf and Polygon arrays. These four arrays completely describe the tree, which also means that if we wanted to save the tree to disk (more on this later) all we would have to do is write these four arrays straight to a file. How easy is that? Basically just four writes to save and four reads to load. Now that is much nicer than the linked list BSP tree, isn't it?
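
Just to make that idea concrete, a save routine could look something like the sketch below. This is NOT the tutorial's actual save code (saving is covered later); the function name is made up, and note that because each POLYGON contains pointers to dynamically allocated vertex and index lists, a real save routine would also have to write those lists out separately:

void SaveTreeSketch(const char *FileName)
{
FILE *stream=fopen(FileName,"wb");
if (stream==NULL) return;
// write the counts first so the loader knows how big each array is
fwrite(&NumberOfNodes,sizeof(NumberOfNodes),1,stream);
fwrite(&NumberOfPlanes,sizeof(NumberOfPlanes),1,stream);
fwrite(&NumberOfLeafs,sizeof(NumberOfLeafs),1,stream);
fwrite(&NumberOfPolygons,sizeof(NumberOfPolygons),1,stream);
// then the four master arrays, one write each
fwrite(NodeArray,sizeof(NODE),NumberOfNodes,stream);
fwrite(PlaneArray,sizeof(PLANE),NumberOfPlanes,stream);
fwrite(LeafArray,sizeof(LEAF),NumberOfLeafs,stream);
fwrite(PolygonArray,sizeof(POLYGON),NumberOfPolygons,stream);
fclose(stream);
}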

Let's code that compiler.

The 'BuildBSPTree' Function : The Heart of the Compiler


The line that calls this function initially with the root node is in the InitPolygons function and looks like this:-

BuildBspTree(0,PolygonList);


This line calls the function passing in zero as the current node, and also passes in the complete linked list of polygons. Node 0 is obviously the first node in the array that we have to find a split plane for, which is why we pass in zero. This function will of course recursively call itself passing in the current node as the Node parameter, similar to what we did in the last tutorial. Remember that at the moment our Node array is completely empty, so we wish to start by filling out Node 0.

We will now look at the BuildBSP function a few lines at a time.

void BuildBspTree(long Node,POLYGON * PolyList)
{
POLYGON *polyTest=NULL;
POLYGON *FrontList=NULL;
POLYGON *BackList=NULL;
POLYGON *NextPolygon=NULL;
POLYGON *FrontSplit=NULL;
POLYGON *BackSplit=NULL;
D3DXVECTOR3 vec1,vec2;
D3DXVECTOR3 a,b;
float result;

NodeArray[Node].Plane=SelectBestSplitter(PolyList);

polyTest=PolyList;
// set up dummy bounding boxes for the node so it can be altered

NodeArray[Node].BoundingBox.BoxMax=D3DXVECTOR3(-40000,-40000,-40000);
NodeArray[Node].BoundingBox.BoxMin=D3DXVECTOR3(40000,40000,40000);

First we set up some local variables to hold the front and back lists etc, just like in the previous tutorial, and then we call the SelectBestSplitter function to choose a polygon from the list to use as a split plane. Once again the code to this function is nearly identical to the previous tutorial, but we will look at it in detail a little later because there are a few differences. They are:-

1.) The function does not return a polygon but instead creates a Plane from the chosen polygon and stores it in the master PlaneArray. This means the first time this function is called NumberOfPlanes=0, so the function stores the plane in PlaneArray[0] and increments the NumberOfPlanes variable.

2.) The polygon that was used to create the Plane is marked as having been used as a split plane. In other words its BeenUsedAsSplitter variable is set to true. This means this polygon (or any child polygons that result from splits of this polygon) can not be used as a splitter again.

3.) The 'SelectBestSplitter' function does not return the Plane itself, because this has already been stored in the PlaneArray (by the 'SelectBestSplitter' function), BUT instead returns the index of that Plane in the Plane array. This is just an index value that we can store in the current Node's 'Plane' field. In other words, the Plane used by node 5 in the Node array could be accessed using:-

PlaneArray[NodeArray[5].Plane]


If this confuses you refer back to the Memory Allocation diagram we saw earlier.
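
As a quick reminder of the fields being used here, the Node and Leaf structures look roughly like the sketch below. The field names match the ones used in the code throughout this tutorial, but the exact types are my assumption; check the downloadable source for the real declarations.

struct NODE
{
long Plane; // index into the master PlaneArray
long Front; // index of the front NODE, or an index into LeafArray when IsLeaf=1
long Back; // index of the back NODE, or -1 if solid space is behind this node
long IsLeaf; // 1 = 'Front' indexes a LEAF, 0 = 'Front' indexes another NODE
BOUNDINGBOX BoundingBox; // bounds every polygon in this node's front and back trees
};

struct LEAF
{
long StartPolygon; // index of this leaf's first polygon in PolygonArray
long EndPolygon; // one past the index of this leaf's last polygon
long PVSIndex; // byte offset of this leaf's visibility data in the PVSData array
BOUNDINGBOX BoundingBox; // bounds just this leaf's polygons
};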

Next we make a backup of the pointer to the list ('polyTest=PolyList') so that we can manipulate the pointer without losing the linked list's root address, and then we do something that will probably look very strange to you.
As described earlier, each Node has a Bounding Box which is large enough to contain all the polygons (in all the leafs) in its front and back trees. In other words, we can create this bounding box by simply doing a Min/Max test on every vertex of every polygon in the current Node's front and back lists. Now, I know we have not created the front and back lists yet, but all these two lines do is set the Bounding Box for the current Node to impossible values. This means that when we create the Bounding Boxes later, some vertices will definitely be higher than our BoxMax vector and some vertices will definitely be lower than our BoxMin vector, because these have been set initially very low and very high respectively. As a matter of interest the BOUNDINGBOX structure is much as you would expect:-

struct BOUNDINGBOX
{
D3DXVECTOR3 BoxMin;
D3DXVECTOR3 BoxMax;
};

This is pretty much as you would expect. Just a means to hold the MIN x,y,z point and MAX x,y,z point of the current Bounding Box.

Next up we enter the main compiler loop. This loop will iterate through each polygon in the passed in list (including the polygon which has been used to create the split plane) and classify it against the split plane. Just as we did in the previous tutorial we assign each polygon to either the front list or the back list of this Node. If the polygon is spanning the split plane we split it in two and attach the front piece to the Node's front list and the back piece to the Node's back list. Remember that when a polygon gets split, the two child splits inherit the state of the parent polygon's BeenUsedAsSplitter flag.
Another difference in the following code from the code in the previous tutorial is what happens when a polygon is on the split plane. You may remember that in the NODE BSP tree from Part 1 we always sent on-plane polygons down the front list, so we hard coded our 'ClassifyPoly' function to return CP_FRONT even if the polygon was on the plane. We can not do this any longer, so we will have to make a slight alteration to our ClassifyPoly function so that it returns CP_ONPLANE. We have also had to alter it to handle our new array based approach, but this is all trivial stuff. We will look at this code after we have finished looking at the 'BuildBSPTree' code, but just bear in mind while reading the code that ClassifyPoly can now return CP_ONPLANE as well.

OK then, let's get back to the main loop of the compiler now, where we classify each polygon in the list (the list that is passed into the function) against the split plane. We will look at one 'case' at a time:-

while (polyTest!=NULL )
{
NextPolygon=polyTest->Next;// remember because polytest->Next will be altered

switch (ClassifyPoly(&PlaneArray[NodeArray[Node].Plane],polyTest))
{
case CP_ONPLANE:
a=PlaneArray[NodeArray[Node].Plane].Normal;
b=polyTest->Normal;
result=(float)fabs ((a.x-b.x)+(a.y-b.y)+(a.z-b.z));
if (result<0.1)
{
polyTest->Next=FrontList;
FrontList=polyTest;
}

else

{
polyTest->Next=BackList;
BackList=polyTest;
}
break;

First we initiate a 'while' loop that will continue on through the linked list of polygons until there are no more left. This loop is what takes all the polygons passed into the function and groups them into two linked lists (FrontList & BackList) depending on their relation to the split plane. In other words, when this loop exits we will have two more linked lists, Front and Back, containing the polygons that lay in those sub spaces. Then we make a backup copy of the current polygon's 'Next' pointer. This is because the current polygon will be added to either the front or back list using its Next pointer, which means we would otherwise lose any way of accessing the next polygon in the initial linked list. This is no different from what we did in the first tutorial.
Next we classify this polygon against the Plane of the current Node. Look at how the Plane is accessed from the master PlaneArray. Remember that 'Node.Plane' is an index into the Plane array. Make sure you really understand this before you read on. Just remember that NodeArray[Node].Plane returns the Plane index of node [Node] in the master Node array.

The first case we cover here is the one we did not need to deal with in the previous tutorial. When a polygon is on the same plane as the splitter, the polygon must be added to the front list IF the polygon normal is facing the same way as the split plane normal, or added to the back list if they face opposing directions. In other words, "if Polygon.Normal==SplitPlane.Normal then FrontList else BackList". Unfortunately we can not use simple logic like this because of the little inaccuracies that pop up when dealing with floating point numbers. In other words, the two normals should be the same but may differ by a very VERY small amount, which would still make the '==' comparison test fail. So instead we have to do a 'fuzzy compare'. Above I subtract each component of vector 2 from vector 1, add the differences together and then take the absolute value. If the vectors are facing the same way then the value should be APPROX zero (with tolerance for floating point error). As an example of this working, look below. Imagine two vectors both facing the same way, (0,1,0) & (0,1,0), which gives result1, and then two vectors pointing different ways, (1,0,0) & (-1,0,0), which gives result2.

result1=(0-0)+(1-1)+(0-0)=0 ; abs(0)=0;
result2=(-1-1)+(0-0)+(0-0)=-2; abs(-2)=2;

NOTE: 'fabs' is simply the floating point version of the 'abs' function and ships with your compiler's C runtime.
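
If you prefer, the same fuzzy compare can be wrapped up in a tiny helper like the one below. This is just a sketch for clarity; the tutorial code performs the test inline exactly as shown in the case above.

// returns true if the two normals point in (approximately) the same direction
bool SameDirection(const D3DXVECTOR3 &a,const D3DXVECTOR3 &b)
{
float result=(float)fabs((a.x-b.x)+(a.y-b.y)+(a.z-b.z));
return (result<0.1f); // same tolerance used in BuildBspTree
}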

As you can see in the case code above, we add the polygon to the front or back list depending on whether or not the fuzzy compare results in 'nearly' zero. To add the current polygon to either the front or back list we set its 'Next' pointer to point at the root of the list, and then set the root of the list (FrontList or BackList) to point at the current polygon so that it becomes the new root. Polygons get pushed down the list every time a new polygon is added to the top. This is exactly why we had to back up this polygon's Next pointer before we removed it from the master list: we have now altered its Next pointer to point at the front list of polygons, which means that next time through the while loop we would have lost any way of accessing the next polygon in the original master list.

Next we can look at the next two 'cases' together as they are so small. If the current polygon is in front of the current Node's split plane then we add it to the front list just as above. If it is behind the split plane then we add it to the back list, once again using the same code we used above.

case CP_FRONT:
polyTest->Next=FrontList;
FrontList=polyTest;
break;

case CP_BACK:
polyTest->Next=BackList;
BackList=polyTest;
break;

Nothing to really talk about there then. The last 'case' we need to consider is when the current polygon is spanning the current Node's split plane. In this case we have to split the polygon in two and attach the front half to the front list and the back half to the back list. Once again, if you have read the previous tutorial then all this will be instantly familiar to you as the code is very similar. Below is the code for the last 'case' as well as the tail end of the main 'while' loop.

case CP_SPANNING:
FrontSplit=new POLYGON;
BackSplit=new POLYGON;
ZeroMemory(FrontSplit,sizeof(POLYGON));
ZeroMemory(BackSplit,sizeof(POLYGON));
SplitPolygon(polyTest,&PlaneArray[NodeArray[Node].Plane],FrontSplit,BackSplit);
FrontSplit->BeenUsedAsSplitter=polyTest->BeenUsedAsSplitter;
BackSplit->BeenUsedAsSplitter=polyTest->BeenUsedAsSplitter;
FrontSplit->TextureIndex=polyTest->TextureIndex;
BackSplit->TextureIndex=polyTest->TextureIndex;

DeletePolygon (polyTest);

FrontSplit->Next=FrontList;
FrontList=FrontSplit;
BackSplit->Next=BackList;
BackList=BackSplit;
break;
default:
break;
} //switch
polyTest=NextPolygon;
}// end while loop

I have highlighted the bits that are new since the last tutorial. The SplitPolygon function is virtually the same, so although I will show you the code later I will not talk about it in depth as I did this in painstaking detail in the previous tutorial. So what's happening? We create two new polygons, FrontSplit and BackSplit. We then call SplitPolygon to split the polygon by the current Node's split plane. The two splits are returned in FrontSplit and BackSplit. The next bit is VERY IMPORTANT. Before we delete the parent polygon (because we no longer need it) we copy over the state of its 'BeenUsedAsSplitter' variable to its children. This means that if the polygon that has just been split has itself been used as a split plane in the past, then we don't want to divide the world by the same split plane twice, so we copy the flag over to the children so that they can not be selected as splitters again either. Remember that if this polygon has NOT been used as a split plane then that state is carried over too, meaning that the child polygons can still be used as split planes in the future. We also have to copy into the FrontSplit and BackSplit polygons the texture the parent used, because obviously they will use the same texture. Remember from the previous tutorial that the SplitPolygon function takes care of recalculating each split's new texture coordinates and also takes care of retriangulating (is that a word? :) ) each split so that it remains an indexed triangle list.

Once we have copied over the useful information from the original polygon it can be deleted from memory because it will no longer be needed. We now have two split polygons to take its place. Bear in mind though that these two polygons may themselves get split further down the tree, in which case they too would be replaced by their child splits and removed from memory. We finish up this 'case' by adding the BackSplit to the back list and the FrontSplit to the front list, as is to be expected.

At the end of each iteration of the while loop we move on to the next polygon in the original master list (the one passed into the function). Remember we made a backup of the next polygon in the original list before we started messing around with the current polygon's pointers, so we can jump back to where we were in the original list by using this pointer.

OK then, where are we now? Well, at the moment we have chosen a polygon to use as a split plane and have built two lists, front and back, dividing ALL the polygons in the list (including the polygon we used as the split plane) between them. Now, we did talk about Bounding Boxes, and for reasons that will become clear later we need to calculate the Bounding Box that encompasses all the polygons in this Node's front and back trees, as well as store a Bounding Box in each Leaf bounding the polygons in that leaf. Well, let's forget about leafs for now; I did say earlier that you can think of a Node Bounding Box as being nothing more than a box that is large enough to contain ALL the polygons in the front and back lists. This is perfect because we have just divided all our polygons into front and back lists, so we have the information at hand.

The Function "CalculateBox" accepts a List of Polygons and a Bounding Box as its parameters.The Vertices of every polygon in the list are checked against the bounds of the Box and the Box is adjusted in size if the needed to fit all vertices inside the box.Obviously the Bounding Box we want to pass in is the current Nodes Bounding Box and we also want to pass in both the Front and Back Lists.Therefore this function is going to be called once passing the Front list in and again passing the Back list in. Both times through we still pass in the current nodes Bounding Box as the one to be altered if need be.After calling these two functions the box has been adjusted so that it is large enough to contain both the polygons in the front and back lists.
Before we look at the code you are probably thinking I am a complete nut case for calling the 'CalculateBox' twice. Why not just do it at the begining of the 'BuildBSPTree' function before the Polygons have been divided into front and back lists? You would still get the same bounding box for the same polygon set but would only have to call the function once. This is true BUT we also need to make a backup copy of the Bounding Box after ONLY the Front List polygons have been bounded because if it turns out that every polygon in this front list has been used as a split plane already then this is a Leaf and the Bounding Box for the Leaf will have already been calculated as it is ofcourse the same as the Bounding Box for just the Front List.:-

CalculateBox(&NodeArray[Node].BoundingBox,FrontList);
BOUNDINGBOX LeafBox=NodeArray[Node].BoundingBox;
CalculateBox(&NodeArray[Node].BoundingBox,BackList);


You can see we call CalculateBox for the front list first of all, so that this Node's Bounding Box is adjusted to be large enough to hold the front list polygons. We then make a backup of the Node's Bounding Box in this state, just in case we find out later that this front list is a leaf, in which case we will already have the Leaf's Bounding Box calculated because the Leaf basically IS the front list. Then we call 'CalculateBox' again to adjust this Node's Bounding Box so that it is also large enough to contain the back list of polygons as well as the front list. The Node's Bounding Box now holds the correct information, and we also have a copy of just the front list Bounding Box stored in 'LeafBox' that we can use if we find out that this front list is a leaf. We will look at the 'CalculateBox' function, as well as all the other support functions used by BuildBSPTree, in a moment.

So at this point the current Node has its Plane and Bounding Box fields correctly filled in and we have all this Node's polygons in front and back lists. Now it is time to see if the front list is a leaf. The test is very simple. We iterate through the front list (it's a linked list remember) and if we find that EVERY polygon in the front list has already been used as a split plane then this is a leaf. In the following code we set a variable 'count' to zero. If this variable is still zero at the end of the loop then this front list is a leaf.

int count=0;
POLYGON * tempf=FrontList;
while (tempf!=NULL)
{
if (tempf->BeenUsedAsSplitter==false) count++;
tempf=tempf->Next;
}

So now it is time to deal with the front and back lists and see if we have to create more Nodes and carry on splitting. First we check if 'count==0', in which case the current front list IS a leaf. Remember when looking at the following code that both 'NumberOfPolygons' and 'NumberOfLeafs' are global variables that keep track, throughout the building of the BSP tree, of how many polygons and leafs have been added to the Polygon and Leaf arrays respectively. Remember also that at start up we allocated enough space for 100 elements in each array, and that on the first call of this code both NumberOfPolygons and NumberOfLeafs would be zero.

if ( count==0)
{
POLYGON *Iterator=FrontList;
POLYGON *Temp;
LeafArray[NumberOfLeafs].StartPolygon=NumberOfPolygons;

while (Iterator!=NULL)
{
PolygonArray[NumberOfPolygons]=*Iterator;
IncreaseNumberOfPolygons();
Temp=Iterator;
Iterator=Iterator->Next;
delete Temp;
}
LeafArray[NumberOfLeafs].EndPolygon=NumberOfPolygons;
LeafArray[NumberOfLeafs].BoundingBox=LeafBox;
NodeArray[Node].Front=NumberOfLeafs;
NodeArray[Node].IsLeaf=1;
IncreaseNumberOfLeafs();
}

OK, it's time to fill out a new Leaf element in the Leaf array. Obviously this will be Leaf[0] for the first leaf created. We set its StartPolygon value to the current number of polygons added to the Polygon array. This once again will be zero if this is the first leaf, because polygons are only added to the Polygon array once they end up in leafs; in fact, this is the code that adds them. We then loop through every polygon in the front list and copy it into the Polygon array using 'PolygonArray[NumberOfPolygons]=*Iterator'. Next up we call the function 'IncreaseNumberOfPolygons', which needs explaining. This function basically just adds 1 to the NumberOfPolygons variable, BUT if 'NumberOfPolygons' gets larger than MAXNUMBEROFPOLYGONS (which initially starts off at 100, remember) then the array is resized using realloc. Each of the global arrays (PolygonArray, NodeArray, PlaneArray etc) has a corresponding function like this.
After we have copied the polygon data from the linked list into our master Polygon array and incremented the polygon count, we move on to the next polygon in the linked list, deleting the current polygon because we no longer need it. After all polygons in the front list have been added to the Polygon array, we store the current value of 'NumberOfPolygons' in the Leaf structure's 'EndPolygon'. StartPolygon and EndPolygon now describe a range in the master Polygon array where this leaf's polygons are stored. Note that the polygon at index EndPolygon itself is never rendered as part of this leaf; it is where the next leaf will start. Remember that (for i=start;i<end;i++) only renders polygons from 'start' to 'end-1'.
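
To make that polygon range idea concrete, drawing just the polygons belonging to a single leaf could look something like the sketch below. 'DrawLeafPolygons' is a hypothetical helper and NOT part of the tutorial code (the real 'DrawTree' function, covered later, adds the PVS on top of this), and it assumes the vertex format and render states have already been set up elsewhere.

void DrawLeafPolygons(long leaf)
{
for (long i=LeafArray[leaf].StartPolygon;i<LeafArray[leaf].EndPolygon;i++)
{
POLYGON *p=&PolygonArray[i];
// bind this polygon's texture and draw its indexed triangle list
lpDevice->SetTexture(0,lpTextureSurface[p->TextureIndex]);
lpDevice->DrawIndexedPrimitiveUP(D3DPT_TRIANGLELIST,0,p->NumberOfVertices,
p->NumberOfIndices/3,p->Indices,D3DFMT_INDEX16,
p->VertexList,sizeof(D3DLVERTEX));
}
}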

Now, because this is a leaf, we also want to store the Bounding Box of the leaf so we can perform frustum rejection (more on this later) at render time. Remember we already have the Bounding Box for the leaf (the leaf's polygons are just the front list polygons) stored in LeafBox, so we copy this over into the Leaf structure as well.

Remember we said above that if a Node's front list is a leaf, then the Node structure's 'Front' index will hold an index into the Leaf array instead of pointing at another node. So we copy the current index of the leaf being created (NumberOfLeafs) into the 'Front' field of the current Node and also set the Node's 'IsLeaf' variable to 1. Setting the Node's 'IsLeaf' variable to 1, instead of the default value of 0, tells us when traversing the tree NOT that this node is a leaf BUT that in front of this node is a leaf, so we interpret this Node's 'Front' variable as an index into the Leaf array and not an index into the Node array as usual. This Leaf in the Leaf array has now been filled in correctly and is pointed to by the current Node, so finally we increase the 'NumberOfLeafs' variable so that the next leaf that gets created uses the next position in the master Leaf array. Notice how I call the function 'IncreaseNumberOfLeafs', which adds 1 to 'NumberOfLeafs' and resizes the array if needed.

OK, so that's what happens if the front list is a leaf (count==0). If the front list is NOT a leaf (in other words, there are still some polygons in the list that have not yet been used as splitters) then we have to make this function call itself, passing in a new node that is pointed to by the current Node's 'Front' index.

else
{
NodeArray[Node].IsLeaf=0;
NodeArray[Node].Front=NumberOfNodes+1;
IncreaseNumberOfNodes();
BuildBspTree(NumberOfNodes,FrontList);
}

You can see that we set 'IsLeaf' to zero because the current Node's 'Front' field will not contain the index of a leaf but instead the index of another node in the Node array. So we set this Node's 'Front' to index the next node we are about to create (NumberOfNodes+1) and then increase the NumberOfNodes variable, which is now holding the index of this new node. The function then calls itself, passing in the new node's index in the array and passing in the current front list as the new node's polygon list. That's the front list dealt with.

Now we have to deal with the back list. The back list can never be a leaf because of the way the tree is built, so if there is a back list of polygons we create a new Node and attach it to this Node's back, or, if there are NO polygons in the back list (BackList==NULL), then behind this Node is solid space so we simply put a -1 in this Node's 'Back' variable. Here is the code to handle the back list.

if (BackList==NULL)
{
NodeArray[Node].Back=-1;
}
else
{
NodeArray[Node].Back=NumberOfNodes+1;
IncreaseNumberOfNodes();
BuildBspTree(NumberOfNodes,BackList);
}
}// end function

The BuildBSPTree function is complete. I will now show the function in its entirety so you can view it in its complete form, and after that we will talk about the 'support' functions that BuildBSPTree uses.

BuildBSPTree Function

void BuildBspTree(long Node,POLYGON * PolyList)
{
POLYGON *polyTest=NULL;
POLYGON *FrontList=NULL;
POLYGON *BackList=NULL;
POLYGON *NextPolygon=NULL;
POLYGON *FrontSplit=NULL;
POLYGON *BackSplit=NULL;
D3DXVECTOR3 vec1,vec2;
D3DXVECTOR3 a,b;
float result;

NodeArray[Node].Plane=SelectBestSplitter(PolyList);
polyTest=PolyList;
NodeArray[Node].BoundingBox.BoxMax=D3DXVECTOR3(-40000,-40000,-40000);
NodeArray[Node].BoundingBox.BoxMin=D3DXVECTOR3(40000,40000,40000);

while (polyTest!=NULL )
{
NextPolygon=polyTest->Next;// remember because polytest->Next will be altered

switch (ClassifyPoly(&PlaneArray[NodeArray[Node].Plane],polyTest))
{
case CP_ONPLANE:
a=PlaneArray[NodeArray[Node].Plane].Normal;
b=polyTest->Normal;
result=(float)fabs ((a.x-b.x)+(a.y-b.y)+(a.z-b.z));
if (result<0.1)
{
polyTest->Next=FrontList;
FrontList=polyTest;
}
else
{
polyTest->Next=BackList;
BackList=polyTest;
}
break;

case CP_FRONT:
polyTest->Next=FrontList;
FrontList=polyTest;
break;

case CP_BACK:
polyTest->Next=BackList;
BackList=polyTest;
break;

case CP_SPANNING:
FrontSplit=new POLYGON;
BackSplit=new POLYGON;
ZeroMemory(FrontSplit,sizeof(POLYGON));
ZeroMemory(BackSplit,sizeof(POLYGON));
SplitPolygon(polyTest,&PlaneArray[NodeArray[Node].Plane],FrontSplit,BackSplit);
FrontSplit->BeenUsedAsSplitter=polyTest->BeenUsedAsSplitter;
BackSplit->BeenUsedAsSplitter=polyTest->BeenUsedAsSplitter;
FrontSplit->TextureIndex=polyTest->TextureIndex;
BackSplit->TextureIndex=polyTest->TextureIndex;

DeletePolygon (polyTest);

FrontSplit->Next=FrontList;
FrontList=FrontSplit;
BackSplit->Next=BackList;
BackList=BackSplit;
break;
default:
break;

} //switch
polyTest=NextPolygon;
}// end while loop

int count=0;
POLYGON * tempf=FrontList;
while (tempf!=NULL)
{
if (tempf->BeenUsedAsSplitter==false) count++;
tempf=tempf->Next;
}

CalculateBox(&NodeArray[Node].BoundingBox,FrontList);
BOUNDINGBOX LeafBox=NodeArray[Node].BoundingBox;
CalculateBox(&NodeArray[Node].BoundingBox,BackList);

if ( count==0)
{
POLYGON *Iterator=FrontList;
POLYGON *Temp;
LeafArray[NumberOfLeafs].StartPolygon=NumberOfPolygons;
while (Iterator!=NULL)
{
PolygonArray[NumberOfPolygons]=*Iterator;
IncreaseNumberOfPolygons();
Temp=Iterator;
Iterator=Iterator->Next;
delete Temp;
}
LeafArray[NumberOfLeafs].EndPolygon=NumberOfPolygons;
LeafArray[NumberOfLeafs].BoundingBox=LeafBox;
NodeArray[Node].Front=NumberOfLeafs;
NodeArray[Node].IsLeaf=1;
IncreaseNumberOfLeafs();
}
else
{
NodeArray[Node].IsLeaf=0;
NodeArray[Node].Front=NumberOfNodes+1;
IncreaseNumberOfNodes();
BuildBspTree(NumberOfNodes,FrontList);
}

if (BackList==NULL)
{
NodeArray[Node].Back=-1;
}
else
{
NodeArray[Node].Back=NumberOfNodes+1;
IncreaseNumberOfNodes();
BuildBspTree(NumberOfNodes,BackList);
}

}// end function

You can see above that I have highlighted in red where the support functions are used. Read the above code and make sure you understand how the tree is being built. It does not matter whether or not you know how the support functions work (we will look at these in a moment), but make sure you can see above what they are being used for and what they return to the calling function.

Most of the support functions above are virtually identical to the support functions used in the previous tutorial. Because they were explained in detail in that tutorial I will not explain them in detail again here, but will instead just show them and point out where they have been changed to support the Solid Leaf Tree and our new array based strategy.

The first function call I have highlighted above is the call:-

NodeArray[Node].Plane=SelectBestSplitter(PolyList);

This function is passed the list of polygons and selects the best one to be used as the current split plane. The plane is created and added to the master PlaneArray, and the function returns the index of that plane in the Plane array so that the Node can store it. Here is the SelectBestSplitter function.

The 'Select Best Splitter' Function


The job of this function is to find the best polygon to use as a split plane. As discussed in the previous tutorial this completely depends on the polygon choosing heuristic used. In my code I have used abs(frontfaces-backfaces)+(splits*3). This seems to give a nice balance between keeping a relatively balanced tree and minimizing splits as much as possible to keep the polygon count down. Although you may not think the balance of a tree is important, it is in fact vitally important. If a tree is unbalanced down one side then that side will have a much greater depth, which could make tree traversal in certain areas slow and inconsistent. Perhaps balance is even more important than splits; I do not know, I have never really benchmarked it, but you do want to keep the tree as balanced as possible. A quick worked example of the score is given below, followed by the code:-
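
For example, suppose one candidate splitter would leave 10 polygons in front of it and 6 behind it, splitting 2 polygons: its score would be abs(10-6)+(2*3)=4+6=10. A second candidate leaving 8 in front and 8 behind with 3 splits would score abs(8-8)+(3*3)=9, and because the lowest score wins it would be chosen over the first.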

long SelectBestSplitter(POLYGON *PolyList)
{
POLYGON* Splitter=PolyList;
POLYGON* CurrentPoly=NULL;
unsigned long BestScore=1000000;
POLYGON * SelectedPoly=NULL;

while (Splitter!=NULL)
{
if (Splitter->BeenUsedAsSplitter!=true)
{
PLANE SplittersPlane;
SplittersPlane.Normal=Splitter->Normal;
SplittersPlane.PointOnPlane=*(D3DXVECTOR3 *)&Splitter->VertexList[0];
CurrentPoly=PolyList;
unsigned long score,splits,backfaces,frontfaces;
score=splits=backfaces=frontfaces=0;

while (CurrentPoly!=NULL)
{
int result=ClassifyPoly(&SplittersPlane,CurrentPoly);
switch ( result)
{
case CP_ONPLANE:
break;
case CP_FRONT:
frontfaces++;
break;
case CP_BACK:
backfaces++;
break;
case CP_SPANNING:
splits++;
break;
default:
break;
}// switch

CurrentPoly=CurrentPoly->Next;
}// end while current poly

score=abs((int)frontfaces-(int)backfaces)+(splits*3);// cast to int so the subtraction can go negative before abs()
if (score<BestScore)
{
BestScore=score;
SelectedPoly=Splitter;
}

}// end if this splitter has not been used yet
Splitter=Splitter->Next;
}// end while splitter != null

SelectedPoly->BeenUsedAsSplitter=true;
PlaneArray[NumberOfPlanes].PointOnPlane=*((D3DXVECTOR3 *)&SelectedPoly->VertexList[0]);
PlaneArray[NumberOfPlanes].Normal=SelectedPoly->Normal;
IncreaseNumberOfPlanes();
return (NumberOfPlanes-1);
} // End Function

You should recognize most of this stuff from the previous tutorial but there are some differences, so I will summarize.
We loop through every polygon in the list and test it (as a split plane) against every other polygon in the list. We do not want to select any polygon we have already used as a split plane in the past, so we check each polygon's BeenUsedAsSplitter variable to see if we should even test it as a split plane. Note that the candidate splitter is also classified against its own plane, but it simply comes back as CP_ONPLANE and so does not affect the score. Just as before, we use each candidate polygon as a split plane and keep a score based on how many splits it would cause and how balanced the front and back polygon sets would be. It is worth noting that the LOWEST score is best, so every time we find a new lowest score we store a pointer to that polygon; come the end of the while loops, 'SelectedPoly' points to the polygon that we will actually use to create a split plane from.

Once we have decided which polygon we are going to use, the first thing we do is set that polygon's 'BeenUsedAsSplitter' variable so that this polygon (or any child splits that may occur from it in the future) can never be selected as the split plane again.
Next we actually enter the plane data into the PlaneArray. As with all the master arrays, NumberOfPlanes holds the index of the next plane to be created. We fill out the Plane structure using the first vertex in the polygon as the point on the plane, and the polygon's normal is obviously the plane's normal too. We then call 'IncreaseNumberOfPlanes' to increment the NumberOfPlanes counter so that the next time this function is called it will be indexing the next element in the Plane array. We end up returning the index of the plane we have just created so that the return value can be stored in the current Node's 'Plane' variable. As I said before, this is very similar to the function covered in Part 1 of this tutorial series, so please read that first as it gives much more information about what exactly selecting the best splitter means.

The next support function we call in 'BuildBSPTree' is the SplitPolygon function, which splits a polygon into two separate polygons. This function will not be explained at all in this part of the tutorial because it is basically the same as before, with just a few small alterations. I give a VERY detailed explanation of splitting polygons in Part 1, so read that first and then check out the downloadable source code for the small changes I have made.

The next support function however is new to this tutorial. 'CalculateBox' simply takes a list of polygons and a box, and adjusts the Bounding Box to fit the polygons. 'CalculateBox' does NOT initialize the Bounding Box values before the test. This is by design, because we have to call this function twice, remember: once for the front list and once for the back list. Because the Bounding Box structure passed in is not initialized by this function, we initialized it ourselves at the start of the 'BuildBSPTree' function and set the box's maximum and minimum values to impossible values.

The actual code of the function itself is very simple. Simply loop through every polygon in the list and every vertex in that polygon, and check whether the vertex's X, Y or Z is smaller than the Bounding Box's current minimum X, Y or Z, or larger than its maximum X, Y or Z. If so, set this as the new Bounding Box value. Here is the code.

void CalculateBox(BOUNDINGBOX *Box,POLYGON *Polylist)
{
WORD i;
POLYGON *PolyPointer=Polylist ;
while (PolyPointer!=NULL)
{

for (i=0;i<PolyPointer->NumberOfVertices;i++)
{
// check Minimum Bounds
if (PolyPointer->VertexList[i].x<Box->BoxMin.x) Box->BoxMin.x=PolyPointer->VertexList[i].x;
if (PolyPointer->VertexList[i].y<Box->BoxMin.y) Box->BoxMin.y=PolyPointer->VertexList[i].y;
if (PolyPointer->VertexList[i].z<Box->BoxMin.z) Box->BoxMin.z=PolyPointer->VertexList[i].z;

// check Maximum Bounds
if (PolyPointer->VertexList[i].x>Box->BoxMax.x) Box->BoxMax.x=PolyPointer->VertexList[i].x;
if (PolyPointer->VertexList[i].y>Box->BoxMax.y) Box->BoxMax.y=PolyPointer->VertexList[i].y;
if (PolyPointer->VertexList[i].z>Box->BoxMax.z) Box->BoxMax.z=PolyPointer->VertexList[i].z;
}

PolyPointer=PolyPointer->Next;
}
}

Another support function used by the BuildBSPTree function is one called 'DeletePolygon', and this function can be found in 'memoryalloc.cpp'. We call this function whenever we want to remove a polygon from memory. Remember that calling the C++ 'delete' command on a POLYGON pointer will free the memory allocated for the polygon structure itself BUT will not free the memory allocated dynamically for the vertex and index arrays that are pointed to by the polygon structure. This would cause a memory leak. In short, we have to delete the index and vertex list memory first (while we still have pointers to it) and then delete the polygon structure. Here is the DeletePolygon function.

void DeletePolygon (POLYGON *Poly)
{
delete [] Poly->VertexList; // the vertex and index lists were allocated with new [], so release them with delete []
delete [] Poly->Indices;
delete Poly;
}

Another thing that may look quite confusing is the fact that when we copy the front list polygons over into a new leaf (if count==0) we call the C++ command 'delete' instead of our function 'DeletePolygon'. This is because although we no longer need the polygon structure in the front list any more, the memory for the vertex array and the index array is still pointed at by the copy of the polygon structure now in the PolygonArray. Therefore we do not want the vertex or index memory released.

IncreaseNumberOf....?

We have just one more function to look at before we have finished with the compiler itself. Every time we increased the current number of polygons, planes, nodes or leafs etc, instead of just using "NumberOfLeafs++" we called 'IncreaseNumberOfLeafs'. There is one of these functions for EACH master array (Planes, Nodes, Leafs etc) and for the most part all they do is increment the count like "NumberOfLeafs++"; however, if this takes the total number of leafs over the current size of the array then the array is resized by calling 'realloc' to add another 100 elements to the array in question. All these functions are stored in 'memoryalloc.cpp' and they are really all the same as each other, except that each one deals with a specific array. Because of this we will only look at the code for the 'IncreaseNumberOfNodes' function below, but just remember that all the other 'IncreaseNumberOf...' functions are nearly identical.

void IncreaseNumberOfNodes(void)
{
NumberOfNodes++;
if (NumberOfNodes==MAXNUMBEROFNODES)
{
MAXNUMBEROFNODES+=100;
NodeArray=(NODE *)realloc (NodeArray,MAXNUMBEROFNODES * sizeof(NODE));
}
}

You can see that we increase the NumberOfNodes variable and also check if this takes the number of nodes past the current size of the array. If so, we add 100 to the size of the array and resize the memory block by calling the C function 'realloc'.
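
One small caveat worth mentioning: 'realloc' can return NULL if it fails, and the code above would then overwrite the only pointer to the old block. A slightly more defensive version (just a sketch, not the code used in the tutorial) would check the return value first:

void IncreaseNumberOfNodes(void)
{
NumberOfNodes++;
if (NumberOfNodes==MAXNUMBEROFNODES)
{
// grab the new block first so the old pointer is not lost if realloc fails
NODE *NewBlock=(NODE *)realloc(NodeArray,(MAXNUMBEROFNODES+100)*sizeof(NODE));
if (NewBlock==NULL) return; // out of memory: keep the old (still valid) array
NodeArray=NewBlock;
MAXNUMBEROFNODES+=100;
}
}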

'ClassifyPoly' Changes from Part 1


Before we move on I would just like to say that the 'ClassifyPoly' routine used in the above functions has had to be altered slightly from the previous tutorial, as mentioned earlier. In the last tutorial we hard coded our 'ClassifyPoly' function to return CP_FRONT even if the polygon lay on the test plane. This was because we did not need the CP_ONPLANE case in the Node compiler, so it stopped us having to duplicate code in the switch statement in the BuildBSP function. The new version also returns CP_ONPLANE and is listed below with the altered line highlighted. Note that the ClassifyPoint function is unchanged (except that it now uses the D3DX math functions to compute the dot product) and that both 'ClassifyPoint' and 'ClassifyPoly' are described in detail in Part 1.

int ClassifyPoly(PLANE *Plane,POLYGON * Poly)
{
int Infront=0;
int Behind=0;
int OnPlane=0;
float result;
D3DXVECTOR3 *vec1=(D3DXVECTOR3 *)&Plane->PointOnPlane;
for (int a=0;a<Poly->NumberOfVertices;a++)
{
D3DXVECTOR3 *vec2=(D3DXVECTOR3 *)&Poly->VertexList[a];
D3DXVECTOR3 Direction=(*vec1)-(*vec2);
result=D3DXVec3Dot(&Direction,&Plane->Normal);

if (result>0.001)
{
Behind++;
}
else
if (result<-0.001)
{
Infront++;
}
else
{
OnPlane++;
Infront++;
Behind++;
}
}

if (OnPlane==Poly->NumberOfVertices) return CP_ONPLANE;
if (Behind==Poly->NumberOfVertices) return CP_BACK;
if (Infront==Poly->NumberOfVertices) return CP_FRONT;
return CP_SPANNING;
}
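
For reference, 'ClassifyPoint' (which is used by the functions below but not listed again in this part) boils down to the same plane side test with the same 0.001 tolerance. The sketch below shows the idea; the exact code is in Part 1 and in the download.

int ClassifyPoint(D3DXVECTOR3 *Pos,PLANE *Plane)
{
D3DXVECTOR3 Direction=Plane->PointOnPlane-(*Pos);
float result=D3DXVec3Dot(&Direction,&Plane->Normal);

if (result<-0.001) return CP_FRONT;
if (result>0.001) return CP_BACK;
return CP_ONPLANE;
}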

The last function used by BuildBSPTree that we have not talked about is the SplitPolygon function. As I said earlier, there was a quite large and detailed explanation of this function in the Part 1 tutorial, so please refer back to it if you are having trouble remembering or if you do not understand the code. For those of you that can remember, I will just show it here as a refresher and to show what's changed (as of writing I have not yet updated Part 1 from DX7 to DX8, so here it is in its new form using the D3DX functions instead of the D3D framework from DX7; apart from these changes it is exactly the same).

Split Polygon

void SplitPolygon(POLYGON *Poly,PLANE *Plane,POLYGON *FrontSplit,POLYGON *BackSplit)
{
D3DLVERTEX FrontList[40],BackList[40],FirstVertex;
D3DXVECTOR3 PlaneNormal,IntersectPoint,PointOnPlane,PointA,PointB;
WORD FrontCounter=0,BackCounter=0,loop=0,CurrentVertex=0;
float percent; // used for return storage of percentage

// Find any vertex on the plane to use later in plane intersect routine
PointOnPlane=Plane->PointOnPlane;

// first we find out if the first vertex belongs in front or back list
FirstVertex=Poly->VertexList[0];

switch (ClassifyPoint( (D3DXVECTOR3 *)&FirstVertex,Plane))
{
case CP_FRONT:
FrontList[FrontCounter++]=FirstVertex;
break;
case CP_BACK:
BackList[BackCounter++]=FirstVertex;
break;
case CP_ONPLANE:
BackList[BackCounter++]=FirstVertex;
FrontList[FrontCounter++]=FirstVertex;
break;
default:
PostQuitMessage(0);
}

for (loop=1;loop<Poly->NumberOfVertices+1;loop++)
{
if (loop==Poly->NumberOfVertices)
{
CurrentVertex=0;
}
else
{
CurrentVertex=loop;
}
PointA=*(D3DXVECTOR3 *)&Poly->VertexList[loop-1];
PointB=*(D3DXVECTOR3 *)&Poly->VertexList[CurrentVertex];

PlaneNormal=Plane->Normal;

int Result=ClassifyPoint(&PointB,Plane);
if (Result==CP_ONPLANE)
{
BackList[BackCounter++]=Poly->VertexList[CurrentVertex];
FrontList[FrontCounter++]=Poly->VertexList[CurrentVertex];
}
else
{
if (Get_Intersect(&PointA,&PointB,&PointOnPlane,&PlaneNormal,&IntersectPoint, &percent)==true)
{
// create new vertex and calculate new texture coordinate
float deltax,deltay,texx,texy;
deltax=Poly->VertexList[CurrentVertex].tu-Poly->VertexList[loop-1].tu;
deltay=Poly->VertexList[CurrentVertex].tv-Poly->VertexList[loop-1].tv;
texx=Poly->VertexList[loop-1].tu+(deltax*percent);
texy=Poly->VertexList[loop-1].tv+(deltay*percent);
D3DLVERTEX copy;//=D3DLVERTEX(IntersectPoint,D3DCOLOR_ARGB(255,255,255,255),0,texx,texy);
copy.x=IntersectPoint.x;
copy.y=IntersectPoint.y;
copy.z=IntersectPoint.z;
copy.color=D3DCOLOR_ARGB(255,255,255,255);
copy.specular=0;
copy.tu=texx;
copy.tv=texy;

if (Result==CP_FRONT )
{
BackList[BackCounter++]=copy;
FrontList[FrontCounter++]=copy;
if (CurrentVertex!=0)
{
FrontList[FrontCounter++]=Poly->VertexList[CurrentVertex];
}
}

if (Result==CP_BACK)
{
FrontList[FrontCounter++]=copy;
BackList[BackCounter++]=copy;
if (CurrentVertex!=0)
{
BackList[BackCounter++]=Poly->VertexList[CurrentVertex];
}
}
}// end get intersect

else

{
if (Result==CP_FRONT)
{
if (CurrentVertex!=0)
{
FrontList[FrontCounter++]=Poly->VertexList[CurrentVertex];
}
}
if (Result==CP_BACK)
{
if (CurrentVertex!=0)
{
BackList[BackCounter++]=Poly->VertexList[CurrentVertex];
}
}
}
}// end for not onplane
}//end for loop

//OK THEN LETS BUILD THESE TWO POLYGONAL BAD BOYS :)

FrontSplit->NumberOfVertices=0;
BackSplit->NumberOfVertices=0;
// Reserve Memory for Front and Back Vertex Lists
FrontSplit ->VertexList= new D3DLVERTEX [FrontCounter];
BackSplit ->VertexList= new D3DLVERTEX [BackCounter ];

for (loop=0;loop<FrontCounter;loop++)
{
FrontSplit->NumberOfVertices++;
FrontSplit->VertexList[loop]=FrontList[loop];
}
for (loop=0;loop<BackCounter;loop++)
{
BackSplit->NumberOfVertices++;
BackSplit->VertexList[loop]=BackList[loop];
}

BackSplit->NumberOfIndices=(BackSplit->NumberOfVertices-2)*3;
FrontSplit->NumberOfIndices=(FrontSplit->NumberOfVertices-2)*3;

// Reserve Memory for Front and Back Index Lists
FrontSplit ->Indices= new WORD [FrontSplit->NumberOfIndices];
BackSplit ->Indices= new WORD [BackSplit ->NumberOfIndices];

// Fill in the Indices
WORD v0,v1,v2;
for (loop=0;loop<FrontSplit->NumberOfIndices/3;loop++)
{
if (loop==0)
{
v0=0;
v1=1;
v2=2;
}
else
{
v1=v2;
v2++;
}

FrontSplit->Indices[loop*3]=v0;
FrontSplit->Indices[(loop*3)+1]=v1;
FrontSplit->Indices[(loop*3)+2]=v2;
}

for (loop=0;loop<BackSplit->NumberOfIndices/3;loop++)
{
if (loop==0)
{
v0=0;
v1=1;
v2=2;
}
else
{
v1=v2;
v2++;
}

BackSplit->Indices[loop*3]=v0;
BackSplit->Indices[(loop*3)+1]=v1;
BackSplit->Indices[(loop*3)+2]=v2;
}
FrontSplit->Normal=Poly->Normal;
BackSplit->Normal=Poly->Normal;
}

We Have a Compiled Solid Leaf BSP Tree. What Next?


We now have a compiled BSP tree but have not yet written any functions to traverse or render it. Traversing the tree is exactly as it was before but with one big difference. You may remember that with the Solid Node compiler we had solid or empty 'leaf nodes' attached to the tree which contained no polygons, but did signify that if we ended up in such a node then we were in solid or empty space. I thought this was a huge waste of memory, which is why we now use the Node structure's Front and Back variables for a dual purpose. The Front variable either points to another Node in front of this one, or, IF the Node's 'IsLeaf' variable equals 1, it is an index into the Leaf array. This means we have to check before we move instead of after we move as we did before. In other words, instead of moving down the front tree and asking 'Is this a leaf we are currently in?' we must ask 'Is it a leaf or a node?' BEFORE we move down. It is the same with the 'Back' variable, which is either the index of another node (so we can go down there) OR it is set to -1, which is solid space, so we can not. If we didn't check before we moved then our renderer would try to access NodeArray[-1] when travelling down the back of a Node which has no children (is solid space).

Showing you the actual rendering function is one of those chicken and egg situations. It uses the PVS set to render, which we have not created yet, although you could of course write a renderer just as we did before which traverses the tree checking every node and rendering either behind or in front of it depending on the position of the camera relative to the node. We are NOT going to do that however, and this is why I covered earlier how the PVS set is stored. If you have forgotten then please refer back to remind yourself. It is not important yet to know how the PVS set is generated, but it is important to understand how the PVS set is stored once it has been generated.

Rendering the level works like this. First of all we call the RenderTree function, passing in as a parameter the position vector of the camera (or player etc). This position is then used to traverse the BSP tree (using the method described above: check first, then move) until it finds the leaf the camera is currently in. After that it calls 'DrawTree' which will draw the level using the current leaf's PVS set. This means we don't traverse the entire level as we did before, visiting every single node and drawing it, but instead just traverse the tree until we find the leaf the camera is in and call DrawTree to draw the polygons that can be seen from that leaf. Below I show the code to the 'RenderTree' function that traverses the tree using the 'check first and then go' method to find the leaf the camera is in. Once the leaf is found it calls 'DrawTree', which we will look at in a moment. Note that I do NOT use recursion to traverse the tree in this tutorial but instead perform the traversal in a 'while' loop. This should give us a bit of a speed advantage because we are losing all that stack pushing and popping, and should also remove any chance of a stack overflow during the rendering of really large levels. It still does exactly the same as the recursive functions we used in Part 1, it just does it in a loop. I will take you through this function a few lines at a time.

void RenderTree(D3DVECTOR pos)
{
int Node=0;
int leaf=0;

This function will be called once per frame to render the tree and is passed the position of the camera/player in the game world. This code finds the LEAF the camera is in and then calls DrawTree to render that leaf. We set 'Node' to zero at first, obviously, because we need to start testing our camera position at the root of the tree.
We now enter an infinite loop that will only terminate when the leaf the camera is in has been found. The function basically just checks the camera position against the current 'Node', then adjusts 'Node' to the new node/leaf and loops round again. When a leaf is found, the leaf is drawn (using 'DrawTree') and this function returns. The level will be completely rendered at this point. We will look at each 'case' one at a time.

while(1)
{
switch(ClassifyPoint(&pos,&PlaneArray[NodeArray[Node].Plane]))
{
case CP_FRONT:
if (NodeArray[Node].IsLeaf!=0)
{
leaf=NodeArray[Node].Front;
DrawTree(leaf);
return;
}
else
{
Node = NodeArray[Node].Front;
}
break;

If the camera position is in front of this 'Node' then we need to travel down this Node's front tree to further refine the search. HOWEVER, the front of this node may not point to another node and may in fact be the leaf we are looking for, so we check the Node's 'IsLeaf' variable to see if this is the case. If 'IsLeaf!=0' then we are at the end branch of this tree, and this Node's 'Front' variable actually contains the index (into the LeafArray) of the leaf we are in. This means our search is over. We simply call 'DrawTree' and pass in this index so that the DrawTree function can render the leaf's PVS set. Then we return from this function because the job is done.

If IsLeaf is zero however, then this means we can safely travel down the front of this tree (because there is another node there), so we set the 'Node' variable to the new node index and do another iteration of the 'while' loop using the new 'Node' to compare our camera position against.

If we are not in front of this node then we check to see if the camera position is behind the 'Node' instead. If it is then we do something very similar, as shown below.

case CP_BACK:
if (NodeArray[Node].Back==-1)
{
return;
}
else
{
Node = NodeArray[Node].Back;
}
break;

As with the 'CP_FRONT' case we have to check that we can actually go down the back of this node. If this Node's 'Back' variable equals -1 then there are no more nodes to traverse and the camera is in fact in solid space. This should NEVER happen, because it would mean that your camera is inside a wall or something like that, but we don't want any crashes before you implement your collision detection, because until then you could walk through a wall accidentally. To keep things safe I simply return if the camera is in solid space, which means nothing gets rendered. Like I said though, this should never happen.
If there is a back node though, then we adjust the Node variable to the new node and do another iteration of the 'while' loop, with the new node being used as the split plane to test.

The last case to test is the CP_ONPLANE case, which means the camera point is actually right on the split plane. In this case I just treat it exactly the same as CP_FRONT, meaning the code is the same as CP_FRONT.

case CP_ONPLANE:
if (NodeArray[Node].IsLeaf!=0)
{
leaf=NodeArray[Node].Front;
DrawTree(leaf);
return;
}
else
{
Node = NodeArray[Node].Front;
}
break;

default:
break;
}
}// end while
}// End Function


That's it. Quite a small function really, considering what it does. Here is the function in its entirety:-

void RenderTree(D3DVECTOR pos)
{
int Node=0;
int leaf=0;
while(1)
{
switch(ClassifyPoint(&pos,&PlaneArray[NodeArray[Node].Plane]))
{
case CP_FRONT:
if (NodeArray[Node].IsLeaf!=0)
{
leaf=NodeArray[Node].Front;
DrawTree(leaf);
return;
}
else
{
Node = NodeArray[Node].Front;
}
break;

case CP_BACK:
if (NodeArray[Node].Back==-1)
{
return;
}
else
{
Node = NodeArray[Node].Back;
}
break;

case CP_ONPLANE:
if (NodeArray[Node].IsLeaf!=0)
{
leaf=NodeArray[Node].Front;
DrawTree(leaf);
return;
}
else
{
Node = NodeArray[Node].Front;
}

default:
break;
}
}// end while
}// End Function


Rendering a Leaf : (A detailed look at the 'DrawTree' function)

Although we have not yet discussed how to create the actual PVS data for the Solid Leaf Tree, we have discussed how the PVS data will eventually be stored in memory. If you remember, we said that there will be one master array called 'PVSData' that will hold ALL the PVS visibility information for all the leafs. The PVS data is stored in the array in leaf order. In other words Leaf 1 has its visibility information first, followed by Leaf 2, etc., right up to the total number of leafs in the tree. Also remember that each leaf's visibility information will be a run of bytes containing 1 bit for each leaf visible from that leaf. Now you might think that to access a leaf's visibility information we could simply access the array using something like:-

BYTE * LeafVis=&PVSData[ThisLeaf*(NumberOfLeafs/8)];
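(As a quick aside, and purely as a hypothetical sketch: even this fixed-stride idea would need to round the byte count up to whole bytes, because NumberOfLeafs is rarely an exact multiple of 8. Something like the line below, which is NOT what we actually end up using.)

BYTE * LeafVis=&PVSData[ThisLeaf*((NumberOfLeafs+7)/8)]; // hypothetical: (NumberOfLeafs+7)/8 rounds up to whole bytes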

This would assume that each leaf has exactly the same amount of information in BYTES. After all, each leaf has to have a bit per leaf, so you would think that because one byte can hold 8 bits we simply divide the number of leafs by 8 to find out how many bytes each leaf has for its visibility information, and then multiply that by the current leaf we are trying to access the data for. This is NOT going to work because, remember, each leaf's visibility information will be 'Zero Run Length Encoded' as discussed earlier. This means each leaf could have a varying number of bytes depending on how many leafs that leaf can see. To get around this problem we record where each leaf's visibility information begins in the array when we are compressing and building the PVS array. We will look at this later when we actually create the PVS, but in order to understand the following function code (DrawTree) just know that the 'PVSIndex' field of the Leaf structure holds the index of the BYTE in the master PVS array at which that leaf's visibility information starts. We touched on the code for this function earlier, but let's have another look now that you know how the Solid Leaf Tree has been compiled and how the Leaf data structures have been filled out. The following function is the ONLY render function. It is the sole function responsible for putting the polygons on the screen. Everything that gets rendered gets rendered in this function. It will only get called once per frame and is called by the 'RenderTree' function we just looked at, after the leaf the camera is in has been located. RenderTree calls this function with the index of the leaf the camera is currently in. Here is the code that renders everything you see in the demo, a bit at a time.

void DrawTree(long leaf)
{
POLYGON *CurrentPoly;
int i;
long PVSOFFSET=LeafArray[leaf].PVSIndex;
BYTE *PVSPointer=PVSData;
PVSPointer+=PVSOFFSET;
long currentleaf=0;

Above you can see the code creates a BYTE pointer to point at the first byte of this leaf's visibility information. The leaf itself holds the number of bytes in from the beginning of the array (stored in PVSIndex), so this is the offset into the master PVSData array where this leaf's visibility information starts. We create a pointer to the PVSData array and simply add the offset on. PVSPointer now points at this leaf's PVS information. 'currentleaf' is set to zero. This variable keeps track of which leaf in THIS leaf's PVS information we are currently checking the visibility BIT of. When 'currentleaf' equals NumberOfLeafs we know we have checked all the leafs in this leaf's PVS information and nothing more has to be rendered.

The next bit will take a little explaining. Batching polygons into groups by the texture they use can be very important. Changing textures is one of the most expensive state changes D3D performs, as it can often cause much swapping of data from system memory to video memory when space is limited. When we find a polygon that is visible we can't just render it, because that would mean setting textures for EVERY polygon in the PVS data set. This is very costly. Instead we need to batch the polygons into groups by the texture they use, so for example we would render all the polygons that use Texture 1 together, followed by all the polygons that use Texture 2, and so on. To do this I have created an array of polygon pointers (one for each texture) that will be used to construct linked lists. I have 25 textures used in the demo, so I have an array of 25 polygon pointers that are defined globally like this:-

POLYGON * pTexturePolygonList[25];

You may remember that when we created our BSP we fed our polygons into the compiler using a linked list where every polygon was linked together using the polygon's 'Next' pointer. Because those polygons have since been copied into an array in linear order of the leafs to which they belong, this field is no longer being used. We will reuse it each frame to add the polygon to the linked list to which it belongs, depending on the texture it uses. We will step through the PVS data, but when we find a visible leaf, instead of rendering its polygons we will check which texture each polygon uses and add it to the appropriate linked list. In other words, if the current polygon being checked has 6 in its TextureIndex field we will add it to the linked list pointed to by pTexturePolygonList[6]. By the time we have traversed the PVS data for this leaf we will have 25 linked lists. We will set texture 1 and render all the polygons in linked list 'pTexturePolygonList[1]', then set texture 2 and render all the polygons in linked list 'pTexturePolygonList[2]', and so on. This means at most we will only be setting a new texture 25 times per frame instead of once per polygon.
We will see this in action shortly, but the next snippet of code in the 'DrawTree' function simply sets all these pointers to NULL each frame.

// Set All the Texture Batch Pointers to NULL
for (i=0;i<25;i++)
{
pTexturePolygonList[i]=NULL;
}

OK, that's done. Now it's time to check the visibility information for this leaf. We create a while loop that will exit once 'currentleaf' equals NumberOfLeafs, because this means we have checked all the visibility information and we now have our 25 linked lists of polygons ready to be rendered.

while (currentleaf<NumberOfLeafs)
{
if (*PVSPointer!=0)
{
for (i=0;i<8;i++)
{
BYTE mask=1<<i;
BYTE pvs=*PVSPointer;

if (pvs&mask)
{

First we check if the current byte is non-zero. If it is non-zero then some of the BITs in this byte are set to one, so at least one (but possibly all) of the leafs represented by this byte must be visible.
We need to find out which bits are set, so we create a for loop from 0-7 (one byte holds 8 leafs' visibility bits remember) to find out which bits have been set. We do this by creating a MASK that begins with a value of 1 and gets shifted left by one position each iteration of the for loop. By ANDing this mask with the PVS byte we can see if that BIT is set to one or zero. Each time through the for loop 'currentleaf' is also increased by one so we know which leaf we are checking. If the AND result is not zero it means this leaf has its bit set, so it is potentially visible.

As discussed earlier, potentially visible does not mean IS visible from the current location, so we can further refine the set (lose even more polys) by checking the potentially visible leaf's bounding box against the view frustum. I will show you how to write frustum rejection code for bounding boxes later, but this is the reason why, while compiling the Solid Leaf Tree, we stored the leaf's bounding box in the Leaf structure. We will see the code for the 'LeafInFrustum' function later. Just know for now that it returns true if part of this leaf is within the frustum and so should be rendered.

if (LeafInFrustum(currentleaf)==true || DontFrustumReject==true)
{


Above, the DontFrustumReject variable is used in the demo to disable frustum rejection so you can see the effect it has in the top down view. This is toggled by pressing 'F2'.

If we have made it here then the leaf is visible and all polygons belonging to this leaf should be rendered. So what we do is gather the polygons used by this leaf (remember a leaf's polygons are all stored next to each other in the PolygonArray) and add them to the appropriate texture polygon list.

unsigned long start=LeafArray[currentleaf].StartPolygon;
unsigned long finish=LeafArray[currentleaf].EndPolygon;
unsigned long Count=0;

for (Count=start;Count<finish;Count++) // Collect the polys and add to Texture List
{
CurrentPoly=&PolygonArray[Count];
CurrentPoly->Next=pTexturePolygonList[CurrentPoly->TextureIndex];
pTexturePolygonList[CurrentPoly->TextureIndex]=CurrentPoly;
}
}// end if leaf infrustum
}// end for if pvsdata
currentleaf++;
}// end for i;
PVSPointer++;
}

Above we add each polygon in the leaf to the texture polygon list indicated by the polygon's texture index. In other words, each poly in the leaf is checked and pushed onto the linked list to which it belongs. These lists will be rendered at the end of this function, rendering all polygons in order of texture.

After this we increase the current leaf for each BIT checked in the current byte, and when that is done we increment PVSPointer so that it points to the next byte (8 leafs) in the PVS data. And that is all there is to dealing with bytes in the PVS array that have some bits set.
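As a quick worked example of the bit test above (with a made-up value, just for illustration): suppose the byte at *PVSPointer were 0x29.

// Hypothetical PVS byte: 0x29 = binary 00101001
// i=0 : mask = 1<<0 = 00000001 -> bit set -> leaf (currentleaf+0) is potentially visible
// i=3 : mask = 1<<3 = 00001000 -> bit set -> leaf (currentleaf+3) is potentially visible
// i=5 : mask = 1<<5 = 00100000 -> bit set -> leaf (currentleaf+5) is potentially visible
// every other mask ANDs to zero -> those leafs are simply skipped for this byte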

If this byte in the leaf's PVS data does equal zero however, we don't have to check the bits at all. This means we have hit a Zero Run Length Encoded (compressed) section, and the NEXT byte in the PVS data array tells us how many zero bytes there are, or in other words, the length of the run.

else
{// we have hit a zero so read in next byte for run length
PVSPointer++;
BYTE RunLength=*PVSPointer;
PVSPointer++;
currentleaf+=RunLength*8;
}

}// end for while

Above shows what happens if the current byte of the PVS data is zero. We read in the next byte, which tells us how many zero bytes are in the run, and we adjust our 'currentleaf' counter so that it skips past all the leafs in the run. Remember that RunLength is the number of bytes in the run and each byte holds visibility bits for 8 leafs, which is why we adjust 'currentleaf' by 'RunLength*8'.
We then increase 'PVSPointer' so that it points at the next byte in the array for the next iteration of the while loop.
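As a quick worked example of the decompression (again with made-up values):

// PVS data at the current position: ... 0x00 0x03 ...
// *PVSPointer == 0        -> this is a compressed run, not a normal visibility byte
// PVSPointer++            -> now points at the run length byte (0x03)
// RunLength = 3           -> the run stands for 3 bytes worth of zeros
// PVSPointer++            -> now points at the first byte after the run
// currentleaf += 3*8 = 24 -> skip 24 leafs, none of which are visible from here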

That is the main while loop covered. Once we get here the while loop has finished and we have (up to) 25 linked lists of polygons, each list tied to a particular texture. Our next job is easy: we simply loop from 0-24 checking that each polygon list is not NULL, and if it isn't, it means there are polygons to be rendered for that texture. So we set the texture and do a while loop to render the polygons in each list. Here is the last bit of the 'DrawTree' function that actually draws ALL the polygons that can be seen this frame.

for (i=0;i<NumberOfTextures;i++)
{
if (pTexturePolygonList[i]!=NULL)
{
lpDevice->SetTexture(0,lpTextureSurface[i]);
CurrentPoly=pTexturePolygonList[i];
lpDevice->SetVertexShader( D3DFVF_LVERTEX );
while (CurrentPoly!=NULL)
{

lpDevice->DrawIndexedPrimitiveUP(D3DPT_TRIANGLELIST,0,CurrentPoly->NumberOfVertices, CurrentPoly->NumberOfIndices/3,&CurrentPoly->Indices[0], D3DFMT_INDEX16 ,&CurrentPoly->VertexList[0],sizeof(D3DLVERTEX));

CurrentPoly=CurrentPoly->Next;
}
}
}
lpDevice->SetTexture(0,NULL);
}//End Function

All done. Scene rendered. Above, we set the current texture in texture stage 0 and use the temporary variable 'CurrentPoly' to point at the head of each list. We then use 'CurrentPoly' to iterate through the linked list until we reach the end of that list, rendering each polygon we find using DrawIndexedPrimitiveUP.

DX8 NOTE
For those of you that have not used DrawIndexedPrimitiveUP before the syntax is:-

DrawIndexedPrimitiveUP(
D3DPRIMITIVETYPE PrimitiveType,
UINT MinIndex,
UINT NumVertices,
UINT PrimitiveCount,
CONST void* pIndexData,
D3DFORMAT IndexDataFormat,
CONST void* pVertexStreamZeroData,
UINT VertexStreamZeroStride);

'MinIndex' is the index of the minimum vertex in the vertex array that will be used by this call. Because every polygon in the scene has its own vertex list we want to use all the vertices for this polygon, so we start at zero. PrimitiveCount is a new parameter also. Instead of specifying the number of indices you now specify how many primitives have to be rendered to make up this polygon. Remember your polygon may have been broken down into many triangles by the 'AddPolygon' function. For triangle lists it works out that we can just divide the number of indices by three to get the primitive count. Above I have used D3DFMT_INDEX16 to tell Direct3D that my Indices array holds 16-bit (WORD) values. You can have 32-bit indices now if you need them. The second-to-last parameter is just a pointer to your vertex data, and the last parameter describes the size to step between each vertex to get to the next one in the array (in bytes). This is equal to the size of our D3DLVERTEX structure.
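To tie the parameters back to the call used above, here is the same DrawIndexedPrimitiveUP call written out with one parameter per line (nothing new, just an annotated copy of the call from the render loop):

lpDevice->DrawIndexedPrimitiveUP(
D3DPT_TRIANGLELIST, // PrimitiveType : we render indexed triangle lists
0, // MinIndex : lowest vertex index used (each poly owns its own vertex list)
CurrentPoly->NumberOfVertices, // NumVertices : vertices in this polygon's list
CurrentPoly->NumberOfIndices/3, // PrimitiveCount : 3 indices per triangle
&CurrentPoly->Indices[0], // pIndexData : the polygon's WORD index array
D3DFMT_INDEX16, // IndexDataFormat : 16 bit indices
&CurrentPoly->VertexList[0], // pVertexStreamZeroData : the polygon's vertices
sizeof(D3DLVERTEX)); // VertexStreamZeroStride : bytes between vertices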
All done. For completeness here is the entire DrawTree function:-

void DrawTree(long leaf)
{
POLYGON *CurrentPoly;
int i;
long PVSOFFSET=LeafArray[leaf].PVSIndex;
BYTE *PVSPointer=PVSData;
PVSPointer+=PVSOFFSET;
long currentleaf=0;

// Set All the Texture Batch Pointers to NULL
for (i=0;i<25;i++)
{
pTexturePolygonList[i]=NULL;
}

while (currentleaf<NumberOfLeafs)
{
if (*PVSPointer!=0)
{
for (i=0;i<8;i++)
{
BYTE mask=1<<i;
BYTE pvs=*PVSPointer;
if (pvs&mask)
{
if (LeafInFrustum(currentleaf)==true || DontFrustumReject==true)
{
unsigned long start=LeafArray[currentleaf].StartPolygon;
unsigned long finish=LeafArray[currentleaf].EndPolygon;
unsigned long Count=0;

for (Count=start;Count<finish;Count++)
{
CurrentPoly=&PolygonArray[Count];
CurrentPoly->Next=pTexturePolygonList[CurrentPoly->TextureIndex];
pTexturePolygonList[CurrentPoly->TextureIndex]=CurrentPoly;
}
}// end if leaf infrustum

}// end for if pvsdata
currentleaf++;
}// end for i;
PVSPointer++;
}// if pvspointer!=0; In other words if this is not a compressed byte

else

{// we have hit a zero so read in the next byte and see how long the run of zeros is
PVSPointer++;
BYTE RunLength=*PVSPointer;
PVSPointer++;
currentleaf+=RunLength*8;
}
}// end for while

//Render Polygons in the textures lists.

for (i=0;i<NumberOfTextures;i++)
{
if (pTexturePolygonList[i]!=NULL)
{
lpDevice->SetTexture(0,lpTextureSurface[i]);
CurrentPoly=pTexturePolygonList[i];
lpDevice->SetVertexShader( D3DFVF_LVERTEX );

while (CurrentPoly!=NULL)
{
lpDevice->DrawIndexedPrimitiveUP(D3DPT_TRIANGLELIST,0,CurrentPoly->NumberOfVertices, CurrentPoly->NumberOfIndices/3,&CurrentPoly->Indices[0], D3DFMT_INDEX16 ,&CurrentPoly->VertexList[0],sizeof(D3DLVERTEX));

CurrentPoly=CurrentPoly->Next;
}
}
}
lpDevice->SetTexture(0,NULL);
}//End Function



We have now written the functions to compile our BSP tree and render it using a PVS. We do of course still have to compile the PVS before we can use our render function, which is what we are about to start looking at. Before we do though, there is one more function I thought I had better put in for completeness.

In the previous tutorial we created a LineOfSight function that determined whether any solid space existed between two points. It was also used for collision detection and was a very handy function to have for all sorts of reasons. You could also use the LineOfSight function to create lightmaps or to see if a monster can see the player during the game.

Because we now have a Solid LEAF tree (using the 'check first then go' system) instead of the Node tree, this function had to be modified slightly. Once again, I described how this function worked in Part 1 (with pretty pictures and everything :)) so I am not going to explain it again, but simply show you the new, slightly modified version with a few comments on the changes.

bool LineOfSight (D3DXVECTOR3 *Start,D3DXVECTOR3 *End, long Node)
{
float temp;
D3DXVECTOR3 intersection;
NODE *CurrNode=&NodeArray[Node];
PLANE *Plane=&PlaneArray[CurrNode->Plane];

int PointA=ClassifyPoint(Start, Plane);
int PointB=ClassifyPoint(End, Plane);

PointA and PointB hold the classification of the Start and End points of the ray we are using to test against the tree. If BOTH points are on the current Node's plane then we simply treat the ray as being in front of the plane and recurse down the front of this node. We check this Node's IsLeaf variable to see whether there is another node down the front of this node. If there is, then we recurse down that tree with the ray. If IsLeaf is non-zero however, then there is not another node and the 'Front' variable holds the leaf index. We can return true because leafs are empty space and both points are obviously in this leaf.

if (PointA==CP_ONPLANE && PointB==CP_ONPLANE)
{
if (CurrNode->IsLeaf==0)// this is not a leaf so recurse
{
return LineOfSight(Start,End,CurrNode->Front);
}
else
{
return true; // This is a front leaf.Front Leafs are always empty so this is empty space
}
}


Next we check if PointA is in front of the node and PointB is behind it. First we check if the back of the node is solid space (if 'Back==-1') and return false if it is, because there is no line of sight if part of the ray is in solid space. If this is not the case, we split the ray with the plane (using Get_Intersect, described in detail in Part 1). We then check if there is a node to the front of this node as we did above. If there is another node there, then we return true only if BOTH fragments of the ray are in empty space. We do this by recursing with both ray segments and returning true only if they both return TRUE. If there is a leaf to the front of this node however, we know that the front ray fragment must be in empty space (so we don't have to check that), so we return true only if the back ray fragment is also in empty space.

if (PointA==CP_FRONT && PointB==CP_BACK)
{
if (CurrNode->Back==-1) return false;

Get_Intersect (Start,End,&Plane->PointOnPlane,&Plane->Normal,&intersection,&temp);

if (CurrNode->IsLeaf==0)
{
return LineOfSight(Start,&intersection,CurrNode->Front) && LineOfSight(End,&intersection,CurrNode->Back) ;
}
else
{
return true && LineOfSight(End,&intersection,CurrNode->Back);
}
}


If neither of the above is true then we check the other way round, seeing if PointA is behind the node and PointB is in front of it.

if (PointA==CP_BACK && PointB==CP_FRONT)
{
if (CurrNode->Back==-1) return false;
Get_Intersect (Start,End,&Plane->PointOnPlane,&Plane->Normal,&intersection,&temp);

if (CurrNode->IsLeaf==0)
{
return LineOfSight(End,&intersection,CurrNode->Front) && LineOfSight(Start,&intersection,CurrNode->Back) ;
}
else
{
return true && LineOfSight(Start,&intersection,CurrNode->Back);
}
}

If we get as far as here it must mean one of the points is on the plane and the other is not. In this case we treat the point ON the plane as being on the same side as the point that is not on the plane. In other words, we send the ray down the side of the tree that the other point lies in.


if (PointA==CP_FRONT || PointB==CP_FRONT)
{
if (CurrNode->IsLeaf==0)
{
return LineOfSight(Start,End,CurrNode->Front);
}
else
{
return true;
}
}

else

{
if (CurrNode->Back==-1)
{
return false;
}
else
{
return LineOfSight(Start,End,CurrNode->Back);
}
}
return true;
}
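Just to make the usage clear, here is a hypothetical example of calling it (the positions are made up; the important part is that the initial call starts at the root node, index 0, just like the other tree functions):

D3DXVECTOR3 MonsterPos(-20.0f, 5.0f, 30.0f); // made-up positions for illustration
D3DXVECTOR3 PlayerPos ( 15.0f, 5.0f, 42.0f);

if (LineOfSight(&MonsterPos,&PlayerPos,0)) // 0 = start the test at the root node
{
// no solid space between the two points, so the monster can see the player
}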

Thankfully we have now ported over all our Solid Node based functionality to work with the Solid Leaf Tree, which means it's time to move on to the interesting stuff. Please make sure you understand everything we have talked about above before reading on, because we are now going to cover calculating the PVS for the tree and it is vital that you understand how the Solid Leaf Tree works and how it is laid out in memory. With that said, let's move on to the good stuff.

Creating a PVS for the Solid Leaf Tree


Creating a PVS is not a simple task to undertake. It's one of those things that seems so hard when you are struggling to understand it, and then seems so simple once you do understand it that you cannot believe you struggled with it in the first place. Unfortunately, because of how little information there was on creating a PVS on the net at the time of writing, it took me 3 months to grasp the concepts and implement the code for this tutorial. I hope I can explain it to you in a way that you can understand easily and avoid the headaches I had.

Basically, the process of pre-calculating a PVS is broken down into two steps: Portal Generation is the first and Anti-Penumbra Clipping is the second. We will of course start at step one, Portal Generation.

What is a Portal ?
In order to calculate which leafs in the tree can see each other we need to know the GAPS that exist between those leafs. For example, if you have 2 rooms with a doorway between them, we need to know how much of room 2 can be seen through the doorway whilst standing in room 1. The problem is that this is the only information we do not have about our 3D world. Our polygon set tells us where there are NOT gaps, but does not tell us where there ARE gaps. To get around this problem we will create temporary polygons called Portals (that's right! Portals are just polygons) that will be the size and dimensions of all the gaps between all the leafs in the tree. If you think back to the two rooms example with the doorway, we need to somehow create a polygon that would fit into the doorway to plug it up, so to speak. These portals will not be rendered or even kept after the PVS has been calculated, but they are needed because if we can create a portal to plug up the doorway, then getting the dimensions of this portal gives us the dimensions of the doorway, which is exactly what we need.

As you can probably imagine, creating portal polygons to fit into all the gaps in our level is going to create many, many portals. In other words, if a room has 5 doorways we will need to create a portal in each doorway, and so on for every leaf.
If you look at the diagram below you should be able to see where the portals should be. I have highlighted them in red. You can see that the portals themselves lie on the node that divides them. Below we have three leafs. You can see that Portal 1 describes the entrance between leaf E,F,d1 and leaf A,c2,d2 and vice versa. Portal 2 describes the entrance from A,c2,d2 into leaf b,c1 and vice versa.

Because of the nature of BSP trees, portals between two leafs will always lie on one of the split planes in our tree. Above, Portal 1 is on Node F's split plane and Portal 2 is on Node A's split plane. So how do we create these portals?

Portal Generation


First of all I am going to write out a list of the steps involved in Portal Generation, which may scare you half to death, but do not worry, I will show this being done in a moment with lots of diagrams :). To create a complete portal set for our Solid Leaf BSP tree we have to do the following.

1. We have to build a large polygon (a potential portal) on each node plane in our tree. This polygon should be large enough that it encompasses all the leafs down the front and the back of the node. We do this step for each node. This polygon is an initial portal that we will later send through the tree and clip down to size.

2. For each node, once we have created the initial large portal we then have to send it down the BSP tree starting at the root node. The great thing about the Solid Leaf Tree is that because all the space is divided into solid/empty areas, we can simply send the portal down the tree and repeatedly clip it against the other nodes, knowing that any portal fragments that end up in solid space can be removed, because a portal obviously cannot exist in the middle of a wall. What this means is that we can create a large square portal, send it down the BSP tree and clip it against all the nodes of our tree. As the portal gets recursively split, any portal fragments that end up in solid space are deleted, and we are left with the shape and size of the portal as it should be to fit the gap between the two leafs the portal is being generated for.
However, the clipping of the portal to the tree is a little complex, so I will describe it here in a little detail and then we will look at it in much more detail later when we write the clip function. Clipping the portal to the tree involves traversing each node in the tree and testing the portal against the node plane to see whether it is in front, behind, spanning or on the plane. This is how the cases are dealt with:

CP_FRONT:
If the portal (or portal fragment) is in front of the current test node then we either send it down the front tree if another front node exists, OR, if there is a leaf there, it means this portal has ended up in that leaf and we store the index of the leaf in the portal. Each portal keeps track of the leaf/leafs that it ends up in, as these describe the leafs the portal connects. Valid portals must be in two leafs, so any portal that only ends up in one leaf will be deleted. As an example, you cannot have a doorway in your house that can only be seen from one room, can you? There must be something on the other side. If a portal only ends up in one leaf then it is buried in a wall or something like that, so it is removed.

CP_BACK:
If the portal is behind the current node then we either send it down the back tree if a node exists there, OR, if this node's 'Back' equals -1, the portal has ended up in solid space and can be deleted.

CP_ONPLANE:
This one is a little tricky. If the portal is on the plane then we first send the portal down the front tree. The portal can get split and carved whilst travelling down the front tree, so the returned result is no longer one portal but a list of portal fragments that survived the front tree. We then send each of these portal fragments down the back tree one at a time, clipping each fragment to the back tree. Any fragments that survive both trees are returned as portals.

CP_SPANNING:
If the portal is spanning the node plane then we split the portal into two (a front and a back portal), send the front portal down the front tree and the back portal down the back tree. We then return the resulting fragments from both trees. Don't panic, you will see the code shortly :). It's not as bad as it sounds.

3) At this point we will have a list of many, many portals. Not all of them will be valid however, as some may only exist in one leaf. Portals must be in two leafs, so we then remove any portals that are only in one leaf.

4) At this point we have a valid list of all the portals in our scene. Each portal will record which leafs it is in, and for ease later on we will also store in each leaf which portals it contains (a rough sketch of this overall flow is shown just after this list). Just as a reminder, let's have a look at the 'LEAF' structure again and you should be able to see what those other fields were for.

struct LEAF{
long StartPolygon;
long EndPolygon;
long PortalIndexList[50];
long NumberOfPortals;
long PVSIndex;
BOUNDINGBOX BoundingBox;
};

The two fields we have not yet discussed are NumberOfPortals and PortalIndexList. NumberOfPortals simply holds the number of portals that this leaf has in it. Remember that a room could have many, many doorways. PortalIndexList is just an array of indexes into the master portal array. We will talk about the PORTAL structure in a moment, but just know for now that this master array is just like the other master arrays (LeafArray, NodeArray etc.) except that it holds pointers to PORTALs. When the valid portals are created they will be added to this array.
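Just so you can see how these pieces will fit together, here is a rough sketch of the overall flow that steps 1 to 4 describe. This is NOT the real driver function (we will write that properly later, and names like NumberOfNodes and the book-keeping comments are just assumptions here); it is only meant to show the shape of the process:

for (long Node=0;Node<NumberOfNodes;Node++) // NumberOfNodes is assumed to exist for this sketch
{
// Step 1 : build one big portal on this node's plane, sized from its bounding box
PORTAL *InitialPortal=CalculateInitialPortal(Node);

// Step 2 : clip it through the whole tree, starting at the root (node 0).
// We get back a linked list of surviving fragments (or NULL).
PORTAL *Fragments=ClipPortal(0,InitialPortal);

// Steps 3 & 4 : keep only fragments that landed in exactly two leafs. Those are
// real portals and get added to the master portal array and registered with the
// PortalIndexList of both leafs they connect.
while (Fragments!=NULL)
{
PORTAL *NextFragment=Fragments->Next;
if (Fragments->NumberOfLeafs==2)
{
// add to the master portal array and register with both owner leafs
}
else
{
DeletePortal(Fragments); // only seen from one side, so not a valid portal
}
Fragments=NextFragment;
}
}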

If the above explanation has made your head spin a little bit, don't worry, as we are now going to put it into practice with a load of diagrams so you can see step by step how the Portal Generation system works.

Building a Portal


The first step to creating a portal, mentioned above, is to create a polygon on each node plane that is large enough to encompass the data set of that node. This is why we stored the bounding box information of each node during the tree compilation, because this information is needed to create such a polygon. We will start at the root of the tree, so the first node we will create a portal for is Node F. Diagram K shows what this initial portal will look like.


PLEASE NOTE: The Walls in the Above diagram are all facing INWARDS.

Now, we said above that we have to traverse the tree starting at the root, classifying the portal against each plane. The first node this polygon is tested against is NODE F, which obviously returns ON_PLANE because this is the node the portal was created from in the first place. I said above that if a portal is on the plane of a node then we have to send the portal down both sides of that node's tree. This means we first send it down the front tree of F, so the next node we test it against is Node E, whose plane this portal spans.

The portal is split into Portal A and Portal B. We send Portal A down the back of E, but this is solid space, so Portal A gets deleted because it can't be a valid portal. Portal B however is the front split, so we now send that down Node E's front tree where it comes to Node d1. The portal is spanning Node d1 also, so it has to be split as well.
Portal B is now split into Portals C and D. D is behind Node d1, so trying to send it down the back of d1 results in it being in solid space, and it is deleted because it cannot be a valid portal. Portal C however is sent down Node d1's front, at which point it ends up in leaf [F,E,d1]. The work is now done for this side of Node F's tree because we have gone right down the front tree of Node F, carving the portal down to size as we go, until we hit a leaf; we record in that portal which leaf it is in and return. We are now back at Node F having done the front tree. We now have a portal returned which is nearly the right size, but not quite. Part of the portal is still in wall F when the portal should start where A meets F. Because this polygon was on plane with Node F we now have to send the portal fragment down the back tree of F and do the same thing again.

First we come to Node C, which Portal C is in front of, so we send it down Node C's front tree. Next we come to Node A, which Portal C spans.

So we split Portal C into Portal E and Portal F. Portal F is sent down Node A's front list, where it is tested against Node d2 and found to be in front of it. It is then passed down Node d2's front list, where it ends up in leaf [c2,A,d2]. This means that during its life this portal has ended up in leaf [c2,A,d2] and leaf [F,E,d1], and so it is a valid portal. Portal F in the above diagram is exactly how our portal should be. It is important to understand also that when a portal is split into two child portals, the child portals inherit the history of the parent portal. In other words, if the parent portal has already ended up in a leaf, then the two children also have that data copied into them, so that they too count as having ended up in the same leaf as the parent. This is similar to how we split the polygons in the BSP compiler. If you remember, we carried the history of the parent polygon into the two child splits, so that if the parent had been used as a split plane then the two child splits were marked as having been used as a split plane also. With the child portals though, instead of inheriting whether the polygon has been used as a split plane, we inherit the journey of the parent portal, or more simply, which leafs the parent portal has landed in up to that point.
Don't forget though that at Node A we also had a back split, 'Portal E', to consider. This was the bit of the original portal that seemed to be buried in wall F. We send it down Node A's back list, where it comes to Node B. It is behind Node B and therefore in solid space, and so is deleted.

That's the basic idea of portal generation. We have just created the portal for Node F, but we have to step through every node in the tree and do this for every node. Looking at the diagram above, try to see what happens when you build a portal on Node E. This node should not have any portals that end up in two leafs, and you will find that using this system it won't have. Oh hell, let me draw some more diagrams just to show you what I mean. We will now see what happens when we try to create portals for a node that will not have any valid portals. Let's build a portal on Node E and see what our portal generator does with it.



Once again we build a portal on Node E (called Portal A), initially large enough to contain all the data (polygons) in its front and back trees. We send it in at the root node (NODE F). Portal A is spanning Node F so we have to split it into Portals B and C. We send Portal B down the front of Node F, where it comes to Node E, whose plane it is on, so we have to process it down both sides. We send it down Node E's front first, where it comes to Node d1. It is spanning d1, so Portal B gets split into Portals D and E. D is sent down the front, where it ends up in leaf [F,E,d1], and Portal E is sent down Node d1's back, where it is in solid space and so gets deleted.



Remember that Portal D was originally sent here from Node E (the node whose plane it is on), so it is now returned there, where it would be sent down the back list of Node E. But there is only solid space down the back of Node E, so we don't send it down there and instead just return it. Later, when we test the portals, we will find that this portal only ended up in one leaf [F,E,d1], so it is not a valid portal and should be deleted. And there you have it: no portals were generated for Node E, which is correct.

Creating the Initial Portal at Each Node


The first part of the puzzle to solve is how to create the initial portal on each node plane that is large enough to encompass that node's data set. Below I will take you through, a couple of lines at a time, the code for the 'CalculateInitialPortal' function, but first let's just have a look at what the PORTAL structure looks like. It is very similar to our POLYGON structure but has a few extra fields.

struct PORTAL
{
D3DLVERTEX *VertexList;
D3DXVECTOR3 Normal;
WORD NumberOfVertices;
WORD NumberOfIndices;
WORD *Indices;
PORTAL * Next;
PORTAL * Prev;
BYTE NumberOfLeafs;
long LeafOwnerArray[2];
};

The first five fields of this structure are the same as the POLYGON structure. This allows us to easily cast a PORTAL pointer to a POLYGON pointer so that we can pass portals into our 'SplitPolygon', 'ClassifyPoly' & 'ClassifyPoint' functions. This structure also has a Next and a Prev pointer that will be used to point to other portals in linked lists. We use the previous pointer so that we can remove portals from a linked list without destroying the chain. We will see this in action shortly.
The next variable is 'NumberOfLeafs', which is incremented every time the portal being created ends up in a leaf. If at the end of the portal creation this field does not equal 2 then this is not a valid portal, because only one side of it is in a leaf. Portals must exist in two leafs. The final field, 'LeafOwnerArray', is an array of indexes into the LeafArray describing which two leafs the portal is in; in other words, which two leafs this portal is the doorway between.
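As a small illustration of why the matching layout matters: later on you will see portals passed straight into the Part 1 polygon routines with nothing more than a cast, for example (this exact call appears in the 'ClipPortal' function we look at shortly):

ClassifyPoly(&PlaneArray[NodeArray[Node].Plane],(POLYGON *)Portal);

Because the vertex list, normal and counts sit at the same offsets in both structures, ClassifyPoly can treat the portal as an ordinary polygon.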

Let's have a look at the code for the 'CalculateInitialPortal' function. This function is called for each node in the tree during the portal generation process. We pass in the node we want to generate the portal for. Remember that this initial portal is a large portal that encompasses the node's data set. It will be carved up as we send it down the tree and may end up getting carved into many real portals.

PORTAL * CalculateInitialPortal(long Node)
{
D3DXVECTOR3 MaxP,MinP,CB,CP;
MaxP=NodeArray[Node].BoundingBox.BoxMax;// Get this Nodes Bounding Box ranges
MinP=NodeArray[Node].BoundingBox.BoxMin;
D3DXVECTOR3 PlaneNormal=PlaneArray[NodeArray[Node].Plane].Normal;


CB=(MaxP+MinP)/2;
float DistanceToPlane=D3DXVec3Dot(&(PlaneArray[NodeArray[Node].Plane].PointOnPlane-CB),&PlaneNormal);
CP=CB+(PlaneNormal*DistanceToPlane);

First we get the node's bounding box MAX and MIN points and use them to calculate the centre of the bounding box (CB). Next we use the dot product to find the distance from CB to the plane. This is just standard dot product stuff, so if you are not familiar with it you may want to read our Dot Product tutorial. Anyway, if we create a vector from the box centre point (CB) to a point known to be on the plane (we have this stored in the node's plane structure) and then dot that vector with the plane normal, we get back the shortest (signed) distance from CB to the plane. If we travel that distance in the direction of the plane normal from CB we will be on the plane at point CP. CP will be the centre of the portal we are going to create. Look below at Figure Q, which shows a portal being calculated (side view, like a cross section).
You can see from this diagram where point CP actually is. We now have the centre of our portal, but we still need to find where the vertices are going to go. The portal we create will be square, so we need four vertices. To do this we have to create a local axis for the portal. We already have the portal's look vector (the normal), but we need to create its right vector (U in the diagram) and up vector (V in the diagram).
Performing the cross product on two vectors returns a vector that is orthogonal to both of them. In the above diagram you can see, then, that if we feed in the plane normal (which will also be the portal normal) and the world UP vector, we get back a vector that is orthogonal to both (vector U). This is our right vector. This then means that if we perform the cross product on the normal and this new right vector, the cross product will return the up vector (vector V) in the diagram, because this is orthogonal to the other two axes. Although this works fine in the above example, problems arise if, for example, the plane normal is the same as the world up vector, because feeding the cross product two parallel vectors returns a useless zero-length result. This is because if both axes are the same then there is an infinite number of orthogonal vectors that could be returned. To get around this we check the plane normal to decide whether we are going to use the world Y axis (like above), the world X axis or the world Z axis to cross with the normal to get vector U. Below we check the normal of the plane and create a vector that is LEAST aligned with the plane normal. This vector, when crossed with the plane normal, will return vector U in the above diagram.

D3DXVECTOR3 A=D3DXVECTOR3(0.0f,0.0f,0.0f);
if( fabs(PlaneNormal.y) > fabs(PlaneNormal.z) )
{
if( fabs(PlaneNormal.z) < fabs(PlaneNormal.x) )
{
A.z = 1;
}
else
{
A.x = 1;
}
}
else
{
if (fabs(PlaneNormal.y )<= fabs(PlaneNormal.x) )
{
A.y = 1;
}
else
{
A.x = 1;
}
}

At this point, vector 'A' will contain either (1,0,0), (0,1,0) or (0,0,1), depending on which world axis is least aligned with the plane normal. All we have to do now is cross vector A with the plane normal to get vector U (our right vector). We normalize U (make it a unit vector) and then cross U with the plane normal to get vector V, our up vector, once again normalizing the result.

D3DXVec3Cross (&U,&A,&PlaneNormal);
D3DXVec3Normalize(&U,&U);
D3DXVec3Cross (&V,&U,&PlaneNormal);
D3DXVec3Normalize(&V,&V);
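As a quick sanity check, here is a worked example with a made-up normal (not taken from the demo): for a horizontal plane with PlaneNormal = (0,1,0), the selection code above gives A = (1,0,0), and the two cross products then produce:

// PlaneNormal = (0,1,0), A = (1,0,0)  (hypothetical values for illustration)
// U = A x PlaneNormal = ( 0, 0, 1) -> right vector, lying in the plane
// V = U x PlaneNormal = (-1, 0, 0) -> up vector, also lying in the plane
// Both happen to be unit length already, so the normalize calls leave them unchanged.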

We now have our U and V vectors, which along with our plane normal describe the local axes of the portal. At this point I am going to show Figure Q again, to save you scrolling back up every time I refer to the diagram.
Next we get the length of the vector from the bounding box centre (CB) to the node bounding box maximum point. This is the distance from the centre of the bounding box to a box corner; in other words, half the distance from the top right of the bounding box to the bottom left. This distance is how much we need to scale our U and V vectors, so that CP+V describes the top edge of the portal, CP-V describes the bottom edge, CP+U describes the right edge and CP-U describes the left edge of the portal.

D3DXVECTOR3 BoxHalfLength=(MaxP-CB);
float Length=D3DXVec3Length(&BoxHalfLength);
U=U*Length;
V=V*Length;

With the vectors scaled like this we can create the four vertices for our portal like so:-

D3DXVECTOR3 P[4];
P[0] = CP + U - V;// Bottom Right
P[1] = CP + U + V;// Top Right
P[2] = CP - U + V;// Top Left
P[3] = CP - U - V;// Bottom Left

That's the vertices created for the portal. Now we just fill in the PORTAL structure in much the same way we fill in the POLYGON structure in the 'AddPolygon' function. We break the square into two triangles indexed by 6 indices.

PORTAL *Portal=new PORTAL;
ZeroMemory(Portal,sizeof(PORTAL));

Portal->Normal=PlaneNormal;
Portal->NumberOfVertices=4;
Portal->NumberOfIndices=6;
Portal->VertexList = new D3DLVERTEX [Portal->NumberOfVertices];
Portal->Indices = new WORD [Portal->NumberOfIndices ];

for (int i=0;i<4;i++)
{
Portal->VertexList[i].x=P[i].x;
Portal->VertexList[i].y=P[i].y;
Portal->VertexList[i].z=P[i].z;
Portal->VertexList[i].color=D3DCOLOR_RGBA(rand()%255,rand()%255,rand()%255,240);
Portal->VertexList[i].specular=0;
Portal->VertexList[i].tu=0;
Portal->VertexList[i].tv=0;
}
Portal->Indices[0]=0;
Portal->Indices[1]=1;
Portal->Indices[2]=3;
Portal->Indices[3]=3;
Portal->Indices[4]=1;
Portal->Indices[5]=2;
Portal->Next=NULL;
Portal->Prev=NULL;
Portal->NumberOfLeafs=0;
return Portal;
}

That's the 'CalculateInitialPortal' function done then. I will now show it in its complete form (without my interruptions every couple of lines) so you can follow it more easily.

PORTAL * CalculateInitialPortal(long Node)
{
D3DXVECTOR3 MaxP,MinP,CB,CP;
MaxP=NodeArray[Node].BoundingBox.BoxMax;// Get this Nodes Bounding Box ranges
MinP=NodeArray[Node].BoundingBox.BoxMin;
D3DXVECTOR3 PlaneNormal=PlaneArray[NodeArray[Node].Plane].Normal;


CB=(MaxP+MinP)/2;
float DistanceToPlane=D3DXVec3Dot(&(PlaneArray[NodeArray[Node].Plane].PointOnPlane-CB),&PlaneNormal);
CP=CB+(PlaneNormal*DistanceToPlane);


D3DXVECTOR3 A=D3DXVECTOR3(0.0f,0.0f,0.0f);
if( fabs(PlaneNormal.y) > fabs(PlaneNormal.z) )
{
if( fabs(PlaneNormal.z) < fabs(PlaneNormal.x) )
{
A.z = 1;
}
else
{
A.x = 1;
}
}
else
{
if (fabs(PlaneNormal.y )<= fabs(PlaneNormal.x) )
{
A.y = 1;
}
else
{
A.x = 1;
}
}

D3DXVec3Cross (&U,&A,&PlaneNormal);
D3DXVec3Normalize(&U,&U);
D3DXVec3Cross (&V,&U,&PlaneNormal);
D3DXVec3Normalize(&V,&V);

D3DXVECTOR3 BoxHalfLength=(MaxP-CB);
float Length=D3DXVec3Length(&BoxHalfLength);
U=U*Length;
V=V*Length;

D3DXVECTOR3 P[4];
P[0] = CP + U - V;// Bottom Right
P[1] = CP + U + V;// Top Right
P[2] = CP - U + V;// Top Left
P[3] = CP - U - V;// Bottom Left
PORTAL *Portal=new PORTAL;
ZeroMemory(Portal,sizeof(PORTAL));

Portal->Normal=PlaneNormal;
Portal->NumberOfVertices=4;
Portal->NumberOfIndices=6;
Portal->VertexList = new D3DLVERTEX [Portal->NumberOfVertices];
Portal->Indices = new WORD [Portal->NumberOfIndices ];

for (int i=0;i<4;i++)
{
Portal->VertexList[i].x=P[i].x;
Portal->VertexList[i].y=P[i].y;
Portal->VertexList[i].z=P[i].z;
Portal->VertexList[i].color=D3DCOLOR_RGBA(rand()%255,rand()%255,rand()%255,240);
Portal->VertexList[i].specular=0;
Portal->VertexList[i].tu=0;
Portal->VertexList[i].tv=0;
}
Portal->Indices[0]=0;
Portal->Indices[1]=1;
Portal->Indices[2]=3;
Portal->Indices[3]=3;
Portal->Indices[4]=1;
Portal->Indices[5]=2;
Portal->Next=NULL;
Portal->Prev=NULL;
Portal->NumberOfLeafs=0;
return Portal;
}

That wasn't so bad was it ?

We now have a function that can create the initial portal. Next we will look at the function that actually processes the portal down the tree and creates the final portals. This function, called 'ClipPortal', is the one function that generates our portals. It is a recursive function, and all you have to do is pass it a portal (our initial portal) and it will return a linked list of real portals (fragments of the initial portal) that survived the clipping process. If the function returns NULL then none of the initial portal survived. After we talk about the code for the 'ClipPortal' function we will take a look at the parent function that calls both 'CalculateInitialPortal' & 'ClipPortal' for every node in the tree and generates the master portal array that will be used for the PVS set.

I will try and go easy on you explaining this next function, because this is the real MEAT of the portal generator and is the function that does all the work. As a result it can look a bit daunting at first glance, but I will take it real slow.

The 'Clip Portal' Function (Pretty Scary)


Actually, the 'ClipPortal' function is very similar to the BSP compiler in many ways. We simply send the portal through the tree, classifying it against each node. All the portal related functions are contained in 'Portal.cpp'. Anyway, here it is, a few lines at a time:-

PORTAL * ClipPortal(long Node,PORTAL *Portal)
{
PORTAL *PortalList=NULL,*FrontPortalList=NULL,*BackPortalList=NULL,*Iterator=NULL,*FrontSplit=NULL,*BackSplit=NULL;

switch (ClassifyPoly(&PlaneArray[NodeArray[Node].Plane],(POLYGON *)Portal))
{

We will call ClipPortal initially from our BuildPortals function (which we will look at later), passing in the initial portal we created by calling 'CalculateInitialPortal'. We will also pass in a node value of zero because we want the clipping to start at the root node of the BSP tree. Next we do something you should be very familiar with by now: we classify the portal against the node plane. Notice above that because the first part of our PORTAL structure is the same as our POLYGON structure, we can safely cast the PORTAL to type POLYGON to save having to rewrite the functions to deal with portals. The rest of the function is simply divided into four 'case' blocks for our switch statement, with instructions on what to do if the portal is CP_FRONT, CP_BACK, CP_SPANNING or CP_ONPLANE. We will look at these cases one at a time, starting with CP_FRONT. It may also help to refer back to Diagrams K through P while reading the code, as that is where we showed the portal being clipped graphically, which may help you picture it in your head.

CP_FRONT

If the portal is to the front of the node then we have two possible courses of action. If there IS a front node (IsLeaf==0) then we simply call this function again recursively, using the 'Front' node index to send the portal down the front tree. The function returns a linked list of portal fragments that survived the front tree, which we point to using 'PortalList'. The function may return NULL if nothing of this portal survives the recursive clipping process. We then return this linked list from the function.
However, if there is a leaf to the front of this node (IsLeaf!=0) then the portal has made it to a leaf (empty space) and has finished its journey in this part of the tree. We record in the portal structure's 'LeafOwnerArray' the index of the leaf it has landed in and also increment the portal's 'NumberOfLeafs' variable so that we can keep track of how many leafs it lands in. Remember that portals that only end up in one leaf will be removed at the end because they are not valid portals. Portals must end up in two leafs.

After we have recorded the leaf information in the portal we simply return the portal.

case CP_FRONT:

if (NodeArray[Node].IsLeaf==0)// Not a Leaf
{
PortalList=ClipPortal(NodeArray[Node].Front,Portal);
return PortalList;
}
else // Is a Leaf
{
Portal->LeafOwnerArray[Portal->NumberOfLeafs]=NodeArray[Node].Front;
Portal->NumberOfLeafs++;
Portal->Next=NULL;
Portal->Prev=NULL;
return Portal;
}

break;

Phew, that was nice and easy. Luckily the CP_BACK case is equally stress free.

CP_BACK

In order to send the portal down the back tree we first have to check that a back tree actually exists. If it does, then we send the portal down the back tree by calling the 'ClipPortal' function again with the new back node. The function returns a linked list of portal fragments that survive the tree, or NULL if nothing of this portal survives. We then return this linked list.
If there is no back tree however, it means that this portal has ended up in solid space and so cannot possibly be a portal. In that case we delete the portal and return NULL. This portal has died. (RIP my young portal friend.)

case CP_BACK:
if (NodeArray[Node].Back!=-1)// another Node exists
{
PortalList=ClipPortal(NodeArray[Node].Back,Portal);
return PortalList;
}
else// we are in solid space
{
DeletePortal ( Portal);
return NULL;
}
break;

So, as you can see, if a portal (or more likely a portal fragment) ends up in the front or back 'case' it is either passed down the front or back of the tree, deleted, or returned having had its leaf information saved. It is not very likely that our initial portal will ever fall straight into these two cases, because it is very large and will usually be spanning several of the planes in the tree. The next 'case' (CP_SPANNING) is the code that is responsible for splitting the portals into fragments and sending each fragment down the tree, keeping track of which fragments survive and returning them all from the function in a linked list.

CP_SPANNING:

If the portal is spanning the node plane then we have to split the portal into two fragments. We also want to remove the parent portal because it is no longer needed. As with splitting the polygons in our BSP compiler, child portal splits retain the information of the parent that created them. Remember the parent portal may already have been recorded as having been in a leaf, which means the child splits have also been in that leaf because they were part of the parent at the time. To do this we simply copy the parent's 'LeafOwnerArray' and 'NumberOfLeafs' fields into both child split portals before we delete the parent portal.

case CP_SPANNING:

FrontSplit=new PORTAL;
BackSplit=new PORTAL;
ZeroMemory(FrontSplit,sizeof(PORTAL));
ZeroMemory(BackSplit,sizeof(PORTAL));

SplitPortal(Portal,&PlaneArray[NodeArray[Node].Plane],FrontSplit,BackSplit);
DeletePortal (Portal);

You can see above that we create two new portals, 'FrontSplit' and 'BackSplit', and pass them into the 'SplitPortal' function to be filled. The parent portal is then deleted.

The 'SplitPortal' function is just a wrapper that calls 'SplitPolygon' and then copies the additional portal information ('LeafOwnerArray' & 'NumberOfLeafs') into the child splits, as mentioned above.
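The tutorial doesn't list SplitPortal at this point, but based on the description above a minimal sketch of it might look like the following. The exact signature of SplitPolygon is the one from Part 1 and is assumed here; the only point being illustrated is the copying of the portal specific fields:

void SplitPortal(PORTAL *Portal,PLANE *Plane,PORTAL *FrontSplit,PORTAL *BackSplit)
{
// Let the Part 1 polygon splitter do the geometric work. The first fields of
// PORTAL and POLYGON match, so the casts are safe (see the structure notes earlier).
SplitPolygon((POLYGON *)Portal,Plane,(POLYGON *)FrontSplit,(POLYGON *)BackSplit);

// The child fragments inherit the parent's journey so far: which leafs it has
// already landed in, and how many.
FrontSplit->NumberOfLeafs=Portal->NumberOfLeafs;
BackSplit->NumberOfLeafs=Portal->NumberOfLeafs;
for (int i=0;i<Portal->NumberOfLeafs;i++)
{
FrontSplit->LeafOwnerArray[i]=Portal->LeafOwnerArray[i];
BackSplit->LeafOwnerArray[i]=Portal->LeafOwnerArray[i];
}
}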

We now have two portals to process from here, 'FrontSplit' and 'BackSplit'. We deal with them one at a time, so first we deal with the front split.

if (NodeArray[Node].IsLeaf==0)//There is another Front NODE
{
FrontPortalList=ClipPortal(NodeArray[Node].Front,FrontSplit);
}
else// Its about to get pushed into a Leaf
{
FrontSplit->LeafOwnerArray[FrontSplit->NumberOfLeafs]=NodeArray[Node].Front;
FrontSplit->NumberOfLeafs++;
FrontSplit->Prev=NULL;
FrontSplit->Next=NULL;
FrontPortalList=FrontSplit;
}

This should look a bit familiar to you by now. The first thing we do is send the front split down the front tree of the current node, if a front tree exists. If not, then the front split has ended up in a leaf, so we record in the portal which leaf it has ended up in. The idea is that we send the front split down the front tree and get returned a linked list of fragments that survived it. A pointer to these fragments is stored in 'FrontPortalList'. You can see that if the front split ends up in a leaf then we simply set 'FrontPortalList' to point at the front split portal, because it is obviously the only portal in the front; it will not get clipped into any more fragments. If however there is a front tree to send the front split down, then 'FrontPortalList' will contain either the fragments of the front split that survived, or NULL if nothing of the front split survived the front tree (it was totally clipped away).

Our next task is to do the same with the back split and send it down the back tree if one exists, storing a pointer to the returned fragments in 'BackPortalList', or delete the back split completely if there is no back tree (Back==-1), because the entire back split must then be in solid space.

if (NodeArray[Node].Back!=-1)
{
BackPortalList=ClipPortal(NodeArray[Node].Back,BackSplit);
}
else // in solid space so delete
{
DeletePortal ( BackSplit);
}

As you can see above, if there is a back tree we get either the portal fragments of the back split returned in BackPortalList, or NULL if nothing of the back split survived. Otherwise, if there is no back tree, the back split is in solid space and is completely removed (clipped out).

At this point we have up to two linked lists, one containing the fragments of the front split that survived and one containing the fragments of the back split that survived. What we now have to do is link these two lists together into one main linked list, containing ALL (both front and back) surviving fragments, which will be returned from the function.

if (FrontPortalList!=NULL)
{
Iterator=FrontPortalList;
while (Iterator->Next!=NULL)
{
Iterator=Iterator->Next;
}
if (BackPortalList!=NULL)
{
Iterator->Next=BackPortalList;
BackPortalList->Prev=Iterator;
}
return FrontPortalList;
}//something in the front list

else ////There is nothing in the front list

{
if (BackPortalList!=NULL) return BackPortalList;
return NULL;
}

return NULL;
break;

All we do above is attach the 'BackPortalList' to the end of the 'FrontPortalList'. First we check if there are any fragments in the front portal list. If there are, then we find the last fragment in the front list and, if there are fragments in the back list, we attach it to the back list. In other words, instead of the last portal in the front list having its 'Next' pointer set to NULL, we now set that 'Next' pointer to point at the back list. We have now merged the lists into one large one. Obviously, if there are no portals in the back list then just the front list is returned.
If there are no portals in the front list then we just return the back list. All done.

Sometimes it can be hard to picture a recursive function in your head, but try to imagine the cascade effect of this splitting and merging of lists as we progress down the tree. I have drawn a diagram (Figure R) that shows this splitting and merging in a simple 3-node, four-leaf setup.



Above, we send the initial portal into the 'ClipPortal' function at Node 0. The portal is spanning the plane, so it is split into front and back. Let's process the front split first. It is sent down to Node 1, which it is also spanning, so it is split once again into front and back splits, which are sent down the front and back trees respectively. Both the front and back splits of N1 end up in leafs, so they are returned back to Node 1, where 'FrontPortalList' contains one portal and 'BackPortalList' contains one portal. These two lists are merged into one, so that the list contains BOTH portals, and it is returned from Node 1's function back to Node 0's function. At Node 0 we now have a front list containing two portal fragments. Next we process Node 0's back split, which in the above diagram takes a similar path, being split again at Node 2 with both of Node 2's front and back splits ending up in leafs. These leaf portals are returned to Node 2, where they are merged into one list and returned back to Node 0. Node 0 now has a FrontPortalList containing two fragments and a BackPortalList containing two fragments. These are merged into one list that contains four fragments, and the function returns back to whatever called 'ClipPortal' with the portal fragment list.

In the above example all fragments ended up in leafs, but you should be able to see that if at any point a fragment ended up in solid space it would be removed. For example, if Leaf 4 (L4) was not there and solid space was there instead, then Node 2's BackPortalList would be NULL and its front portal list would contain one portal. In this case the front portal list of Node 2 would be returned to Node 0, and Node 0 would only have one portal in its back portal list. The final list merged at Node 0 would then only contain three portal fragments.

I would just like to say that the diagram is a little inaccurate, because obviously there is no such thing as a back leaf in our solid leaf tree. It was just to prove a point about how the lists get merged. In a real Solid Leaf Tree, leaf L4 would be a front leaf attached to an intermediate node where L4 currently is.

Study the diagram and read the text again a few more times and it should start becoming clear to you. The 'Splitting Case' in this function is the main part of the code and is solely responsible for clipping the portals into fragments and returning the Fragments.

We do, however, have one more case to consider in our 'ClipPortal' function, and it is probably the nastiest 'case' of all: the ON_PLANE case.

CP_ONPLANE

The CP_ONPLANE case is very important because without it a portal could never go down both sides of the tree. This is needed because if a portal lies between two leafs, one leaf will be to the back of the portal and one leaf will be to the front. Sending the portal down BOTH sides of the tree is the only way that a portal can end up in more than one leaf. The code we have seen up to this point merely sends the portal down the Front tree or the Back tree, or splits the portal and sends the Front and Back splits down their respective trees. In other words, it clips and pushes the portal until it pops out in a leaf (one leaf) and then returns. We know for a fact that a portal must exist down both its Front and Back trees (remember that a portal was created from a NODE plane and so lies on that plane) if it is to be a valid portal, because it must exist in a leaf on each side of the node plane it was created from. To picture this better in your head, imagine two rooms with a doorway in between them. The node that the doorway lies on has a leaf (room) on each side of it. Therefore, when we create a portal on that node (for the doorway), we can only record which leafs the portal lands in by testing the front AND the back of the doorway. We would send the portal down the front of the doorway and say "Yes, this portal is in room 1", and then send it down the back of the doorway and say "Yes, this portal is also in room 2". In other words, we have just figured out that the doorway portal connects rooms 1 and 2.

In order to do this we have to first of all send the portal down the front tree, where we will get returned any portal fragments that have survived the front tree. Up to this point the CP_ONPLANE case is no different from the CP_SPANNING case. The difference is that with the CP_SPANNING case we created a list for the Front Split and a list for the Back Split and then merged them together; up until the merge the front and back lists had been separate (the Front Split and Back Split were two separate portals). With CP_ONPLANE we have to clip the SAME portal down BOTH sides of the tree, so once we have our Front Portal List we have to send EACH portal in that front list down the back tree and clip each one to the back tree. In other words, we send the initial portal down the front tree and get back a list of fragments that survived, and we then send each fragment in that list down the back tree as well, so that any fragment that survives has been clipped to both trees.

This sounds quite horrible, doesn't it, but in fact it's not that bad. The nasty bit is that we first get a list of front portal fragments and then send each fragment down the back tree, where it can get split into more fragments. So we get fragment lists returned for each fragment, aaaAARRHGGGH. Let's have a look at the code anyway.

case CP_ONPLANE:

if (NodeArray[Node].IsLeaf!=0) // the front of this node is a leaf
{
Portal->LeafOwnerArray[Portal->NumberOfLeafs]=NodeArray[Node].Front;
Portal->NumberOfLeafs++;
Portal->Next=NULL;
Portal->Prev=NULL;
FrontPortalList=Portal;
}
else
{
FrontPortalList=ClipPortal(NodeArray[Node].Front,Portal);
}

if (FrontPortalList==NULL) return NULL;
if (NodeArray[Node].Back==-1) return FrontPortalList;


Well, there is nothing new to us so far. If the portal is on the plane we have to send it down the front tree first, to get back a list of portal fragments that have been clipped to the front tree. However, if there is no front tree (IsLeaf!=0) then there is a leaf to the front of this node, so we simply record in the portal which leaf number it has landed in. No clipping will take place down the front tree, so this portal will not get split into fragments, and we simply set 'FrontPortalList' to point at this portal.
If there is a front tree, however, we send the portal down it and get returned a list of portal fragments that have survived the front tree, or NULL if nothing of the portal survived.

We can now check 'FrontPortalList': if it is NULL then there is no point continuing, because this cannot be a portal if it does not survive the front tree, so we just return NULL. Also, if this node does not have a back tree, we just return the Front Portal List. You are probably thinking that if there is no back tree then this cannot be a portal, and that is what I thought at first as well. You must remember though that the solid leaf tree is compiled in such a way that polygons sharing the plane and facing the same way as the split plane are placed in the node's front tree. Therefore, we can only be certain that it is not a valid portal if it does not survive the front tree. This means we may return some portals that only exist in one leaf and are not valid portals, but they will be taken out later on once the portal list is returned. Remember that the 'ClipPortal' function will have been called by the 'BuildPortals' function, which we have not looked at yet but will in a moment. The 'BuildPortals' function will check through the list of portals returned by 'ClipPortal' and remove any portals that do not exist in two leafs.

Ok then, so if we have made it to here it means we have at least one, but perhaps many, fragments in our Front Portal List. It also means that there is a back tree, or otherwise we would have just returned 'FrontPortalList'. So what we have to do now is step through each fragment in 'FrontPortalList' and send it down the back tree of this node. This will in turn send back a list of fragments (for EACH fragment in the front list) that survived the back tree.

while (FrontPortalList!=NULL)
{
PORTAL *tempnext=FrontPortalList->Next;
BackPortalList=NULL;
BackPortalList=ClipPortal(NodeArray[Node].Back,FrontPortalList);// fragment this fragment into the back tree

So we start a while loop to step through each portal (fragment) in 'FrontPortalList'. Each time through the loop we store in 'tempnext' a pointer to the next portal in 'FrontPortalList', because the current portal is about to be sent down the back tree and possibly deleted, which means that when the function returns 'FrontPortalList' could be pointing at a portal that no longer exists, so 'FrontPortalList->Next' would hold complete gibberish and we would lose our way of stepping through the list. As you can see above, we send the current fragment (pointed to by 'FrontPortalList') down the back tree and store a pointer to the list of fragments that have survived in 'BackPortalList'. 'BackPortalList' will be NULL if nothing of the current front list fragment survived the tree.

If some fragments of the current front portal survived the back tree (on this iteration of the while loop), then we find the last fragment in the returned list and attach the whole list to our master portal list, 'PortalList'. Remember that on each loop 'BackPortalList' contains the fragments that survived for the current fragment taken from the front list. Any portals returned in 'BackPortalList' each time through the loop are added to 'PortalList' at the end of that loop, so that when we exit the loop 'PortalList' will contain a linked list of ALL the fragments that survived both the front and back trees. Below is the very last section of the ClipPortal function.

if (BackPortalList!=NULL)
{
Iterator=BackPortalList;
while (Iterator->Next!=NULL)
{
Iterator=Iterator->Next;
}

// attach the last fragment to the first fragment from a previous loop

Iterator->Next=PortalList;
if (PortalList!=NULL)
{
PortalList->Prev=Iterator;
}
PortalList=BackPortalList; // portal List now points at the current complete list of fragment collected from each loop
}

FrontPortalList=tempnext;
}
return PortalList;
// ALL DONE
break;

Well, admittedly that function can fry your brain a little bit the first time through, but if you are comfortable with recursion then there is nothing here very different from any other recursive function. It is good to try to do a dry run in your head and, following the code, imagine the route a portal might take. Anyway, to give you a much better view of the function I will now show it to you in its complete form.

PORTAL * ClipPortal(long Node,PORTAL *Portal)
{
PORTAL *PortalList=NULL,*FrontPortalList=NULL,*BackPortalList=NULL,*Iterator=NULL,*FrontSplit=NULL,*BackSplit=NULL;
switch (ClassifyPoly(&PlaneArray[NodeArray[Node].Plane],(POLYGON *)Portal))
{

case CP_ONPLANE: // Send the portal down both trees
if (NodeArray[Node].IsLeaf!=0) // this is a Leaf Node
{
// The Front is a Leaf
Portal->LeafOwnerArray[Portal->NumberOfLeafs]=NodeArray[Node].Front;
Portal->NumberOfLeafs++;
Portal->Next=NULL;
Portal->Prev=NULL;
FrontPortalList=Portal;
}
else
{
// Send the Portal Down the Front List and get returned a list of PortalFragments that survived the Front Tree
FrontPortalList=ClipPortal(NodeArray[Node].Front,Portal);
}

// Then send each fragment down the back tree.
if (FrontPortalList==NULL) return NULL;
if (NodeArray[Node].Back==-1) return FrontPortalList;

while (FrontPortalList!=NULL)
{
PORTAL *tempnext=FrontPortalList->Next;
BackPortalList=NULL;
BackPortalList=ClipPortal(NodeArray[Node].Back,FrontPortalList);

if (BackPortalList!=NULL)
{
Iterator=BackPortalList;
while (Iterator->Next!=NULL)
{
Iterator=Iterator->Next;
}
// attach the last fragment to the first fragment from a previous loop
Iterator->Next=PortalList;
if (PortalList!=NULL)
{
PortalList->Prev=Iterator;
}
PortalList=BackPortalList; // portal List now points at the current complete list of fragment collected from each loop
}

FrontPortalList=tempnext;
}
return PortalList;
break;


case CP_FRONT: // Either send it down the front tree or add it to the portal list because it has come out in empty space

if (NodeArray[Node].IsLeaf==0)
{
PortalList=ClipPortal(NodeArray[Node].Front,Portal);
return PortalList;
}
else // The Front Node is a Empty Leaf so Add it to the Portal List
{
Portal->LeafOwnerArray[Portal->NumberOfLeafs]=NodeArray[Node].Front;
Portal->NumberOfLeafs++;
Portal->Next=NULL;
Portal->Prev=NULL;
return Portal;
}
break;


case CP_BACK: // either send it down the back tree or delete it if no back tree exists

if (NodeArray[Node].Back!=-1)
{
PortalList=ClipPortal(NodeArray[Node].Back,Portal);
return PortalList;
}
else
{
DeletePortal ( Portal);
return NULL;
} break;


case CP_SPANNING:

FrontSplit=new PORTAL;
BackSplit=new PORTAL;
ZeroMemory(FrontSplit,sizeof(PORTAL));
ZeroMemory(BackSplit,sizeof(PORTAL));
SplitPortal(Portal,&PlaneArray[NodeArray[Node].Plane],FrontSplit,BackSplit);
DeletePortal (Portal);
if (NodeArray[Node].IsLeaf==0)
{
FrontPortalList=ClipPortal(NodeArray[Node].Front,FrontSplit);
}
else // It's about to get pushed into a leaf
{
FrontSplit->LeafOwnerArray[FrontSplit->NumberOfLeafs]=NodeArray[Node].Front;
FrontSplit->NumberOfLeafs++;
FrontSplit->Prev=NULL;
FrontSplit->Next=NULL;
FrontPortalList=FrontSplit;
}

if (NodeArray[Node].Back!=-1) // there is a back tree so send the back split down it
{
BackPortalList=ClipPortal(NodeArray[Node].Back,BackSplit);
}
else // delete it, the back split is in solid space
{
DeletePortal ( BackSplit);
}
if (FrontPortalList!=NULL) // Find the end of the front list and attach the back list to it
{
Iterator=FrontPortalList;
while (Iterator->Next!=NULL)
{
Iterator=Iterator->Next;
}
if (BackPortalList!=NULL)
{
Iterator->Next=BackPortalList;
BackPortalList->Prev=Iterator;
}
return FrontPortalList;
}// there is something in the front list

else // there is nothing in the front list
{
if (BackPortalList!=NULL) return BackPortalList;
return NULL;
}


return NULL;
break;

default:
return NULL;
break;
} //end switch
return NULL;
}

Well, I have to say that if you have understood everything up until now, then it is all going to get a little bit easier from here on in. In my opinion, portal generation is the hardest thing to grasp and also rather hard to explain, or even to show on paper for that matter. I did mention earlier, when we were looking at the CP_SPANNING case of the above function, that it uses a function called 'SplitPortal' to wrap the 'SplitPolygon' call and also copy the extra portal information from the parent about to be split into the two child splits. The information that had to be retained was mainly just which leafs the parent portal had landed in up to that point. I will just show you the 'SplitPortal' function for completeness.

void SplitPortal(PORTAL *Portal,PLANE *Plane,PORTAL *FrontSplit,PORTAL *BackSplit)
{
SplitPolygon((POLYGON *)Portal,Plane,(POLYGON *)FrontSplit,(POLYGON *)BackSplit);
FrontSplit->NumberOfLeafs=Portal->NumberOfLeafs;
BackSplit->NumberOfLeafs=Portal->NumberOfLeafs;
memcpy(FrontSplit->LeafOwnerArray,Portal->LeafOwnerArray,sizeof(long)*Portal->NumberOfLeafs);
memcpy(BackSplit->LeafOwnerArray,Portal->LeafOwnerArray,sizeof(long)*Portal->NumberOfLeafs);
}



Ok, well we have looked at the 'CalculateInitialPortal' function that creates the first large portal on a node's plane, and we have had a look at the 'ClipPortal' function that will clip this initial portal to the tree and send back a list of portals that have survived. Let's now have a look at the function that calls both of these functions to actually create the real master portal array that will be used for pre-calculating the PVS. This function is called 'BuildPortals' and is the function that is called from our main initialization routine 'InitPolygons' that we saw earlier. Here is the InitPolygons function again just to remind you. As you can see from the code below, we have already called 'BuildBspTree' to compile the solid leaf tree, and the next function we have to call (and write) is 'BuildPortals' to build the portal set for the tree. When the 'BuildPortals' function returns, the global portal array 'PortalArray' will hold all the portals for the tree. Unlike the other master arrays (LeafArray, PolygonArray etc.), PortalArray is actually an array of pointers to type PORTAL instead of being a block of PORTAL structures.

void InitPolygons(void)
{
ReserveInitialMemoryForArrays();

PolygonList=NULL;
PolygonList=LoadMWM("demolevel.mwm");
LoadTextures();

BuildBspTree(0,PolygonList);
BuildPortals();
...
...
}

BuildPortals is the main portal creation function and the only one that the main application has to call to generate the portals for the tree. Its job is to step through every node in our BSP tree; at each node it calls 'CalculateInitialPortal' to create the initial portal for that node and then calls 'ClipPortal' to send that portal down the tree. After the call to 'ClipPortal', our BuildPortals function will have a list of portal fragments for the current node. It will then loop through the portal list and remove any portals that are not in two leafs. This is a simple case of checking the portal's 'NumberOfLeafs' counter to see that it is equal to two.

If the portal has survived this far we only have one more check to make on it before it is accepted. We have to check that a portal bridging the same two leafs doesn't already exist; if it does, we compare the sizes and keep the largest one, whether that be the new portal or the portal already in the master portal array. The reason for having to do this may not be immediately obvious; it certainly wasn't to me, which is why I spent two days continually rebooting my machine every time it crashed during PVS calculation. Let it just be known for now that it is possible for a bit of a portal from one node to spill over into another node, so that you may have two portals both occupying the same space and both existing in the same two leafs. We simply compare the size of our new portal with the one that already exists and keep the largest one. The largest one will always be the correct size, whereas the smaller one is just a piece of a fragment from another node that has found itself occupying the same space as our portal without bridging the gap between the two leafs completely.

This code may look a little strange in that it uses a few 'GOTO' commands (ouch). Sorry, but the job of this function is to recurse the tree; the problem is that while recursing the tree this function calls 'ClipPortal', which also recurses the tree. Recursing the tree while recursing the tree is a sure way to get a STACK OVERFLOW on even a moderately sized level, so I was forced to write the BuildPortals function without recursion. Instead of the function recursively calling itself, as we have done up till now with all tree traversal functions, this function traverses the BSP tree without using recursion. This way the tree will only be getting recursed once each loop, by the call to 'ClipPortal'.

In order to traverse the tree I have simulated the stack by allocating an array of NODESTACK structures. The NODESTACK structure I have defined like this:-

struct NODESTACK
{
long Node;
BYTE JumpBackPoint;
};

Every time we have to traverse the tree we will record the current node we are using and also where we have come from, so that we can jump back to that point. This is basically just simulating what a C++ function call would do. If you are half way through 'Function 1' and you call 'Function 2', the current variables and the line that is currently being executed in 'Function 1' are saved on the stack. When 'Function 2' returns, the computer can return to 'Function 1' at the same place it left off by popping the information back off the stack, so that the execution of 'Function 1' can resume from where it left off before 'Function 2' was called.
Here is the code a bit at a time. It uses a few support functions which we will examine afterwards.

void BuildPortals(void)
{
long stackpointer=0;
NODESTACK *NodeStack=new NODESTACK[NumberOfNodes+1];
int portalindex;
NodeStack[stackpointer].Node=0;// root node
NodeStack[stackpointer].JumpBackPoint=0;

Above is the code that sets up our stack. The variable 'stackpointer' is set to zero because the root node (Node 0) is the node we start off with. You can see that after we have safely allocated enough memory to hold more than enough nodes, we fill in the current NodeStack position with the node we are going to start with (Node 0). JumpBackPoint is set to zero, meaning there is no jump-back point for the first node. Once Node 0 is done it is time to return from the function, because the whole tree will have been traversed, and this is why we just set the jump-back point to zero.
Next we start the traversal part of the function. This is the part that will actually simulate the recursion, with the label 'START:' being the entry point every time we traverse to a new node.

START:

PORTAL *InitialPortal=CalculateInitialPortal(NodeStack[stackpointer].Node);
PORTAL *PortalList=ClipPortal(0,InitialPortal);

As you can see, the first thing we do is call the 'CalculateInitialPortal' function to create the initial portal for the current node (the index of the node is stored in NodeStack[stackpointer].Node) and then pass the returned portal into the 'ClipPortal' function, specifying '0' meaning send it down the root of the tree (begin at node 0). After the function returns, 'PortalList' will contain a linked list of portals that have survived the tree.
Our next task is to step through each portal in that list and carry out some tests to check whether it is a valid portal or not.

PORTAL *Iterator=PortalList; // Step through the Portal List

while (Iterator!=NULL)
{
if (Iterator->NumberOfLeafs!=2)// not in two leafs so delete
{
PORTAL *temp=Iterator->Next;
RemovePortalFromList(Iterator);
Iterator=temp;
}

We use a temporary pointer, 'Iterator', to step through the linked list with each iteration of the while loop. The first check we do is to see that the portal exists in two leafs. If it does not, then we remove the portal from memory because it is not a real portal. Above we use the support function 'RemovePortalFromList', which we will see in a minute; it is a simple function to remove an element from a linked list while still keeping the integrity of the list intact. We also use the temporary pointer 'temp' to store what the next portal in the list will be, because after we have deleted 'Iterator' we will not be able to use Iterator->Next to get it. After that we are done with this iteration of the while loop, so we loop around again to test the next portal in the list.

If the portal does exist in two leafs then we have a real possibility that this is a proper portal. So we have to do another check to see if a similar portal (a portal that bridges the same two leafs) has already been entered into the array. If so, then we have to compare the size of this new portal with the size of the one already in the array. If our new portal is smaller then we just delete it like we did above and we are done for this iteration of the while loop. The check for duplicate portals in the master portal array is performed by the support function 'CheckDuplicatePortal', which we will see in a moment.

else
{
if (CheckDuplicatePortal(Iterator,&portalindex)==true)
{
PORTAL *temp=Iterator->Next;
RemovePortalFromList(Iterator);
Iterator=temp;
}

The 'CheckDuplicatePortal' function returns TRUE if there is a portal already in the portal array, bridging the same two leafs, that is larger than the one we are testing. In other words, if it returns true then the portal we are testing is no good and we want to delete it. Notice that we pass in a pointer to an int, 'portalindex'. The contents of this int are modified by the 'CheckDuplicatePortal' function. If the function returns false it means we DO want to keep this portal and add it to the main portal array, and 'portalindex' will hold the index of the entry in the array where we need to put it. If there is no duplicate portal then 'portalindex' will simply hold the next element on the end of the portal array, so we will just be adding a new pointer to the end of the array (portalindex will equal NumberOfPortals). However, if there is a duplicate portal in the array and this new one is bigger, then we want to replace the portal that is already in the array with this one, so 'portalindex' will hold the position in the array of the portal that should be replaced.

If there is NOT a duplicate portal in the array, OR if there is but this new one is bigger, then we have found ourselves a real portal that needs to be added to the master portal array. Note that at this point 'portalindex' will hold the index into the master portal array at which this portal needs to be placed.

else
{
PortalArray[portalindex]=Iterator;
if (portalindex==NumberOfPortals)
{
for (int a=0;a<Iterator->NumberOfLeafs;a++)
{
long Index=Iterator->LeafOwnerArray[a];
LeafArray[Index].PortalIndexList[LeafArray[Index].NumberOfPortals]=NumberOfPortals;
LeafArray[Index].NumberOfPortals++;
}
IncreaseNumberOfPortals();
}


Iterator=Iterator->Next;
} // if not a duplicate portal
} // end else
}// END WHILE LOOP

Above concludes the main while loop. If we have made it here, this portal needs to be added. As we have said, 'portalindex' holds the position in the master portal array where this portal should be added. If 'portalindex==NumberOfPortals', however, it means we are NOT simply replacing a portal that is already there but are adding a new one to the end of the array. We then loop through this portal's 'LeafOwnerArray' to gain access to the two leafs this portal is in, record this portal's index number in each leaf structure's 'PortalIndexList' array, and increase that leaf's portal count 'NumberOfPortals'. This means the portal now knows which leafs it exists in, and each leaf also has a reference to this portal. This will come in handy when we calculate the PVS. Remember, a leaf could have many portals in it.

After we have added the portal to the end we call 'IncreaseNumberOfPortals', which is just another memory allocation function (like the others we have looked at) that simply resizes the PortalArray if we have exceeded its current maximum capacity.
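We never see the body of 'IncreaseNumberOfPortals' in this section, but as a rough, hedged sketch (the chunk size and the use of realloc are my own guesses here, not necessarily what the real helper in the downloadable source does), it might look something like this:

// A guessed sketch of a chunk-growing allocation helper; the real function in the
// source may well differ. PortalArray is the global array of PORTAL pointers.
void IncreaseNumberOfPortals(void)
{
NumberOfPortals++;
// Grow the pointer array in blocks of 100 entries once the current block is full.
if ((NumberOfPortals % 100)==0)
{
PortalArray=(PORTAL **)realloc(PortalArray,(NumberOfPortals+100)*sizeof(PORTAL *));
}
}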

After that we simply have the code to move to the next fragment in the portal list and loop round again. This while loop will continue until all the portals in the list have either been added to the master portal array 'PortalArray' or have been rejected. Once the while loop is over it means we are done at this node in the tree, and we will next check the front and back trees of this node. The above while loop will be executed for EVERY node in the tree (wow).

The last bit of code is simply the tree traversal code, written WITHOUT recursion. Normally the following code could be replaced with two lines like 'BuildPortals(Node Front)' and 'BuildPortals(Node Back)', but it looks a little ugly now because we have had to get rid of the recursion to stop stack overflows on large levels. Below is the code that basically just says 'if there is a front tree go down it', followed by 'if there is a back tree go down it'.

if (NodeArray[NodeStack[stackpointer].Node].IsLeaf==0)
{
NodeStack[stackpointer+1].Node=NodeArray[NodeStack[stackpointer].Node].Front;
NodeStack[stackpointer+1].JumpBackPoint=1;
stackpointer++ ;
goto START;
}

BACK:
if (NodeArray[NodeStack[stackpointer].Node].Back!=-1)
{
NodeStack[stackpointer+1].Node=NodeArray[NodeStack[stackpointer].Node].Back;
NodeStack[stackpointer+1].JumpBackPoint=2;
stackpointer++ ;
goto START;
};

END:

stackpointer--;// This is like returning from a function
if (stackpointer>-1)
{
if (NodeStack[stackpointer+1].JumpBackPoint==1) goto BACK;
else
if (NodeStack[stackpointer+1].JumpBackPoint==2) goto END;
}

delete [] NodeStack;
}
Hopefully you can see what is happening here. If there is a front node then we put this front node in the next position in the stack array (stackpointer+1) and also store, in that stack position's 'JumpBackPoint' variable, a label code indicating which label we will need to jump back to in order to continue processing the current node. When we get to the bottom of the function we simulate popping the stack by decreasing the stack pointer, so we are back at the previous node, and we jump back to either label BACK: (JumpBackPoint=1) or label END: (JumpBackPoint=2), or we don't jump anywhere and just return from the BuildPortals function (JumpBackPoint=0), which should only be true for the root node. You can see that if there is a back node then we do the same but place a value of '2' in the JumpBackPoint variable.

Unfortunately things look a little uglier when we lose the clean-looking recursive approach, but that is just something we have to put up with in this case. You can unroll all your recursive functions this way and will probably get a speed boost by not using the stack, at the expense of some readability. If you read through the above code and follow the program flow you should see what is happening without any problems: we simply restart the loop by jumping back to the START: label every time a new node is reached, but store on the stack the position we were at in the code before the jump. This way, when we have dealt with the new node, we reach the bottom and decrease the stackpointer variable so we are back at the node we were at before the jump. We also have to get back to the position in the code we were at before the jump, which is exactly the purpose of the JumpBackPoint variable.
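Just to make the mapping clear, here is a rough sketch of what the equivalent recursive version would look like if stack depth were not a concern. This recursive version is not in the source code; the 'BuildPortalsForNode' helper is purely hypothetical and stands in for the big per-node while loop above.

// Hypothetical recursive equivalent of the goto-based traversal above.
// BuildPortalsForNode is not a real function from the source; it represents the
// initial-portal / clip / filter work performed at each node.
void BuildPortalsRecursive(long Node)
{
BuildPortalsForNode(Node); // the big while loop above

if (NodeArray[Node].IsLeaf==0) // 'if there is a front tree go down it'
BuildPortalsRecursive(NodeArray[Node].Front);

if (NodeArray[Node].Back!=-1) // 'if there is a back tree go down it'
BuildPortalsRecursive(NodeArray[Node].Back);
}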

Well, that was a crash course on how to take a recursive function and un-recurse it. That's it for the 'BuildPortals' function; when the function returns, the global array 'PortalArray' will hold an array of PORTAL pointers and the global variable 'NumberOfPortals' will hold the number of portal pointers in that array.

Here is the BuildPortals function in its entirety:-

void BuildPortals(void)
{
long stackpointer=0;
NODESTACK *NodeStack=new NODESTACK[NumberOfNodes+1];
int portalindex;
NodeStack[stackpointer].Node=0;// root node
NodeStack[stackpointer].JumpBackPoint=0;

START:

PORTAL *InitialPortal=CalculateInitialPortal(NodeStack[stackpointer].Node);
PORTAL *PortalList=ClipPortal(0,InitialPortal);

PORTAL *Iterator=PortalList; // Step through the Portal List

while (Iterator!=NULL)
{
if (Iterator->NumberOfLeafs!=2)// not in two leafs so delete
{
PORTAL *temp=Iterator->Next;
RemovePortalFromList(Iterator);
Iterator=temp;
}

else
{
if (CheckDuplicatePortal(Iterator,&portalindex)==true)
{
PORTAL *temp=Iterator->Next;
RemovePortalFromList(Iterator);
Iterator=temp;
}

else
{
PortalArray[portalindex]=Iterator;
if (portalindex==NumberOfPortals)
{
for (int a=0;a<Iterator->NumberOfLeafs;a++)
{
long Index=Iterator->LeafOwnerArray[a];
LeafArray[Index].PortalIndexList[LeafArray[Index].NumberOfPortals]=NumberOfPortals;
LeafArray[Index].NumberOfPortals++;
}
IncreaseNumberOfPortals();
}


Iterator=Iterator->Next;
} // if not a duplicate portal
} // end else
}// END WHILE LOOP

if (NodeArray[NodeStack[stackpointer].Node].IsLeaf==0)
{
NodeStack[stackpointer+1].Node=NodeArray[NodeStack[stackpointer].Node].Front;
NodeStack[stackpointer+1].JumpBackPoint=1;
stackpointer++ ;
goto START;
}

BACK:
if (NodeArray[NodeStack[stackpointer].Node].Back!=-1)
{
NodeStack[stackpointer+1].Node=NodeArray[NodeStack[stackpointer].Node].Back;
NodeStack[stackpointer+1].JumpBackPoint=2;
stackpointer++ ;
goto START;
};

END:

stackpointer--;// This is like returning from a function
if (stackpointer>-1)
{
if (NodeStack[stackpointer+1].JumpBackPoint==1) goto BACK;
else
if (NodeStack[stackpointer+1].JumpBackPoint==2) goto END;
}

delete [] NodeStack;
}

Support functions: RemovePortalFromList & CheckDuplicatePortal


We are now nearly finished with portal generation, but as promised I will just talk about the two support functions used by the BuildPortals function. The first support function, 'RemovePortalFromList', needs no real explaining. It is standard code to remove an element from a linked list. In case you are wondering, this is why our portal also has a 'Prev' pointer; its purpose is just to allow us to remove unwanted portals from linked lists easily. The function works by unlinking portal n (n = the portal you passed in to be deleted): it sets portal n-1->Next = portal n+1 and likewise in reverse sets portal n+1->Prev = portal n-1, which removes portal n from the list, and then portal n is deleted. Here is the code to 'RemovePortalFromList'.

void RemovePortalFromList(PORTAL *RemovePortal)
{
PORTAL *temp=RemovePortal;
PORTAL *PrevPortal,*NextPortal;

if(RemovePortal->Prev!=NULL)
{
PrevPortal=RemovePortal->Prev;
if (RemovePortal->Next!=NULL)
{
PrevPortal->Next=RemovePortal->Next;
}
else
{
PrevPortal->Next=NULL;
}
}// if there is a prev

if (RemovePortal->Next!=NULL)
{
NextPortal=RemovePortal->Next;
if (RemovePortal->Prev!=NULL)
{
NextPortal->Prev=RemovePortal->Prev;
}
else
{
NextPortal->Prev=NULL;
}
}
DeletePortal ( temp );
}

Before we move away from portal generation we will just have a look at the last support function called by the 'BuildPortals' function. Remember that if a valid portal list was returned from 'ClipPortal', we still had to check that a portal connecting the same two leafs didn't already exist in the master portal array. If one did, then the 'CheckDuplicatePortal' function's job was to compare the size of the new portal with the one already in the array and keep the larger of the two. You may want to take a quick flick back to the 'BuildPortals' code just to refresh yourself on how this function was used.
If you remember, we passed in a pointer to an 'int' that would be filled with the correct position in the array at which the new portal should be added. If a duplicate portal does not exist in the array then the function returns 'false' and the 'int' (called 'portalindex' in the 'BuildPortals' function) is filled with an index to the end of the array. In other words, there is no duplicate portal to be replaced, so this new portal has to be added to the end of the array.
If a duplicate portal does exist but the portal already in the array is larger than the new one, then the function returns 'true' and no action is taken except that the new portal is deleted. This indicates that no change needs to be made to the master PortalArray because the new portal is not as large as the one already in the array.
If there is a SMALLER portal in the array than the one we are testing, then we need to replace the old portal already in the array with the new one. In this instance the function returns 'false' and 'portalindex' holds the index into the master portal array of the portal that needs to be replaced. 'portalindex' is used later on in the BuildPortals function to actually place the portal in the array at this position.

Here is the code to the 'CheckDuplicatePortal' function with explanations every couple of lines:-

bool CheckDuplicatePortal (PORTAL * CheckPortal,int *index)
{
long CheckPortalLeaf1=CheckPortal->LeafOwnerArray[0];
long CheckPortalLeaf2=CheckPortal->LeafOwnerArray[1];

First we get both of the leafs that the portal exists in (remember, these are stored in the portal's LeafOwnerArray). We will need these leafs to check all the other portals in the portal array and see if any other portal exists in the same two leafs. If so, then we have found a duplicate portal.

long PALeaf1=0;
long PALeaf2=0;
for (long i=0;i<NumberOfPortals;i++)
{
PALeaf1=PortalArray[i]->LeafOwnerArray[0];
PALeaf2=PortalArray[i]->LeafOwnerArray[1];

if ((CheckPortalLeaf1==PALeaf1 && CheckPortalLeaf2==PALeaf2) || (CheckPortalLeaf1==PALeaf2 && CheckPortalLeaf2==PALeaf1))
{

Above we loop through each portal in the array and get the two leafs that it exists in. If this portal exists in the same two leafs as our 'CheckPortal' then we have found a match and need to compare sizes, as shown below.

D3DXVECTOR3 Max1,Min1,Max2,Min2;

GetPolygonBounds((POLYGON *)CheckPortal,&Min1,&Max1);
GetPolygonBounds((POLYGON *)PortalArray[i],&Min2,&Max2);

float NewSize=D3DXVec3Length(&(Max1-Min1));// Measure the Lengths of the vector to
// see which is bigger
float OldSize=D3DXVec3Length(&(Max2-Min2));

Above we set up some vectors that will be used to hold a bounding box for each polygon. We then call 'GetPolygonBounds', which is a simple helper function that tests each vertex in the polygon and returns a minimum point and a maximum point that bound the polygon. We call this function twice to retrieve the bounding boxes of both the portal that we are checking ('CheckPortal') and the one that is currently already in the array.
With these two points for each polygon we can create a vector (Max1-Min1) and then call 'D3DXVec3Length' to return the length (magnitude) of that vector. This is like measuring the distance between the two extreme corners of the bounding box. All we have to do now is compare the two lengths, 'OldSize' and 'NewSize'.

if (fabs(NewSize)>fabs(OldSize))
{
PORTAL *temp=PortalArray[i];
DeletePortal ( temp );
*index=i;
return false;
}
else
{
return true;// This portal is already in the array
}
}
}
As you can see above, if the new portal 'CheckPortal' is larger than the one already in the array, then we delete the portal currently in the array and record its index number so that it is returned to the calling function 'BuildPortals', which then has an index at which to place the new portal. The function returns 'false', informing the calling function that the portal passed in to be checked is valid and should be added to the array at the position held in 'index'.
If, however, the portal already in the array is larger, then the function returns 'true', informing the calling function that a larger portal already exists, so the check portal can be deleted.

If no duplicate portal is found then we end up down here, where we simply store the current number of portals in 'index' and return false. This tells the calling function that the check portal should be kept and added at position 'index', which is at the end of the array.

*index=NumberOfPortals;
return false;// This portal was not found in the array
}

Here is the complete code:-

bool CheckDuplicatePortal (PORTAL * CheckPortal,int *index)
{
long CheckPortalLeaf1=CheckPortal->LeafOwnerArray[0];
long CheckPortalLeaf2=CheckPortal->LeafOwnerArray[1];
long PALeaf1=0;
long PALeaf2=0;
for (long i=0;i<NumberOfPortals;i++)
{
PALeaf1=PortalArray[i]->LeafOwnerArray[0];
PALeaf2=PortalArray[i]->LeafOwnerArray[1];

if ((CheckPortalLeaf1==PALeaf1 && CheckPortalLeaf2==PALeaf2) || (CheckPortalLeaf1==PALeaf2 && CheckPortalLeaf2==PALeaf1))
{
D3DXVECTOR3 Max1,Min1,Max2,Min2;

GetPolygonBounds((POLYGON *)CheckPortal,&Min1,&Max1);
GetPolygonBounds((POLYGON *)PortalArray[i],&Min2,&Max2);

float NewSize=D3DXVec3Length(&(Max1-Min1));// Measure the Lengths of the vector to
// see which is bigger
float OldSize=D3DXVec3Length(&(Max2-Min2));

if (fabs(NewSize)>fabs(OldSize))
{
PORTAL *temp=PortalArray[i];
DeletePortal ( temp );
*index=i;
return false;
}
else
{
return true;// This portal is already in the array
}
}
}
*index=NumberOfPortals;
return false;// This portal was not found in the array
}

I am not going to cover the code to the 'GetPolygonBounds' function because it is just a simplified version of the 'CalculateBox' function that we wrote earlier to build bounding boxes for our leafs and nodes. The only difference is that it only has to build a box for one polygon. You can check out the code anyway if you want; it can be found in 'Portals.cpp'.
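For reference though, a minimal sketch of what 'GetPolygonBounds' does is shown below. I am assuming here that a POLYGON stores its vertices in a 'VertexList' array with a 'NumberOfVertices' count, as in the Part 1 code; check 'Portals.cpp' for the actual version.

// Minimal sketch of a single-polygon bounding-box helper; member names
// VertexList / NumberOfVertices are assumed from Part 1. Needs <float.h> for FLT_MAX.
void GetPolygonBounds(POLYGON *Poly,D3DXVECTOR3 *Min,D3DXVECTOR3 *Max)
{
*Min=D3DXVECTOR3( FLT_MAX, FLT_MAX, FLT_MAX);
*Max=D3DXVECTOR3(-FLT_MAX,-FLT_MAX,-FLT_MAX);

for (long v=0;v<Poly->NumberOfVertices;v++)
{
// Grow the box so that it encloses every vertex in the polygon.
if (Poly->VertexList[v].x<Min->x) Min->x=Poly->VertexList[v].x;
if (Poly->VertexList[v].y<Min->y) Min->y=Poly->VertexList[v].y;
if (Poly->VertexList[v].z<Min->z) Min->z=Poly->VertexList[v].z;

if (Poly->VertexList[v].x>Max->x) Max->x=Poly->VertexList[v].x;
if (Poly->VertexList[v].y>Max->y) Max->y=Poly->VertexList[v].y;
if (Poly->VertexList[v].z>Max->z) Max->z=Poly->VertexList[v].z;
}
}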


We Have the Portals. Can We Calculate Our PVS Now Please?

Now that our portal array is fully complete, and each portal contains which leafs it exists in and each leaf in our leaf array contains which portals exist in it, we have all the information we need to calculate the PVS. We said earlier that in order to find out what is visible from one room to the next, we need to know the size of the entrance/exit that joins those two rooms. Now we have exactly that. Each portal describes the exact gap (doorway) between one leaf and the next, so the dimensions of the portal also give us the dimensions of the gap. This means we are now entering the final leg of getting our PVS calculator working.

Our next task will be to recursively walk through each leaf in the tree and, for every portal in that leaf, create an ANTI-PENUMBRA between that portal and every portal in the leafs adjoining that leaf, and so on. Now, that last sentence probably made very little sense to you, so we will cover this very slowly with plenty of diagrams, so do not fear. The first question you are no doubt asking yourself is......

What the Hell is an Anti-Penumbra? Is it Uncle Penumbra's Wife?


A penumbra describes the shadow cast by an object; any objects within the penumbra are said to be in shadow. An anti-penumbra is therefore the exact opposite. Imagine you are in a very dark house where every room is pitch black so you can't see a thing (oh, scary). Now imagine that one of the rooms is filled with very bright light, but the door to this room is closed so that it is still pitch black outside the room. Imagine now that the door is opened and the light is allowed to flood out. You would see the light leave the room in a kind of frustum shape, as the light can only get out through the doorway hole. Outside the room, some parts of the adjoining rooms may be at such an angle to the door that they still remain in shadow and are therefore not within the anti-penumbra. You must have seen this effect dozens of times. The anti-penumbra describes EVERYTHING that can be seen through the doorway. If any parts outside the room remain in darkness, then those areas cannot be seen from anywhere within the room.

Creating an Anti-Penumbra

For our purposes the anti-penumbra will be built out of a number of clip planes. We will build the anti-penumbra using two portals, a Source Portal and a Destination Portal. The anti-penumbra will then tell us the maximum that can be seen from the Source Portal through the Destination Portal. This is the key behind calculating our PVS.
Before we look at this in action, let's first talk about how we build an anti-penumbra between any two portals.

As said above, the anti-penumbra will be nothing more than a collection of clipping planes, not unlike the clip planes that D3D uses to reject all invisible geometry that is not within the view frustum. As you are probably aware, any three (non-collinear) points in 3D space describe a plane, so if we have three such points then we can create a plane from them. What we will do is loop through EACH vertex in the Source portal, and for each one of these vertices we will loop through each EDGE in the Destination portal. An edge contains two vertices, so with these two vertices and the one from the Source portal we have three points which describe a plane. The pseudo code then looks a little like this:

For Each Vertex In Source Portal (SourceVertex)
   For Each Edge in Destination Portal (EdgeVertex1 & EdgeVertex2)
     Create Plane using SourceVertex,EdgeVertex1 & EdgeVertex2
     Is Plane an Anti-Penumbra Clip Plane?
       If Yes then Add to Clip Plane List
       If No then ignore it and move onto next Edge
   End For Each Edge
End For Each Vertex


Now, the million dollar question that begs to be asked looking at the above pseudo code is: what IS an anti-penumbra plane? Above you can see that we will create many planes, but only the ones considered to be 'anti-penumbra planes' are added to the clip list; the rest are ignored.
A plane is considered to be an anti-penumbra plane if it clearly divides the Source portal and the Destination portal. That is to say, the plane is accepted if the Source portal lies on one side of the plane and the Destination portal lies on the other.
In the diagram below we see an example of two planes that might be created in the above loop. The first example is rejected because both portals lie on the same side of the plane.

You can see in the first diagram that both portals are on the underside of the plane created by the three points. You can also see, however, that the second diagram creates a plane that clearly divides space so that the Source portal lies on the left side of the plane (top-down view) and the Destination portal lies on the right side. This second plane will be accepted by our code and added to the clip list. It is hard to draw this stuff very well because of its 3D nature, but take a look at the next diagram, which also clearly shows other planes that would be created and accepted by the above code.



You can see that the red plane is created by using one of the Source portal's bottom vertices and the top edge of the Destination portal. This creates a plane where the Source portal is on the top side of the plane and the Destination portal is on the bottom side. You can see that the same thing, but in reverse, is also true of the green plane in the diagram; only this time the Source portal is on the bottom side of the plane and the Destination portal is on the top side.

Please note:
I am using the terms 'bottom side' and 'top side' to describe the relationship of the portals to the planes in the above diagram a little loosely here, purely for the sake of explaining the diagram. In fact the normals of the planes can be facing in any direction, changing what is considered top and what is considered bottom. This does not matter at all; all we need to know is that one portal lies on one side and the other portal lies on the other side. The orientation of the plane is not important.


Now, creating the planes is very easy and we have done this, and explained it in detail, before. It is exactly the same as creating the normals for our polygons. The three points (the Source point and the Destination edge) form a triangle; we just have to create vectors for two of its edges and perform the cross product on them to get the normal of the plane. As our PLANE structure is defined as a normal and a point known to be on the plane, once we have the normal we can just use any of the three points as our point on plane. We will see this in code in a moment, but something you are probably wondering is "So what if we have a list of clip planes at weird angles, how does that help us?". The answer is: very much.
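As a quick sketch of that step, using the same D3DX maths calls the rest of the code already uses: note that the PLANE member names 'Normal' and 'PointOnPlane', and the function name itself, are my own assumptions for illustration, not code lifted from the project.

// Sketch: build a candidate clip plane from one source-portal vertex and one
// destination-portal edge, exactly as the pseudo code above describes.
// The PLANE member names (Normal, PointOnPlane) are assumptions.
PLANE BuildCandidatePlane(D3DXVECTOR3 &SourceVertex,D3DXVECTOR3 &EdgeVertex1,D3DXVECTOR3 &EdgeVertex2)
{
PLANE Plane;
D3DXVECTOR3 Edge1=EdgeVertex1-SourceVertex; // two edges of the triangle formed by the 3 points
D3DXVECTOR3 Edge2=EdgeVertex2-SourceVertex;

D3DXVec3Cross(&Plane.Normal,&Edge1,&Edge2); // the cross product gives us the plane normal
D3DXVec3Normalize(&Plane.Normal,&Plane.Normal);
Plane.PointOnPlane=SourceVertex; // any of the three points lies on the plane
return Plane;
}

The acceptance test can then be done with the same kind of classification routine used elsewhere in the compiler: the plane is only kept when the Source portal classifies entirely to one side of it and the Destination portal classifies entirely to the other.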

When you view the planes (like the red and green ones shown above) from the Source portal to the Destination portal, it just looks like a load of planes colliding and zig-zagging with each other. The real magic, however, is the shape the planes make when they come out of the opposite side of the Destination portal. As the planes project out from the opposing side of the Destination portal they form the shape of the anti-penumbra (a bit like a view frustum but with no near or far clip planes). Look at the diagram below; the white leafs are visible.



Once you start adding rooms around the portals you start to see how this is all used. In this example we are currently in the Source leaf. The portal in the Source leaf is called the Source portal. Because BSP tree leafs are convex, we ALWAYS know that the leaf on the opposing side of a Source portal (the Destination leaf, 'Leaf 2') will always be visible. We then test each portal in the Destination leaf (there is only one in the above diagram; these are called Destination portals) and, providing that a Destination portal is NOT on the same plane as the Source portal, it too is always visible from the Source portal. If a Destination portal is on the same plane as the Source portal then we just ignore it. If the Destination portal is visible, then it is obvious that the leaf on the other side of the Destination portal is also visible (Leaf 3). This leaf is called a Generator leaf, and it is where the recursive process kicks into play. We will look at this later. For now, though, just look at how the anti-penumbra planes leave the opposing side of the Destination portal to form an anti-penumbra in Leaf 3 (the Generator leaf). You can see that any portals in the Generator leaf (called Generator portals) that fall within the anti-penumbra (the red area) are considered to be visible, and Generator portals that are OUTSIDE the red area are ignored. In the above diagram you can see that only one Generator portal was within the anti-penumbra (partially in also counts as in), so we know that the leaf on the opposing side of this portal (Leaf 6) is also visible from the Source leaf. You can see that we have already rejected Leaf 5 and Leaf 4 from the Source leaf's PVS, which is correct because there is no way that you could ever see into either of those rooms from the Source leaf.

What you have just seen above is the start of how the PVS calculator calculates the PVS for a given leaf. We do this for every Source portal in the Source leaf, and for every Source portal we create an anti-penumbra with every Destination portal in the Destination leaf. In other words, we need to calculate what the Source leaf can see out of all of its doorways. Although the above diagram got us started, we have not yet talked about the recursive process involved in calculating the PVS. In the above diagram we simply stop when Leaf 6 is hit. But what if Leaf 6 also had portals? We would have to check those portals too, because we might be able to see through one of Leaf 6's portals from the Source leaf, which in turn means we would have to recurse into Leaf 7 and test its portals, etc. etc.

The process is not as bad as it seems, and I have drawn some nice diagrams to help you see it more clearly.
As you can see, once again the Source, Destination and Generator leafs are all marked as visible from the Source leaf, and the clip planes (a.k.a. the anti-penumbra) are built from the Source portal to the Destination portal, describing exactly how much of the Generator leaf is visible through the Destination portal as viewed from the Source portal. You can see that of the two Generator portals only one is partially within the clip planes. The portal outside the planes is ignored and not processed, but above you can see that the other portal is partially inside, so we clip away the part of the Generator portal that is outside and are left with just the fragment that is within the penumbra. What happens next is really interesting.
Because the Generator portal is partially visible, it also means the leaf on the other side of this portal is visible, so we mark it as such by adding it to the Source leaf's PVS. Next, this new leaf becomes the new Generator leaf, and the old Generator leaf becomes the new Destination leaf, meaning the old Generator portal now becomes the new Destination portal. This means we now build a new anti-penumbra from the same Source portal but to the new Destination portal (the old Generator portal that was clipped), fully describing what area of the new Generator leaf is still visible. What is great about this is that as we recursively move from leaf to leaf doing this (making the old Generator portal the new Destination portal and rebuilding the anti-penumbra) and clipping each Generator portal to the anti-penumbra, the anti-penumbra gets smaller and smaller with each recursion, until in the end no portals in the current Generator leaf are inside it.
Here is an exact example of what I mean. You can see that as we go into each new Generator leaf the anti-penumbra gets narrower and narrower, raising the chances of clipping out all of the next Generator leaf's portals (Generator portals). As we enter each Generator leaf, that leaf is added to the Source leaf's PVS. We now have a PVS for the current Source leaf. The diagram to the right shows in grey the leafs that could possibly be seen from the Source leaf.


Now don't worry if you have no clue how to code any of this, as I will step through the actual code explaining everything in detail, but do look at the above diagrams until you understand in your head how the system works, especially the part where we make the old Generator portal the new Destination portal and rebuild the anti-penumbra between the Source portal and the new Destination portal.

A Few Last Things Before We Code

One of the last things we have to cover before you can actually understand the code I am going to show you is how exactly we determine whether or not a Generator portal is within the anti-penumbra. It turns out that this is very easy. Taking a look at the following diagram, you can see that the grey areas that are NOT visible are on the SAME side of the clip plane as the Source portal.


In other words then, we will check each Generator portal against each clip plane. If a Generator portal is entirely on the SAME side of a plane as the Source portal, then that portal is NOT within the anti-penumbra and can be rejected. This means we will no longer have to recurse into the leaf on the other side of that portal. If, however, a Generator portal is on the opposing side of a plane to the Source portal, then it IS within that plane's half-space and should be tested against the other planes in the anti-penumbra. If a portal is spanning a plane then the portal gets clipped to the plane, keeping the piece that is on the opposing side of the plane to the Source portal.
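To make that concrete, here is a hedged sketch of that test-and-clip loop. The actual 'ClipToAntiPenumbra' implementation in 'pvs.cpp' will differ; in particular, the idea of storing, alongside each clip plane, which side the Source portal sits on (the 'SourceSide' array and the function name below) is purely my own illustration. It reuses 'ClassifyPoly', 'SplitPortal' and 'DeletePortal' from earlier.

// Illustrative sketch only: clip a generator portal against a set of anti-penumbra
// planes. SourceSide[p] is assumed to hold CP_FRONT or CP_BACK, recording which side
// of plane p the SOURCE portal lies on.
PORTAL * ClipGeneratorToPlanes(PORTAL *Generator,PLANE *ClipPlanes,int *SourceSide,long PlaneCount)
{
for (long p=0;p<PlaneCount;p++)
{
int Side=ClassifyPoly(&ClipPlanes[p],(POLYGON *)Generator);

if (Side==SourceSide[p]) // same side as the source portal: not in the anti-penumbra
{
DeletePortal(Generator);
return NULL;
}

if (Side==CP_SPANNING) // straddles the plane: keep only the piece opposite the source
{
PORTAL *FrontSplit=new PORTAL;
PORTAL *BackSplit=new PORTAL;
ZeroMemory(FrontSplit,sizeof(PORTAL));
ZeroMemory(BackSplit,sizeof(PORTAL));
SplitPortal(Generator,&ClipPlanes[p],FrontSplit,BackSplit);
DeletePortal(Generator);

if (SourceSide[p]==CP_FRONT) { DeletePortal(FrontSplit); Generator=BackSplit; }
else { DeletePortal(BackSplit); Generator=FrontSplit; }
}
// CP_ONPLANE or wholly on the opposing side: keep testing against the next plane
}
return Generator; // whatever is left lies inside the anti-penumbra
}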

All the functions described below can be found in 'pvs.cpp' in the downloadable zip file. We will need surprisingly little code to implement this, considering that we also have to compress the data as we write it.

The functions contained in 'PVS.cpp' are as follows:-

long CalculatePVS();

This is the function called by our 'InitPolygons' routine. It is the only one of these functions called by our main program. It uses the following functions to complete its task.

void RecursePVS(long SourceLeaf,PORTAL *SrcPortal,PORTAL *TargetPortal,long TargetLeaf,BYTE *LeafPVS);

Once 'CalculatePVS' has set up the Source portal and Destination portal, this is the function used to recurse the tree. It is responsible for making the current Generator portal the new Destination portal and rebuilding the anti-penumbra (clip planes) with each recursion.

PORTAL * ClipToAntiPenumbra(PORTAL *SourcePortal,PORTAL *TargetPortal,PORTAL *GeneratorPortal);

This function is responsible for building the anti-penumbra (which is just an array of clip planes) between the Source and Destination portals; it clips the Generator portal to those planes and returns it. It will return NULL if the Generator portal was outside the anti-penumbra.

PLANE GetPortalPlane (PORTAL *Portal);

Small helper function that takes a portal, fills out a PLANE struct and returns it. It does this simply by taking the first vertex in the portal and the portal's normal and stuffing them into a PLANE structure.
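Based on that description, a sketch of 'GetPortalPlane' is only a few lines. The member names used here ('Normal', 'VertexList', 'PointOnPlane' and the x/y/z fields of the vertex) are assumptions carried over from Part 1, so treat this as illustrative rather than the actual code.

// Sketch: build a PLANE from a portal's normal and its first vertex.
// Member names are assumed from the Part 1 structures.
PLANE GetPortalPlane(PORTAL *Portal)
{
PLANE Plane;
Plane.Normal=Portal->Normal; // the portal already stores its normal
Plane.PointOnPlane=D3DXVECTOR3(Portal->VertexList[0].x,
                               Portal->VertexList[0].y,
                               Portal->VertexList[0].z); // any portal vertex lies on the plane
return Plane;
}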

void SetPVSBit (BYTE *VisArray,long DestLeaf) ;

Before we calculate each leaf's PVS, the 'CalculatePVS' function allocates a temporary array to hold the CURRENT leaf's PVS data. This array is large enough to hold 1 bit for each leaf. When we find that a Generator portal is visible (and therefore so is the leaf on its opposing side), we need to record that the leaf on the opposing side of the Generator portal is visible from the Source leaf. We pass this function the temporary array and the leaf that is visible, and it sets the correct bit to 1 in the temporary visibility array.
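The bit twiddling involved is tiny. A sketch of 'SetPVSBit', matching the byte/bit arithmetic described above (the real function body may differ slightly in style), looks like this:

// Sketch: mark DestLeaf as visible in a per-leaf visibility bit array.
void SetPVSBit(BYTE *VisArray,long DestLeaf)
{
long Byte=DestLeaf>>3;    // which byte the leaf's bit lives in (leaf / 8)
long Bit=DestLeaf&7;      // which bit inside that byte (leaf % 8)
VisArray[Byte]|=(1<<Bit); // switch the bit on
}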

long CompressLeafSet(BYTE *VisArray,long WritePos);

Once 'CalculatePVS' has calculated the PVS for a given leaf, that leaf's PVS is stored in the temporary array discussed above. It then calls this function to add the data to the master PVSData array, compressing it as it is added (using zero run-length encoding). When the 'CalculatePVS' function returns control back to the main program, the 'PVSData' array will hold the PVS for every leaf, compressed, and each leaf will also have in its PVSIndex field an offset into this array giving the location of the start of its PVS data. Adding each leaf's visibility info to the array one leaf at a time saves on memory overhead: instead of having to calculate the whole PVS in one array and then compress it into another (therefore needing twice the size of the PVS data during compression because of the need for two arrays), we just calculate one leaf's visibility info at a time and then add it to the PVSData array. The buffer that held the uncompressed visibility info for that leaf can then be cleared and reused for the next leaf, and so on. Oh well, I suppose we had better dig in and start looking at the code.
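Zero run-length encoding simply means that every run of zero bytes is replaced by a zero marker followed by a count of how many zeros were in the run. As a hedged sketch of what 'CompressLeafSet' does (the exact byte format the real code writes may differ, for example in how long a single run is allowed to be):

// Illustrative zero run-length encoder: copies one leaf's visibility bytes into the
// master PVSData array starting at WritePos and returns the new write position.
// The exact format used by the real CompressLeafSet may differ.
long CompressLeafSet(BYTE *VisArray,long WritePos)
{
long p=WritePos;
for (long i=0;i<BytesPerSet;i++)
{
if (VisArray[i]!=0)
{
PVSData[p++]=VisArray[i]; // non-zero bytes are copied straight across
}
else
{
BYTE RunLength=0;
while (i<BytesPerSet && VisArray[i]==0 && RunLength<255)
{
RunLength++; // count how many zero bytes there are in a row
i++;
}
i--; // the for loop will advance us onto the next byte
PVSData[p++]=0;         // write a zero marker ...
PVSData[p++]=RunLength; // ... followed by the length of the run
}
}
return p; // the new write position in PVSData
}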

The 'CalculatePVS' Function


Just so we get an overall picture of things up to this point, let's take a peek back at our 'InitPolygons' function. Remember that this is the function that calls all the other setup functions: it calls 'BuildBspTree', it calls 'BuildPortals', and now we will see it in its complete form, where it also sets up a few things in preparation for the call to 'CalculatePVS'. Here it is in its complete form to give you an idea of how all the various subjects we have discussed up to now fit into the overall picture. The new lines are the ones that follow the call to 'BuildPortals'.

void InitPolygons(void)
{
ReserveInitialMemoryForArrays();
PolygonList=NULL;
PolygonList=LoadMWM("demolevel.mwm");
LoadTextures();

BuildBspTree(0,PolygonList);
BuildPortals();

BytesPerSet=(NumberOfLeafs+7)>>3;
PVSData=(BYTE *)malloc (NumberOfLeafs*BytesPerSet);
ZeroMemory(PVSData,NumberOfLeafs*BytesPerSet);
PVSCompressedSize=CalculatePVS();
PVSData=(BYTE *) realloc(PVSData,PVSCompressedSize);
} // END FUNCTION

The first thing we do is calculate the initial space needed to hold the PVS data for the tree. At this point we have no idea how much the PVS data will be compressed, so we have to allocate enough space for the worst-case scenario of no compression. Once we have the PVS we will also have its total size, so we can then resize this array to the actual amount needed. We need to allocate enough memory initially so that every leaf can have 1 bit for every other leaf in the tree.
'BytesPerSet' is a global variable that holds the number of bytes EACH leaf will need to hold its own PVS data. The reason we add 7 and then divide by 8 (>>3) is to allow for the truncation of integer division, rounding up to a whole number of bytes. For example, if we have 9 leafs, and we know that each byte holds 8 bits, then we need 2 bytes to hold all the bits, because the bits for the first eight leafs would be in byte[0] and the 9th bit would be in byte[1]. However, if we simply do BytesPerSet=9/8 we get 1, whereas (9+7)/8 = 16/8 = 2. In other words, this just ensures that we allocate the correct number of bytes.

Next we multiply this by the number of leafs in the tree, because each leaf will need its own PVS byte set. We then allocate the memory and clear it to zero. Then we call the 'CalculatePVS' function, which is the function that does all the work. When this function returns, the PVS set for every leaf will be stored together in the global 'PVSData' array and the function will return the actual size of the data. We can then use this to resize the PVSData array to the correct size, freeing up any unneeded memory.

At this point everything is done: we have a Solid Leaf BSP Tree and a 'PVSData' array that each leaf uses to see which other leafs are visible from it. You are then ready to render.

Let's now take a look at the 'CalculatePVS' function a few lines at a time, and then we will discuss the helper functions it uses.

long CalculatePVS()
{
BYTE * LeafPVS=new BYTE[BytesPerSet];
long PVSMasterWritePointer=0;
First we allocate a temporary buffer 'LeafPVS' that will be used to hold the uncompressed PVS information for each leaf. Remember that we calculated the value of the global variable 'BytesPerSet' prior to the function call. Once this function has calculated the PVS information for a leaf into this buffer, the buffer is copied into the master 'PVSData' array, being compressed as it is copied. The variable 'PVSMasterWritePointer' will be used to hold the current write position in the 'PVSData' array. This is a little like a file pointer when reading or writing a file: the variable holds the offset into the master 'PVSData' array that this leaf's visibility information will be copied to. This obviously starts at zero for the first leaf. As each leaf's PVS information is added to the master 'PVSData' array, 'PVSMasterWritePointer' keeps track of the position the next leaf's information will be written to. This is also the offset we store in the leaf structure's 'PVSIndex' field, which we used earlier in our 'DrawTree' function to jump into the 'PVSData' array at the correct position for the current leaf the camera was in.
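To make the role of 'PVSIndex' a little more concrete, this is roughly how the renderer gets at a leaf's compressed set later on. This is only a sketch based on the 'DrawTree' discussion from earlier; 'FindCurrentLeaf' is a hypothetical helper standing in for however your code determines which leaf the camera is in:

// Sketch only: locating the start of one leaf's compressed PVS at render time.
long CameraLeaf = FindCurrentLeaf(CameraPosition);            // hypothetical helper
BYTE *PVSPointer = &PVSData[ LeafArray[CameraLeaf].PVSIndex ];
// PVSPointer now addresses the first byte of that leaf's zero run length
// encoded visibility set, which the renderer walks to decide what to draw.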

Next we have to loop through each leaf (to calculate the PVS for it).

for (long Leaf=0;Leaf<NumberOfLeafs;Leaf++)
{
LeafArray[Leaf].PVSIndex=PVSMasterWritePointer;
ZeroMemory(LeafPVS,BytesPerSet);
SetPVSBit(LeafPVS,Leaf);

Above we set up a for loop that will step through each leaf in the LeafArray and calculate the visibility information for that leaf. We start by storing the value of 'PVSMasterWritePointer' into the leaf structure so that we can offset into the array during rendering and get to any leaf's visibility information.

One of the first things we do is call 'SetPVSBit' and pass in the current leaf and the visibility buffer. This will find the bit for that leaf in the buffer and set it to one, because it is obvious that each leaf can see itself. In other words, if the current leaf we were working on was Leaf 10 (Leaf=10) then this function would set the bit for leaf 10 to indicate that Leaf 10 can see Leaf 10, which is obvious. That works out as bit 2 (zero based) of the second byte of the 'LeafPVS' buffer, because 10>>3 = 1 and 10-8 = 2.

Now, for each leaf (which is the Source Leaf) we also have to loop through each portal in that leaf. These portals are the Source Portals (refer back to the diagrams if you have forgotten about Source, Destination and Generator portals) and for each Source Portal we need to find the leaf on the opposite side of the portal. This will be the Destination Leaf. (SPI stands for Source Portal Index.)
for (long SPI=0;SPI<LeafArray[Leaf].NumberOfPortals;SPI++)
{
PORTAL * SourcePortal=PortalArray[LeafArray[Leaf].PortalIndexList[SPI]];
long TargetLeaf=SourcePortal->LeafOwnerArray[0];
if (TargetLeaf==Leaf)
{
TargetLeaf=SourcePortal->LeafOwnerArray[1];
}
SetPVSBit(LeafPVS,TargetLeaf);

Remember that each portal contains a list of the leafs it exists in, and each leaf contains a list of the portals that exist in it. Above we get each source portal's index by stepping through the current leaf's 'PortalIndexList' array, which is an array of portal indices into the master portal array. Once we have the current Source Portal we then need to get the Destination Leaf. This is easy, because the source portal will exist in exactly two leafs and one of those leafs will be the current leaf (the Source Leaf). So we step through the portal's 'LeafOwnerArray' (of which there are only two elements) until we find a leaf that is NOT the Source Leaf. This will be the leaf on the other side of the portal (the Destination Leaf, a.k.a. the Target Leaf).

As I said earlier, because of the convex nature of BSP trees, every leaf sharing a source portal with the Source Leaf (in other words each Destination Leaf) will always be visible from the source (refer back to the earlier diagrams for a refresher), so we once again call 'SetPVSBit' and pass in the TargetLeaf (Destination Leaf) index so this leaf's bit gets set in the 'LeafPVS' buffer. This happens for each Source Portal in the Source Leaf, so every leaf on the opposing side of a Source Portal will be a Destination Leaf and have its bit set in the buffer during its iteration of the loop.

So now we have the Source Leaf, the Source Portal and the Destination Leaf, which is the leaf on the other side of the Source Portal. Our next job is to loop through every portal in the Destination Leaf and see whether it is visible. This is easy: if any portal in the Destination Leaf (a Destination Portal) is on the same plane as the Source Portal then the Source Portal cannot SEE that Destination Portal and it should be ignored. Any other Destination Portal, however, will be visible from the Source Portal, because remember the Source Portal is also a portal in the Destination Leaf. So first we have to check 1) whether the Source Portal is on the same plane as the Destination Portal, and 2) that the Source Portal is NOT the Destination Portal, because the Source Portal is ALSO in the Destination Leaf. If we don't check for this the Source Portal will get treated as a Destination Portal as well and the recursive loop will just go on looping forever between the Source Leaf and the Destination Leaf. Believe me, I know what I am talking about; I had to reset my machine about 50 times in one day because I overlooked this small and obvious fact.

for (long DPI=0;DPI<LeafArray[TargetLeaf].NumberOfPortals;DPI++)
{
PORTAL * TargetPortal=PortalArray[LeafArray[TargetLeaf].PortalIndexList[DPI]];
if (SourcePortal!=TargetPortal && ClassifyPoly(&GetPortalPlane(SourcePortal),(POLYGON *)TargetPortal)!=CP_ONPLANE)
{

RecursePVS(Leaf,SourcePortal,TargetPortal,TargetLeaf,LeafPVS);
} // End for If Source != Dest
} // End for DPI
} // End for SPI


As you can see above, we loop through each Destination Portal in the Destination Leaf and, providing the portal is not the Source Portal and is not on the same plane as the Source Portal, we call the 'RecursePVS' function that is responsible for traversing the leafs of our tree. We pass in the 'Leaf' we are currently building the visibility array for, the Source and Destination Portals and the Destination Leaf. We also pass the temporary visibility buffer 'LeafPVS' because the function will need access to it to set the bits of all the leafs it finds to be visible. The 'RecursePVS' function will get the leaf on the opposing side of the Target Portal, which is called the Generator Leaf. It will then build a set of clip planes (the anti-penumbra) between the Source and Target Portals and see if any of the portals in the Generator Leaf are within the anti-penumbra. If so, 'RecursePVS' will call itself recursively, but this time the Generator Portal will be passed in as the Target Portal. This carries on recursing until it finds a Generator Leaf where none of its portals are within the anti-penumbra. Every time a new Generator Leaf is found its bit is set in the visibility buffer 'LeafPVS'. We will look at this in a moment.

When this function finally stops recursing and program flow returns, it means we have calculated all the leafs visible from the Source Leaf through that Source Portal and Destination Portal (Target Portal). We have to do this for each Target Portal in the Target Leaf, however. After the 'DPI' loop has finished we will have precalculated ALL the leafs visible through THAT Source Portal. We then do this for every portal in the Source Leaf (every Source Portal) and eventually, when the 'SPI' loop comes to an end, we have calculated this leaf's entire visibility information.

With this leaf's PVS information calculated (all the relevant bits set) it is now time to copy this buffer into the master 'PVSData' array, compressing it using zero run length encoding as we do so. To do this we call the 'CompressLeafSet' function and pass in the buffer containing this leaf's visibility information (which is 'LeafPVS') along with the 'PVSMasterWritePointer'. The function will compress the buffer, copy it into the 'PVSData' array and return how many bytes got written after the compression took place. We simply add this onto our 'PVSMasterWritePointer' and we then have the offset in the array at which to start writing the PVS for the next leaf.

PVSMasterWritePointer+=CompressLeafSet(LeafPVS,PVSMasterWritePointer);
} // End for Leaf

delete [] LeafPVS;
return PVSMasterWritePointer;
} // End Function



When the function comes to an end (because we have calculated the visibility information for each leaf) the 'PVSData' array will hold a complete PVS, and 'PVSMasterWritePointer' will hold the total number of bytes that were written to the PVS array. In other words, it holds the size of the PVS array in bytes. We return this from the function so that we can use it to resize the 'PVSData' array to the correct size, as we saw earlier in the 'InitPolygons' function.

Without my interruptions, here is the complete code for 'CalculatePVS' in its entirety:-

long CalculatePVS()
{
BYTE * LeafPVS=new BYTE[BytesPerSet];
long PVSMasterWritePointer=0;
for (long Leaf=0;Leaf<NumberOfLeafs;Leaf++)
{
LeafArray[Leaf].PVSIndex=PVSMasterWritePointer;
ZeroMemory(LeafPVS,BytesPerSet);
SetPVSBit(LeafPVS,Leaf);

for (long SPI=0;SPI<LeafArray[Leaf].NumberOfPortals;SPI++)
{
PORTAL * SourcePortal=PortalArray[LeafArray[Leaf].PortalIndexList[SPI]];
long TargetLeaf=SourcePortal->LeafOwnerArray[0];
if (TargetLeaf==Leaf)
{
TargetLeaf=SourcePortal->LeafOwnerArray[1];
}
SetPVSBit(LeafPVS,TargetLeaf);
for (long DPI=0;DPI<LeafArray[TargetLeaf].NumberOfPortals;DPI++)
{
PORTAL * TargetPortal=PortalArray[LeafArray[TargetLeaf].PortalIndexList[DPI]];
if (SourcePortal!=TargetPortal && ClassifyPoly(&GetPortalPlane(SourcePortal),(POLYGON *)TargetPortal)!=CP_ONPLANE)
{

RecursePVS(Leaf,SourcePortal,TargetPortal,TargetLeaf,LeafPVS);
} // End for If Source != Dest
} // End for DPI
} // End for SPI
PVSMasterWritePointer+=CompressLeafSet(LeafPVS,PVSMasterWritePointer);
} // End for Leaf

delete [] LeafPVS;
return PVSMasterWritePointer;
} // End Function



Before we continue with the rest of the support functions used by 'CalculatePVS' I would just like to show you the code for 'SetPVSBit', as it is called above and will be called by the following functions also. It is a very small function which simply sets the correct bit in the visibility buffer that we pass in as a parameter. It looks like so:-

void SetPVSBit (BYTE *VisArray,long DestLeaf)
{
long ByteToSet=DestLeaf>>3;
BYTE BitToSet=(BYTE)(DestLeaf-(ByteToSet<<3));
VisArray[ByteToSet]|=1<<BitToSet;
}

As you can see it is very small. To find the correct BYTE to set in the temporary visibility buffer we divide by 8 (>>3). So for example, if we called this function to mark leaf 19 as being visible (remember 'VisArray' is a temporary visibility buffer for the current leaf having its PVS calculated) then 19/8=2 (truncated to 2 by the integer division), which is correct: we know that bit 19 lives in the 3rd byte (VisArray[2] because it's zero based). To find the bit we wish to set in that byte we then multiply the byte index (2) by 8 (<<3), which gives us 2*8=16. We then subtract this from the index of the leaf we wish to set (DestLeaf), so 19-16=3. This tells us we need to set bit 3 (zero based again, so the 4th bit) in the third byte. We do this by shifting '1' left by three places, putting it in the fourth position (because the '1' starts in position one before any shifting is done). We then 'OR' this bit into the visibility array so that if the bit is not already set it becomes set, and if it is already set it simply remains set.
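For completeness, the reverse operation (testing whether a particular leaf's bit is set, which is essentially what the 'pvs&mask' test in 'DrawTree' boils down to) could be written like this. This helper is not part of the demo source; it is shown here only to mirror 'SetPVSBit':

// Not in the demo source - shown only as the mirror image of SetPVSBit.
bool IsPVSBitSet (BYTE *VisArray,long DestLeaf)
{
long ByteToTest=DestLeaf>>3;                      // which byte holds this leaf's bit
BYTE BitToTest=(BYTE)(DestLeaf-(ByteToTest<<3));  // which bit inside that byte
return (VisArray[ByteToTest] & (1<<BitToTest))!=0;
}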

The 'RecursePVS' function


This function is the recursive engine behind the PVS code. It steps through the leafs setting the bits in the visibility buffer and keeps on recursing until no more Generator Portals fall within the anti-penumbra planes. Let's take a look at it.

void RecursePVS(long SourceLeaf,PORTAL *SrcPortal,PORTAL *TargetPortal,long TargetLeaf,BYTE *LeafPVS)
{
long GeneratorLeaf=TargetPortal->LeafOwnerArray[0];
if (GeneratorLeaf==TargetLeaf) GeneratorLeaf=TargetPortal->LeafOwnerArray[1];
SetPVSBit(LeafPVS,GeneratorLeaf);

As we have passed in the Target Portal and the Target Leaf as parameters, our first task is to get the leaf on the opposing side of the Target Portal, similar to how we got the Target Leaf from the Source Portal. This leaf is the Generator Leaf. As we find Generator Leafs we mark them as being visible, because any Target Portal passed into this function must be visible, and therefore the leaf on the other side of that portal must also be visible. Once we have the Generator Leaf, we set the relevant bit in the visibility buffer.

Our next job is to find which side of the Source Portal the Source Leaf is on, and which side of the Target Portal the Target Leaf is on. We will use this information in a minute to remove Generator Portals from consideration in trivial cases. To find whether the Source Leaf is behind or in front of the Source Portal we simply get the center of the leaf's bounding box and classify that center point against the portal's plane. We do the same for the Target Leaf to see which side (CP_BACK, CP_FRONT etc) of the Target Portal it is on. We will look at why we need this information shortly.

D3DXVECTOR3 SourceLeafCenter=(LeafArray[SourceLeaf].BoundingBox.BoxMax+LeafArray[SourceLeaf].BoundingBox.BoxMin)/2;
D3DXVECTOR3 TargetLeafCenter=(LeafArray[TargetLeaf].BoundingBox.BoxMax+LeafArray[TargetLeaf].BoundingBox.BoxMin)/2;
int SourceLeafLocation=ClassifyPoint(&SourceLeafCenter,&GetPortalPlane(SrcPortal));
int TargetLeafLocation=ClassifyPoint(&TargetLeafCenter,&GetPortalPlane(TargetPortal));


We will now loop through EACH Generator Portal in the Generator Leaf and first of all do some quick and simple tests to see whether it could be visible. We make a copy of the Source Portal and of the current Generator Portal, because we will need to continually clip these portals against the anti-penumbra planes later on until (possibly) nothing is left of them. We can't clip the actual portals in the master portal array because we will need those to create the visibility information for all the other leafs in the tree, therefore we make copies that we can clip.

for (long GPI=0;GPI<LeafArray[GeneratorLeaf].NumberOfPortals;GPI++)
{
if (PortalArray[LeafArray[GeneratorLeaf].PortalIndexList[GPI]]==TargetPortal) continue;
PORTAL *SourcePortal=new PORTAL;
*SourcePortal=*SrcPortal;
PORTAL *GeneratorPortal=new PORTAL;
*GeneratorPortal=*PortalArray[LeafArray[GeneratorLeaf].PortalIndexList[GPI]];



The first thing we have to do is check that the Generator Portal is NOT the Target Portal, otherwise we will recurse forever. We discussed this before when talking about finding the Target Portal. If the Generator Portal IS the Target Portal then we move on to the next iteration of the loop to process the next Generator Portal, if one exists.
We also make a copy of the Source Portal and the current Generator Portal for this iteration of the loop, as discussed above.

Now we will see why we needed to calculate which side of the Source Portal the Source Leaf lay on. If the Generator Portal is on the same side of the Source Portal as the Source Leaf, then the Generator Portal cannot possibly be seen. Look below at the diagram.

Can you see that the Generator Portal is on the same side of the Source Portal as the Source Leaf? It would be a waste of time building a set of anti-penumbra planes and clipping and testing the Generator Portal against them when we can already see that the portal could not possibly be seen. So all we have to do is find out which side of the Source Portal the Generator Portal is on, and if it is on the same side as the Source Leaf we can skip on to the next Generator Portal.

int GeneratorLocation=ClassifyPoly(&GetPortalPlane(SourcePortal),(POLYGON*) GeneratorPortal);
if (GeneratorLocation==CP_ONPLANE || GeneratorLocation==SourceLeafLocation) {delete GeneratorPortal;delete SourcePortal;continue;}


You can see above that if the Generator Portal is on the same side as the Source Leaf, or on the same plane as the Source Portal, then there is no need to test this portal. We simply delete the copies we made of the Source and Generator Portals and move on to the next Generator Portal, if there is one.

Although I have not done a diagram of it, the same is also true if the Generator Portal is on the same side of the Target Portal as the Target Leaf. You will have to draw your own diagram if you do not believe me. So the following code checks for this also:

GeneratorLocation=ClassifyPoly(&GetPortalPlane(TargetPortal),(POLYGON*) GeneratorPortal);
if (GeneratorLocation==CP_ONPLANE || GeneratorLocation==TargetLeafLocation) {delete GeneratorPortal;delete SourcePortal;continue;}

If we have made it here then we have a Generator portal that could Possibly be seen so we have to test it against the Anti-Penumbra planes to find out.

Testing it against the Anti-Penumbra is done via a call to the 'ClipToAntiPenumbra' function shown below

GeneratorPortal=ClipToAntiPenumbra(SourcePortal,TargetPortal,GeneratorPortal);

if (GeneratorPortal==NULL)
{
if (SourcePortal) delete SourcePortal;
continue;
}

Above we send the Source, Target and current Generator Portal being tested into the 'ClipToAntiPenumbra' function. This function creates the clip planes of the anti-penumbra and clips the Generator Portal against them, returning the result. If the function returns NULL then the Generator Portal was completely clipped away, which means it was outside the anti-penumbra, and therefore we don't have to recurse any further for this portal because it is not visible. In this case we simply delete the copy of the Source Portal that we made and move on to the next iteration of the loop, where we test the next Generator Portal.

Now, something I have not mentioned up until now is that I will then do the same in reverse. In other words I will call the 'ClipToAntiPenumbra' function again but pass in the Source Portal as the Generator Portal and the Generator Portal as the Source Portal. This creates a list of clip planes between the Generator Portal and the Target Portal, which may allow us to clip the Source Portal down a bit. This lets us shave off that little bit more each time, because as I said before, the anti-penumbra gets narrower with each recursion as the Generator Portals get clipped. By clipping the Source Portal against the anti-penumbra as well, we get the anti-penumbra to narrow more quickly, raising the chances of completely clipping out the next Generator Portal and therefore getting a smaller and more accurate PVS. Many PVS implementations do not do this, but I wanted to shave off as much as possible even if it was at the expense of compile time.

SourcePortal= ClipToAntiPenumbra(GeneratorPortal,TargetPortal,SourcePortal);

if (SourcePortal==NULL)
{
if (GeneratorPortal) delete GeneratorPortal;
continue;
}

If the Generator Portal and the Source Portal still exist (have not been completely clipped away) then we make this function call itself, BUT this time passing in the current Generator Portal and Generator Leaf as the Target Portal and Target Leaf respectively. This means that the next time round in the recursion a new Generator Leaf on the opposing side of the old Generator Portal (the new Target Portal) will be added to the visibility buffer, and the anti-penumbra will be built between the Source Portal and the new Target Portal (the old Generator Portal).

RecursePVS(SourceLeaf,SourcePortal,GeneratorPortal,GeneratorLeaf,LeafPVS);
delete GeneratorPortal;
delete SourcePortal;
}
}



That was not as bad as it could have been. Read through it again until you understand the recursive process involved. The key to understanding it is in seeing how, the next time the function calls itself, it uses the old Generator Portal as the new Target Portal to find a new Generator Leaf on the opposing side of this new Target Portal. Remember that each new Generator Leaf found is considered to be visible and has its bit set in the visibility buffer.

Here is the code in its complete form:-

void RecursePVS(long SourceLeaf,PORTAL *SrcPortal,PORTAL *TargetPortal,long TargetLeaf,BYTE *LeafPVS)

{

long GeneratorLeaf=TargetPortal->LeafOwnerArray[0];
if (GeneratorLeaf==TargetLeaf) GeneratorLeaf=TargetPortal->LeafOwnerArray[1];
SetPVSBit(LeafPVS,GeneratorLeaf);

D3DXVECTOR3 SourceLeafCenter=(LeafArray[SourceLeaf].BoundingBox.BoxMax+LeafArray[SourceLeaf].BoundingBox.BoxMin)/2;

D3DXVECTOR3 TargetLeafCenter=(LeafArray[TargetLeaf].BoundingBox.BoxMax+LeafArray[TargetLeaf].BoundingBox.BoxMin)/2;

int SourceLeafLocation=ClassifyPoint(&SourceLeafCenter,&GetPortalPlane(SrcPortal));
int TargetLeafLocation=ClassifyPoint(&TargetLeafCenter,&GetPortalPlane(TargetPortal));


for (long GPI=0;GPI<LeafArray[GeneratorLeaf].NumberOfPortals;GPI++)
{

if (PortalArray[LeafArray[GeneratorLeaf].PortalIndexList[GPI]]==TargetPortal){continue;}
PORTAL *SourcePortal=new PORTAL;
*SourcePortal=*SrcPortal;
PORTAL *GeneratorPortal=new PORTAL;
*GeneratorPortal=*PortalArray[LeafArray[GeneratorLeaf].PortalIndexList[GPI]];


int GeneratorLocation=ClassifyPoly(&GetPortalPlane(SourcePortal),(POLYGON*) GeneratorPortal);
if (GeneratorLocation==CP_ONPLANE || GeneratorLocation==SourceLeafLocation) {delete GeneratorPortal;delete SourcePortal;continue;}

GeneratorLocation=ClassifyPoly(&GetPortalPlane(TargetPortal),(POLYGON*) GeneratorPortal);
if (GeneratorLocation==CP_ONPLANE || GeneratorLocation==TargetLeafLocation) {delete GeneratorPortal;delete SourcePortal;continue;}


GeneratorPortal=ClipToAntiPenumbra(SourcePortal,TargetPortal,GeneratorPortal);

if (GeneratorPortal==NULL)
{
if (SourcePortal) delete SourcePortal;
continue;
}

SourcePortal= ClipToAntiPenumbra(GeneratorPortal,TargetPortal,SourcePortal);

if (SourcePortal==NULL)
{
if (GeneratorPortal) delete GeneratorPortal;
continue;
}

RecursePVS(SourceLeaf,SourcePortal,GeneratorPortal,GeneratorLeaf,LeafPVS);
delete GeneratorPortal;
delete SourcePortal;
}
}



The 'ClipToAntiPenumbra' Function


The next function we have to look at is 'ClipToAntiPenumbra'. This function is responsible for building the list of clip planes (from the Source and Target Portals) and also tests the Generator Portal against these planes, clipping it and even deleting it if necessary.

Before we start I would just like to revisit something that we looked at earlier and expand on it. You may remember I said that the clip planes are created by pairing each point in the Source Portal with EVERY edge in the Target Portal. We then check that the plane divides the two portals and accept it if it does. There is something I have not mentioned until now: once we have done this and have our list of clip planes, we do the same thing again the other way round. In other words we then loop through each vertex in the TARGET portal and build a plane with EVERY edge of the Source Portal, and once again add any planes that divide the two portals to our clip list.
The reason we do this has not been very clear up until now, because every diagram has used two square portals, which means that building planes from Source to Target generates the same set of planes as going from Target to Source. In that case there is no advantage, but consider two portals of different shapes: a square Source Portal and a triangular Target Portal. By also creating planes from Target to Source we may find a plane that more tightly hugs what the Source Portal can see through the Target Portal. The planes generated would be quite different, as shown below.



You can see from the above picture that the Target-to-Source method would generate different clip planes than the Source-to-Target method. This may allow us to find a plane that narrows the anti-penumbra just that little bit more, allowing us to clip away a little more of the Generator Portal and therefore increase the accuracy of this leaf's PVS.

I will now introduce you to a new structure used by the following function to hold the anti-penumbra clip planes. It is defined as follows:-

struct CLIPPLANES
{
WORD NumberOfPlanes;
PLANE *Planes;
};

No real surprises here then. The CLIPPLANES structure contains a pointer to an array of Clip Planes and also a Count of how many planes are in the array.

Let's now step through the code of the 'ClipToAntiPenumbra' function a bit at a time.

PORTAL * ClipToAntiPenumbra(PORTAL *SourcePortal,PORTAL *TargetPortal,PORTAL *GeneratorPortal)
{
D3DXVECTOR3 EdgeVector1,EdgeVector2,Normal;
int PortalLocation;
PORTAL *FrontSplit,*BackSplit,*TempSource,*TempTarget;
CLIPPLANES ClipPlanes; // Create a ClipPlane set

ClipPlanes.NumberOfPlanes=0;
ClipPlanes.Planes=new PLANE [SourcePortal->NumberOfVertices*TargetPortal->NumberOfVertices*2];
PLANE TempPlane;
int NextVertex=0;

We start off by declaring some local variables that we will be using shortly, including a CLIPPLANES structure called 'ClipPlanes'. We set its count to zero because we have not added any planes to the list yet, and we allocate enough memory in the structure to hold the maximum number of clip planes that could possibly be created. In practice far fewer clip planes will be added to this list, because remember we are only going to accept clip planes that divide the Source and Target Portals and many will not, so they will never be added. But to be on the safe side we reserve enough memory for EVERY plane that could be created from the Source to the Target and from the Target to the Source. That is why we multiply by two: remember that we are also going to generate a set of clip planes from the Target to the Source Portal as well as from the Source to the Target Portal.

Our first task is to set up a simple for loop that will loop twice. The first time through the loop we will build planes from the Source Portal to the Target Portal, and the second time through we will swap them around and build the planes from the Target Portal to the Source Portal. Instead of having two copies of very similar code, we simply loop twice and use temporary pointers to the Source and Target Portals. The rest of the function then just builds the planes from the temporary Source to the temporary Target. All we have to do in our outer loop is swap them around the second time through, so that the temporary Source Portal points at the Target Portal and the temporary Target points at the Source Portal, like so:

for (int loop=0;loop<2;loop++)
{
if (loop==0)
{
TempSource=SourcePortal;
TempTarget=TargetPortal;
}
else
{
TempSource=TargetPortal;
TempTarget=SourcePortal;
}

We now have our temporary Source and Target Portal pointers set up to point at the correct portal depending on the current iteration of the loop. Now it's time to create the planes from each vertex in the Source Portal to each edge in the Target Portal. First we set up a for loop to step through each vertex in the (temporary) Source; this will be the first point needed to construct each plane. What we do have to check for, though, is that the Source Portal's vertex is not ON the Target Portal's plane. This can happen if, for example, two portals meet in an 'L' shaped configuration. In that situation the clip plane cannot be built, so we simply skip the vertex and move on to the next one.

for (int SV=0;SV<TempSource->NumberOfVertices;SV++)
{
PortalLocation=ClassifyPoint((D3DXVECTOR3 *)&TempSource->VertexList[SV],&GetPortalPlane(TempTarget));
if (PortalLocation==CP_ONPLANE) continue; // Skip it, because it's 'L' shaped

Providing the Source vertex is not on-plane with the Target Portal, we now loop through each edge in the Target Portal. Once again we need a little conditional logic at the top of this inner loop, because if we just loop through each vertex in the Target Portal and use 'Vertex' and 'Vertex+1' as our edge, we eventually reach the last vertex in the Target Portal where 'Vertex+1' does not exist. All we have to do is check if we are at the last vertex and, if so, wrap back around so that the last edge uses the last vertex and the first vertex, like so:-

for (int TP=0;TP<TempTarget->NumberOfVertices;TP++)
{
if (TP==TempTarget->NumberOfVertices-1)
{
NextVertex=0;
}
else
{
NextVertex=TP+1 ;
}

You can see that 'TP' holds the current vertex (which will be used as the first vertex of each edge) and 'NextVertex' holds the second vertex of the edge. This is just set to 'TP+1', unless 'TP' is the last vertex in the Target Portal, in which case we set 'NextVertex' to zero (the first vertex in the Target Portal).
This means we now have the Source vertex index in 'SV' and the two Target Portal vertex indices in 'TP' and 'NextVertex'.
With this information we can now create the clip plane. We have to create two vectors: the first between the Source vertex and the first vertex of the current Target edge, and the second from the first vertex of the Target edge to its second vertex. The reason we do this is that if we feed both vectors into the cross product we get back a vector that is orthogonal to them both. As both vectors lie on the plane we are trying to create, a vector orthogonal to them is the plane's normal, which is what we need to define our clip plane. Remember that our PLANE structure requires a normal and a point ON the plane. Once we have the normal we normalize it and copy it into our temporary plane structure 'TempPlane'. We can use ANY of the three vertices as the point on the plane to be copied into 'TempPlane'; I have just used the Source vertex ('SV').

EdgeVector1=*((D3DXVECTOR3 *)&TempSource->VertexList[SV])-*((D3DXVECTOR3 *)&TempTarget->VertexList[TP]);
EdgeVector2=*((D3DXVECTOR3 *)&TempTarget->VertexList[NextVertex])-*((D3DXVECTOR3 *)&TempTarget->VertexList[TP]);


D3DXVec3Cross(&Normal,&EdgeVector1,&EdgeVector2);// create normal
D3DXVec3Normalize(&TempPlane.Normal,&Normal); // make it a unit normal
TempPlane.PointOnPlane=*(D3DXVECTOR3 *)&TempSource->VertexList[SV];//Any vertex will do



Now that we have a plane stored in 'TempPlane' we have to test whether or not it is an anti-penumbra plane. Remember that it only qualifies IF the plane divides the Source Portal and Target Portal so that each portal lies on an opposing side of the plane. If it does, it is added to our clip plane list 'ClipPlanes'.
The following code basically just says "if the Source Portal is on the FRONT of the plane and the Target Portal is on the BACK of the plane then accept it and add it to the clip list", else "if the Source Portal is on the BACK of the plane and the Target Portal is on the FRONT of the plane then accept it also". Planes that do not divide the portals are just ignored.

if(ClassifyPoly(&TempPlane,(POLYGON*)TempSource)== CP_FRONT)
{
if(ClassifyPoly(&TempPlane,(POLYGON*)TempTarget)== CP_BACK)
{
ClipPlanes.Planes[ClipPlanes.NumberOfPlanes] = TempPlane;
ClipPlanes.NumberOfPlanes++;
//Save clip plane
}
}

else

if(ClassifyPoly(&TempPlane,(POLYGON *)TempSource) == CP_BACK)
{
if(ClassifyPoly(&TempPlane,(POLYGON*)TempTarget) == CP_FRONT)
{
ClipPlanes.Planes[ClipPlanes.NumberOfPlanes] = TempPlane;
ClipPlanes.NumberOfPlanes++;
//Save clip plane
}
}
} // end for tp
} // end for sp
} // END FOR LOOP

At this point we have exited all the loops we started, and 'ClipPlanes' holds the number of clip planes in the anti-penumbra along with a pointer to those planes. At last we have our anti-penumbra built. All we have to do now is see whether the Generator Portal is inside the anti-penumbra or not. We do this by looping through each clip plane and clipping the Generator Portal against it. Just to jog your memory I will show you Diagram W) again so you can see how the clipping works:-

Remember we said earlier that if anything lies on the same side of a clip plane as the Source Portal does, then it cannot possibly be visible. In the diagram above you can see that the Source Portal is on the back side of the green plane. If you then look at the Generator Leaf you can see that any area behind the green line is colored grey, indicating that it cannot be seen either. This means that we test the Generator Portal against each clip plane and also test the Source Portal against the same clip plane, and if they both lie on the same side of it then the Generator Portal cannot be seen from the Source Portal. In that case the Generator Portal is deleted, the anti-penumbra planes are deleted and the function returns NULL back to 'RecursePVS', ending the recursive process in that direction.
If the Generator Portal is on the opposing side of the clip plane to the Source Portal then it passes the test for the current clip plane and moves on to the next iteration to be tested against the next plane. If the Generator Portal survives all the clip planes it is returned to 'RecursePVS', where the leaf on the opposing side of the Generator Portal will become the new Generator Leaf and so the recursion continues.
If the Generator Portal is spanning the clip plane then only part of the portal is visible (the bit on the opposing side of the plane from the Source Portal), so the portal is split against the plane (using our 'SplitPortal' function), the original Generator Portal is deleted and so is the fragment on the same side of the plane as the Source Portal. The fragment on the opposing side of the plane to the Source Portal becomes the new Generator Portal for the rest of the clipping operations. Here is the last bit of the function, which performs the clipping of the Generator Portal against the anti-penumbra.



for(int i = 0; i < ClipPlanes.NumberOfPlanes; i++)
{
PortalLocation= ClassifyPoly(&ClipPlanes.Planes[i],(POLYGON *)GeneratorPortal);
int SourcePortalLocation=ClassifyPoly(&ClipPlanes.Planes[i],(POLYGON *)SourcePortal);

if(PortalLocation == SourcePortalLocation || PortalLocation==CP_ONPLANE)// it's entirely outside the anti-penumbra so lose it
{
delete GeneratorPortal;
delete [] ClipPlanes.Planes;
return NULL;
}

if((PortalLocation == CP_BACK && SourcePortalLocation==CP_FRONT) || (PortalLocation == CP_FRONT && SourcePortalLocation==CP_BACK))// inside Anti Penumbra
{
//keep it
continue;
}

if(PortalLocation == CP_SPANNING)
{
//Clip the portal
FrontSplit=new PORTAL;
BackSplit=new PORTAL;
ZeroMemory(FrontSplit,sizeof(PORTAL));
ZeroMemory(BackSplit,sizeof(PORTAL));
SplitPortal(GeneratorPortal,&ClipPlanes.Planes[i],FrontSplit,BackSplit);

if (SourcePortalLocation==CP_FRONT)
{
delete FrontSplit;
delete GeneratorPortal;
GeneratorPortal=BackSplit;
} else

if (SourcePortalLocation==CP_BACK)
{
delete BackSplit;
delete GeneratorPortal;
GeneratorPortal=FrontSplit;
}
}
}
delete [] ClipPlanes.Planes;
return GeneratorPortal;
} // end function



WOW, we are nearly all done with this PVS stuff. We have certainly done all the hard bits; all that's left to do now is compress each visibility buffer and add it to the master 'PVSData' array. Before we do that, however, let's see the 'ClipToAntiPenumbra' function in its entirety.

PORTAL * ClipToAntiPenumbra(PORTAL *SourcePortal,PORTAL *TargetPortal,PORTAL *GeneratorPortal)
{
D3DXVECTOR3 EdgeVector1,EdgeVector2,Normal;
int PortalLocation;
PORTAL *FrontSplit,*BackSplit,*TempSource,*TempTarget;
CLIPPLANES ClipPlanes; // Create a ClipPlane set

ClipPlanes.NumberOfPlanes=0;
ClipPlanes.Planes=new PLANE [SourcePortal->NumberOfVertices*TargetPortal->NumberOfVertices*2];
PLANE TempPlane;
int NextVertex=0;

for (int loop=0;loop<2;loop++)
{
if (loop==0)
{
TempSource=SourcePortal;
TempTarget=TargetPortal;
}
else
{
TempSource=TargetPortal;
TempTarget=SourcePortal;
}

for (int SV=0;SV<TempSource->NumberOfVertices;SV++)
{
PortalLocation=ClassifyPoint((D3DXVECTOR3 *)&TempSource->VertexList[SV],&GetPortalPlane(TempTarget));
if (PortalLocation==CP_ONPLANE) continue;

for (int TP=0;TP<TempTarget->NumberOfVertices;TP++)
{
if (TP==TempTarget->NumberOfVertices-1)
{
NextVertex=0;
}
else
{
NextVertex=TP+1 ;
}

EdgeVector1=*((D3DXVECTOR3 *)&TempSource->VertexList[SV])-*((D3DXVECTOR3 *)&TempTarget->VertexList[TP]);
EdgeVector2=*((D3DXVECTOR3 *)&TempTarget->VertexList[NextVertex])-*((D3DXVECTOR3 *)&TempTarget->VertexList[TP]);


D3DXVec3Cross(&Normal,&EdgeVector1,&EdgeVector2);// create normal
D3DXVec3Normalize(&TempPlane.Normal,&Normal); // make it a unit normal
TempPlane.PointOnPlane=*(D3DXVECTOR3 *)&TempSource->VertexList[SV];//Any vertex will do

if(ClassifyPoly(&TempPlane,(POLYGON*)TempSource)== CP_FRONT)
{
if(ClassifyPoly(&TempPlane,(POLYGON*)TempTarget)== CP_BACK)
{
ClipPlanes.Planes[ClipPlanes.NumberOfPlanes] = TempPlane;
ClipPlanes.NumberOfPlanes++;
//Save clip plane
}
}

else

if(ClassifyPoly(&TempPlane,(POLYGON *)TempSource) == CP_BACK)
{
if(ClassifyPoly(&TempPlane,(POLYGON*)TempTarget) == CP_FRONT)
{
ClipPlanes.Planes[ClipPlanes.NumberOfPlanes] = TempPlane;
ClipPlanes.NumberOfPlanes++;
//Save clip plane
}
}
} // end for tp
} // end for sp
} // END FOR LOOP

// We now have our Plane List, so let's clip the Generator Portal against each one.

for(int i = 0; i < ClipPlanes.NumberOfPlanes; i++)
{
PortalLocation= ClassifyPoly(&ClipPlanes.Planes[i],(POLYGON *)GeneratorPortal);
int SourcePortalLocation=ClassifyPoly(&ClipPlanes.Planes[i],(POLYGON *)SourcePortal);

if(PortalLocation == SourcePortalLocation || PortalLocation==CP_ONPLANE)// it's entirely outside the anti-penumbra so lose it
{
delete GeneratorPortal;
delete [] ClipPlanes.Planes;
return NULL;
}

if((PortalLocation == CP_BACK && SourcePortalLocation==CP_FRONT) || (PortalLocation == CP_FRONT && SourcePortalLocation==CP_BACK))// inside Anti Penumbra
{
//keep it
continue;
}

if(PortalLocation == CP_SPANNING)
{
//Clip the portal
FrontSplit=new PORTAL;
BackSplit=new PORTAL;
ZeroMemory(FrontSplit,sizeof(PORTAL));
ZeroMemory(BackSplit,sizeof(PORTAL));
SplitPortal(GeneratorPortal,&ClipPlanes.Planes[i],FrontSplit,BackSplit);

if (SourcePortalLocation==CP_FRONT)
{
delete FrontSplit;
delete GeneratorPortal;
GeneratorPortal=BackSplit;
} else

if (SourcePortalLocation==CP_BACK)
{
delete BackSplit;
delete GeneratorPortal;
GeneratorPortal=FrontSplit;
}
}
}
delete [] ClipPlanes.Planes;
return GeneratorPortal;
} // end function

Ummm, there is no doubt about it, that function can look like one ugly son of a bitch on first viewing, but if you break it down into its two parts 'Creating The Planes' and 'Clipping the Generator portal to the Planes' it starts looking a bit more friendly.

If you have stayed with me up to this point then you should give yourself a big pat on the back, because our work is nearly done; certainly the actual PVS calculations are all done. If you are not quite with me but carried on reading down to this point, then do not worry, this stuff will probably have to be read a few times over (in parts anyway) before it really starts to click; after all, we have covered a load of stuff. If however you have not carried on reading down to this point, then there is little point in me offering any encouragement because you are not reading it anyway.

You know something? I'm starting to feel like I might actually finish this tutorial and get my social life back.

Compressing the PVS using 'Zero Run Length Encoding'


If you remember back to the 'CalculatePVS' function (or refer back now), you will recall that we calculated each leaf's visibility buffer one at a time and, while that leaf's visibility information was being calculated, we stored it in a temporary visibility buffer called 'LeafPVS'. You may also remember that once we had finished calculating that leaf's visibility buffer, the buffer was then passed into a function called 'CompressLeafSet'. This function (which we will look at in a moment) is responsible for taking the information from the temporary visibility buffer, compressing it using zero run length encoding and adding it to the master 'PVSData' array. Let's just look again at how we called this function from 'CalculatePVS'.

PVSMasterWritePointer+=CompressLeafSet(LeafPVS,PVSMasterWritePointer);

The function is passed the visibility buffer and the position at which to start writing into the master 'PVSData' array. Remember that 'PVSMasterWritePointer' will be zero the first time this function is called (for leaf 0's visibility set), so we will be telling the function to start writing the compressed information at byte zero of the 'PVSData' array. The function then returns how many compressed bytes were written, so we simply add this to the write pointer. The write pointer then points just past the data that was written, which is exactly where the next leaf's compressed data should start the next time this function is called.

The code is very small and simple:-

long CompressLeafSet (BYTE * VisArray, long WritePos)
{
int j;
int rep;
BYTE *dest=&PVSData[WritePos];
BYTE *dest_p;
dest_p = dest;

for (j=0 ; j<BytesPerSet ; j++)
{
*dest_p++ = VisArray[j];
if (VisArray[j])
continue;

rep = 1;
for ( j++; j<BytesPerSet ; j++)
if (VisArray[j] || rep == 255)
break;
else
rep++;
*dest_p++ = rep;
j--;
}


return dest_p - dest;
}

It is sometimes hard to believe that such a small function could save so much memory.

First we set up two pointers ('dest' and 'dest_p') so that they point at the correct byte in the 'PVSData' array to start writing the new information. The 'dest' pointer is not altered after that; it is simply there to record where we started writing from. Next we set up the outer loop to step through every byte in the visibility buffer, using 'dest_p' to write to the 'PVSData' array. If the current byte in 'VisArray' is NOT zero then we simply copy it over into the 'PVSData' array unaltered (remember we can only compress zeros) and skip round to the next iteration of the loop. If however there is a zero, then this is the start of a zero run and we know that the next byte written into the master array must be 'how many zeros are in the run'. We set 'rep' to 1 because we know there is at least one zero in the run, and we keep looping through the following bytes (in the inner loop) until we hit a byte which is no longer zero OR until we have counted a run of 255, because we can store no more than 255 in a single byte. Each time we hit another zero we add one to 'rep', which counts the current length of the run in bytes.

Once we hit a non-zero byte in the inner loop, the current run length is recorded and we pass control back to the outer loop.
Notice also how we have to back 'j' up ('j--') when we leave the inner loop. This is because 'j' now indexes the non-zero byte that needs to be copied into the PVS array on the next iteration of the outer loop, BUT as the outer loop iterates it adds 1 to 'j' and would otherwise skip past the byte we need to write. That's why we do that.
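To make the encoding crystal clear, here is a sketch of how one leaf's compressed set would be expanded again. A function like this does not appear in the tutorial; it is shown only as the reverse of 'CompressLeafSet', in case it helps make the run length scheme clearer:

// Sketch only: expand one leaf's compressed set into 'Decompressed',
// which must be at least 'BytesPerSet' bytes long.
void DecompressLeafSet (BYTE *Compressed, BYTE *Decompressed)
{
long WritePos=0;
while (WritePos<BytesPerSet)
{
BYTE Value=*Compressed++;
Decompressed[WritePos++]=Value;
if (Value==0)                                   // a zero marks the start of a run...
{
BYTE RunLength=*Compressed++;                   // ...and the next byte is the run length
for (BYTE r=1;r<RunLength && WritePos<BytesPerSet;r++)
Decompressed[WritePos++]=0;                     // first zero of the run was written above
}
}
}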

WOW ! We Have Finished Our PVS Calculator


We now have our PVS calculated and things are looking good, but there is still more we can do to further reduce how many polygons have to be processed each frame. As mentioned earlier, the PVS precalculates every leaf that can be seen from each leaf. This means that however large your level may be, possibly only five or six leafs may be 'potentially' visible, and none of the other polygons in the level have to be processed in any way.
Having said that, a PVS is not perfect, because it does not take into account the direction the camera will be facing during any given frame. This is obviously impossible for the PVS to do, because calculating the PVS is a compile time process, not a run time one. The PVS does however allow us to scale the number of polygons and leafs that have to be processed each frame down to a fraction of the number we would normally need to process. This is very very cool.

However, during the run time rendering process there is yet more we can do to refine the PVS further.

During the rendering of a leaf's PVS (refer back to the 'DrawTree' function), once a leaf has passed the test of being visible from the current leaf (its PVS bit is set to one) we can then check whether the bounding box of that leaf is within the camera's FOV. If not, we reject the leaf (and all the polygons in it) and move on to the next one. This is why we calculated the bounding boxes for each leaf during the BSP tree compile process and stored the bounding box dimensions in each leaf.

To better see what I mean, let's take a look at Fig E) again (shown below).



As you can probably see, if we were in Leaf 1 in the above diagram the PVS set for Leaf 1 would contain leafs 1, 2, 3, 5 & 6. However, if we were in Leaf 1 facing the south wall (the camera is the red arrow in the diagram) you can see that leafs 2, 3, 5 & 6 would not be visible at all, because they are outside the camera's field of view (the red cone shape). This means that not only has the PVS data cut down the number of leafs needing to be processed by an amazing amount, but we can also trim it further by checking each potentially visible leaf's bounding box against the view frustum and rejecting any leaf whose bounding box is completely outside the frustum.

If you cast your mind back to the 'DrawTree' function, you may remember seeing the following lines, which we did not discuss in much detail at the time.

if (pvs&mask)
{
if (LeafInFrustum(currentleaf)==true)
{
...
Draw the Leaf
...
During the 'RenderTree' function we loop through each bit in the camera leaf's PVS (the leaf the camera is in) and if a bit is set to '1' (if pvs&mask) then that leaf is in the PVS set and is potentially visible from the camera leaf. HOWEVER, instead of just rendering the leaf we do a second check by calling the 'LeafInFrustum' function. This function takes a leaf index as a parameter and returns TRUE if any part of the leaf's bounding box is within the view frustum (the camera's view), therefore we don't have to render any leafs for which this function returns false. In the above diagram this function would return false for every leaf except Leaf 1, as this is the only leaf that can be partially seen from the camera. These tests are actually remarkably quick, because our leafs' bounding boxes are what's known as 'Axis Aligned Bounding Boxes', meaning they are aligned to the major axes of the 3D world. In other words the bounding box's Z maximum reaches in the same direction as the world's positive Z axis, the box's X maximum is in the direction of the world's positive X axis, and so on. This is an advantage because culling Axis Aligned Bounding Boxes (AABBs) against the frustum can be done very quickly.

Our next task will be to write the Functions that Test the Leafs Bounding Box against the View Frustum to see if it is within the camera FOV.


Frustum Culling Axis Aligned Bounding Boxes


When you set up your Projection Matrix in a 3D application using something like:-

D3DXMatrixPerspectiveFovLH(&proj_m, 1.0f,0.77777f,0.3f, 500.0f );

you are in fact describing to D3D the shape of the view frustum you wish it to use. D3D internally creates 6 clipping planes which totally define what is in the camera's FOV. Below you can see a picture of a view frustum.


The six planes (Near, Far, Left, Right, Top and Bottom) are used internally by D3D to reject any polygons outside the frustum (because they cannot be seen) and to clip any polygons to the frustum (because they may be partially inside it).

We can use the same method to easily check whether an Axis Aligned Bounding Box (AABB) is inside or outside the frustum. The problem is that at the moment we do not have access to the 6 clipping planes that make up the view frustum, and even if we did we would need to translate them into world space so that they match the camera's current position and orientation. There are several techniques available for creating the frustum clip planes, but the one we are going to use is by far the easiest and quickest.

Because the Projection Matrix describes the shape of the frustum and the View Matrix contains the location and orientation of the camera, we can extract the 6 frustum planes from the combined View/Projection Matrix. In other words, if we concatenate the View Matrix and the Projection Matrix, this combined matrix actually holds the 6 frustum planes in WORLD space, which is exactly what we need. This method is also very fast.

Extracting the Frustum Planes


We are now going to see the code for the 'ExtractFrustumPlanes' function, which does exactly that: it combines the View and Projection matrices, extracts the WORLD space frustum planes and stores them in the plane array passed into the function.
There are many ways a plane can be represented, and the way we have stored planes up until now has been a plane normal plus a point on the plane. Another way to store the same plane is a normal plus the distance to the plane from the origin of the coordinate system. You can think of the distance parameter as how close the plane would come to the origin (0,0,0) of your 3D world if you traced it back that far. The combined View/Projection matrix holds the frustum planes defined in this way, so (for ease more than anything else) I have created a second plane structure, called 'PLANE2', that looks like so:

struct PLANE2
{
float Distance;
D3DXVECTOR3 Normal;
};
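Both structures describe the same thing. If you ever need to convert from the Normal/PointOnPlane form used elsewhere in this tutorial into the Normal/Distance form, the relationship is simply Distance = -(Normal dot PointOnPlane). Here is a quick sketch; this helper is mine rather than part of the demo, and it assumes the plane equation Dot(Normal,P) + Distance = 0 used by the extraction code below:

// Sketch only: convert the earlier PLANE form (normal + point on plane)
// into the PLANE2 form (normal + signed distance from the origin).
PLANE2 ToPlane2 (const PLANE &Plane)
{
PLANE2 Result;
Result.Normal  =Plane.Normal;
Result.Distance=-D3DXVec3Dot(&Plane.Normal,&Plane.PointOnPlane);
return Result;
}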

We also define, at the start of our program, a global array of six PLANE2 structures that will be used to hold the frustum planes each frame.

PLANE2 FrustumPlanes[6];

You may have noticed that I used the words 'each frame' above. That is correct: we must extract the frustum planes every frame, because obviously the camera will be constantly moving and therefore so will the WORLD space positions of the frustum planes. In your application the 'ExtractFrustumPlanes' function should be called each frame after you have moved the camera and updated the View Matrix, but before the tree is rendered. So your main render function should look something like this.

void renderframe(void)
{
UpdateViewPos();
ExtractFrustumPlanes(FrustumPlanes);
lpDevice->Clear(0,NULL,D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,D3DCOLOR_XRGB(0,0,0),1.0f,0);
lpDevice->BeginScene();
RenderTree(CameraPosition);
lpDevice->EndScene();
}

This all makes sense, but before I get a load of emails telling me that my demo does not do it this way round, let me just say that this is because of some trickery I had to pull off for the Top Down View mode, so that when you look down from above the frustum planes are NOT where the camera is (because it is up in the air looking down) but instead remain in the same position as they were when you were in First Person View. This allows you to view the clipping process from above. In case you are interested, I did this by using two view matrices. The first one is where the person is on the ground (or the camera, if you are in First Person Mode) and the second is the actual camera position. In the demo I build the player-location view matrix first and use THIS matrix (not the camera's) to extract the planes and cull the tree. Then, after the tree has been culled, I set the view matrix to the actual camera position for rendering. Anyway, all of this is beside the point; in a first person game you will use a render loop like the one I have shown above. You can view the source code to see the changes I had to make for the Top Down mode.

There is one more thing I should mention before we look at the code for this function. The planes extracted from the View/Projection matrix are NOT normalized, meaning the normals may not be of length 1.0 and the distance parameter is scaled accordingly. The interesting thing though is that the normals do not actually have to be normalized for our AABB culling to work, because however un-normalized (for lack of a better word) the planes may be, they still describe the same planes.

You are probably thinking "why do people normalize planes then?". Well, if we needed to know the distance from a point to a plane (which is often the case), un-normalized planes would give the wrong result. However, we only need to know whether a point is behind or in front of a plane, and that remains true however un-normalized our planes may be.
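If you want to see that in code, the front/back test against one of these planes is just the sign of the plane equation; only the magnitude is affected by leaving the plane un-normalized. This little helper is not part of the demo, it is just a sketch of the test the AABB culling relies on:

// Sketch only: evaluate the plane equation for a PLANE2 style plane.
// The SIGN of the result (in front / behind) is valid even for an
// un-normalized plane; the VALUE is only a true distance if the
// normal is unit length.
float DistanceToFrustumPlane (const PLANE2 &Plane, const D3DXVECTOR3 &Point)
{
return D3DXVec3Dot(&Plane.Normal,&Point)+Plane.Distance;
}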

Having said that, I DO normalize the planes after extraction in the following code. The reason I do this is really for completeness. At some point in your game you will probably want to test other objects against the frustum planes (monsters etc.) and usually bounding spheres are used for this purpose. With a bounding sphere, however, we need the distance to the plane calculated accurately, because there is only an intersection if the distance from the plane to the sphere's center is shorter than the sphere's radius. For this reason I do normalize the planes. Anyway, let's have a look at the code.

A Word Of Thanks:-
I would like to say a very big thank you to Klaus Hartmann for bringing the following code to my attention. Klaus is keen to point out that he did not invent this and that it is a guy called 'Gil Gribb' who deserves the credit. However, it was Klaus who was good enough to explain not only the following code but also the technique for culling AABBs against the view frustum, and the code for the following two functions is based very tightly on code that he gave to me. Thanks again Klaus.


It should be noted that in the following code the frustum planes extracted all have their normals facing outward. This means that if an object is behind (or partially behind) all six frustum planes then it is inside the frustum. If an object is completely in front of ANY of the six frustum planes then it cannot possibly be within the frustum.


void ExtractFrustumPlanes(PLANE2 *Planes)
{

D3DXMATRIX ViewMatrix,ProjMatrix,ViewProj;

lpDevice->GetTransform(D3DTS_PROJECTION,&ProjMatrix);
lpDevice->GetTransform(D3DTS_VIEW,&ViewMatrix);

D3DXMatrixMultiply(&ViewProj,&ViewMatrix,&ProjMatrix); //Combine Them

// Left clipping plane
Planes[0].Normal.x = -(ViewProj._14 + ViewProj._11);
Planes[0].Normal.y = -(ViewProj._24 + ViewProj._21);
Planes[0].Normal.z = -(ViewProj._34 + ViewProj._31);
Planes[0].Distance = -(ViewProj._44 + ViewProj._41);

// Right clipping plane
Planes[1].Normal.x = -(ViewProj._14 - ViewProj._11);
Planes[1].Normal.y = -(ViewProj._24 - ViewProj._21);
Planes[1].Normal.z = -(ViewProj._34 - ViewProj._31);
Planes[1].Distance = -(ViewProj._44 - ViewProj._41);

// Top clipping plane
Planes[2].Normal.x = -(ViewProj._14 - ViewProj._12);
Planes[2].Normal.y = -(ViewProj._24 - ViewProj._22);
Planes[2].Normal.z = -(ViewProj._34 - ViewProj._32);
Planes[2].Distance = -(ViewProj._44 - ViewProj._42);

// Bottom clipping plane
Planes[3].Normal.x = -(ViewProj._14 + ViewProj._12);
Planes[3].Normal.y = -(ViewProj._24 + ViewProj._22);
Planes[3].Normal.z = -(ViewProj._34 + ViewProj._32);
Planes[3].Distance = -(ViewProj._44 + ViewProj._42);

// Near clipping plane
Planes[4].Normal.x = -(ViewProj._14 + ViewProj._13);
Planes[4].Normal.y = -(ViewProj._24 + ViewProj._23);
Planes[4].Normal.z = -(ViewProj._34 + ViewProj._33);
Planes[4].Distance = -(ViewProj._44 + ViewProj._43);

// Far clipping plane
Planes[5].Normal.x = -(ViewProj._14 - ViewProj._13);
Planes[5].Normal.y = -(ViewProj._24 - ViewProj._23);
Planes[5].Normal.z = -(ViewProj._34 - ViewProj._33);
Planes[5].Distance = -(ViewProj._44 - ViewProj._43);


// NORMALIZE THE PLANES (Not Really Needed in Our Demo)

for (int i=0;i<6;i++)
{
float denom = 1.0f / D3DXVec3Length(&Planes[i].Normal);// Reciprocal of the normal's length
Planes[i].Normal.x *= denom;
Planes[i].Normal.y *= denom;
Planes[i].Normal.z *= denom;
Planes[i].Distance *= denom;
}
}// End Function Extract Clip Planes



Culling the AABB's From the Frustum


In order to test whether or not an AABB is within the frustum we have to check the bounding box against each of the six planes. Now, you are probably thinking that in order to test whether ANY of the AABB is within the frustum all we have to do is check the corner points of the box against it. This is definitely NOT the way, as the following diagram will clearly demonstrate.

Above you can see that although the four corner points of the Bounding Box are outside the View Frustum, the Bounding Box would be partially visible because one of its edges intersects the Frustum.

In order to get around this problem we test each plane, one at a time, against just a single point on the Bounding Box. This point will be one of the corners of the Bounding Box, but which corner we use depends completely on the orientation of the current Frustum Plane being tested. If you imagine that the AABB is completely outside the frustum and just about to intersect the current plane being tested, the point we wish to find is the FIRST point (corner) on the box that would intersect it. This point is called the 'Negative' or 'Near' point and is the corner of the Bounding Box that would be the first to intersect the current plane if the box was in front of it. What we will do is look at EACH component of the Plane Normal and find which corner point of the AABB we have to use. We do this for EACH plane we test, because each plane has a different orientation, so the corner of the AABB that we use (the 'Near' point) will be different for each plane. The basic principle is this:-

1) For each Frustum Plane we examine the Plane Normal.

2) If the 'X' component of the Plane Normal is negative then we use the Bounding Box's X Maximum as our test point's X component; otherwise we use the Bounding Box's X Minimum.

3) If the 'Y' component of the Plane Normal is negative then we use the Bounding Box's Y Maximum as our test point's Y component; otherwise we use the Bounding Box's Y Minimum.

4) If the 'Z' component of the Plane Normal is negative then we use the Bounding Box's Z Maximum as our test point's Z component; otherwise we use the Bounding Box's Z Minimum.

At this point we have constructed a point (vector) to test against the clip plane. If this point is in front of the current plane (remember, Frustum Planes point outwards) then we can bail out of the function immediately and do not have to check the other planes, because if this 'Near' point is outside (in front of) the plane then the entire AABB must be outside the Frustum. If however the 'Near' point is behind the plane then we must test the other planes. For every plane that we test we have to build a new 'Near' point depending on the orientation of that plane's Normal. If at any point the current 'Near' point is in front of the current plane being tested then we can bail, because the AABB is completely outside the frustum. If however we test all six planes and never find a 'Near' point that is in front of one of them, the AABB is at least partially inside the Frustum and the function should return 'TRUE'.

In case you are having trouble visualizing this, look at the following diagram, which shows TWO AABB's (A and B) and a set of six Frustum Planes.

Let's imagine that first of all we want to check the Left (L) clip plane of the Frustum and the camera is facing due North. In this case the Left clip plane's Normal would be facing in the negative X direction, which means that we will use the AABB's X Maximum as our Near point's X component. The same is true for the Z component of the Plane Normal. This too is negative, meaning we use the AABB's Z Maximum as the Near point's Z component (forget about Y for now because we are looking top down and the plane has no Y orientation).
In this case the Near point (corner point of the AABB) used will be the top right of the box, as indicated by the green square. Look at this corner point on 'BOX A' and, remembering that planes are infinite, you should be able to see that if this 'Near' point is in front of the Left plane then the WHOLE AABB must be as well.

When testing the Right plane a different 'Near' point is used, because the Plane Normal's X component now faces in the positive direction, which means we use the AABB's X Minimum as the Near point's X component. Z is still facing the same way (negative) so we still use the AABB's Z Maximum. This gives us the corner indicated by the RED boxes in the diagram.

Looking at 'Box B' you should also be able to see that, in this case, if the RED corner is in front of the Right plane then the entire AABB must be as well.


OK, so let's do a dry run on both these Bounding Boxes, starting with Box A. First of all we test the Left Frustum Plane, which creates the green 'Near' point as shown above. For Box 'A' the Near point is in front of the Left plane, so the box is not within the Frustum and the function returns false.
Next we test Box 'B'. Once again we test the Left plane against the 'Near' point (green corner) and discover that this point is BEHIND the Left plane, so the box could possibly be within the Frustum. In this case, instead of returning from the function, we move on to test the next plane, which in this case is the Right plane. Because we are now testing the Right plane, a different 'Near' point is created (the red corner) and tested against it. This time the Near point (red corner) is in front of the Right plane, so the entire AABB must be in front as well and cannot possibly be within the frustum, which makes the function return false. We do this for all SIX planes unless we find a current 'Near' point that is in front of the current plane being tested, in which case the function can return false immediately because the AABB is NOT within the Frustum.

If an AABB is within the frustum then its 'Near' points will be behind all six planes.

Let's now have a look at the function called 'LeafInFrustum'. This is called while we are rendering to see whether a leaf about to be drawn can actually be seen from the camera. We pass in the index of the leaf to test and the function returns true if the leaf's Bounding Box is partially (or totally) within the frustum. In other words, if the function returns 'FALSE' then we do NOT want to draw the leaf. Here is the code; it is basically just an ugly bunch of 'if' statements that build the current 'Near' point for the current plane being tested. Once the Near point is found we simply use the dot product of the Near point and the Plane Normal to determine whether the point is in front of or behind the plane:-

bool LeafInFrustum (long Leaf)
{
    D3DXVECTOR3 bMax = LeafArray[Leaf].BoundingBox.BoxMax;
    D3DXVECTOR3 bMin = LeafArray[Leaf].BoundingBox.BoxMin;
    D3DXVECTOR3 NearPoint;

    PLANE2 *Plane = FrustumPlanes;

    for (int i = 0; i < 6; i++)
    {
        // Build the 'Near' point for this plane from the signs of its Normal.
        if (Plane->Normal.x > 0.0f)
        {
            if (Plane->Normal.y > 0.0f)
            {
                if (Plane->Normal.z > 0.0f)
                {
                    NearPoint.x = bMin.x; NearPoint.y = bMin.y; NearPoint.z = bMin.z;
                }
                else
                {
                    NearPoint.x = bMin.x; NearPoint.y = bMin.y; NearPoint.z = bMax.z;
                }
            }
            else
            {
                if (Plane->Normal.z > 0.0f)
                {
                    NearPoint.x = bMin.x; NearPoint.y = bMax.y; NearPoint.z = bMin.z;
                }
                else
                {
                    NearPoint.x = bMin.x; NearPoint.y = bMax.y; NearPoint.z = bMax.z;
                }
            }
        }
        else
        {
            if (Plane->Normal.y > 0.0f)
            {
                if (Plane->Normal.z > 0.0f)
                {
                    NearPoint.x = bMax.x; NearPoint.y = bMin.y; NearPoint.z = bMin.z;
                }
                else
                {
                    NearPoint.x = bMax.x; NearPoint.y = bMin.y; NearPoint.z = bMax.z;
                }
            }
            else
            {
                if (Plane->Normal.z > 0.0f)
                {
                    NearPoint.x = bMax.x; NearPoint.y = bMax.y; NearPoint.z = bMin.z;
                }
                else
                {
                    NearPoint.x = bMax.x; NearPoint.y = bMax.y; NearPoint.z = bMax.z;
                }
            }
        }

        // If the near extreme point is outside (in front of) this plane then
        // the AABB is totally outside the frustum.
        if (D3DXVec3Dot(&Plane->Normal, &NearPoint) + Plane->Distance > 0) return false;

        Plane++;
    }

    return true;
}// End Function LeafInFrustum

That's it!!! Although we have used the above function to test the Bounding Boxes of our leaves against the Frustum, the technique works with any AABB and so can readily be used in any other application that needs to perform Frustum culling of AABB's. Although Frustum culling and extracting the Frustum Planes are not really BSP-specific subjects and are generic to all 3D applications, I felt that including them in this tutorial would make it more complete and put you well on your way to writing a lightning fast 3D engine.
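
To reuse the test outside the BSP code you could wrap the same logic up so that it takes any minimum/maximum pair instead of a leaf index. The sketch below is only an illustration of that idea (the function name 'BoxInFrustum' is my own, not part of the demo code); it also collapses the cascade of 'if' statements into three conditional expressions, which performs exactly the same near-point selection.

bool BoxInFrustum (const D3DXVECTOR3 &bMin, const D3DXVECTOR3 &bMax)
{
    PLANE2 *Plane = FrustumPlanes;

    for (int i = 0; i < 6; i++)
    {
        // Pick the 'Near' corner: negative normal component -> box maximum,
        // positive normal component -> box minimum (same rule as above).
        D3DXVECTOR3 NearPoint;
        NearPoint.x = (Plane->Normal.x > 0.0f) ? bMin.x : bMax.x;
        NearPoint.y = (Plane->Normal.y > 0.0f) ? bMin.y : bMax.y;
        NearPoint.z = (Plane->Normal.z > 0.0f) ? bMin.z : bMax.z;

        // Near point in front of an outward-facing plane => box is outside.
        if (D3DXVec3Dot(&Plane->Normal, &NearPoint) + Plane->Distance > 0) return false;

        Plane++;
    }

    return true;
}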

Believe it or not you have actually made it to the end of this monster tutorial (and so have I) and we have covered all the code used in the Demo. I really suggest you download the demo and run it so you can see exactly what the PVS does and what Frustum Culling does.

Although we have finished the tutorial now, I will end by addressing some common concerns not covered in the tutorial, such as 'Saving the BSP Tree to Disk' and 'How to make the PVS work WITHOUT the Z Buffer so that the BSP still renders back to front'.

Saving and Loading the Compiled BSP Tree


Because we now have our BSP tree stored in arrays, saving and loading the BSP has become incredibly easy to do. You could just write the 5 arrays out to disk in their entirety (Node, Plane, Leaf, Polygon & Portal arrays). The problem with this, though, is that the structures we used to compile the BSP Tree and calculate the PVS have many fields that you will probably not need simply to render the level in your application. For example, the Portal Array may not be needed at all at render time, in which case the references to portals in the Leaf structure will not be needed either. The Node's Bounding Box is also not needed, as this was only used to calculate the initial portal.

What we need to do is save each array off to disk and exclude the information we no longer need. Then, when we create our game engine, we will create streamlined versions of the Node, Leaf & Polygon structures without this information.
In the downloadable Zip file I have included two projects. The first project, 'BSP_PVS_Compiler', is the actual compiler and PVS calculator itself. This project contains a 'SaveBSPTree' function that by default is commented out, so you will have to remove the two forward slashes. After you do this the compiler will automatically save the BSP to disk once compilation is complete (in the current working directory). The second project is called 'Bsp_Loader_and_renderer' and is a framework for a BSP 3D renderer. It simply loads the BSP in and renders it, and is a good starting point from which to build your 3D engine.

Because we will not be saving certain fields of the Compiler structures (they are not needed for rendering), we will need a new set of structures in our Loader and Rendering application which leave out the fields we have decided not to save.

Below we will see the difference in the Structures used by the 'Compiler' and the Structures used by the 'Loader and Renderer'.



Compiler NODE structure:
struct NODE
{
unsigned char IsLeaf;
unsigned long Plane;
unsigned long Front;
signed long Back;
BOUNDINGBOX BoundingBox;
};
Loader and Renderer NODE structure:

struct NODE
{
unsigned char IsLeaf;
unsigned long Plane;
unsigned long Front;
signed long Back;
};
You can see that when the Compiler saves the Node Array it does not save the Bounding Box. This means our Loader and Renderer application can leave this field out of its NODE structure.
Compiler LEAF structure:

struct LEAF
{
long StartPolygon;
long EndPolygon;
long PortalIndexList[50];
long NumberOfPortals;
long PVSIndex;
BOUNDINGBOX BoundingBox;
};
Loader and Renderer LEAF structure:

struct LEAF
{
long StartPolygon;
long EndPolygon;
long PVSIndex;
BOUNDINGBOX BoundingBox;
};
When we save the Leaf Array from the compiler we do not bother to save the portal information, because this will not be used by the Loader and Renderer. The portals were only needed to calculate the PVS. We will not save the Portal Array at all.
Compiler POLYGON structure:

struct POLYGON
{
D3DLVERTEX *VertexList;
D3DXVECTOR3 Normal;
WORD NumberOfVertices;
WORD NumberOfIndices;
WORD *Indices;
POLYGON * Next;
bool BeenUsedAsSplitter;
long TextureIndex;
};

Loader and Renderer POLYGON structure:

struct POLYGON
{
D3DLVERTEX *VertexList;
D3DXVECTOR3 Normal;
WORD NumberOfVertices;
WORD NumberOfIndices;
WORD *Indices;
POLYGON * Next;
long TextureIndex;
};

Not so much of a saving here. We obviously do not need the 'BeenUsedAsSplitter' variable any more, as this was only used during compilation. We also do not save the 'Next' pointer, as this would be meaningless when loaded back in (because it points to an actual memory address), BUT we still have to have this field in our 'Loader and Renderer' application because, if you remember, the 'Next' pointer was also used for texture batching during rendering. We will still want this functionality in our Loader and Renderer application.

Those are the only differences in the structures we will be saving. The 'BOUNDINGBOX' and 'PLANE' structures remain the same.

Without any further delay, let's have a look at the 'SaveBSPTree' function that the Compiler uses to save the BSP Tree to disk. This can be found in 'main.cpp'.

void SaveBSPTree(char *filename)
{
FILE *stream;
long a;
stream = fopen(filename, "w+b");

NumberOfNodes++;
fwrite(&NumberOfNodes,sizeof(long),1,stream);
NODE *n=NodeArray;
for (a=0;a<NumberOfNodes;a++)
{
fwrite(&n->IsLeaf,sizeof (unsigned char),1,stream);
fwrite(&n->Plane, sizeof (unsigned long),1,stream);
fwrite(&n->Front, sizeof (unsigned long),1,stream);
fwrite(&n->Back , sizeof (signed long),1,stream);
n++;
}

// Write the Plane Array
fwrite(&NumberOfPlanes,sizeof(long),1,stream);
fwrite(PlaneArray,sizeof(PLANE),NumberOfPlanes,stream);

// Write the Leaf Array. This also has some fields that are redundant at run time,
// so we shall leave these out when writing.
fwrite(&NumberOfLeafs,sizeof(long),1,stream);
LEAF *l=LeafArray;
for (a=0;a<NumberOfLeafs;a++)
{
fwrite(&l->StartPolygon,sizeof(long),1,stream);
fwrite(&l->EndPolygon,sizeof(long),1,stream);
fwrite(&l->PVSIndex,sizeof(long),1,stream);
fwrite(&l->BoundingBox,sizeof(BOUNDINGBOX),1,stream);
l++;
}

// Write the Polygon Array. There are fields in this structure that were used for compiling and are
// not needed at runtime. 'BeenUsedAsSplitter' is not needed and neither is the 'Next' pointer.
fwrite(&NumberOfPolygons,sizeof(long),1,stream);
POLYGON *p=PolygonArray;
for (a=0;a<NumberOfPolygons;a++)
{
fwrite(&p->NumberOfVertices,sizeof(WORD),1,stream);
fwrite(p->VertexList,sizeof(D3DLVERTEX),p->NumberOfVertices,stream);
fwrite(&p->NumberOfIndices,sizeof(WORD),1,stream);
fwrite(p->Indices,sizeof(WORD),p->NumberOfIndices,stream);
fwrite(&p->Normal,sizeof(D3DXVECTOR3),1,stream);
fwrite(&p->TextureIndex,sizeof(long),1,stream);
p++;
}

// Now all we have to do is write the PVS Data itself
fwrite(&PVSCompressedSize,sizeof(long),1,stream);
fwrite(PVSData,sizeof(BYTE),PVSCompressedSize,stream);

// All done
fclose (stream);
}

As you can see, we just write the arrays out in the following order: 1) NodeArray 2) PlaneArray 3) LeafArray 4) PolygonArray 5) PVSData. It is a shame actually that we have to miss out certain fields, otherwise we could have written this function in just a few lines of code. Look at how the PLANE array is written: we just throw the whole array into the file in one go because we need all of it.

One thing that may be confusing is how we nudge the 'NumberOfNodes' variable up by one before we write it out. This is because of the way the BSP compiler works: once the level has been compiled, 'NumberOfNodes' is the index of the last element in the NodeArray. In other words, if there are 10 nodes (0-9) then NumberOfNodes will equal 9 instead of 10. This is why we nudge it up.

Everything else above should be fairly straightforward as it is simple file access stuff. As you can see, saving the BSP has now become a hell of a lot easier. All we have to do is load it back in, in that order, into the master arrays and the BSP Tree remains intact.

The 'LoadBSPTree' function is in the 'Loader_and_Renderer' application/project. This project is a framework to get you up and running once you have compiled and saved your BSP level. The 'Loader_and_renderer' project is quite small because it only contains the code you need to load and render the tree. The code to load the BSP is shown below and is just what you would expect: it is basically the 'SaveBSPTree' function in reverse. The only difference is that the memory for the arrays is now allocated at load time, as shown below.

void LoadBSPTree(char *filename)
{
FILE *stream;
long a;
stream = fopen(filename, "rb");

fread(&NumberOfNodes,sizeof(long),1,stream);
NodeArray= (NODE *) malloc (NumberOfNodes*sizeof(NODE));
NODE *n=NodeArray;
for (a=0;a<NumberOfNodes;a++)
{
fread(&n->IsLeaf,sizeof (unsigned char),1,stream);
fread(&n->Plane, sizeof (unsigned long),1,stream);
fread(&n->Front, sizeof (unsigned long),1,stream);
fread(&n->Back , sizeof (signed long),1,stream);
n++;
}

// Read the Plane Array
fread(&NumberOfPlanes,sizeof(long),1,stream);
PlaneArray =(PLANE *) malloc (NumberOfPlanes*sizeof(PLANE));
fread(PlaneArray,sizeof(PLANE),NumberOfPlanes,stream);

// Read the Leaf Array. The redundant compile-time fields were never written out,
// so the whole stripped-down structure can be read back in one go.
fread(&NumberOfLeafs,sizeof(long),1,stream);
LeafArray =(LEAF *)malloc (NumberOfLeafs*sizeof(LEAF));
fread (LeafArray,sizeof(LEAF),NumberOfLeafs,stream);

// Read the Polygon Array.
fread(&NumberOfPolygons,sizeof(long),1,stream);
PolygonArray=(POLYGON *)malloc (NumberOfPolygons*sizeof(POLYGON));
POLYGON *p=PolygonArray;
for (a=0;a<NumberOfPolygons;a++)
{
fread(&p->NumberOfVertices,sizeof(WORD),1,stream);
p->VertexList=new D3DLVERTEX[p->NumberOfVertices];
fread(p->VertexList,sizeof(D3DLVERTEX),p->NumberOfVertices,stream);
fread(&p->NumberOfIndices,sizeof(WORD),1,stream);
p->Indices=new WORD [p->NumberOfIndices];
fread(p->Indices,sizeof(WORD),p->NumberOfIndices,stream);
fread(&p->Normal,sizeof(D3DXVECTOR3),1,stream);
fread(&p->TextureIndex,sizeof(long),1,stream);
p++;
}

// Now all we have to do is read the PVS Data itself
fread(&PVSCompressedSize,sizeof(long),1,stream);
PVSData =(BYTE *)malloc (PVSCompressedSize*sizeof(BYTE));
fread(PVSData,sizeof(BYTE),PVSCompressedSize,stream);

// All done
fclose (stream);
}

Piece of cake hey?

I do not want to use a Z Buffer. I want to use the BSP Tree for Back to Front Sorting


One of the problems we now have is that, because we are using the Z Buffer, we will have to do our own sorting of transparent objects. This is not a major problem as we are batching the textures before we render them anyway, so you could give each polygon an 'IsAlpha' flag so that when it is collected from the PVS during rendering it is stored in a special alpha linked list. You can then sort these by distance from the camera before you render them, as sketched below.
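
Something like the following would do the job. This is only a rough sketch of the idea: the 'ALPHAPOLY' structure, the 'AlphaList' array and the 'NumberOfAlphaPolys' counter are hypothetical names I have made up for illustration; they are not part of the demo code.

#include <stdlib.h>   // for qsort

// Hypothetical per-frame record for a transparent polygon.
struct ALPHAPOLY
{
    POLYGON *Poly;        // the polygon collected from the PVS this frame
    float    DistanceSq;  // squared distance from the camera to the polygon
};

ALPHAPOLY AlphaList[2000];   // hypothetical fixed-size list
long      NumberOfAlphaPolys = 0;

// Sort furthest first so the transparent polygons are drawn back to front.
int CompareAlphaPolys(const void *a, const void *b)
{
    const ALPHAPOLY *pa = (const ALPHAPOLY *)a;
    const ALPHAPOLY *pb = (const ALPHAPOLY *)b;
    if (pa->DistanceSq > pb->DistanceSq) return -1;
    if (pa->DistanceSq < pb->DistanceSq) return  1;
    return 0;
}

// After the normal (opaque) polygons have been rendered for the frame:
// qsort(AlphaList, NumberOfAlphaPolys, sizeof(ALPHAPOLY), CompareAlphaPolys);
// ...then loop through AlphaList and render each polygon with alpha blending enabled.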

If for some reason you do not wish to use the Z Buffer then you will have to implement things a little differently. This will probably be slower than the Z Buffer version (in hardware) because you will have to traverse the ENTIRE tree each frame, although there are things you can do to speed this up.

Instead of just traversing the tree to find the leaf that the camera is currently in and using that leaf's PVS to render the polygons, you will have to traverse the tree in much the same way you did with the Solid Node tree from Part 1. First you find the leaf the camera is in so you can get the current leaf's PVS set. Then you traverse the tree, and at each node you classify the camera's position against the node's plane and go down the side opposite the camera first. When you hit a leaf you check that leaf's BIT in the current camera leaf's PVS and only render it if its bit is set to one (see the sketch below). This can be sped up further by changing the code so that not only leaves but also nodes have a PVS; after all, if none of a node's leaves are visible then the node itself is not visible. This will lead to a much larger PVS set, but will save time because you no longer have to traverse the entire tree each frame: once you find a node that is not visible from the camera's leaf you no longer have to traverse any further down that node's subtree.
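
Here is a rough sketch of that traversal. It is only an illustration of the idea, not code from the demo: 'ClassifyPoint', 'IsLeafVisible' and 'RenderLeaf' are hypothetical helpers (classify the camera against a plane, test a leaf's bit in the camera leaf's decompressed PVS, and draw a leaf's polygons), and I am assuming that a leaf node stores its index into the LeafArray in its 'Front' member and that a negative 'Back' child marks solid space, as in Part 1.

void RenderTreeBackToFront(long Node, const D3DXVECTOR3 &CamPos, long CameraLeaf)
{
    // Assumption: a negative child index means solid space, so there is
    // nothing to draw down here.
    if (Node < 0) return;

    if (NodeArray[Node].IsLeaf)
    {
        // Assumption: for a leaf node the 'Front' member holds the leaf index.
        long Leaf = (long)NodeArray[Node].Front;
        if (IsLeafVisible(CameraLeaf, Leaf)) RenderLeaf(Leaf);
        return;
    }

    // Hypothetical helper: returns true if CamPos is in front of the node's plane.
    bool CameraInFront = ClassifyPoint(CamPos, NodeArray[Node].Plane);

    if (CameraInFront)
    {
        // Camera is in front, so render the back (far) side first.
        RenderTreeBackToFront((long)NodeArray[Node].Back,  CamPos, CameraLeaf);
        RenderTreeBackToFront((long)NodeArray[Node].Front, CamPos, CameraLeaf);
    }
    else
    {
        // Camera is behind, so the front side is the far side this time.
        RenderTreeBackToFront((long)NodeArray[Node].Front, CamPos, CameraLeaf);
        RenderTreeBackToFront((long)NodeArray[Node].Back,  CamPos, CameraLeaf);
    }
}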

With 3D hardware becoming faster and faster there is little sense in denying the advantages the Z Buffer offers us. Like I said, I have not tried to implement a 'Z Bufferless' (is that a word?) version myself, but if you understand the code we have discussed throughout this tutorial you should have very little trouble writing a version that does not use the Z Buffer.

Illegal Geometry


Although we discussed illegal geometry in the previous tutorial, I will touch on it again here as the Solid Leaf Tree is very fussy about such things. In fact, in most cases the compiler will fail and crash if it is fed illegal geometry. An important thing to remember is:-

A FRONT FACE CAN NEVER EXIST IN SOLID SPACE


This is very important. Solid space must NEVER be allowed to leak out into empty space. Think of a piece of brick wall for a moment. The wall has six faces: Front, Back, Left, Right, Top and Bottom. These are ALL front faces; in other words, you can see them. Behind them they have solid space bounded in. Because these faces are front facing, we know that in front of them is empty space.

Consider the case where you have a solid cube with all front faces facing outwards. Imagine now that you made the top face just slightly larger so that it overhung. See below.



Here you have illegal geometry. The underside of the overhanging part of the top face is a back face and is spilling out into empty space. This makes that space ambiguous to the compiler: it must be solid space because it can see a back face, BUT then again it must be empty space because it can see a front face. If ANY space can see both front faces and back faces at the same time then you have illegal geometry and the compiler will crash.

If you do want an Overhang on a cube you would have to do it like so:-



This is exactly as it would be in real life, and this is actually one of the main reasons that world editing packages work with cubes and carving tools, so that the risk of this happening is minimized. Using Constructive Solid Geometry and carving tools makes this kind of illegal geometry very hard to create by accident. These packages also remove any illegal geometry before the level is saved to file. If, however, you are building your level either on graph paper or using some kind of PolyBuilder program (this is what I used) you must be extremely careful. When I designed my level (I basically did it in a simple PolyBuilder program that I wrote in one night; it was basically just interactive graph paper, so it had no geometry checking), my level would not compile at first. I would have bet my life that there was nothing illegal about the geometry, but closer inspection revealed otherwise. As shown above, a simple overhang can cause a major headache.

The best way to think of it is like this: if the geometry could not exist in real life, then it will not compile. For instance, you can never see a back face in real life. Take a wall for example: if you walk around to the other side of the wall you are looking at the front of the face on that side, never the back of a face. You should NEVER be able to see into solid space, or see the back of a face, from any point within empty space.


WorldGen LE (World Editor Package) Exclusive to Mr-Gamemaker

Programmed by Adam Hoult


Luckily all of these illegal geometry headaches will be a thing of the past, as I can bring you news of a great new world editing package called 'WorldGen LE'. This is due for release very soon and has been written by Adam Hoult, who has been my co-worker on the code for this tutorial. You will be able to download 'WorldGen LE' from this website as soon as it is released, at NO CHARGE. The world files exported by 'WorldGen LE' will be able to be loaded straight into our Loading and Rendering application, and the editor has the BSP/PVS compiler built in.

WorldGen (tm) in action below:-



WorldGen LE (tm) is a complete world editing package which allows you to quickly put together even the most complex indoor or outdoor scenes. Listed below are just a few of its many features:


* Realtime Preview of compiled level at all times, with no need to rebuild after modifications.

* Dynamic CSG Brushes allow carving of geometry with any other brush.

* Hidden surface removal ensures 100% legal geometry for BSP Compilation

* Texturing made easy with the ability to World Align textures for easy tiling.

* Optionally generates Lightmaps, PVS and BSP Tree information on final compilation.

* Imports / Exports from and to several formats, including Mr-Gamemaker specific formats =)

* Prefab support allows you to store objects you have created (such as desk lamps) for use in other levels.

* Supports Brush, Face or Specific Vertex Editing.

* MDI Interface allows you to easily copy and paste geometry from one scene to another

Well, I am sure you will agree that this is something to really look forward to. If you are on the Mr-GameMaker.Com mailing list you will be informed the moment it is released.


Downloading the Source Code


The Zip file below contains all the source code to both the 'BSP/PVS Compiler and Renderer' and the 'Loading and Rendering Framework'. The Visual C++ project files for both are also included for easy building, along with the textures and geometry files used by the demos. I have also included the compiled 'EXE' files of both the 'PVS/BSP Compiler and Renderer' and the 'Loading and Rendering Framework' so you can run the demos as soon as you download them and see what it is all about.

Please make sure that when you unzip the files you tell WinZip to retain the directory structure. That's it; have fun.

Download Bsp_PVS.zip





I hope you have enjoyed this tutorial and found it to be of help. If you have any questions please feel free to post them on the Mr-GameMaker Message Board, or try and catch me in the Mr-GameMaker Chat Room for a live discussion.

Before I go I would like to say thanks to some people who made this tutorial possible.

Adam Hoult:
Adam has been my co-worker on this tutorial as well as being a good friend; I could not have done it without him. Adam was always in the Chat Room when I had things I wanted to discuss and together we got this code working. This tutorial could never have been done if I had not had Adam working alongside me. Although Adam is very busy himself, he always dropped what he was doing and helped me when I had a problem. Either Adam or I can often be found in the Mr-Gamemaker Chat Room to answer any questions you may have.

Matthias Liertzer
Matt was always there to answer any questions I had on the GameDev forums. Matt explained portal generation to me and also helped me to understand the anti-penumbra clipping. Matt goes under the name of 'Matt2000' on the GameDev forums. Thank you again so much Matt for all your help; I am sure that I would still be struggling away at this now if you had not been so helpful.

Klaus Hartmann
Klaus helped me to understand Culling AABB's to the Frustum as already mentioned elsewhere in this tutorial.

www.graphtallica.com
This superb website is an excellent resource for textures. All the textures in the demo were taken from Graphtallica; there are many to choose from and the site is constantly updated with new professional-looking textures. Pay this site a visit NOW.


That's it, I am off. Having spent three months working on the code and text for this tutorial, I feel like I never want to look at another BSP again as long as I live. It has been a very long road, and most people thought I had given up on the site because of the lack of updates caused by working so hard on this. I hope you find it to be of use.

That's it then, until Part 3 of course, when we will look at using a BSP tree to perform 'Constructive Solid Geometry'. No emails please... Part 3 will be ready when it's ready.

