This tutorial is part of a Collection: 03. DirectX 11 - Braynzar Soft Tutorials
27. Loading An MD5 Model

MD5 models are split into two separate files: "md5mesh" and "md5anim". This is convenient, since I was planning on splitting this lesson into two lessons anyway: one for loading the MD5 model, and one for animation. This lesson, being the first of the two, will teach you how to load the MD5 model from the "md5mesh" file and set up the vertex positions based on the layout of the joints. The next lesson will teach you how to load the animation for the model from the "md5anim" file and animate your model. In this lesson, we will cover the following:
- The "md5mesh" format
- A brief introduction to quaternions
- How "bones" work in skinned models
(You may be wondering why it's taken me so long to complete this lesson. The reason is that I'm so bad at creating 3D models, and I wanted to have a decent-looking model for this lesson, since we will be animating it and doing fun things like that ;)

##Introduction##
The reason I am doing another lesson on loading a 3D model is because I want to do a lesson (the next lesson) on animation. The OBJ format does not store animation (although you could use the OBJ format for keyframe animation, which is storing a separate model for each frame of animation. This is actually nice because you know that every frame of the model will look exactly like you planned it to look, and it's also good for performance, as it's very fast to switch between models. The downside is that it's completely static, and a huge memory consumer), so I had to pick another format which does. After some research, I feel I have come across a solid format for this lesson: the MD5 format. If you don't know anything about the MD5 format, it was the format used for Doom 3. The MD5 format uses a skeletal (joint) structure to define the vertex positions, which is why I have chosen this format. The MD5 format comes with two files, the "md5mesh" and the "md5anim". These files are stored in ASCII, so they are very easy to read. The "md5mesh", which is the one this lesson will focus on, stores the model information, such as geometry and materials. The "md5anim" file, which is what the next lesson will focus on, stores the animation for the model. Before I continue, I want to give credit to two articles I referred to when making this lesson. They are .[http://www.3dgep.com/loading-and-animating-md5-models-with-opengl/][here] and .[http://tfc.duke.free.fr/coding/md5-specs-en.html][here].

##Skeletal Animation (Brief Intro.)##
Skeletal animation is an alternative to keyframe animation (which is storing a separate version of a model for each frame of animation). Skeletal animation defines a sort of "skeleton" for a model, where each of the model's vertices is "weighted" to one or more bones or joints. When the skeleton moves or changes position, so do the vertices "weighted" to it. Each vertex can have one or more weights. A weight defines a position relative to the joint or bone, and a bias factor. The bias factor determines how much control the weight has over a specific vertex, since each vertex can contain more than one weight. All of a vertex's weight biases must add up to "1". There are two different kinds of skeletal animation structures. One uses joints, and one uses a system more like actual bones. The joint system, which we will be using, defines the position of the joint and the orientation of the joint, along with its parent joint. All joints must have a parent joint, except for one (the root joint), which is at the top of the hierarchy and whose parent is set to "-1". When a parent joint is rotated or moved, so are the child joints. For example, if you move the upper arm of a person, the lower arm, hand and fingers will also move. The child joints are ALWAYS attached to the parent joints, so separating the joints is not possible, for example if you wanted your model to "explode". However, if you wanted someone's head to fall off after getting shot, you would only need to set the orientation of the joint to zero, so that it would appear that the head was blown off (video games... ;) The other bone system specifies two positions for each bone, one for each end of the bone, and a parent bone. This system is nice for exploding models, since the bones are able to separate.
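To make the parent/child relationship concrete, here is a minimal, self-contained sketch (the JointSketch struct and the joint names are made up just for illustration; the lesson's real Joint structure is defined further down) showing how the parent ID forms the hierarchy, and how the root joint is marked with a parent of "-1":

#include <iostream>
#include <string>
#include <vector>

// Illustration-only joint: just a name and the index of its parent
struct JointSketch
{
    std::string name;
    int parentID;    // index of the parent joint in the joint list, or -1 for the root joint
};

// Walk from any joint up through its parents until the root (parentID == -1) is reached
void PrintChainToRoot(const std::vector<JointSketch>& joints, int jointIndex)
{
    while(jointIndex != -1)
    {
        std::cout << joints[jointIndex].name << "\n";
        jointIndex = joints[jointIndex].parentID;
    }
}

int main()
{
    std::vector<JointSketch> joints;
    JointSketch pelvis   = { "pelvis",   -1 }; joints.push_back(pelvis);   // root joint
    JointSketch upperArm = { "upperArm",  0 }; joints.push_back(upperArm); // child of pelvis
    JointSketch lowerArm = { "lowerArm",  1 }; joints.push_back(lowerArm); // child of upperArm
    JointSketch hand     = { "hand",      2 }; joints.push_back(hand);     // child of lowerArm

    PrintChainToRoot(joints, 3); // prints: hand, lowerArm, upperArm, pelvis

    return 0;
}

Moving a parent joint moves everything below it in this chain, which is exactly why rotating the upper arm also moves the lower arm, hand and fingers.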
Although it is directed towards OpenGL, this is a great article on animation: .[http://content.gpwiki.org/index.php/OpenGL:Tutorials:Basic_Bones_System][More Animation information]

##The MD5 Format##
As I explained above, the MD5 format contains two files. In this lesson, we will only be using the "md5mesh", which stores the model's "bind-pose". The "bind-pose" is the default position of the model. In the next lesson, we will be learning how to animate the model, so that it does not just stand still. I also mentioned the files are in ASCII format, so they are easy to read. Every meaningful line starts with a string explaining what that line contains (or in a couple of cases what the following lines contain). The next couple of sections will explain what each of these lines is for, titled by the name of the string defining the line or lines.

**"MD5Version"** Following this string is a number describing the version of the file. Our loader was designed specifically for version "10" (although I decided to skip the actual check in the loader). You can find information about other versions, although every MD5 model I've downloaded (which, I won't lie, is not many) has been version 10.
MD5Version 10

**"commandline"** This line contains something that we will not have to worry about ;)
commandline ""

**"numJoints"** This line contains the number of joints in the model.
numJoints 27

**"numMeshes"** This is the number of meshes (which we will call "subsets") in the model. Each mesh (or subset) defines the vertices (not their positions, since their positions will be calculated using the weights, and not the normals either, since we can calculate those on our own), triangles, and weights.
numMeshes 5

**"joints"** This is the start of the joint descriptions. The joint descriptions start on the next line (after "joints {") and go until a line containing a closing bracket ("}") is reached. Each line after "joints {" is a new joint. Each of these lines starts with a string inside two quotation marks, which is the name of the joint. Following the name of the joint is the ID of that joint's parent. The joint whose parent ID is "-1" is the root joint. After the parent ID is a 3D vector (contained in parentheses) describing the joint's position. After that is another 3D vector (also inside parentheses) which describes the joint's "bind-pose" orientation. Although the joint orientation is stored as a 3D vector inside the file, it is actually used as a quaternion (or 4D vector). I will briefly explain quaternions below.
joints {
	"Bip01" -1 ( 0.569962 0.0 -6.39413 ) ( 0.0 0.0 0.707106 )
	...
}

**"mesh"** This string is the start of a mesh or subset. Everything between the opening and closing brackets ({ ... }) defines this specific subset. The "md5mesh" file contains only one section for the joints, but can contain one or more "mesh" sections, one for each subset. Luckily the header of the file defines the number of subsets, so you will know when the last one has been read.
mesh {

**"shader"** The first useful line (sometimes a comment is included on the line directly after the "mesh {" line, giving the name of the subset) is the shader. This line contains the name of a material or texture that this subset will use (again the string is in quotation marks, so you will have to remove them after reading the string in). The MD5 format does not contain a material library file like the OBJ format does, so you will need to create your own material library if you wish to use materials for your model (since by default, at least in my 3ds Max exporter, the material name is used instead of the texture filename).
Otherwise, to make this simpler, I have changed the name of the material for each subset to the filename of the texture I want to use for each mesh. Most likely you will want to use a material library, so consider this an exercise ;)
shader "face.jpg"

**"numverts"** This is the number of vertices for this subset. The vertex definitions immediately follow this line.
numverts 99

**"vert"** The rest of the line after this string is the definition of a vertex. Following the "vert" string is an integer describing the index of the vertex. After that come the texture coordinates for this vertex (in parentheses). Then there is another integer, which is the index or ID of the "start weight" for this vertex. After the "start weight" is the number of weights this vertex uses. Since each vertex can be bound to one or more weights, the ID of the first weight is given, and the next "n-1" weights directly after this start weight will also be used to calculate the vertex's position (where "n" is the number of weights stored in this vertex).
vert 0 ( 0.453487 0.77956 ) 0 4

**"numtris"** Here is the number of triangles in the subset. The lines following this line are the indices, or triangles, that make up this subset.
numtris 139

**"tri"** Lines that start with "tri" are part of the index list making up this subset. The integer right after "tri" is the ID or index value of the current triangle, and the next three integers are the IDs or index values of the vertices that make up this triangle.
tri 0 0 2 1

**"numweights"** The number of weights used in this subset. The next lines are descriptions of the weights used.
numweights 391

**"weight"** This line defines a weight. The first value after "weight" is the ID or index value of the current weight. The value after that is an integer describing the ID or index value of the joint that this weight is bound to. Each weight can only be bound to one joint. After that is the "bias" value, or how much influence this weight has over the vertices that use it. All the weights that a vertex is bound to must have biases that add up to "1". The last part of this line is a 3D vector (in parentheses) describing the weight's position in "joint space", or its position relative to the joint's position (so that the joint's position is the point (0,0,0) when looking at it from the weight's point of view).
weight 0 23 0.0681065 ( -61.2806 8.07771 -3.0823 )

##Quaternion Rotations (Brief Intro.)##
Like I mentioned earlier, the MD5 format uses quaternions for the orientation of its joints, which are used when calculating the weight positions, and ultimately the final vertex positions. Although the math behind quaternions can be more than slightly complex (I won't pretend to understand them completely, or any more than is needed to use them for rotations), the idea behind using them for rotations is not difficult at all. A quaternion is a 4D vector which, when used for rotations, can take the place of a rotation matrix. Not only do quaternions use only 4 components (while the equivalent matrix uses 16), they may also be faster to compute. They also avoid something called .[https://en.wikipedia.org/wiki/Gimbal_lock][gimbal lock]. I'm not really going to get into much math for quaternions, but I will try to explain how to use them for spatial rotations (as we will need to rotate each weight around its joint). I've found a lot of definitions for quaternions, but they all seemed much more complex than what I was really looking for.
Like I said, quaternions contain 4 components, (w,x,y,z), or in the order that DirectX seems to store them, (x,y,z,w). The (x,y,z) components define the "axis" of rotation, so if the xyz part were defined as (0,1,0), the rotation would be done around the y axis. The w component is what makes quaternion rotations work: it encodes the amount of rotation around that axis. For a unit quaternion, w = cos(angle/2), so a w of "1" means no rotation at all, and a w of "0" means a 180 degree rotation around the axis. Quaternion multiplication is noncommutative, meaning quaternions are like matrices in that the order in which you multiply them DOES matter. To turn a 3D vector into a quaternion (which we will need to do), all you have to do is store "0" in the w component. And when you want to turn a quaternion into a 3D vector, you can just ignore the w component. We'll have to compute the w component ourselves after we have stored our joint orientations. When doing rotations with quaternions, we will be using unit quaternions (the length of the quaternion is "1"). A unit quaternion satisfies this equation:

sqrt(w² + x² + y² + z²) = 1

Just like a unit 3D vector satisfies this one:

sqrt(x² + y² + z²) = 1

Knowing this, we can compute the w component for the quaternion like this:

float t = 1.0f - ( x * x ) - ( y * y ) - ( z * z );
if ( t < 0.0f )
{
    w = 0.0f;
}
else
{
    w = -sqrtf(t);
}

We can rotate a point (around (0,0,0)) using this equation:

rotatedPoint = quaternionRotation * point * -quaternionRotation

The "-quaternionRotation" is called the "conjugate" of "quaternionRotation", and can easily be found by negating the x, y, and z components of "quaternionRotation". Or like this:

quaternion q;
quaternion conjugate = quaternion(-q.x, -q.y, -q.z, q.w);

Multiplying two quaternions together is not exactly straightforward, which makes it even nicer that the XNA Math library which we always use has a function to do this for us: XMQuaternionMultiply(q1, q2). If you want to learn more about quaternions, there are plenty of places to do it other than here, but I think .[http://www.cprogramming.com/tutorial/3d/quaternions.html][this link] would be worth checking out for a more detailed explanation of what I tried to explain.
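To tie those pieces together, here is a small, self-contained sketch (not taken from the lesson's loader; just an illustration using the same XNA Math functions, with a made-up orientation value) that takes an orientation as it would be read from the "md5mesh" file, computes the missing w component, builds the conjugate, and rotates a point with the equation above:

// Orientation as stored in the file: only x, y and z are given (hypothetical values)
XMFLOAT4 q(0.0f, 0.0f, 0.707106f, 0.0f);

// Compute the missing w component so that the quaternion has unit length
float t = 1.0f - ( q.x * q.x ) - ( q.y * q.y ) - ( q.z * q.z );
q.w = ( t < 0.0f ) ? 0.0f : -sqrtf(t);

XMVECTOR rotation  = XMVectorSet(  q.x,  q.y,  q.z, q.w );  // the rotation quaternion
XMVECTOR conjugate = XMVectorSet( -q.x, -q.y, -q.z, q.w );  // its conjugate (negate x, y and z)
XMVECTOR point     = XMVectorSet( 1.0f, 0.0f, 0.0f, 0.0f ); // the point, treated as a quaternion with w = 0

// rotatedPoint = rotation * point * conjugate
XMFLOAT3 rotatedPoint;
XMStoreFloat3(&rotatedPoint, XMQuaternionMultiply(XMQuaternionMultiply(rotation, point), conjugate));

// rotatedPoint now holds the point rotated around (0,0,0) by the quaternion's orientation

This is exactly the pattern the loader below will use for every weight, just with the joint's orientation and the weight's joint-space position plugged in.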
##Calculating the Vertex's Final Position##
After all that, we come to the thing that makes MD5 so different from OBJ ;) To calculate a vertex's final position, we first need to calculate each weight's position based on the position and orientation of the joint it is bound to. The first thing we do is go through each of the weights that a certain vertex is bound to. We calculate the weight's final position in joint space, then translate it to where the joint is in object space. We then multiply this position by the weight's bias, and add it to the final vertex position. I tried my best to explain this in the code as it's happening. OK, that was the brief intro. Now we will look at this in more detail. Remember, each vertex now stores an integer for the start weight's index value, and an integer for the number of weights to use. First we enter a loop that will go through the number of weights that the vertex specifies, the first of course being the start weight, and the following being the next weights directly after the start weight in the weight list. We find the joint that this weight is bound to, then calculate the conjugate of the joint's orientation. We then follow the equation to rotate a point, like this:

XMFLOAT3 rotatedPoint;
XMStoreFloat3(&rotatedPoint, XMQuaternionMultiply(XMQuaternionMultiply(tempJointOrientation, tempWeightPos), tempJointOrientationConjugate));

This will rotate the weight around (0,0,0), which we call joint space at this time. So now we need to translate the rotated point to the joint's position in model space. We have already covered rotations, so that shouldn't be too hard to understand. Finally, we multiply this position by the weight's bias factor, and add the result to the final vertex position. We go through this loop for each of the weights that affect the vertex's position.

##Updated Vertex Structure##
We have to update the vertex structure to store the vertex's start weight index value, and the number of weights that this vertex is bound to. I want you to also notice that these two elements will NOT be sent to the shader, since the shader will not be using them. This is easy to do. All we have to do is put the stuff that won't be sent to the shader at the end of the vertex structure, and just not include it in the vertex layout, as you can see below.

struct Vertex	//Overloaded Vertex Structure
{
	Vertex(){}
	Vertex(float x, float y, float z,
		float u, float v,
		float nx, float ny, float nz,
		float tx, float ty, float tz)
		: pos(x,y,z), texCoord(u, v), normal(nx, ny, nz), tangent(tx, ty, tz){}

	XMFLOAT3 pos;
	XMFLOAT2 texCoord;
	XMFLOAT3 normal;
	XMFLOAT3 tangent;
	XMFLOAT3 biTangent;

	///////////////**************new**************////////////////////
	// Will not be sent to shader
	int StartWeight;
	int WeightCount;
	///////////////**************new**************////////////////////
};

D3D11_INPUT_ELEMENT_DESC layout[] =
{
	{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 20, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "TANGENT", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 32, D3D11_INPUT_PER_VERTEX_DATA, 0 }
};

UINT numElements = ARRAYSIZE(layout);

##Joint Structure##
We have a couple of new structures to make things much easier. They are for the joints, the weights, the model's subsets, and the model itself. The first new one is for the joints. This structure simply stores all the joint information we talked about from the md5mesh file.

struct Joint
{
	std::wstring name;
	int parentID;

	XMFLOAT3 pos;
	XMFLOAT4 orientation;
};

##Weight Structure##
Like the structure above, the Weight structure stores the information retrieved from the md5mesh file.

struct Weight
{
	int jointID;
	float bias;
	XMFLOAT3 pos;
};

##The ModelSubset Structure##
This structure will store all the important stuff of our model. Each subset of our model will get its own one of these structures. Notice how we will create a new vertex and index buffer for each subset. We do this so that when the time comes to animate, we don't have to update the entire model's vertex buffer, just the vertex buffers of the subsets that changed. We also have an array of positions. This will be useful when we want to do collision detection, picking, or whatever else we need them for, instead of taking along an entire array of vertices with all the extra stuff like texture coordinates and whatever.
struct ModelSubset { int texArrayIndex; int numTriangles; std::vector<Vertex> vertices; std::vector<DWORD> indices; std::vector<Weight> weights; std::vector<XMFLOAT3> positions; ID3D11Buffer* vertBuff; ID3D11Buffer* indexBuff; }; ##The Model3D Structure## This structure will hold information that applies to the model as a whole. I'm sure you can understand just by looking at it. struct Model3D { int numSubsets; int numJoints; std::vector<Joint> joints; std::vector<ModelSubset> subsets; }; ##New Globals## We only have two new global variables for this lesson. The first is a world matrix for our model (since the model I made was way too big for the scene), and a Model3D that will store the model's information. XMMATRIX smilesWorld; Model3D NewMD5Model; ##The LoadMD5Model() Function Prototype## Here is the prototype of the function that will load in and store our md5 model. The first parameter is a string containing the md5's filename, the second is a pointer to a Model3D object, the third (like we did when loading the OBJ file) is a pointer to a vector of shader resource views, and the fourth is a vector of texture filenames, so that we can check if the texture has already been loaded or not. bool LoadMD5Model(std::wstring filename, Model3D& MD5Model, std::vector<ID3D11ShaderResourceView*>& shaderResourceViewArray, std::vector<std::wstring> texFileNameArray); ##CleanUp() function## Go down to the CleanUp() function, where we will be releasing the vertex and index buffers for each of the models subsets. We enter a loop, which goes through as many times as there are subsets in the model, and each time releasing that subsets vertex and index buffers. for(int i = 0; i < NewMD5Model.numSubsets; i++) { NewMD5Model.subsets[i].indexBuff->Release(); NewMD5Model.subsets[i].vertBuff->Release(); } ##The LoadMD5Model() Function## I just wanted to show you the entire function before I start disecting it, in case you just want to copy the whole thing for yourself. bool LoadMD5Model(std::wstring filename, Model3D& MD5Model, std::vector<ID3D11ShaderResourceView*>& shaderResourceViewArray, std::vector<std::wstring> texFileNameArray) { std::wifstream fileIn (filename.c_str()); // Open file std::wstring checkString; // Stores the next string from our file if(fileIn) // Check if the file was opened { while(fileIn) // Loop until the end of the file is reached { fileIn >> checkString; // Get next string from file if(checkString == L"MD5Version") // Get MD5 version (this function supports version 10) { /*fileIn >> checkString; MessageBox(0, checkString.c_str(), //display message L"MD5Version", MB_OK);*/ } else if ( checkString == L"commandline" ) { std::getline(fileIn, checkString); // Ignore the rest of this line } else if ( checkString == L"numJoints" ) { fileIn >> MD5Model.numJoints; // Store number of joints } else if ( checkString == L"numMeshes" ) { fileIn >> MD5Model.numSubsets; // Store number of meshes or subsets which we will call them } else if ( checkString == L"joints" ) { Joint tempJoint; fileIn >> checkString; // Skip the "{" for(int i = 0; i < MD5Model.numJoints; i++) { fileIn >> tempJoint.name; // Store joints name // Sometimes the names might contain spaces. 
If that is the case, we need to continue // to read the name until we get to the closing " (quotation marks) if(tempJoint.name[tempJoint.name.size()-1] != '"') { wchar_t checkChar; bool jointNameFound = false; while(!jointNameFound) { checkChar = fileIn.get(); if(checkChar == '"') jointNameFound = true; tempJoint.name += checkChar; } } fileIn >> tempJoint.parentID; // Store Parent joint's ID fileIn >> checkString; // Skip the "(" // Store position of this joint (swap y and z axis if model was made in RH Coord Sys) fileIn >> tempJoint.pos.x >> tempJoint.pos.z >> tempJoint.pos.y; fileIn >> checkString >> checkString; // Skip the ")" and "(" // Store orientation of this joint fileIn >> tempJoint.orientation.x >> tempJoint.orientation.z >> tempJoint.orientation.y; // Remove the quotation marks from joints name tempJoint.name.erase(0, 1); //tempJoint.name.erase(tempJoint.name.size()-1, 1); // Compute the w axis of the quaternion (The MD5 model uses a 3D vector to describe the // direction the bone is facing. However, we need to turn this into a quaternion, and the way // quaternions work, is the xyz values describe the axis of rotation, while the w is a value // between 0 and 1 which describes the angle of rotation) float t = 1.0f - ( tempJoint.orientation.x * tempJoint.orientation.x ) - ( tempJoint.orientation.y * tempJoint.orientation.y ) - ( tempJoint.orientation.z * tempJoint.orientation.z ); if ( t < 0.0f ) { tempJoint.orientation.w = 0.0f; } else { tempJoint.orientation.w = -sqrtf(t); } std::getline(fileIn, checkString); // Skip rest of this line MD5Model.joints.push_back(tempJoint); // Store the joint into this models joint vector } fileIn >> checkString; // Skip the "}" } else if ( checkString == L"mesh") { ModelSubset subset; int numVerts, numTris, numWeights; fileIn >> checkString; // Skip the "{" fileIn >> checkString; while ( checkString != L"}" ) // Read until '}' { // In this lesson, for the sake of simplicity, we will assume a textures filename is givin here. // Usually though, the name of a material (stored in a material library. Think back to the lesson on // loading .obj files, where the material library was contained in the file .mtl) is givin. Let this // be an exercise to load the material from a material library such as obj's .mtl file, instead of // just the texture like we will do here. 
if(checkString == L"shader") // Load the texture or material { std::wstring fileNamePath; fileIn >> fileNamePath; // Get texture's filename // Take spaces into account if filename or material name has a space in it if(fileNamePath[fileNamePath.size()-1] != '"') { wchar_t checkChar; bool fileNameFound = false; while(!fileNameFound) { checkChar = fileIn.get(); if(checkChar == '"') fileNameFound = true; fileNamePath += checkChar; } } // Remove the quotation marks from texture path fileNamePath.erase(0, 1); fileNamePath.erase(fileNamePath.size()-1, 1); //check if this texture has already been loaded bool alreadyLoaded = false; for(int i = 0; i < texFileNameArray.size(); ++i) { if(fileNamePath == texFileNameArray[i]) { alreadyLoaded = true; subset.texArrayIndex = i; } } //if the texture is not already loaded, load it now if(!alreadyLoaded) { ID3D11ShaderResourceView* tempMeshSRV; hr = D3DX11CreateShaderResourceViewFromFile( d3d11Device, fileNamePath.c_str(), NULL, NULL, &tempMeshSRV, NULL ); if(SUCCEEDED(hr)) { texFileNameArray.push_back(fileNamePath.c_str()); subset.texArrayIndex = shaderResourceViewArray.size(); shaderResourceViewArray.push_back(tempMeshSRV); } else { MessageBox(0, fileNamePath.c_str(), //display message L"Could Not Open:", MB_OK); return false; } } std::getline(fileIn, checkString); // Skip rest of this line } else if ( checkString == L"numverts") { fileIn >> numVerts; // Store number of vertices std::getline(fileIn, checkString); // Skip rest of this line for(int i = 0; i < numVerts; i++) { Vertex tempVert; fileIn >> checkString // Skip "vert # (" >> checkString >> checkString; fileIn >> tempVert.texCoord.x // Store tex coords >> tempVert.texCoord.y; fileIn >> checkString; // Skip ")" fileIn >> tempVert.StartWeight; // Index of first weight this vert will be weighted to fileIn >> tempVert.WeightCount; // Number of weights for this vertex std::getline(fileIn, checkString); // Skip rest of this line subset.vertices.push_back(tempVert); // Push back this vertex into subsets vertex vector } } else if ( checkString == L"numtris") { fileIn >> numTris; subset.numTriangles = numTris; std::getline(fileIn, checkString); // Skip rest of this line for(int i = 0; i < numTris; i++) // Loop through each triangle { DWORD tempIndex; fileIn >> checkString; // Skip "tri" fileIn >> checkString; // Skip tri counter for(int k = 0; k < 3; k++) // Store the 3 indices { fileIn >> tempIndex; subset.indices.push_back(tempIndex); } std::getline(fileIn, checkString); // Skip rest of this line } } else if ( checkString == L"numweights") { fileIn >> numWeights; std::getline(fileIn, checkString); // Skip rest of this line for(int i = 0; i < numWeights; i++) { Weight tempWeight; fileIn >> checkString >> checkString; // Skip "weight #" fileIn >> tempWeight.jointID; // Store weight's joint ID fileIn >> tempWeight.bias; // Store weight's influence over a vertex fileIn >> checkString; // Skip "(" fileIn >> tempWeight.pos.x // Store weight's pos in joint's local space >> tempWeight.pos.z >> tempWeight.pos.y; std::getline(fileIn, checkString); // Skip rest of this line subset.weights.push_back(tempWeight); // Push back tempWeight into subsets Weight array } } else std::getline(fileIn, checkString); // Skip anything else fileIn >> checkString; // Skip "}" } //*** find each vertex's position using the joints and weights ***// for ( int i = 0; i < subset.vertices.size(); ++i ) { Vertex tempVert = subset.vertices[i]; tempVert.pos = XMFLOAT3(0, 0, 0); // Make sure the vertex's pos is cleared first // Sum up the 
joints and weights information to get vertex's position for ( int j = 0; j < tempVert.WeightCount; ++j ) { Weight tempWeight = subset.weights[tempVert.StartWeight + j]; Joint tempJoint = MD5Model.joints[tempWeight.jointID]; // Convert joint orientation and weight pos to vectors for easier computation // When converting a 3d vector to a quaternion, you should put 0 for "w", and // When converting a quaternion to a 3d vector, you can just ignore the "w" XMVECTOR tempJointOrientation = XMVectorSet(tempJoint.orientation.x, tempJoint.orientation.y, tempJoint.orientation.z, tempJoint.orientation.w); XMVECTOR tempWeightPos = XMVectorSet(tempWeight.pos.x, tempWeight.pos.y, tempWeight.pos.z, 0.0f); // We will need to use the conjugate of the joint orientation quaternion // To get the conjugate of a quaternion, all you have to do is inverse the x, y, and z XMVECTOR tempJointOrientationConjugate = XMVectorSet(-tempJoint.orientation.x, -tempJoint.orientation.y, -tempJoint.orientation.z, tempJoint.orientation.w); // Calculate vertex position (in joint space, eg. rotate the point around (0,0,0)) for this weight using the joint orientation quaternion and its conjugate // We can rotate a point using a quaternion with the equation "rotatedPoint = quaternion * point * quaternionConjugate" XMFLOAT3 rotatedPoint; XMStoreFloat3(&rotatedPoint, XMQuaternionMultiply(XMQuaternionMultiply(tempJointOrientation, tempWeightPos), tempJointOrientationConjugate)); // Now move the verices position from joint space (0,0,0) to the joints position in world space, taking the weights bias into account // The weight bias is used because multiple weights might have an effect on the vertices final position. Each weight is attached to one joint. tempVert.pos.x += ( tempJoint.pos.x + rotatedPoint.x ) * tempWeight.bias; tempVert.pos.y += ( tempJoint.pos.y + rotatedPoint.y ) * tempWeight.bias; tempVert.pos.z += ( tempJoint.pos.z + rotatedPoint.z ) * tempWeight.bias; // Basically what has happened above, is we have taken the weights position relative to the joints position // we then rotate the weights position (so that the weight is actually being rotated around (0, 0, 0) in world space) using // the quaternion describing the joints rotation. We have stored this rotated point in rotatedPoint, which we then add to // the joints position (because we rotated the weight's position around (0,0,0) in world space, and now need to translate it // so that it appears to have been rotated around the joints position). Finally we multiply the answer with the weights bias, // or how much control the weight has over the final vertices position. All weight's bias effecting a single vertex's position // must add up to 1. } subset.positions.push_back(tempVert.pos); // Store the vertices position in the position vector instead of straight into the vertex vector // since we can use the positions vector for certain things like collision detection or picking // without having to work with the entire vertex structure. 
} // Put the positions into the vertices for this subset for(int i = 0; i < subset.vertices.size(); i++) { subset.vertices[i].pos = subset.positions[i]; } //*** Calculate vertex normals using normal averaging ***/// std::vector<XMFLOAT3> tempNormal; //normalized and unnormalized normals XMFLOAT3 unnormalized = XMFLOAT3(0.0f, 0.0f, 0.0f); //Used to get vectors (sides) from the position of the verts float vecX, vecY, vecZ; //Two edges of our triangle XMVECTOR edge1 = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); XMVECTOR edge2 = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); //Compute face normals for(int i = 0; i < subset.numTriangles; ++i) { //Get the vector describing one edge of our triangle (edge 0,2) vecX = subset.vertices[subset.indices[(i*3)]].pos.x - subset.vertices[subset.indices[(i*3)+2]].pos.x; vecY = subset.vertices[subset.indices[(i*3)]].pos.y - subset.vertices[subset.indices[(i*3)+2]].pos.y; vecZ = subset.vertices[subset.indices[(i*3)]].pos.z - subset.vertices[subset.indices[(i*3)+2]].pos.z; edge1 = XMVectorSet(vecX, vecY, vecZ, 0.0f); //Create our first edge //Get the vector describing another edge of our triangle (edge 2,1) vecX = subset.vertices[subset.indices[(i*3)+2]].pos.x - subset.vertices[subset.indices[(i*3)+1]].pos.x; vecY = subset.vertices[subset.indices[(i*3)+2]].pos.y - subset.vertices[subset.indices[(i*3)+1]].pos.y; vecZ = subset.vertices[subset.indices[(i*3)+2]].pos.z - subset.vertices[subset.indices[(i*3)+1]].pos.z; edge2 = XMVectorSet(vecX, vecY, vecZ, 0.0f); //Create our second edge //Cross multiply the two edge vectors to get the un-normalized face normal XMStoreFloat3(&unnormalized, XMVector3Cross(edge1, edge2)); tempNormal.push_back(unnormalized); } //Compute vertex normals (normal Averaging) XMVECTOR normalSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); int facesUsing = 0; float tX, tY, tZ; //temp axis variables //Go through each vertex for(int i = 0; i < subset.vertices.size(); ++i) { //Check which triangles use this vertex for(int j = 0; j < subset.numTriangles; ++j) { if(subset.indices[j*3] == i || subset.indices[(j*3)+1] == i || subset.indices[(j*3)+2] == i) { tX = XMVectorGetX(normalSum) + tempNormal[j].x; tY = XMVectorGetY(normalSum) + tempNormal[j].y; tZ = XMVectorGetZ(normalSum) + tempNormal[j].z; normalSum = XMVectorSet(tX, tY, tZ, 0.0f); //If a face is using the vertex, add the unormalized face normal to the normalSum facesUsing++; } } //Get the actual normal by dividing the normalSum by the number of faces sharing the vertex normalSum = normalSum / facesUsing; //Normalize the normalSum vector normalSum = XMVector3Normalize(normalSum); //Store the normal and tangent in our current vertex subset.vertices[i].normal.x = -XMVectorGetX(normalSum); subset.vertices[i].normal.y = -XMVectorGetY(normalSum); subset.vertices[i].normal.z = -XMVectorGetZ(normalSum); //Clear normalSum, facesUsing for next vertex normalSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); facesUsing = 0; } // Create index buffer D3D11_BUFFER_DESC indexBufferDesc; ZeroMemory( &indexBufferDesc, sizeof(indexBufferDesc) ); indexBufferDesc.Usage = D3D11_USAGE_DEFAULT; indexBufferDesc.ByteWidth = sizeof(DWORD) * subset.numTriangles * 3; indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER; indexBufferDesc.CPUAccessFlags = 0; indexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA iinitData; iinitData.pSysMem = &subset.indices[0]; d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, &subset.indexBuff); //Create Vertex Buffer D3D11_BUFFER_DESC vertexBufferDesc; ZeroMemory( &vertexBufferDesc, sizeof(vertexBufferDesc) ); 
vertexBufferDesc.Usage = D3D11_USAGE_DYNAMIC; // We will be updating this buffer, so we must set as dynamic vertexBufferDesc.ByteWidth = sizeof( Vertex ) * subset.vertices.size(); vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; // Give CPU power to write to buffer vertexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA vertexBufferData; ZeroMemory( &vertexBufferData, sizeof(vertexBufferData) ); vertexBufferData.pSysMem = &subset.vertices[0]; hr = d3d11Device->CreateBuffer( &vertexBufferDesc, &vertexBufferData, &subset.vertBuff); // Push back the temp subset into the models subset vector MD5Model.subsets.push_back(subset); } } } else { SwapChain->SetFullscreenState(false, NULL); // Make sure we are out of fullscreen // create message std::wstring message = L"Could not open: "; message += filename; MessageBox(0, message.c_str(), // display message L"Error", MB_OK); return false; } return true; } ##Opening the File## First we create an input filestream, then a string to store strings returned from the filestream. After that we check if the file was open or not. If it wasn't, we display a message, and if it was we enter a loop which will go until the end of the file is reached. bool LoadMD5Model(std::wstring filename, Model3D& MD5Model, std::vector<ID3D11ShaderResourceView*>& shaderResourceViewArray, std::vector<std::wstring> texFileNameArray) { std::wifstream fileIn (filename.c_str()); // Open file std::wstring checkString; // Stores the next string from our file if(fileIn) // Check if the file was opened { while(fileIn) // Loop until the end of the file is reached { ... } } else { SwapChain->SetFullscreenState(false, NULL); // Make sure we are out of fullscreen // create message std::wstring message = L"Could not open: "; message += filename; MessageBox(0, message.c_str(), // display message L"Error", MB_OK); return false; } return true; } ##Read the Header Info## We get a string from the filestream. Then we check what that string is. If it's one of the header's, we act accordingly, usually storing the information. fileIn >> checkString; // Get next string from file if(checkString == L"MD5Version") // Get MD5 version (this function supports version 10) { /*fileIn >> checkString; MessageBox(0, checkString.c_str(), //display message L"MD5Version", MB_OK);*/ } else if ( checkString == L"commandline" ) { std::getline(fileIn, checkString); // Ignore the rest of this line } else if ( checkString == L"numJoints" ) { fileIn >> MD5Model.numJoints; // Store number of joints } else if ( checkString == L"numMeshes" ) { fileIn >> MD5Model.numSubsets; // Store number of meshes or subsets which we will call them } ##Reading In the Joints## If the string was "joints", we know that the following number of lines (or until the closing bracket ("}") is reached) are the actual joints for the model. We enter a loop that goes through the number of joints in the model, storing the joints information for each loop. We store that information into a temporary joint (which will be pushed back into the models joint vector at the end). Remember we need to compute the w component of the joints orientation quaternion, which you can see below, and understand (you don't REALLY need to understand if you don't want to) from above. I'll explain two things about the joints name. First is that sometimes spaces are included in the joints name. If this is the case, we need to make sure we read the name until the closing quotation marks. 
The other thing, is that we will want to remove the quotation marks after we have read the full name. else if ( checkString == L"joints" ) { Joint tempJoint; fileIn >> checkString; // Skip the "{" for(int i = 0; i < MD5Model.numJoints; i++) { fileIn >> tempJoint.name; // Store joints name // Sometimes the names might contain spaces. If that is the case, we need to continue // to read the name until we get to the closing " (quotation marks) if(tempJoint.name[tempJoint.name.size()-1] != '"') { wchar_t checkChar; bool jointNameFound = false; while(!jointNameFound) { checkChar = fileIn.get(); if(checkChar == '"') jointNameFound = true; tempJoint.name += checkChar; } } fileIn >> tempJoint.parentID; // Store Parent joint's ID fileIn >> checkString; // Skip the "(" // Store position of this joint (swap y and z axis if model was made in RH Coord Sys) fileIn >> tempJoint.pos.x >> tempJoint.pos.z >> tempJoint.pos.y; fileIn >> checkString >> checkString; // Skip the ")" and "(" // Store orientation of this joint fileIn >> tempJoint.orientation.x >> tempJoint.orientation.z >> tempJoint.orientation.y; // Remove the quotation marks from joints name tempJoint.name.erase(0, 1); tempJoint.name.erase(tempJoint.name.size()-1, 1); // Compute the w axis of the quaternion (The MD5 model uses a 3D vector to describe the // direction the bone is facing. However, we need to turn this into a quaternion, and the way // quaternions work, is the xyz values describe the axis of rotation, while the w is a value // between 0 and 1 which describes the angle of rotation) float t = 1.0f - ( tempJoint.orientation.x * tempJoint.orientation.x ) - ( tempJoint.orientation.y * tempJoint.orientation.y ) - ( tempJoint.orientation.z * tempJoint.orientation.z ); if ( t < 0.0f ) { tempJoint.orientation.w = 0.0f; } else { tempJoint.orientation.w = -sqrtf(t); } std::getline(fileIn, checkString); // Skip rest of this line MD5Model.joints.push_back(tempJoint); // Store the joint into this models joint vector } fileIn >> checkString; // Skip the "}" } ##Reading In the Subset Specific Information## Now we get to the part where we need to read in the subset specific information. We explained most of this above. When we load in the shader, we do the same thing as we did in the obj model loader, which is first check if the texture has already been loaded, and if it hasen't, load it now. We then store that shader resource view into the shader resource view vector, and the index of that resource into our subset. else if ( checkString == L"mesh") { ModelSubset subset; int numVerts, numTris, numWeights; fileIn >> checkString; // Skip the "{" fileIn >> checkString; while ( checkString != L"}" ) // Read until '}' { // In this lesson, for the sake of simplicity, we will assume a textures filename is givin here. // Usually though, the name of a material (stored in a material library. Think back to the lesson on // loading .obj files, where the material library was contained in the file .mtl) is givin. Let this // be an exercise to load the material from a material library such as obj's .mtl file, instead of // just the texture like we will do here. 
if(checkString == L"shader") // Load the texture or material { std::wstring fileNamePath; fileIn >> fileNamePath; // Get texture's filename // Take spaces into account if filename or material name has a space in it if(fileNamePath[fileNamePath.size()-1] != '"') { wchar_t checkChar; bool fileNameFound = false; while(!fileNameFound) { checkChar = fileIn.get(); if(checkChar == '"') fileNameFound = true; fileNamePath += checkChar; } } // Remove the quotation marks from texture path fileNamePath.erase(0, 1); fileNamePath.erase(fileNamePath.size()-1, 1); //check if this texture has already been loaded bool alreadyLoaded = false; for(int i = 0; i < texFileNameArray.size(); ++i) { if(fileNamePath == texFileNameArray[i]) { alreadyLoaded = true; subset.texArrayIndex = i; } } //if the texture is not already loaded, load it now if(!alreadyLoaded) { ID3D11ShaderResourceView* tempMeshSRV; hr = D3DX11CreateShaderResourceViewFromFile( d3d11Device, fileNamePath.c_str(), NULL, NULL, &tempMeshSRV, NULL ); if(SUCCEEDED(hr)) { texFileNameArray.push_back(fileNamePath.c_str()); subset.texArrayIndex = shaderResourceViewArray.size(); shaderResourceViewArray.push_back(tempMeshSRV); } else { MessageBox(0, fileNamePath.c_str(), //display message L"Could Not Open:", MB_OK); return false; } } std::getline(fileIn, checkString); // Skip rest of this line } else if ( checkString == L"numverts") { fileIn >> numVerts; // Store number of vertices std::getline(fileIn, checkString); // Skip rest of this line for(int i = 0; i < numVerts; i++) { Vertex tempVert; fileIn >> checkString // Skip "vert # (" >> checkString >> checkString; fileIn >> tempVert.texCoord.x // Store tex coords >> tempVert.texCoord.y; fileIn >> checkString; // Skip ")" fileIn >> tempVert.StartWeight; // Index of first weight this vert will be weighted to fileIn >> tempVert.WeightCount; // Number of weights for this vertex std::getline(fileIn, checkString); // Skip rest of this line subset.vertices.push_back(tempVert); // Push back this vertex into subsets vertex vector } } else if ( checkString == L"numtris") { fileIn >> numTris; subset.numTriangles = numTris; std::getline(fileIn, checkString); // Skip rest of this line for(int i = 0; i < numTris; i++) // Loop through each triangle { DWORD tempIndex; fileIn >> checkString; // Skip "tri" fileIn >> checkString; // Skip tri counter for(int k = 0; k < 3; k++) // Store the 3 indices { fileIn >> tempIndex; subset.indices.push_back(tempIndex); } std::getline(fileIn, checkString); // Skip rest of this line } } else if ( checkString == L"numweights") { fileIn >> numWeights; std::getline(fileIn, checkString); // Skip rest of this line for(int i = 0; i < numWeights; i++) { Weight tempWeight; fileIn >> checkString >> checkString; // Skip "weight #" fileIn >> tempWeight.jointID; // Store weight's joint ID fileIn >> tempWeight.bias; // Store weight's influence over a vertex fileIn >> checkString; // Skip "(" fileIn >> tempWeight.pos.x // Store weight's pos in joint's local space >> tempWeight.pos.z >> tempWeight.pos.y; std::getline(fileIn, checkString); // Skip rest of this line subset.weights.push_back(tempWeight); // Push back tempWeight into subsets Weight array } } else std::getline(fileIn, checkString); // Skip anything else fileIn >> checkString; // Skip "}" } ##Calculating the Vertex Position## We come to the more interesting part of this lesson now. The part where we actually calculate the vertex position based on the position and orientation of the joints. 
I tried my best to comment the code so you know what's happening, but I'll try to explain it here too (I did explain it above already). The first thing we do is loop through each of the vertices in our subset. We store the vertex in a temporary vertex, and set its position to zero. We then enter another loop for each of the weights this vertex is attached to. We store the weight in a temporary weight, and then store the joint that the weight is attached to in a temporary joint. We then create three quaternions (XMVECTORs always have 4 components, even if you don't use all of them): one for the joint's orientation, one for the weight's position (since the weight's position is a 3D vector, we just set "w" to zero ("0")), and one for the conjugate of the joint's orientation. We then create another variable, a 3D vector this time, which will store the final position of this weight. Then we do the calculation which determines the weight's position in JOINT SPACE. What this means is that the weight is actually being rotated around the point (0,0,0), even though the joint is probably not at that exact position in MODEL SPACE (remember, the weight's position is relative to the joint's position, and not relative to the point (0,0,0) in model space). So now that we have rotated the weight around the joint in joint space, we need to transform that weight's position to model space. This is very easy, as all we need to do is add the position of the joint to the final position of the weight. Before we store this almost-final position, we need to do one more thing, and that is to take into account the weight's bias factor for this vertex. We do that by multiplying the new almost-final position by the weight's bias factor, then ADD it to the vertex's absolute final position. I hope you understood all that. If not, I think if you really try to follow the code with all your focus and concentration, you will see it's actually pretty simple (looking past the mathematical details of the quaternion multiplication stuff). P.S. I probably shouldn't belittle your intelligence by saying you will understand if you look hard enough, so if there's something you don't understand, send me a comment and I'll do my best to personally help you out.
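In equation form, the loop below computes, for each vertex:

vertexPos = sum over the vertex's weights of ( jointPos + rotate(jointOrientation, weightPos) ) * bias

As a quick worked example (with made-up numbers just for illustration): say a vertex has two weights with biases 0.7 and 0.3 (they must add up to 1). If rotating the first weight by its joint's orientation and adding its joint's position gives (1, 2, 0) in model space, and the second weight ends up at (3, 0, 0), then the final vertex position is 0.7*(1, 2, 0) + 0.3*(3, 0, 0) = (1.6, 1.4, 0).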
//*** find each vertex's position using the joints and weights ***// for ( int i = 0; i < subset.vertices.size(); ++i ) { Vertex tempVert = subset.vertices[i]; tempVert.pos = XMFLOAT3(0, 0, 0); // Make sure the vertex's pos is cleared first // Sum up the joints and weights information to get vertex's position for ( int j = 0; j < tempVert.WeightCount; ++j ) { Weight tempWeight = subset.weights[tempVert.StartWeight + j]; Joint tempJoint = MD5Model.joints[tempWeight.jointID]; // Convert joint orientation and weight pos to vectors for easier computation // When converting a 3d vector to a quaternion, you should put 0 for "w", and // When converting a quaternion to a 3d vector, you can just ignore the "w" XMVECTOR tempJointOrientation = XMVectorSet(tempJoint.orientation.x, tempJoint.orientation.y, tempJoint.orientation.z, tempJoint.orientation.w); XMVECTOR tempWeightPos = XMVectorSet(tempWeight.pos.x, tempWeight.pos.y, tempWeight.pos.z, 0.0f); // We will need to use the conjugate of the joint orientation quaternion // To get the conjugate of a quaternion, all you have to do is inverse the x, y, and z XMVECTOR tempJointOrientationConjugate = XMVectorSet(-tempJoint.orientation.x, -tempJoint.orientation.y, -tempJoint.orientation.z, tempJoint.orientation.w); // Calculate vertex position (in joint space, eg. rotate the point around (0,0,0)) for this weight using the joint orientation quaternion and its conjugate // We can rotate a point using a quaternion with the equation "rotatedPoint = quaternion * point * quaternionConjugate" XMFLOAT3 rotatedPoint; XMStoreFloat3(&rotatedPoint, XMQuaternionMultiply(XMQuaternionMultiply(tempJointOrientation, tempWeightPos), tempJointOrientationConjugate)); // Now move the verices position from joint space (0,0,0) to the joints position in world space, taking the weights bias into account // The weight bias is used because multiple weights might have an effect on the vertices final position. Each weight is attached to one joint. tempVert.pos.x += ( tempJoint.pos.x + rotatedPoint.x ) * tempWeight.bias; tempVert.pos.y += ( tempJoint.pos.y + rotatedPoint.y ) * tempWeight.bias; tempVert.pos.z += ( tempJoint.pos.z + rotatedPoint.z ) * tempWeight.bias; // Basically what has happened above, is we have taken the weights position relative to the joints position // we then rotate the weights position (so that the weight is actually being rotated around (0, 0, 0) in world space) using // the quaternion describing the joints rotation. We have stored this rotated point in rotatedPoint, which we then add to // the joints position (because we rotated the weight's position around (0,0,0) in world space, and now need to translate it // so that it appears to have been rotated around the joints position). Finally we multiply the answer with the weights bias, // or how much control the weight has over the final vertices position. All weight's bias effecting a single vertex's position // must add up to 1. } subset.positions.push_back(tempVert.pos); // Store the vertices position in the position vector instead of straight into the vertex vector // since we can use the positions vector for certain things like collision detection or picking // without having to work with the entire vertex structure. 
} // Put the positions into the vertices for this subset for(int i = 0; i < subset.vertices.size(); i++) { subset.vertices[i].pos = subset.positions[i]; } ##Calculating the Vertex Normal## This section is almost an exact copy and paste from the OBJ loader, when we calculated the vertex normals for that, so if you don't understand whats happening, check out the obj loader lesson here. This method can also be used to find the tangent and bitangent for normal maps for vertices. //*** Calculate vertex normals using normal averaging ***/// std::vector<XMFLOAT3> tempNormal; //normalized and unnormalized normals XMFLOAT3 unnormalized = XMFLOAT3(0.0f, 0.0f, 0.0f); //Used to get vectors (sides) from the position of the verts float vecX, vecY, vecZ; //Two edges of our triangle XMVECTOR edge1 = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); XMVECTOR edge2 = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); //Compute face normals for(int i = 0; i < subset.numTriangles; ++i) { //Get the vector describing one edge of our triangle (edge 0,2) vecX = subset.vertices[subset.indices[(i*3)]].pos.x - subset.vertices[subset.indices[(i*3)+2]].pos.x; vecY = subset.vertices[subset.indices[(i*3)]].pos.y - subset.vertices[subset.indices[(i*3)+2]].pos.y; vecZ = subset.vertices[subset.indices[(i*3)]].pos.z - subset.vertices[subset.indices[(i*3)+2]].pos.z; edge1 = XMVectorSet(vecX, vecY, vecZ, 0.0f); //Create our first edge //Get the vector describing another edge of our triangle (edge 2,1) vecX = subset.vertices[subset.indices[(i*3)+2]].pos.x - subset.vertices[subset.indices[(i*3)+1]].pos.x; vecY = subset.vertices[subset.indices[(i*3)+2]].pos.y - subset.vertices[subset.indices[(i*3)+1]].pos.y; vecZ = subset.vertices[subset.indices[(i*3)+2]].pos.z - subset.vertices[subset.indices[(i*3)+1]].pos.z; edge2 = XMVectorSet(vecX, vecY, vecZ, 0.0f); //Create our second edge //Cross multiply the two edge vectors to get the un-normalized face normal XMStoreFloat3(&unnormalized, XMVector3Cross(edge1, edge2)); tempNormal.push_back(unnormalized); } //Compute vertex normals (normal Averaging) XMVECTOR normalSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); int facesUsing = 0; float tX, tY, tZ; //temp axis variables //Go through each vertex for(int i = 0; i < subset.vertices.size(); ++i) { //Check which triangles use this vertex for(int j = 0; j < subset.numTriangles; ++j) { if(subset.indices[j*3] == i || subset.indices[(j*3)+1] == i || subset.indices[(j*3)+2] == i) { tX = XMVectorGetX(normalSum) + tempNormal[j].x; tY = XMVectorGetY(normalSum) + tempNormal[j].y; tZ = XMVectorGetZ(normalSum) + tempNormal[j].z; normalSum = XMVectorSet(tX, tY, tZ, 0.0f); //If a face is using the vertex, add the unormalized face normal to the normalSum facesUsing++; } } //Get the actual normal by dividing the normalSum by the number of faces sharing the vertex normalSum = normalSum / facesUsing; //Normalize the normalSum vector normalSum = XMVector3Normalize(normalSum); //Store the normal and tangent in our current vertex subset.vertices[i].normal.x = -XMVectorGetX(normalSum); subset.vertices[i].normal.y = -XMVectorGetY(normalSum); subset.vertices[i].normal.z = -XMVectorGetZ(normalSum); //Clear normalSum, facesUsing for next vertex normalSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); facesUsing = 0; } ##Creating the Index and Vertex Buffers## We finally come to the end of the MD5 loader. We will be creating a vertex and index buffer for each of the subsets in the model. This has been done many times in the previous lessons, so not much needs explaining, except for the vertex buffer. 
Notice how we have set the vertex buffer up to be a dynamic buffer, with CPU write access. This is because we will need to be updating the buffer (next lesson) throughout our scene to do the animations. We will go over this in more detail in the next lesson. After all that, we push the temporary subset into our model object's subset vector.

// Create index buffer
D3D11_BUFFER_DESC indexBufferDesc;
ZeroMemory( &indexBufferDesc, sizeof(indexBufferDesc) );

indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
indexBufferDesc.ByteWidth = sizeof(DWORD) * subset.numTriangles * 3;
indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
indexBufferDesc.CPUAccessFlags = 0;
indexBufferDesc.MiscFlags = 0;

D3D11_SUBRESOURCE_DATA iinitData;
iinitData.pSysMem = &subset.indices[0];
d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, &subset.indexBuff);

//Create Vertex Buffer
D3D11_BUFFER_DESC vertexBufferDesc;
ZeroMemory( &vertexBufferDesc, sizeof(vertexBufferDesc) );

vertexBufferDesc.Usage = D3D11_USAGE_DYNAMIC;							// We will be updating this buffer, so we must set as dynamic
vertexBufferDesc.ByteWidth = sizeof( Vertex ) * subset.vertices.size();
vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vertexBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;				// Give CPU power to write to buffer
vertexBufferDesc.MiscFlags = 0;

D3D11_SUBRESOURCE_DATA vertexBufferData;
ZeroMemory( &vertexBufferData, sizeof(vertexBufferData) );
vertexBufferData.pSysMem = &subset.vertices[0];
hr = d3d11Device->CreateBuffer( &vertexBufferDesc, &vertexBufferData, &subset.vertBuff);

// Push back the temp subset into the model's subset vector
MD5Model.subsets.push_back(subset);
}

##Calling the LoadMD5Model() Function##
Now we go down to the InitScene() function, where we call the function that loads our model. We check to make sure it was successfully loaded, and if not, return false.

if(!LoadMD5Model(L"boy.md5mesh", NewMD5Model, meshSRV, textureNameArray))
	return false;

##Updating the Model's World Space Matrix##
The model I created is awfully large for the scene, so we will scale it down to be much, much smaller. Also, the center of the model is by default halfway below the ground level in our scene, so we will translate it up a bit.

Scale = XMMatrixScaling( 0.04f, 0.04f, 0.04f );			// The model is a bit too large for our scene, so make it smaller
Translation = XMMatrixTranslation( 0.0f, 3.0f, 0.0f );

smilesWorld = Scale * Translation;

##Drawing the Model##
Here we will draw our model in the DrawScene() function. We will loop through each of the model's subsets, binding that subset's vertex and index buffers to the IA. We then send the WVP matrix and constant buffer stuff to the shaders, then finally draw the subset. All of this has been explained in previous lessons (this technique was explained specifically in the OBJ loader lesson).
///***Draw MD5 Model***/// for(int i = 0; i < NewMD5Model.numSubsets; i ++) { //Set the grounds index buffer d3d11DevCon->IASetIndexBuffer( NewMD5Model.subsets[i].indexBuff, DXGI_FORMAT_R32_UINT, 0); //Set the grounds vertex buffer d3d11DevCon->IASetVertexBuffers( 0, 1, &NewMD5Model.subsets[i].vertBuff, &stride, &offset ); //Set the WVP matrix and send it to the constant buffer in effect file WVP = smilesWorld * camView * camProjection; cbPerObj.WVP = XMMatrixTranspose(WVP); cbPerObj.World = XMMatrixTranspose(smilesWorld); cbPerObj.hasTexture = true; // We'll assume all md5 subsets have textures cbPerObj.hasNormMap = false; // We'll also assume md5 models have no normal map (easy to change later though) d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 ); d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetConstantBuffers( 1, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetShaderResources( 0, 1, &meshSRV[NewMD5Model.subsets[i].texArrayIndex] ); d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState ); d3d11DevCon->RSSetState(RSCullNone); d3d11DevCon->DrawIndexed( NewMD5Model.subsets[i].indices.size(), 0, 0 ); } That sums up this lesson. We are now able to load in a model containg a skeletal structure, and we're ready for animating our model! I hope (as usual) you have found this lesson helpful in some way! ##Exercise:## 1. Create a material library, and update the MD5 loader function to use the material library. 2. Play with the quaternions. Try using quaternions for rotations instead of a rotation matrix. 3. Send me a comment! 4. Have a great day! Here's the final code: main.cpp //Include and link appropriate libraries and headers// #pragma comment(lib, "d3d11.lib") #pragma comment(lib, "d3dx11.lib") #pragma comment(lib, "d3dx10.lib") #pragma comment (lib, "D3D10_1.lib") #pragma comment (lib, "DXGI.lib") #pragma comment (lib, "D2D1.lib") #pragma comment (lib, "dwrite.lib") #pragma comment (lib, "dinput8.lib") #pragma comment (lib, "dxguid.lib") #include <windows.h> #include <d3d11.h> #include <d3dx11.h> #include <d3dx10.h> #include <xnamath.h> #include <D3D10_1.h> #include <DXGI.h> #include <D2D1.h> #include <sstream> #include <dwrite.h> #include <dinput.h> #include <vector> #include <fstream> #include <istream> //Global Declarations - Interfaces// IDXGISwapChain* SwapChain; ID3D11Device* d3d11Device; ID3D11DeviceContext* d3d11DevCon; ID3D11RenderTargetView* renderTargetView; ID3D11DepthStencilView* depthStencilView; ID3D11Texture2D* depthStencilBuffer; ID3D11VertexShader* VS; ID3D11PixelShader* PS; ID3D11PixelShader* D2D_PS; ID3D10Blob* D2D_PS_Buffer; ID3D10Blob* VS_Buffer; ID3D10Blob* PS_Buffer; ID3D11InputLayout* vertLayout; ID3D11Buffer* cbPerObjectBuffer; ID3D11BlendState* d2dTransparency; ID3D11RasterizerState* CCWcullMode; ID3D11RasterizerState* CWcullMode; ID3D11SamplerState* CubesTexSamplerState; ID3D11Buffer* cbPerFrameBuffer; ID3D10Device1 *d3d101Device; IDXGIKeyedMutex *keyedMutex11; IDXGIKeyedMutex *keyedMutex10; ID2D1RenderTarget *D2DRenderTarget; ID2D1SolidColorBrush *Brush; ID3D11Texture2D *BackBuffer11; ID3D11Texture2D *sharedTex11; ID3D11Buffer *d2dVertBuffer; ID3D11Buffer *d2dIndexBuffer; ID3D11ShaderResourceView *d2dTexture; IDWriteFactory *DWriteFactory; IDWriteTextFormat *TextFormat; IDirectInputDevice8* DIKeyboard; IDirectInputDevice8* DIMouse; ID3D11Buffer* sphereIndexBuffer; ID3D11Buffer* sphereVertBuffer; ID3D11VertexShader* SKYMAP_VS; ID3D11PixelShader* SKYMAP_PS; ID3D10Blob* SKYMAP_VS_Buffer; ID3D10Blob* 
SKYMAP_PS_Buffer; ID3D11ShaderResourceView* smrv; ID3D11DepthStencilState* DSLessEqual; ID3D11RasterizerState* RSCullNone; ID3D11BlendState* Transparency; //Mesh variables. Each loaded mesh will need its own set of these ID3D11Buffer* meshVertBuff; ID3D11Buffer* meshIndexBuff; XMMATRIX meshWorld; int meshSubsets = 0; std::vector<int> meshSubsetIndexStart; std::vector<int> meshSubsetTexture; //Textures and material variables, used for all mesh's loaded std::vector<ID3D11ShaderResourceView*> meshSRV; std::vector<std::wstring> textureNameArray; std::wstring printText; //Global Declarations - Others// LPCTSTR WndClassName = L"firstwindow"; HWND hwnd = NULL; HRESULT hr; int Width = 1920; int Height = 1200; DIMOUSESTATE mouseLastState; LPDIRECTINPUT8 DirectInput; float rotx = 0; float rotz = 0; float scaleX = 1.0f; float scaleY = 1.0f; XMMATRIX Rotationx; XMMATRIX Rotationz; XMMATRIX Rotationy; XMMATRIX WVP; XMMATRIX camView; XMMATRIX camProjection; XMMATRIX d2dWorld; XMVECTOR camPosition; XMVECTOR camTarget; XMVECTOR camUp; XMVECTOR DefaultForward = XMVectorSet(0.0f,0.0f,1.0f, 0.0f); XMVECTOR DefaultRight = XMVectorSet(1.0f,0.0f,0.0f, 0.0f); XMVECTOR camForward = XMVectorSet(0.0f,0.0f,1.0f, 0.0f); XMVECTOR camRight = XMVectorSet(1.0f,0.0f,0.0f, 0.0f); XMMATRIX camRotationMatrix; float moveLeftRight = 0.0f; float moveBackForward = 0.0f; float camYaw = 0.0f; float camPitch = 0.0f; int NumSphereVertices; int NumSphereFaces; XMMATRIX sphereWorld; XMMATRIX Rotation; XMMATRIX Scale; XMMATRIX Translation; float rot = 0.01f; double countsPerSecond = 0.0; __int64 CounterStart = 0; int frameCount = 0; int fps = 0; __int64 frameTimeOld = 0; double frameTime; //Function Prototypes// bool InitializeDirect3d11App(HINSTANCE hInstance); void CleanUp(); bool InitScene(); void DrawScene(); bool InitD2D_D3D101_DWrite(IDXGIAdapter1 *Adapter); void InitD2DScreenTexture(); void UpdateScene(double time); void UpdateCamera(); void RenderText(std::wstring text, int inInt); void StartTimer(); double GetTime(); double GetFrameTime(); bool InitializeWindow(HINSTANCE hInstance, int ShowWnd, int width, int height, bool windowed); int messageloop(); bool InitDirectInput(HINSTANCE hInstance); void DetectInput(double time); void CreateSphere(int LatLines, int LongLines); LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam); //Create effects constant buffer's structure// struct cbPerObject { XMMATRIX WVP; XMMATRIX World; //These will be used for the pixel shader XMFLOAT4 difColor; BOOL hasTexture; //Because of HLSL structure packing, we will use windows BOOL //instead of bool because HLSL packs things into 4 bytes, and //bool is only one byte, where BOOL is 4 bytes BOOL hasNormMap; }; cbPerObject cbPerObj; //Create material structure struct SurfaceMaterial { std::wstring matName; XMFLOAT4 difColor; int texArrayIndex; int normMapTexArrayIndex; bool hasNormMap; bool hasTexture; bool transparent; }; std::vector<SurfaceMaterial> material; //Define LoadObjModel function after we create surfaceMaterial structure bool LoadObjModel(std::wstring filename, //.obj filename ID3D11Buffer** vertBuff, //mesh vertex buffer ID3D11Buffer** indexBuff, //mesh index buffer std::vector<int>& subsetIndexStart, //start index of each subset std::vector<int>& subsetMaterialArray, //index value of material for each subset std::vector<SurfaceMaterial>& material, //vector of material structures int& subsetCount, //Number of subsets in mesh bool isRHCoordSys, //true if model was created in right hand coord system bool computeNormals); 
//true to compute the normals, false to use the files normals struct Light { Light() { ZeroMemory(this, sizeof(Light)); } XMFLOAT3 pos; float range; XMFLOAT3 dir; float cone; XMFLOAT3 att; float pad2; XMFLOAT4 ambient; XMFLOAT4 diffuse; }; Light light; struct cbPerFrame { Light light; }; cbPerFrame constbuffPerFrame; struct Vertex //Overloaded Vertex Structure { Vertex(){} Vertex(float x, float y, float z, float u, float v, float nx, float ny, float nz, float tx, float ty, float tz) : pos(x,y,z), texCoord(u, v), normal(nx, ny, nz), tangent(tx, ty, tz){} XMFLOAT3 pos; XMFLOAT2 texCoord; XMFLOAT3 normal; XMFLOAT3 tangent; XMFLOAT3 biTangent; ///////////////**************new**************//////////////////// // Will not be sent to shader int StartWeight; int WeightCount; ///////////////**************new**************//////////////////// }; D3D11_INPUT_ELEMENT_DESC layout[] = { { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 }, { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 }, { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 20, D3D11_INPUT_PER_VERTEX_DATA, 0}, { "TANGENT", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 32, D3D11_INPUT_PER_VERTEX_DATA, 0} }; UINT numElements = ARRAYSIZE(layout); ///////////////**************new**************//////////////////// struct Joint { std::wstring name; int parentID; XMFLOAT3 pos; XMFLOAT4 orientation; }; struct Weight { int jointID; float bias; XMFLOAT3 pos; }; struct ModelSubset { int texArrayIndex; int numTriangles; std::vector<Vertex> vertices; std::vector<DWORD> indices; std::vector<Weight> weights; std::vector<XMFLOAT3> positions; ID3D11Buffer* vertBuff; ID3D11Buffer* indexBuff; }; struct Model3D { int numSubsets; int numJoints; std::vector<Joint> joints; std::vector<ModelSubset> subsets; }; XMMATRIX smilesWorld; Model3D NewMD5Model; //LoadMD5Model() function prototype bool LoadMD5Model(std::wstring filename, Model3D& MD5Model, std::vector<ID3D11ShaderResourceView*>& shaderResourceViewArray, std::vector<std::wstring> texFileNameArray); ///////////////**************new**************//////////////////// int WINAPI WinMain(HINSTANCE hInstance, //Main windows function HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd) { if(!InitializeWindow(hInstance, nShowCmd, Width, Height, true)) { MessageBox(0, L"Window Initialization - Failed", L"Error", MB_OK); return 0; } if(!InitializeDirect3d11App(hInstance)) //Initialize Direct3D { MessageBox(0, L"Direct3D Initialization - Failed", L"Error", MB_OK); return 0; } if(!InitScene()) //Initialize our scene { MessageBox(0, L"Scene Initialization - Failed", L"Error", MB_OK); return 0; } if(!InitDirectInput(hInstance)) { MessageBox(0, L"Direct Input Initialization - Failed", L"Error", MB_OK); return 0; } messageloop(); CleanUp(); return 0; } bool InitializeWindow(HINSTANCE hInstance, int ShowWnd, int width, int height, bool windowed) { typedef struct _WNDCLASS { UINT cbSize; UINT style; WNDPROC lpfnWndProc; int cbClsExtra; int cbWndExtra; HANDLE hInstance; HICON hIcon; HCURSOR hCursor; HBRUSH hbrBackground; LPCTSTR lpszMenuName; LPCTSTR lpszClassName; } WNDCLASS; WNDCLASSEX wc; wc.cbSize = sizeof(WNDCLASSEX); wc.style = CS_HREDRAW | CS_VREDRAW; wc.lpfnWndProc = WndProc; wc.cbClsExtra = NULL; wc.cbWndExtra = NULL; wc.hInstance = hInstance; wc.hIcon = LoadIcon(NULL, IDI_APPLICATION); wc.hCursor = LoadCursor(NULL, IDC_ARROW); wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1); wc.lpszMenuName = NULL; wc.lpszClassName = WndClassName; wc.hIconSm = LoadIcon(NULL, 
IDI_APPLICATION); if (!RegisterClassEx(&wc)) { MessageBox(NULL, L"Error registering class", L"Error", MB_OK | MB_ICONERROR); return 1; } hwnd = CreateWindowEx( NULL, WndClassName, L"Lesson 4 - Begin Drawing", WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, width, height, NULL, NULL, hInstance, NULL ); if (!hwnd) { MessageBox(NULL, L"Error creating window", L"Error", MB_OK | MB_ICONERROR); return 1; } ShowWindow(hwnd, ShowWnd); UpdateWindow(hwnd); return true; } bool InitializeDirect3d11App(HINSTANCE hInstance) { //Describe our SwapChain Buffer DXGI_MODE_DESC bufferDesc; ZeroMemory(&bufferDesc, sizeof(DXGI_MODE_DESC)); bufferDesc.Width = Width; bufferDesc.Height = Height; bufferDesc.RefreshRate.Numerator = 60; bufferDesc.RefreshRate.Denominator = 1; bufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; bufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED; bufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED; //Describe our SwapChain DXGI_SWAP_CHAIN_DESC swapChainDesc; ZeroMemory(&swapChainDesc, sizeof(DXGI_SWAP_CHAIN_DESC)); swapChainDesc.BufferDesc = bufferDesc; swapChainDesc.SampleDesc.Count = 1; swapChainDesc.SampleDesc.Quality = 0; swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; swapChainDesc.BufferCount = 1; swapChainDesc.OutputWindow = hwnd; ///////////////**************new**************//////////////////// swapChainDesc.Windowed = true; ///////////////**************new**************//////////////////// swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD; // Create DXGI factory to enumerate adapters/////////////////////////////////////////////////////////////////////////// IDXGIFactory1 *DXGIFactory; HRESULT hr = CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&DXGIFactory); // Use the first adapter IDXGIAdapter1 *Adapter; hr = DXGIFactory->EnumAdapters1(0, &Adapter); DXGIFactory->Release(); //Create our Direct3D 11 Device and SwapChain////////////////////////////////////////////////////////////////////////// hr = D3D11CreateDeviceAndSwapChain(Adapter, D3D_DRIVER_TYPE_UNKNOWN, NULL, D3D11_CREATE_DEVICE_BGRA_SUPPORT, NULL, NULL, D3D11_SDK_VERSION, &swapChainDesc, &SwapChain, &d3d11Device, NULL, &d3d11DevCon); //Initialize Direct2D, Direct3D 10.1, DirectWrite InitD2D_D3D101_DWrite(Adapter); //Release the Adapter interface Adapter->Release(); //Create our BackBuffer and Render Target hr = SwapChain->GetBuffer( 0, __uuidof( ID3D11Texture2D ), (void**)&BackBuffer11 ); hr = d3d11Device->CreateRenderTargetView( BackBuffer11, NULL, &renderTargetView ); //Describe our Depth/Stencil Buffer D3D11_TEXTURE2D_DESC depthStencilDesc; depthStencilDesc.Width = Width; depthStencilDesc.Height = Height; depthStencilDesc.MipLevels = 1; depthStencilDesc.ArraySize = 1; depthStencilDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT; depthStencilDesc.SampleDesc.Count = 1; depthStencilDesc.SampleDesc.Quality = 0; depthStencilDesc.Usage = D3D11_USAGE_DEFAULT; depthStencilDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL; depthStencilDesc.CPUAccessFlags = 0; depthStencilDesc.MiscFlags = 0; //Create the Depth/Stencil View d3d11Device->CreateTexture2D(&depthStencilDesc, NULL, &depthStencilBuffer); d3d11Device->CreateDepthStencilView(depthStencilBuffer, NULL, &depthStencilView); return true; } bool InitD2D_D3D101_DWrite(IDXGIAdapter1 *Adapter) { //Create our Direc3D 10.1 Device/////////////////////////////////////////////////////////////////////////////////////// hr = D3D10CreateDevice1(Adapter, D3D10_DRIVER_TYPE_HARDWARE, NULL,D3D10_CREATE_DEVICE_BGRA_SUPPORT, D3D10_FEATURE_LEVEL_9_3, 
D3D10_1_SDK_VERSION, &d3d101Device ); //Create Shared Texture that Direct3D 10.1 will render on////////////////////////////////////////////////////////////// D3D11_TEXTURE2D_DESC sharedTexDesc; ZeroMemory(&sharedTexDesc, sizeof(sharedTexDesc)); sharedTexDesc.Width = Width; sharedTexDesc.Height = Height; sharedTexDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; sharedTexDesc.MipLevels = 1; sharedTexDesc.ArraySize = 1; sharedTexDesc.SampleDesc.Count = 1; sharedTexDesc.Usage = D3D11_USAGE_DEFAULT; sharedTexDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET; sharedTexDesc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX; hr = d3d11Device->CreateTexture2D(&sharedTexDesc, NULL, &sharedTex11); // Get the keyed mutex for the shared texture (for D3D11)/////////////////////////////////////////////////////////////// hr = sharedTex11->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&keyedMutex11); // Get the shared handle needed to open the shared texture in D3D10.1/////////////////////////////////////////////////// IDXGIResource *sharedResource10; HANDLE sharedHandle10; hr = sharedTex11->QueryInterface(__uuidof(IDXGIResource), (void**)&sharedResource10); hr = sharedResource10->GetSharedHandle(&sharedHandle10); sharedResource10->Release(); // Open the surface for the shared texture in D3D10.1/////////////////////////////////////////////////////////////////// IDXGISurface1 *sharedSurface10; hr = d3d101Device->OpenSharedResource(sharedHandle10, __uuidof(IDXGISurface1), (void**)(&sharedSurface10)); hr = sharedSurface10->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&keyedMutex10); // Create D2D factory/////////////////////////////////////////////////////////////////////////////////////////////////// ID2D1Factory *D2DFactory; hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, __uuidof(ID2D1Factory), (void**)&D2DFactory); D2D1_RENDER_TARGET_PROPERTIES renderTargetProperties; ZeroMemory(&renderTargetProperties, sizeof(renderTargetProperties)); renderTargetProperties.type = D2D1_RENDER_TARGET_TYPE_HARDWARE; renderTargetProperties.pixelFormat = D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED); hr = D2DFactory->CreateDxgiSurfaceRenderTarget(sharedSurface10, &renderTargetProperties, &D2DRenderTarget); sharedSurface10->Release(); D2DFactory->Release(); // Create a solid color brush to draw something with hr = D2DRenderTarget->CreateSolidColorBrush(D2D1::ColorF(1.0f, 1.0f, 1.0f, 1.0f), &Brush); //DirectWrite/////////////////////////////////////////////////////////////////////////////////////////////////////////// hr = DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory), reinterpret_cast<IUnknown**>(&DWriteFactory)); hr = DWriteFactory->CreateTextFormat( L"Script", NULL, DWRITE_FONT_WEIGHT_REGULAR, DWRITE_FONT_STYLE_NORMAL, DWRITE_FONT_STRETCH_NORMAL, 24.0f, L"en-us", &TextFormat ); hr = TextFormat->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_LEADING); hr = TextFormat->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR); d3d101Device->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_POINTLIST); return true; } bool InitDirectInput(HINSTANCE hInstance) { hr = DirectInput8Create(hInstance, DIRECTINPUT_VERSION, IID_IDirectInput8, (void**)&DirectInput, NULL); hr = DirectInput->CreateDevice(GUID_SysKeyboard, &DIKeyboard, NULL); hr = DirectInput->CreateDevice(GUID_SysMouse, &DIMouse, NULL); hr = DIKeyboard->SetDataFormat(&c_dfDIKeyboard); hr = DIKeyboard->SetCooperativeLevel(hwnd, DISCL_FOREGROUND | DISCL_NONEXCLUSIVE); hr = 
DIMouse->SetDataFormat(&c_dfDIMouse); hr = DIMouse->SetCooperativeLevel(hwnd, DISCL_EXCLUSIVE | DISCL_NOWINKEY | DISCL_FOREGROUND); return true; } void UpdateCamera() { camRotationMatrix = XMMatrixRotationRollPitchYaw(camPitch, camYaw, 0); camTarget = XMVector3TransformCoord(DefaultForward, camRotationMatrix ); camTarget = XMVector3Normalize(camTarget); XMMATRIX RotateYTempMatrix; RotateYTempMatrix = XMMatrixRotationY(camYaw); // Walk //camRight = XMVector3TransformCoord(DefaultRight, RotateYTempMatrix); //camUp = XMVector3TransformCoord(camUp, RotateYTempMatrix); //camForward = XMVector3TransformCoord(DefaultForward, RotateYTempMatrix); // Free Cam camRight = XMVector3TransformCoord(DefaultRight, camRotationMatrix); camForward = XMVector3TransformCoord(DefaultForward, camRotationMatrix); camUp = XMVector3Cross(camForward, camRight); camPosition += moveLeftRight*camRight; camPosition += moveBackForward*camForward; moveLeftRight = 0.0f; moveBackForward = 0.0f; camTarget = camPosition + camTarget; camView = XMMatrixLookAtLH( camPosition, camTarget, camUp ); } void DetectInput(double time) { DIMOUSESTATE mouseCurrState; BYTE keyboardState[256]; DIKeyboard->Acquire(); DIMouse->Acquire(); DIMouse->GetDeviceState(sizeof(DIMOUSESTATE), &mouseCurrState); DIKeyboard->GetDeviceState(sizeof(keyboardState),(LPVOID)&keyboardState); if(keyboardState[DIK_ESCAPE] & 0x80) PostMessage(hwnd, WM_DESTROY, 0, 0); float speed = 10.0f * time; if(keyboardState[DIK_A] & 0x80) { moveLeftRight -= speed; } if(keyboardState[DIK_D] & 0x80) { moveLeftRight += speed; } if(keyboardState[DIK_W] & 0x80) { moveBackForward += speed; } if(keyboardState[DIK_S] & 0x80) { moveBackForward -= speed; } if((mouseCurrState.lX != mouseLastState.lX) || (mouseCurrState.lY != mouseLastState.lY)) { camYaw += mouseLastState.lX * 0.001f; camPitch += mouseCurrState.lY * 0.001f; mouseLastState = mouseCurrState; } UpdateCamera(); return; } void CleanUp() { SwapChain->SetFullscreenState(false, NULL); PostMessage(hwnd, WM_DESTROY, 0, 0); //Release the COM Objects we created SwapChain->Release(); d3d11Device->Release(); d3d11DevCon->Release(); renderTargetView->Release(); VS->Release(); PS->Release(); VS_Buffer->Release(); PS_Buffer->Release(); vertLayout->Release(); depthStencilView->Release(); depthStencilBuffer->Release(); cbPerObjectBuffer->Release(); Transparency->Release(); CCWcullMode->Release(); CWcullMode->Release(); d3d101Device->Release(); keyedMutex11->Release(); keyedMutex10->Release(); D2DRenderTarget->Release(); Brush->Release(); BackBuffer11->Release(); sharedTex11->Release(); DWriteFactory->Release(); TextFormat->Release(); d2dTexture->Release(); cbPerFrameBuffer->Release(); DIKeyboard->Unacquire(); DIMouse->Unacquire(); DirectInput->Release(); sphereIndexBuffer->Release(); sphereVertBuffer->Release(); SKYMAP_VS->Release(); SKYMAP_PS->Release(); SKYMAP_VS_Buffer->Release(); SKYMAP_PS_Buffer->Release(); smrv->Release(); DSLessEqual->Release(); RSCullNone->Release(); meshVertBuff->Release(); meshIndexBuff->Release(); ///////////////**************new**************//////////////////// for(int i = 0; i < NewMD5Model.numSubsets; i++) { NewMD5Model.subsets[i].indexBuff->Release(); NewMD5Model.subsets[i].vertBuff->Release(); } ///////////////**************new**************//////////////////// } ///////////////**************new**************//////////////////// bool LoadMD5Model(std::wstring filename, Model3D& MD5Model, std::vector<ID3D11ShaderResourceView*>& shaderResourceViewArray, std::vector<std::wstring> texFileNameArray) { 
std::wifstream fileIn (filename.c_str()); // Open file std::wstring checkString; // Stores the next string from our file if(fileIn) // Check if the file was opened { while(fileIn) // Loop until the end of the file is reached { fileIn >> checkString; // Get next string from file if(checkString == L"MD5Version") // Get MD5 version (this function supports version 10) { /*fileIn >> checkString; MessageBox(0, checkString.c_str(), //display message L"MD5Version", MB_OK);*/ } else if ( checkString == L"commandline" ) { std::getline(fileIn, checkString); // Ignore the rest of this line } else if ( checkString == L"numJoints" ) { fileIn >> MD5Model.numJoints; // Store number of joints } else if ( checkString == L"numMeshes" ) { fileIn >> MD5Model.numSubsets; // Store number of meshes or subsets which we will call them } else if ( checkString == L"joints" ) { Joint tempJoint; fileIn >> checkString; // Skip the "{" for(int i = 0; i < MD5Model.numJoints; i++) { fileIn >> tempJoint.name; // Store joints name // Sometimes the names might contain spaces. If that is the case, we need to continue // to read the name until we get to the closing " (quotation marks) if(tempJoint.name[tempJoint.name.size()-1] != '"') { wchar_t checkChar; bool jointNameFound = false; while(!jointNameFound) { checkChar = fileIn.get(); if(checkChar == '"') jointNameFound = true; tempJoint.name += checkChar; } } fileIn >> tempJoint.parentID; // Store Parent joint's ID fileIn >> checkString; // Skip the "(" // Store position of this joint (swap y and z axis if model was made in RH Coord Sys) fileIn >> tempJoint.pos.x >> tempJoint.pos.z >> tempJoint.pos.y; fileIn >> checkString >> checkString; // Skip the ")" and "(" // Store orientation of this joint fileIn >> tempJoint.orientation.x >> tempJoint.orientation.z >> tempJoint.orientation.y; // Remove the quotation marks from joints name tempJoint.name.erase(0, 1); tempJoint.name.erase(tempJoint.name.size()-1, 1); // Compute the w axis of the quaternion (The MD5 model uses a 3D vector to describe the // direction the bone is facing. However, we need to turn this into a quaternion, and the way // quaternions work, is the xyz values describe the axis of rotation, while the w is a value // between 0 and 1 which describes the angle of rotation) float t = 1.0f - ( tempJoint.orientation.x * tempJoint.orientation.x ) - ( tempJoint.orientation.y * tempJoint.orientation.y ) - ( tempJoint.orientation.z * tempJoint.orientation.z ); if ( t < 0.0f ) { tempJoint.orientation.w = 0.0f; } else { tempJoint.orientation.w = -sqrtf(t); } std::getline(fileIn, checkString); // Skip rest of this line MD5Model.joints.push_back(tempJoint); // Store the joint into this models joint vector } fileIn >> checkString; // Skip the "}" } else if ( checkString == L"mesh") { ModelSubset subset; int numVerts, numTris, numWeights; fileIn >> checkString; // Skip the "{" fileIn >> checkString; while ( checkString != L"}" ) // Read until '}' { // In this lesson, for the sake of simplicity, we will assume a textures filename is givin here. // Usually though, the name of a material (stored in a material library. Think back to the lesson on // loading .obj files, where the material library was contained in the file .mtl) is givin. Let this // be an exercise to load the material from a material library such as obj's .mtl file, instead of // just the texture like we will do here. 
if(checkString == L"shader") // Load the texture or material { std::wstring fileNamePath; fileIn >> fileNamePath; // Get texture's filename // Take spaces into account if filename or material name has a space in it if(fileNamePath[fileNamePath.size()-1] != '"') { wchar_t checkChar; bool fileNameFound = false; while(!fileNameFound) { checkChar = fileIn.get(); if(checkChar == '"') fileNameFound = true; fileNamePath += checkChar; } } // Remove the quotation marks from texture path fileNamePath.erase(0, 1); fileNamePath.erase(fileNamePath.size()-1, 1); //check if this texture has already been loaded bool alreadyLoaded = false; for(int i = 0; i < texFileNameArray.size(); ++i) { if(fileNamePath == texFileNameArray[i]) { alreadyLoaded = true; subset.texArrayIndex = i; } } //if the texture is not already loaded, load it now if(!alreadyLoaded) { ID3D11ShaderResourceView* tempMeshSRV; hr = D3DX11CreateShaderResourceViewFromFile( d3d11Device, fileNamePath.c_str(), NULL, NULL, &tempMeshSRV, NULL ); if(SUCCEEDED(hr)) { texFileNameArray.push_back(fileNamePath.c_str()); subset.texArrayIndex = shaderResourceViewArray.size(); shaderResourceViewArray.push_back(tempMeshSRV); } else { MessageBox(0, fileNamePath.c_str(), //display message L"Could Not Open:", MB_OK); return false; } } std::getline(fileIn, checkString); // Skip rest of this line } else if ( checkString == L"numverts") { fileIn >> numVerts; // Store number of vertices std::getline(fileIn, checkString); // Skip rest of this line for(int i = 0; i < numVerts; i++) { Vertex tempVert; fileIn >> checkString // Skip "vert # (" >> checkString >> checkString; fileIn >> tempVert.texCoord.x // Store tex coords >> tempVert.texCoord.y; fileIn >> checkString; // Skip ")" fileIn >> tempVert.StartWeight; // Index of first weight this vert will be weighted to fileIn >> tempVert.WeightCount; // Number of weights for this vertex std::getline(fileIn, checkString); // Skip rest of this line subset.vertices.push_back(tempVert); // Push back this vertex into subsets vertex vector } } else if ( checkString == L"numtris") { fileIn >> numTris; subset.numTriangles = numTris; std::getline(fileIn, checkString); // Skip rest of this line for(int i = 0; i < numTris; i++) // Loop through each triangle { DWORD tempIndex; fileIn >> checkString; // Skip "tri" fileIn >> checkString; // Skip tri counter for(int k = 0; k < 3; k++) // Store the 3 indices { fileIn >> tempIndex; subset.indices.push_back(tempIndex); } std::getline(fileIn, checkString); // Skip rest of this line } } else if ( checkString == L"numweights") { fileIn >> numWeights; std::getline(fileIn, checkString); // Skip rest of this line for(int i = 0; i < numWeights; i++) { Weight tempWeight; fileIn >> checkString >> checkString; // Skip "weight #" fileIn >> tempWeight.jointID; // Store weight's joint ID fileIn >> tempWeight.bias; // Store weight's influence over a vertex fileIn >> checkString; // Skip "(" fileIn >> tempWeight.pos.x // Store weight's pos in joint's local space >> tempWeight.pos.z >> tempWeight.pos.y; std::getline(fileIn, checkString); // Skip rest of this line subset.weights.push_back(tempWeight); // Push back tempWeight into subsets Weight array } } else std::getline(fileIn, checkString); // Skip anything else fileIn >> checkString; // Skip "}" } //*** find each vertex's position using the joints and weights ***// for ( int i = 0; i < subset.vertices.size(); ++i ) { Vertex tempVert = subset.vertices[i]; tempVert.pos = XMFLOAT3(0, 0, 0); // Make sure the vertex's pos is cleared first // Sum up the 
joints and weights information to get vertex's position for ( int j = 0; j < tempVert.WeightCount; ++j ) { Weight tempWeight = subset.weights[tempVert.StartWeight + j]; Joint tempJoint = MD5Model.joints[tempWeight.jointID]; // Convert joint orientation and weight pos to vectors for easier computation // When converting a 3d vector to a quaternion, you should put 0 for "w", and // When converting a quaternion to a 3d vector, you can just ignore the "w" XMVECTOR tempJointOrientation = XMVectorSet(tempJoint.orientation.x, tempJoint.orientation.y, tempJoint.orientation.z, tempJoint.orientation.w); XMVECTOR tempWeightPos = XMVectorSet(tempWeight.pos.x, tempWeight.pos.y, tempWeight.pos.z, 0.0f); // We will need to use the conjugate of the joint orientation quaternion // To get the conjugate of a quaternion, all you have to do is inverse the x, y, and z XMVECTOR tempJointOrientationConjugate = XMVectorSet(-tempJoint.orientation.x, -tempJoint.orientation.y, -tempJoint.orientation.z, tempJoint.orientation.w); // Calculate vertex position (in joint space, eg. rotate the point around (0,0,0)) for this weight using the joint orientation quaternion and its conjugate // We can rotate a point using a quaternion with the equation "rotatedPoint = quaternion * point * quaternionConjugate" XMFLOAT3 rotatedPoint; XMStoreFloat3(&rotatedPoint, XMQuaternionMultiply(XMQuaternionMultiply(tempJointOrientation, tempWeightPos), tempJointOrientationConjugate)); // Now move the verices position from joint space (0,0,0) to the joints position in world space, taking the weights bias into account // The weight bias is used because multiple weights might have an effect on the vertices final position. Each weight is attached to one joint. tempVert.pos.x += ( tempJoint.pos.x + rotatedPoint.x ) * tempWeight.bias; tempVert.pos.y += ( tempJoint.pos.y + rotatedPoint.y ) * tempWeight.bias; tempVert.pos.z += ( tempJoint.pos.z + rotatedPoint.z ) * tempWeight.bias; // Basically what has happened above, is we have taken the weights position relative to the joints position // we then rotate the weights position (so that the weight is actually being rotated around (0, 0, 0) in world space) using // the quaternion describing the joints rotation. We have stored this rotated point in rotatedPoint, which we then add to // the joints position (because we rotated the weight's position around (0,0,0) in world space, and now need to translate it // so that it appears to have been rotated around the joints position). Finally we multiply the answer with the weights bias, // or how much control the weight has over the final vertices position. All weight's bias effecting a single vertex's position // must add up to 1. } subset.positions.push_back(tempVert.pos); // Store the vertices position in the position vector instead of straight into the vertex vector // since we can use the positions vector for certain things like collision detection or picking // without having to work with the entire vertex structure. 
} // Put the positions into the vertices for this subset for(int i = 0; i < subset.vertices.size(); i++) { subset.vertices[i].pos = subset.positions[i]; } //*** Calculate vertex normals using normal averaging ***/// std::vector<XMFLOAT3> tempNormal; //normalized and unnormalized normals XMFLOAT3 unnormalized = XMFLOAT3(0.0f, 0.0f, 0.0f); //Used to get vectors (sides) from the position of the verts float vecX, vecY, vecZ; //Two edges of our triangle XMVECTOR edge1 = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); XMVECTOR edge2 = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); //Compute face normals for(int i = 0; i < subset.numTriangles; ++i) { //Get the vector describing one edge of our triangle (edge 0,2) vecX = subset.vertices[subset.indices[(i*3)]].pos.x - subset.vertices[subset.indices[(i*3)+2]].pos.x; vecY = subset.vertices[subset.indices[(i*3)]].pos.y - subset.vertices[subset.indices[(i*3)+2]].pos.y; vecZ = subset.vertices[subset.indices[(i*3)]].pos.z - subset.vertices[subset.indices[(i*3)+2]].pos.z; edge1 = XMVectorSet(vecX, vecY, vecZ, 0.0f); //Create our first edge //Get the vector describing another edge of our triangle (edge 2,1) vecX = subset.vertices[subset.indices[(i*3)+2]].pos.x - subset.vertices[subset.indices[(i*3)+1]].pos.x; vecY = subset.vertices[subset.indices[(i*3)+2]].pos.y - subset.vertices[subset.indices[(i*3)+1]].pos.y; vecZ = subset.vertices[subset.indices[(i*3)+2]].pos.z - subset.vertices[subset.indices[(i*3)+1]].pos.z; edge2 = XMVectorSet(vecX, vecY, vecZ, 0.0f); //Create our second edge //Cross multiply the two edge vectors to get the un-normalized face normal XMStoreFloat3(&unnormalized, XMVector3Cross(edge1, edge2)); tempNormal.push_back(unnormalized); } //Compute vertex normals (normal Averaging) XMVECTOR normalSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); int facesUsing = 0; float tX, tY, tZ; //temp axis variables //Go through each vertex for(int i = 0; i < subset.vertices.size(); ++i) { //Check which triangles use this vertex for(int j = 0; j < subset.numTriangles; ++j) { if(subset.indices[j*3] == i || subset.indices[(j*3)+1] == i || subset.indices[(j*3)+2] == i) { tX = XMVectorGetX(normalSum) + tempNormal[j].x; tY = XMVectorGetY(normalSum) + tempNormal[j].y; tZ = XMVectorGetZ(normalSum) + tempNormal[j].z; normalSum = XMVectorSet(tX, tY, tZ, 0.0f); //If a face is using the vertex, add the unormalized face normal to the normalSum facesUsing++; } } //Get the actual normal by dividing the normalSum by the number of faces sharing the vertex normalSum = normalSum / facesUsing; //Normalize the normalSum vector normalSum = XMVector3Normalize(normalSum); //Store the normal and tangent in our current vertex subset.vertices[i].normal.x = -XMVectorGetX(normalSum); subset.vertices[i].normal.y = -XMVectorGetY(normalSum); subset.vertices[i].normal.z = -XMVectorGetZ(normalSum); //Clear normalSum, facesUsing for next vertex normalSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); facesUsing = 0; } // Create index buffer D3D11_BUFFER_DESC indexBufferDesc; ZeroMemory( &indexBufferDesc, sizeof(indexBufferDesc) ); indexBufferDesc.Usage = D3D11_USAGE_DEFAULT; indexBufferDesc.ByteWidth = sizeof(DWORD) * subset.numTriangles * 3; indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER; indexBufferDesc.CPUAccessFlags = 0; indexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA iinitData; iinitData.pSysMem = &subset.indices[0]; d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, &subset.indexBuff); //Create Vertex Buffer D3D11_BUFFER_DESC vertexBufferDesc; ZeroMemory( &vertexBufferDesc, sizeof(vertexBufferDesc) ); 
vertexBufferDesc.Usage = D3D11_USAGE_DYNAMIC; // We will be updating this buffer, so we must set as dynamic vertexBufferDesc.ByteWidth = sizeof( Vertex ) * subset.vertices.size(); vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; // Give CPU power to write to buffer vertexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA vertexBufferData; ZeroMemory( &vertexBufferData, sizeof(vertexBufferData) ); vertexBufferData.pSysMem = &subset.vertices[0]; hr = d3d11Device->CreateBuffer( &vertexBufferDesc, &vertexBufferData, &subset.vertBuff); // Push back the temp subset into the models subset vector MD5Model.subsets.push_back(subset); } } } else { SwapChain->SetFullscreenState(false, NULL); // Make sure we are out of fullscreen // create message std::wstring message = L"Could not open: "; message += filename; MessageBox(0, message.c_str(), // display message L"Error", MB_OK); return false; } return true; } ///////////////**************new**************//////////////////// bool LoadObjModel(std::wstring filename, ID3D11Buffer** vertBuff, ID3D11Buffer** indexBuff, std::vector<int>& subsetIndexStart, std::vector<int>& subsetMaterialArray, std::vector<SurfaceMaterial>& material, int& subsetCount, bool isRHCoordSys, bool computeNormals) { HRESULT hr = 0; std::wifstream fileIn (filename.c_str()); //Open file std::wstring meshMatLib; //String to hold our obj material library filename //Arrays to store our model's information std::vector<DWORD> indices; std::vector<XMFLOAT3> vertPos; std::vector<XMFLOAT3> vertNorm; std::vector<XMFLOAT2> vertTexCoord; std::vector<std::wstring> meshMaterials; //Vertex definition indices std::vector<int> vertPosIndex; std::vector<int> vertNormIndex; std::vector<int> vertTCIndex; //Make sure we have a default if no tex coords or normals are defined bool hasTexCoord = false; bool hasNorm = false; //Temp variables to store into vectors std::wstring meshMaterialsTemp; int vertPosIndexTemp; int vertNormIndexTemp; int vertTCIndexTemp; wchar_t checkChar; //The variable we will use to store one char from file at a time std::wstring face; //Holds the string containing our face vertices int vIndex = 0; //Keep track of our vertex index count int triangleCount = 0; //Total Triangles int totalVerts = 0; int meshTriangles = 0; //Check to see if the file was opened if (fileIn) { while(fileIn) { checkChar = fileIn.get(); //Get next char switch (checkChar) { case '#': checkChar = fileIn.get(); while(checkChar != '\n') checkChar = fileIn.get(); break; case 'v': //Get Vertex Descriptions checkChar = fileIn.get(); if(checkChar == ' ') //v - vert position { float vz, vy, vx; fileIn >> vx >> vy >> vz; //Store the next three types if(isRHCoordSys) //If model is from an RH Coord System vertPos.push_back(XMFLOAT3( vx, vy, vz * -1.0f)); //Invert the Z axis else vertPos.push_back(XMFLOAT3( vx, vy, vz)); } if(checkChar == 't') //vt - vert tex coords { float vtcu, vtcv; fileIn >> vtcu >> vtcv; //Store next two types if(isRHCoordSys) //If model is from an RH Coord System vertTexCoord.push_back(XMFLOAT2(vtcu, 1.0f-vtcv)); //Reverse the "v" axis else vertTexCoord.push_back(XMFLOAT2(vtcu, vtcv)); hasTexCoord = true; //We know the model uses texture coords } //Since we compute the normals later, we don't need to check for normals //In the file, but i'll do it here anyway if(checkChar == 'n') //vn - vert normal { float vnx, vny, vnz; fileIn >> vnx >> vny >> vnz; //Store next three types if(isRHCoordSys) //If model is from an RH Coord System 
vertNorm.push_back(XMFLOAT3( vnx, vny, vnz * -1.0f )); //Invert the Z axis else vertNorm.push_back(XMFLOAT3( vnx, vny, vnz )); hasNorm = true; //We know the model defines normals } break; //New group (Subset) case 'g': //g - defines a group checkChar = fileIn.get(); if(checkChar == ' ') { subsetIndexStart.push_back(vIndex); //Start index for this subset subsetCount++; } break; //Get Face Index case 'f': //f - defines the faces checkChar = fileIn.get(); if(checkChar == ' ') { face = L""; std::wstring VertDef; //Holds one vertex definition at a time triangleCount = 0; checkChar = fileIn.get(); while(checkChar != '\n') { face += checkChar; //Add the char to our face string checkChar = fileIn.get(); //Get the next Character if(checkChar == ' ') //If its a space... triangleCount++; //Increase our triangle count } //Check for space at the end of our face string if(face[face.length()-1] == ' ') triangleCount--; //Each space adds to our triangle count triangleCount -= 1; //Ever vertex in the face AFTER the first two are new faces std::wstringstream ss(face); if(face.length() > 0) { int firstVIndex, lastVIndex; //Holds the first and last vertice's index for(int i = 0; i < 3; ++i) //First three vertices (first triangle) { ss >> VertDef; //Get vertex definition (vPos/vTexCoord/vNorm) std::wstring vertPart; int whichPart = 0; //(vPos, vTexCoord, or vNorm) //Parse this string for(int j = 0; j < VertDef.length(); ++j) { if(VertDef[j] != '/') //If there is no divider "/", add a char to our vertPart vertPart += VertDef[j]; //If the current char is a divider "/", or its the last character in the string if(VertDef[j] == '/' || j == VertDef.length()-1) { std::wistringstream wstringToInt(vertPart); //Used to convert wstring to int if(whichPart == 0) //If vPos { wstringToInt >> vertPosIndexTemp; vertPosIndexTemp -= 1; //subtract one since c++ arrays start with 0, and obj start with 1 //Check to see if the vert pos was the only thing specified if(j == VertDef.length()-1) { vertNormIndexTemp = 0; vertTCIndexTemp = 0; } } else if(whichPart == 1) //If vTexCoord { if(vertPart != L"") //Check to see if there even is a tex coord { wstringToInt >> vertTCIndexTemp; vertTCIndexTemp -= 1; //subtract one since c++ arrays start with 0, and obj start with 1 } else //If there is no tex coord, make a default vertTCIndexTemp = 0; //If the cur. char is the second to last in the string, then //there must be no normal, so set a default normal if(j == VertDef.length()-1) vertNormIndexTemp = 0; } else if(whichPart == 2) //If vNorm { std::wistringstream wstringToInt(vertPart); wstringToInt >> vertNormIndexTemp; vertNormIndexTemp -= 1; //subtract one since c++ arrays start with 0, and obj start with 1 } vertPart = L""; //Get ready for next vertex part whichPart++; //Move on to next vertex part } } //Check to make sure there is at least one subset if(subsetCount == 0) { subsetIndexStart.push_back(vIndex); //Start index for this subset subsetCount++; } //Avoid duplicate vertices bool vertAlreadyExists = false; if(totalVerts >= 3) //Make sure we at least have one triangle to check { //Loop through all the vertices for(int iCheck = 0; iCheck < totalVerts; ++iCheck) { //If the vertex position and texture coordinate in memory are the same //As the vertex position and texture coordinate we just now got out //of the obj file, we will set this faces vertex index to the vertex's //index value in memory. 
This makes sure we don't create duplicate vertices if(vertPosIndexTemp == vertPosIndex[iCheck] && !vertAlreadyExists) { if(vertTCIndexTemp == vertTCIndex[iCheck]) { indices.push_back(iCheck); //Set index for this vertex vertAlreadyExists = true; //If we've made it here, the vertex already exists } } } } //If this vertex is not already in our vertex arrays, put it there if(!vertAlreadyExists) { vertPosIndex.push_back(vertPosIndexTemp); vertTCIndex.push_back(vertTCIndexTemp); vertNormIndex.push_back(vertNormIndexTemp); totalVerts++; //We created a new vertex indices.push_back(totalVerts-1); //Set index for this vertex } //If this is the very first vertex in the face, we need to //make sure the rest of the triangles use this vertex if(i == 0) { firstVIndex = indices[vIndex]; //The first vertex index of this FACE } //If this was the last vertex in the first triangle, we will make sure //the next triangle uses this one (eg. tri1(1,2,3) tri2(1,3,4) tri3(1,4,5)) if(i == 2) { lastVIndex = indices[vIndex]; //The last vertex index of this TRIANGLE } vIndex++; //Increment index count } meshTriangles++; //One triangle down //If there are more than three vertices in the face definition, we need to make sure //we convert the face to triangles. We created our first triangle above, now we will //create a new triangle for every new vertex in the face, using the very first vertex //of the face, and the last vertex from the triangle before the current triangle for(int l = 0; l < triangleCount-1; ++l) //Loop through the next vertices to create new triangles { //First vertex of this triangle (the very first vertex of the face too) indices.push_back(firstVIndex); //Set index for this vertex vIndex++; //Second Vertex of this triangle (the last vertex used in the tri before this one) indices.push_back(lastVIndex); //Set index for this vertex vIndex++; //Get the third vertex for this triangle ss >> VertDef; std::wstring vertPart; int whichPart = 0; //Parse this string (same as above) for(int j = 0; j < VertDef.length(); ++j) { if(VertDef[j] != '/') vertPart += VertDef[j]; if(VertDef[j] == '/' || j == VertDef.length()-1) { std::wistringstream wstringToInt(vertPart); if(whichPart == 0) { wstringToInt >> vertPosIndexTemp; vertPosIndexTemp -= 1; //Check to see if the vert pos was the only thing specified if(j == VertDef.length()-1) { vertTCIndexTemp = 0; vertNormIndexTemp = 0; } } else if(whichPart == 1) { if(vertPart != L"") { wstringToInt >> vertTCIndexTemp; vertTCIndexTemp -= 1; } else vertTCIndexTemp = 0; if(j == VertDef.length()-1) vertNormIndexTemp = 0; } else if(whichPart == 2) { std::wistringstream wstringToInt(vertPart); wstringToInt >> vertNormIndexTemp; vertNormIndexTemp -= 1; } vertPart = L""; whichPart++; } } //Check for duplicate vertices bool vertAlreadyExists = false; if(totalVerts >= 3) //Make sure we at least have one triangle to check { for(int iCheck = 0; iCheck < totalVerts; ++iCheck) { if(vertPosIndexTemp == vertPosIndex[iCheck] && !vertAlreadyExists) { if(vertTCIndexTemp == vertTCIndex[iCheck]) { indices.push_back(iCheck); //Set index for this vertex vertAlreadyExists = true; //If we've made it here, the vertex already exists } } } } if(!vertAlreadyExists) { vertPosIndex.push_back(vertPosIndexTemp); vertTCIndex.push_back(vertTCIndexTemp); vertNormIndex.push_back(vertNormIndexTemp); totalVerts++; //New vertex created, add to total verts indices.push_back(totalVerts-1); //Set index for this vertex } //Set the second vertex for the next triangle to the last vertex we got lastVIndex = 
indices[vIndex]; //The last vertex index of this TRIANGLE meshTriangles++; //New triangle defined vIndex++; } } } break; case 'm': //mtllib - material library filename checkChar = fileIn.get(); if(checkChar == 't') { checkChar = fileIn.get(); if(checkChar == 'l') { checkChar = fileIn.get(); if(checkChar == 'l') { checkChar = fileIn.get(); if(checkChar == 'i') { checkChar = fileIn.get(); if(checkChar == 'b') { checkChar = fileIn.get(); if(checkChar == ' ') { //Store the material libraries file name fileIn >> meshMatLib; } } } } } } break; case 'u': //usemtl - which material to use checkChar = fileIn.get(); if(checkChar == 's') { checkChar = fileIn.get(); if(checkChar == 'e') { checkChar = fileIn.get(); if(checkChar == 'm') { checkChar = fileIn.get(); if(checkChar == 't') { checkChar = fileIn.get(); if(checkChar == 'l') { checkChar = fileIn.get(); if(checkChar == ' ') { meshMaterialsTemp = L""; //Make sure this is cleared fileIn >> meshMaterialsTemp; //Get next type (string) meshMaterials.push_back(meshMaterialsTemp); } } } } } } break; default: break; } } } else //If we could not open the file { SwapChain->SetFullscreenState(false, NULL); //Make sure we are out of fullscreen //create message std::wstring message = L"Could not open: "; message += filename; MessageBox(0, message.c_str(), //display message L"Error", MB_OK); return false; } subsetIndexStart.push_back(vIndex); //There won't be another index start after our last subset, so set it here //sometimes "g" is defined at the very top of the file, then again before the first group of faces. //This makes sure the first subset does not conatain "0" indices. if(subsetIndexStart[1] == 0) { subsetIndexStart.erase(subsetIndexStart.begin()+1); meshSubsets--; } //Make sure we have a default for the tex coord and normal //if one or both are not specified if(!hasNorm) vertNorm.push_back(XMFLOAT3(0.0f, 0.0f, 0.0f)); if(!hasTexCoord) vertTexCoord.push_back(XMFLOAT2(0.0f, 0.0f)); //Close the obj file, and open the mtl file fileIn.close(); fileIn.open(meshMatLib.c_str()); std::wstring lastStringRead; int matCount = material.size(); //total materials //kdset - If our diffuse color was not set, we can use the ambient color (which is usually the same) //If the diffuse color WAS set, then we don't need to set our diffuse color to ambient bool kdset = false; if (fileIn) { while(fileIn) { checkChar = fileIn.get(); //Get next char switch (checkChar) { //Check for comment case '#': checkChar = fileIn.get(); while(checkChar != '\n') checkChar = fileIn.get(); break; //Set diffuse color case 'K': checkChar = fileIn.get(); if(checkChar == 'd') //Diffuse Color { checkChar = fileIn.get(); //remove space fileIn >> material[matCount-1].difColor.x; fileIn >> material[matCount-1].difColor.y; fileIn >> material[matCount-1].difColor.z; kdset = true; } //Ambient Color (We'll store it in diffuse if there isn't a diffuse already) if(checkChar == 'a') { checkChar = fileIn.get(); //remove space if(!kdset) { fileIn >> material[matCount-1].difColor.x; fileIn >> material[matCount-1].difColor.y; fileIn >> material[matCount-1].difColor.z; } } break; //Check for transparency case 'T': checkChar = fileIn.get(); if(checkChar == 'r') { checkChar = fileIn.get(); //remove space float Transparency; fileIn >> Transparency; material[matCount-1].difColor.w = Transparency; if(Transparency > 0.0f) material[matCount-1].transparent = true; } break; //Some obj files specify d for transparency case 'd': checkChar = fileIn.get(); if(checkChar == ' ') { float Transparency; fileIn >> Transparency; 
//'d' - 0 being most transparent, and 1 being opaque, opposite of Tr Transparency = 1.0f - Transparency; material[matCount-1].difColor.w = Transparency; if(Transparency > 0.0f) material[matCount-1].transparent = true; } break; //Get the diffuse map (texture) case 'm': checkChar = fileIn.get(); if(checkChar == 'a') { checkChar = fileIn.get(); if(checkChar == 'p') { checkChar = fileIn.get(); if(checkChar == '_') { //map_Kd - Diffuse map checkChar = fileIn.get(); if(checkChar == 'K') { checkChar = fileIn.get(); if(checkChar == 'd') { std::wstring fileNamePath; fileIn.get(); //Remove whitespace between map_Kd and file //Get the file path - We read the pathname char by char since //pathnames can sometimes contain spaces, so we will read until //we find the file extension bool texFilePathEnd = false; while(!texFilePathEnd) { checkChar = fileIn.get(); fileNamePath += checkChar; if(checkChar == '.') { for(int i = 0; i < 3; ++i) fileNamePath += fileIn.get(); texFilePathEnd = true; } } //check if this texture has already been loaded bool alreadyLoaded = false; for(int i = 0; i < textureNameArray.size(); ++i) { if(fileNamePath == textureNameArray[i]) { alreadyLoaded = true; material[matCount-1].texArrayIndex = i; material[matCount-1].hasTexture = true; } } //if the texture is not already loaded, load it now if(!alreadyLoaded) { ID3D11ShaderResourceView* tempMeshSRV; hr = D3DX11CreateShaderResourceViewFromFile( d3d11Device, fileNamePath.c_str(), NULL, NULL, &tempMeshSRV, NULL ); if(SUCCEEDED(hr)) { textureNameArray.push_back(fileNamePath.c_str()); material[matCount-1].texArrayIndex = meshSRV.size(); meshSRV.push_back(tempMeshSRV); material[matCount-1].hasTexture = true; } } } } //map_d - alpha map else if(checkChar == 'd') { //Alpha maps are usually the same as the diffuse map //So we will assume that for now by only enabling //transparency for this material, as we will already //be using the alpha channel in the diffuse map material[matCount-1].transparent = true; } //map_bump - bump map (we're usinga normal map though) else if(checkChar == 'b') { checkChar = fileIn.get(); if(checkChar == 'u') { checkChar = fileIn.get(); if(checkChar == 'm') { checkChar = fileIn.get(); if(checkChar == 'p') { std::wstring fileNamePath; fileIn.get(); //Remove whitespace between map_bump and file //Get the file path - We read the pathname char by char since //pathnames can sometimes contain spaces, so we will read until //we find the file extension bool texFilePathEnd = false; while(!texFilePathEnd) { checkChar = fileIn.get(); fileNamePath += checkChar; if(checkChar == '.') { for(int i = 0; i < 3; ++i) fileNamePath += fileIn.get(); texFilePathEnd = true; } } //check if this texture has already been loaded bool alreadyLoaded = false; for(int i = 0; i < textureNameArray.size(); ++i) { if(fileNamePath == textureNameArray[i]) { alreadyLoaded = true; material[matCount-1].normMapTexArrayIndex = i; material[matCount-1].hasNormMap = true; } } //if the texture is not already loaded, load it now if(!alreadyLoaded) { ID3D11ShaderResourceView* tempMeshSRV; hr = D3DX11CreateShaderResourceViewFromFile( d3d11Device, fileNamePath.c_str(), NULL, NULL, &tempMeshSRV, NULL ); if(SUCCEEDED(hr)) { textureNameArray.push_back(fileNamePath.c_str()); material[matCount-1].normMapTexArrayIndex = meshSRV.size(); meshSRV.push_back(tempMeshSRV); material[matCount-1].hasNormMap = true; } } } } } } } } } break; case 'n': //newmtl - Declare new material checkChar = fileIn.get(); if(checkChar == 'e') { checkChar = fileIn.get(); if(checkChar == 'w') { 
checkChar = fileIn.get(); if(checkChar == 'm') { checkChar = fileIn.get(); if(checkChar == 't') { checkChar = fileIn.get(); if(checkChar == 'l') { checkChar = fileIn.get(); if(checkChar == ' ') { //New material, set its defaults SurfaceMaterial tempMat; material.push_back(tempMat); fileIn >> material[matCount].matName; material[matCount].transparent = false; material[matCount].hasTexture = false; material[matCount].hasNormMap = false; material[matCount].normMapTexArrayIndex = 0; material[matCount].texArrayIndex = 0; matCount++; kdset = false; } } } } } } break; default: break; } } } else { SwapChain->SetFullscreenState(false, NULL); //Make sure we are out of fullscreen std::wstring message = L"Could not open: "; message += meshMatLib; MessageBox(0, message.c_str(), L"Error", MB_OK); return false; } //Set the subsets material to the index value //of the its material in our material array for(int i = 0; i < meshSubsets; ++i) { bool hasMat = false; for(int j = 0; j < material.size(); ++j) { if(meshMaterials[i] == material[j].matName) { subsetMaterialArray.push_back(j); hasMat = true; } } if(!hasMat) subsetMaterialArray.push_back(0); //Use first material in array } std::vector<Vertex> vertices; Vertex tempVert; //Create our vertices using the information we got //from the file and store them in a vector for(int j = 0 ; j < totalVerts; ++j) { tempVert.pos = vertPos[vertPosIndex[j]]; tempVert.normal = vertNorm[vertNormIndex[j]]; tempVert.texCoord = vertTexCoord[vertTCIndex[j]]; vertices.push_back(tempVert); } //////////////////////Compute Normals/////////////////////////// //If computeNormals was set to true then we will create our own //normals, if it was set to false we will use the obj files normals if(computeNormals) { std::vector<XMFLOAT3> tempNormal; //normalized and unnormalized normals XMFLOAT3 unnormalized = XMFLOAT3(0.0f, 0.0f, 0.0f); //tangent stuff std::vector<XMFLOAT3> tempTangent; XMFLOAT3 tangent = XMFLOAT3(0.0f, 0.0f, 0.0f); float tcU1, tcV1, tcU2, tcV2; //Used to get vectors (sides) from the position of the verts float vecX, vecY, vecZ; //Two edges of our triangle XMVECTOR edge1 = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); XMVECTOR edge2 = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); //Compute face normals //And Tangents for(int i = 0; i < meshTriangles; ++i) { //Get the vector describing one edge of our triangle (edge 0,2) vecX = vertices[indices[(i*3)]].pos.x - vertices[indices[(i*3)+2]].pos.x; vecY = vertices[indices[(i*3)]].pos.y - vertices[indices[(i*3)+2]].pos.y; vecZ = vertices[indices[(i*3)]].pos.z - vertices[indices[(i*3)+2]].pos.z; edge1 = XMVectorSet(vecX, vecY, vecZ, 0.0f); //Create our first edge //Get the vector describing another edge of our triangle (edge 2,1) vecX = vertices[indices[(i*3)+2]].pos.x - vertices[indices[(i*3)+1]].pos.x; vecY = vertices[indices[(i*3)+2]].pos.y - vertices[indices[(i*3)+1]].pos.y; vecZ = vertices[indices[(i*3)+2]].pos.z - vertices[indices[(i*3)+1]].pos.z; edge2 = XMVectorSet(vecX, vecY, vecZ, 0.0f); //Create our second edge //Cross multiply the two edge vectors to get the un-normalized face normal XMStoreFloat3(&unnormalized, XMVector3Cross(edge1, edge2)); tempNormal.push_back(unnormalized); //Find first texture coordinate edge 2d vector tcU1 = vertices[indices[(i*3)]].texCoord.x - vertices[indices[(i*3)+2]].texCoord.x; tcV1 = vertices[indices[(i*3)]].texCoord.y - vertices[indices[(i*3)+2]].texCoord.y; //Find second texture coordinate edge 2d vector tcU2 = vertices[indices[(i*3)+2]].texCoord.x - vertices[indices[(i*3)+1]].texCoord.x; tcV2 = 
vertices[indices[(i*3)+2]].texCoord.y - vertices[indices[(i*3)+1]].texCoord.y; //Find tangent using both tex coord edges and position edges tangent.x = (tcV1 * XMVectorGetX(edge1) - tcV2 * XMVectorGetX(edge2)) * (1.0f / (tcU1 * tcV2 - tcU2 * tcV1)); tangent.y = (tcV1 * XMVectorGetY(edge1) - tcV2 * XMVectorGetY(edge2)) * (1.0f / (tcU1 * tcV2 - tcU2 * tcV1)); tangent.z = (tcV1 * XMVectorGetZ(edge1) - tcV2 * XMVectorGetZ(edge2)) * (1.0f / (tcU1 * tcV2 - tcU2 * tcV1)); tempTangent.push_back(tangent); } //Compute vertex normals (normal Averaging) XMVECTOR normalSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); XMVECTOR tangentSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); int facesUsing = 0; float tX, tY, tZ; //temp axis variables //Go through each vertex for(int i = 0; i < totalVerts; ++i) { //Check which triangles use this vertex for(int j = 0; j < meshTriangles; ++j) { if(indices[j*3] == i || indices[(j*3)+1] == i || indices[(j*3)+2] == i) { tX = XMVectorGetX(normalSum) + tempNormal[j].x; tY = XMVectorGetY(normalSum) + tempNormal[j].y; tZ = XMVectorGetZ(normalSum) + tempNormal[j].z; normalSum = XMVectorSet(tX, tY, tZ, 0.0f); //If a face is using the vertex, add the unormalized face normal to the normalSum //We can reuse tX, tY, tZ to sum up tangents tX = XMVectorGetX(tangentSum) + tempTangent[j].x; tY = XMVectorGetY(tangentSum) + tempTangent[j].y; tZ = XMVectorGetZ(tangentSum) + tempTangent[j].z; tangentSum = XMVectorSet(tX, tY, tZ, 0.0f); //sum up face tangents using this vertex facesUsing++; } } //Get the actual normal by dividing the normalSum by the number of faces sharing the vertex normalSum = normalSum / facesUsing; tangentSum = tangentSum / facesUsing; //Normalize the normalSum vector and tangent normalSum = XMVector3Normalize(normalSum); tangentSum = XMVector3Normalize(tangentSum); //Store the normal and tangent in our current vertex vertices[i].normal.x = XMVectorGetX(normalSum); vertices[i].normal.y = XMVectorGetY(normalSum); vertices[i].normal.z = XMVectorGetZ(normalSum); vertices[i].tangent.x = XMVectorGetX(tangentSum); vertices[i].tangent.y = XMVectorGetY(tangentSum); vertices[i].tangent.z = XMVectorGetZ(tangentSum); //Clear normalSum, tangentSum and facesUsing for next vertex normalSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); tangentSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); facesUsing = 0; } } //Create index buffer D3D11_BUFFER_DESC indexBufferDesc; ZeroMemory( &indexBufferDesc, sizeof(indexBufferDesc) ); indexBufferDesc.Usage = D3D11_USAGE_DEFAULT; indexBufferDesc.ByteWidth = sizeof(DWORD) * meshTriangles*3; indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER; indexBufferDesc.CPUAccessFlags = 0; indexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA iinitData; iinitData.pSysMem = &indices[0]; d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, indexBuff); //Create Vertex Buffer D3D11_BUFFER_DESC vertexBufferDesc; ZeroMemory( &vertexBufferDesc, sizeof(vertexBufferDesc) ); vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT; vertexBufferDesc.ByteWidth = sizeof( Vertex ) * totalVerts; vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexBufferDesc.CPUAccessFlags = 0; vertexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA vertexBufferData; ZeroMemory( &vertexBufferData, sizeof(vertexBufferData) ); vertexBufferData.pSysMem = &vertices[0]; hr = d3d11Device->CreateBuffer( &vertexBufferDesc, &vertexBufferData, vertBuff); return true; } void CreateSphere(int LatLines, int LongLines) { NumSphereVertices = ((LatLines-2) * LongLines) + 2; NumSphereFaces = ((LatLines-3)*(LongLines)*2) + 
(LongLines*2); float sphereYaw = 0.0f; float spherePitch = 0.0f; std::vector<Vertex> vertices(NumSphereVertices); XMVECTOR currVertPos = XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f); vertices[0].pos.x = 0.0f; vertices[0].pos.y = 0.0f; vertices[0].pos.z = 1.0f; for(DWORD i = 0; i < LatLines-2; ++i) { spherePitch = (i+1) * (3.14f/(LatLines-1)); Rotationx = XMMatrixRotationX(spherePitch); for(DWORD j = 0; j < LongLines; ++j) { sphereYaw = j * (6.28f/(LongLines)); Rotationy = XMMatrixRotationZ(sphereYaw); currVertPos = XMVector3TransformNormal( XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f), (Rotationx * Rotationy) ); currVertPos = XMVector3Normalize( currVertPos ); vertices[i*LongLines+j+1].pos.x = XMVectorGetX(currVertPos); vertices[i*LongLines+j+1].pos.y = XMVectorGetY(currVertPos); vertices[i*LongLines+j+1].pos.z = XMVectorGetZ(currVertPos); } } vertices[NumSphereVertices-1].pos.x = 0.0f; vertices[NumSphereVertices-1].pos.y = 0.0f; vertices[NumSphereVertices-1].pos.z = -1.0f; D3D11_BUFFER_DESC vertexBufferDesc; ZeroMemory( &vertexBufferDesc, sizeof(vertexBufferDesc) ); vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT; vertexBufferDesc.ByteWidth = sizeof( Vertex ) * NumSphereVertices; vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexBufferDesc.CPUAccessFlags = 0; vertexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA vertexBufferData; ZeroMemory( &vertexBufferData, sizeof(vertexBufferData) ); vertexBufferData.pSysMem = &vertices[0]; hr = d3d11Device->CreateBuffer( &vertexBufferDesc, &vertexBufferData, &sphereVertBuffer); std::vector<DWORD> indices(NumSphereFaces * 3); int k = 0; for(DWORD l = 0; l < LongLines-1; ++l) { indices[k] = 0; indices[k+1] = l+1; indices[k+2] = l+2; k += 3; } indices[k] = 0; indices[k+1] = LongLines; indices[k+2] = 1; k += 3; for(DWORD i = 0; i < LatLines-3; ++i) { for(DWORD j = 0; j < LongLines-1; ++j) { indices[k] = i*LongLines+j+1; indices[k+1] = i*LongLines+j+2; indices[k+2] = (i+1)*LongLines+j+1; indices[k+3] = (i+1)*LongLines+j+1; indices[k+4] = i*LongLines+j+2; indices[k+5] = (i+1)*LongLines+j+2; k += 6; // next quad } indices[k] = (i*LongLines)+LongLines; indices[k+1] = (i*LongLines)+1; indices[k+2] = ((i+1)*LongLines)+LongLines; indices[k+3] = ((i+1)*LongLines)+LongLines; indices[k+4] = (i*LongLines)+1; indices[k+5] = ((i+1)*LongLines)+1; k += 6; } for(DWORD l = 0; l < LongLines-1; ++l) { indices[k] = NumSphereVertices-1; indices[k+1] = (NumSphereVertices-1)-(l+1); indices[k+2] = (NumSphereVertices-1)-(l+2); k += 3; } indices[k] = NumSphereVertices-1; indices[k+1] = (NumSphereVertices-1)-LongLines; indices[k+2] = NumSphereVertices-2; D3D11_BUFFER_DESC indexBufferDesc; ZeroMemory( &indexBufferDesc, sizeof(indexBufferDesc) ); indexBufferDesc.Usage = D3D11_USAGE_DEFAULT; indexBufferDesc.ByteWidth = sizeof(DWORD) * NumSphereFaces * 3; indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER; indexBufferDesc.CPUAccessFlags = 0; indexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA iinitData; iinitData.pSysMem = &indices[0]; d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, &sphereIndexBuffer); } void InitD2DScreenTexture() { //Create the vertex buffer Vertex v[] = { // Front Face Vertex(-1.0f, -1.0f, -1.0f, 0.0f, 1.0f,-1.0f, -1.0f, -1.0f, 0.0f, 0.0f, 0.0f), Vertex(-1.0f, 1.0f, -1.0f, 0.0f, 0.0f,-1.0f, 1.0f, -1.0f, 0.0f, 0.0f, 0.0f), Vertex( 1.0f, 1.0f, -1.0f, 1.0f, 0.0f, 1.0f, 1.0f, -1.0f, 0.0f, 0.0f, 0.0f), Vertex( 1.0f, -1.0f, -1.0f, 1.0f, 1.0f, 1.0f, -1.0f, -1.0f, 0.0f, 0.0f, 0.0f), }; DWORD indices[] = { // Front Face 0, 1, 2, 0, 2, 3, }; D3D11_BUFFER_DESC 
indexBufferDesc; ZeroMemory( &indexBufferDesc, sizeof(indexBufferDesc) ); indexBufferDesc.Usage = D3D11_USAGE_DEFAULT; indexBufferDesc.ByteWidth = sizeof(DWORD) * 2 * 3; indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER; indexBufferDesc.CPUAccessFlags = 0; indexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA iinitData; iinitData.pSysMem = indices; d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, &d2dIndexBuffer); D3D11_BUFFER_DESC vertexBufferDesc; ZeroMemory( &vertexBufferDesc, sizeof(vertexBufferDesc) ); vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT; vertexBufferDesc.ByteWidth = sizeof( Vertex ) * 4; vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexBufferDesc.CPUAccessFlags = 0; vertexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA vertexBufferData; ZeroMemory( &vertexBufferData, sizeof(vertexBufferData) ); vertexBufferData.pSysMem = v; hr = d3d11Device->CreateBuffer( &vertexBufferDesc, &vertexBufferData, &d2dVertBuffer); //Create A shader resource view from the texture D2D will render to, //So we can use it to texture a square which overlays our scene d3d11Device->CreateShaderResourceView(sharedTex11, NULL, &d2dTexture); } bool InitScene() { InitD2DScreenTexture(); CreateSphere(10, 10); if(!LoadObjModel(L"ground.obj", &meshVertBuff, &meshIndexBuff, meshSubsetIndexStart, meshSubsetTexture, material, meshSubsets, true, true)) return false; ///////////////**************new**************//////////////////// if(!LoadMD5Model(L"boy.md5mesh", NewMD5Model, meshSRV, textureNameArray)) return false; ///////////////**************new**************//////////////////// //Compile Shaders from shader file hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "VS", "vs_4_0", 0, 0, 0, &VS_Buffer, 0, 0); hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "PS", "ps_4_0", 0, 0, 0, &PS_Buffer, 0, 0); hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "D2D_PS", "ps_4_0", 0, 0, 0, &D2D_PS_Buffer, 0, 0); hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "SKYMAP_VS", "vs_4_0", 0, 0, 0, &SKYMAP_VS_Buffer, 0, 0); hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "SKYMAP_PS", "ps_4_0", 0, 0, 0, &SKYMAP_PS_Buffer, 0, 0); //Create the Shader Objects hr = d3d11Device->CreateVertexShader(VS_Buffer->GetBufferPointer(), VS_Buffer->GetBufferSize(), NULL, &VS); hr = d3d11Device->CreatePixelShader(PS_Buffer->GetBufferPointer(), PS_Buffer->GetBufferSize(), NULL, &PS); hr = d3d11Device->CreatePixelShader(D2D_PS_Buffer->GetBufferPointer(), D2D_PS_Buffer->GetBufferSize(), NULL, &D2D_PS); hr = d3d11Device->CreateVertexShader(SKYMAP_VS_Buffer->GetBufferPointer(), SKYMAP_VS_Buffer->GetBufferSize(), NULL, &SKYMAP_VS); hr = d3d11Device->CreatePixelShader(SKYMAP_PS_Buffer->GetBufferPointer(), SKYMAP_PS_Buffer->GetBufferSize(), NULL, &SKYMAP_PS); //Set Vertex and Pixel Shaders d3d11DevCon->VSSetShader(VS, 0, 0); d3d11DevCon->PSSetShader(PS, 0, 0); light.pos = XMFLOAT3(0.0f, 7.0f, 0.0f); light.dir = XMFLOAT3(-0.5f, 0.75f, -0.5f); light.range = 1000.0f; light.cone = 12.0f; light.att = XMFLOAT3(0.4f, 0.02f, 0.000f); light.ambient = XMFLOAT4(0.2f, 0.2f, 0.2f, 1.0f); light.diffuse = XMFLOAT4(1.0f, 1.0f, 1.0f, 1.0f); //Create the Input Layout hr = d3d11Device->CreateInputLayout( layout, numElements, VS_Buffer->GetBufferPointer(), VS_Buffer->GetBufferSize(), &vertLayout ); //Set the Input Layout d3d11DevCon->IASetInputLayout( vertLayout ); //Set Primitive Topology d3d11DevCon->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); //Create the Viewport D3D11_VIEWPORT viewport; ZeroMemory(&viewport, sizeof(D3D11_VIEWPORT)); 
viewport.TopLeftX = 0; viewport.TopLeftY = 0; viewport.Width = Width; viewport.Height = Height; viewport.MinDepth = 0.0f; viewport.MaxDepth = 1.0f; //Set the Viewport d3d11DevCon->RSSetViewports(1, &viewport); //Create the buffer to send to the cbuffer in effect file D3D11_BUFFER_DESC cbbd; ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC)); cbbd.Usage = D3D11_USAGE_DEFAULT; cbbd.ByteWidth = sizeof(cbPerObject); cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER; cbbd.CPUAccessFlags = 0; cbbd.MiscFlags = 0; hr = d3d11Device->CreateBuffer(&cbbd, NULL, &cbPerObjectBuffer); //Create the buffer to send to the cbuffer per frame in effect file ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC)); cbbd.Usage = D3D11_USAGE_DEFAULT; cbbd.ByteWidth = sizeof(cbPerFrame); cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER; cbbd.CPUAccessFlags = 0; cbbd.MiscFlags = 0; hr = d3d11Device->CreateBuffer(&cbbd, NULL, &cbPerFrameBuffer); //Camera information camPosition = XMVectorSet( 0.0f, 5.0f, -8.0f, 0.0f ); camTarget = XMVectorSet( 0.0f, 0.5f, 0.0f, 0.0f ); camUp = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f ); //Set the View matrix camView = XMMatrixLookAtLH( camPosition, camTarget, camUp ); //Set the Projection matrix camProjection = XMMatrixPerspectiveFovLH( 0.4f*3.14f, (float)Width/Height, 1.0f, 1000.0f); D3D11_BLEND_DESC blendDesc; ZeroMemory( &blendDesc, sizeof(blendDesc) ); D3D11_RENDER_TARGET_BLEND_DESC rtbd; ZeroMemory( &rtbd, sizeof(rtbd) ); rtbd.BlendEnable = true; rtbd.SrcBlend = D3D11_BLEND_SRC_COLOR; rtbd.DestBlend = D3D11_BLEND_INV_SRC_ALPHA; rtbd.BlendOp = D3D11_BLEND_OP_ADD; rtbd.SrcBlendAlpha = D3D11_BLEND_ONE; rtbd.DestBlendAlpha = D3D11_BLEND_ZERO; rtbd.BlendOpAlpha = D3D11_BLEND_OP_ADD; rtbd.RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL; blendDesc.AlphaToCoverageEnable = false; blendDesc.RenderTarget[0] = rtbd; d3d11Device->CreateBlendState(&blendDesc, &d2dTransparency); ZeroMemory( &rtbd, sizeof(rtbd) ); rtbd.BlendEnable = true; rtbd.SrcBlend = D3D11_BLEND_INV_SRC_ALPHA; rtbd.DestBlend = D3D11_BLEND_SRC_ALPHA; rtbd.BlendOp = D3D11_BLEND_OP_ADD; rtbd.SrcBlendAlpha = D3D11_BLEND_INV_SRC_ALPHA; rtbd.DestBlendAlpha = D3D11_BLEND_SRC_ALPHA; rtbd.BlendOpAlpha = D3D11_BLEND_OP_ADD; rtbd.RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL; blendDesc.AlphaToCoverageEnable = false; blendDesc.RenderTarget[0] = rtbd; d3d11Device->CreateBlendState(&blendDesc, &Transparency); ///Load Skymap's cube texture/// D3DX11_IMAGE_LOAD_INFO loadSMInfo; loadSMInfo.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE; ID3D11Texture2D* SMTexture = 0; hr = D3DX11CreateTextureFromFile(d3d11Device, L"skymap.dds", &loadSMInfo, 0, (ID3D11Resource**)&SMTexture, 0); D3D11_TEXTURE2D_DESC SMTextureDesc; SMTexture->GetDesc(&SMTextureDesc); D3D11_SHADER_RESOURCE_VIEW_DESC SMViewDesc; SMViewDesc.Format = SMTextureDesc.Format; SMViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE; SMViewDesc.TextureCube.MipLevels = SMTextureDesc.MipLevels; SMViewDesc.TextureCube.MostDetailedMip = 0; hr = d3d11Device->CreateShaderResourceView(SMTexture, &SMViewDesc, &smrv); // Describe the Sample State D3D11_SAMPLER_DESC sampDesc; ZeroMemory( &sampDesc, sizeof(sampDesc) ); sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR; sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP; sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP; sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP; sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER; sampDesc.MinLOD = 0; sampDesc.MaxLOD = D3D11_FLOAT32_MAX; //Create the Sample State hr = d3d11Device->CreateSamplerState( &sampDesc, &CubesTexSamplerState ); 
D3D11_RASTERIZER_DESC cmdesc; ZeroMemory(&cmdesc, sizeof(D3D11_RASTERIZER_DESC)); cmdesc.FillMode = D3D11_FILL_SOLID; cmdesc.CullMode = D3D11_CULL_BACK; cmdesc.FrontCounterClockwise = true; hr = d3d11Device->CreateRasterizerState(&cmdesc, &CCWcullMode); cmdesc.FrontCounterClockwise = false; hr = d3d11Device->CreateRasterizerState(&cmdesc, &CWcullMode); cmdesc.CullMode = D3D11_CULL_NONE; //cmdesc.FillMode = D3D11_FILL_WIREFRAME; hr = d3d11Device->CreateRasterizerState(&cmdesc, &RSCullNone); D3D11_DEPTH_STENCIL_DESC dssDesc; ZeroMemory(&dssDesc, sizeof(D3D11_DEPTH_STENCIL_DESC)); dssDesc.DepthEnable = true; dssDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL; dssDesc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL; d3d11Device->CreateDepthStencilState(&dssDesc, &DSLessEqual); return true; } void StartTimer() { LARGE_INTEGER frequencyCount; QueryPerformanceFrequency(&frequencyCount); countsPerSecond = double(frequencyCount.QuadPart); QueryPerformanceCounter(&frequencyCount); CounterStart = frequencyCount.QuadPart; } double GetTime() { LARGE_INTEGER currentTime; QueryPerformanceCounter(&currentTime); return double(currentTime.QuadPart-CounterStart)/countsPerSecond; } double GetFrameTime() { LARGE_INTEGER currentTime; __int64 tickCount; QueryPerformanceCounter(&currentTime); tickCount = currentTime.QuadPart-frameTimeOld; frameTimeOld = currentTime.QuadPart; if(tickCount < 0.0f) tickCount = 0.0f; return float(tickCount)/countsPerSecond; } void UpdateScene(double time) { //Reset sphereWorld sphereWorld = XMMatrixIdentity(); //Define sphereWorld's world space matrix Scale = XMMatrixScaling( 5.0f, 5.0f, 5.0f ); //Make sure the sphere is always centered around camera Translation = XMMatrixTranslation( XMVectorGetX(camPosition), XMVectorGetY(camPosition), XMVectorGetZ(camPosition) ); //Set sphereWorld's world space using the transformations sphereWorld = Scale * Translation; //the loaded models world space meshWorld = XMMatrixIdentity(); Rotation = XMMatrixRotationY(3.14f); Scale = XMMatrixScaling( 1.0f, 1.0f, 1.0f ); Translation = XMMatrixTranslation( 0.0f, 0.0f, 0.0f ); meshWorld = Rotation * Scale * Translation; ///////////////**************new**************//////////////////// Scale = XMMatrixScaling( 0.04f, 0.04f, 0.04f ); // The model is a bit too large for our scene, so make it smaller Translation = XMMatrixTranslation( 0.0f, 3.0f, 0.0f ); smilesWorld = Scale * Translation; ///////////////**************new**************//////////////////// } void RenderText(std::wstring text, int inInt) { d3d11DevCon->PSSetShader(D2D_PS, 0, 0); //Release the D3D 11 Device keyedMutex11->ReleaseSync(0); //Use D3D10.1 device keyedMutex10->AcquireSync(0, 5); //Draw D2D content D2DRenderTarget->BeginDraw(); //Clear D2D Background D2DRenderTarget->Clear(D2D1::ColorF(0.0f, 0.0f, 0.0f, 0.0f)); //Create our string std::wostringstream printString; printString << text << inInt; printText = printString.str(); //Set the Font Color D2D1_COLOR_F FontColor = D2D1::ColorF(1.0f, 1.0f, 1.0f, 1.0f); //Set the brush color D2D will use to draw with Brush->SetColor(FontColor); //Create the D2D Render Area D2D1_RECT_F layoutRect = D2D1::RectF(0, 0, Width, Height); //Draw the Text D2DRenderTarget->DrawText( printText.c_str(), wcslen(printText.c_str()), TextFormat, layoutRect, Brush ); D2DRenderTarget->EndDraw(); //Release the D3D10.1 Device keyedMutex10->ReleaseSync(1); //Use the D3D11 Device keyedMutex11->AcquireSync(1, 5); //Use the shader resource representing the direct2d render target //to texture a square which is rendered in screen 
space so it //overlays on top of our entire scene. We use alpha blending so //that the entire background of the D2D render target is "invisible", //And only the stuff we draw with D2D will be visible (the text) //Set the blend state for D2D render target texture objects d3d11DevCon->OMSetBlendState(d2dTransparency, NULL, 0xffffffff); //Set the d2d Index buffer d3d11DevCon->IASetIndexBuffer( d2dIndexBuffer, DXGI_FORMAT_R32_UINT, 0); //Set the d2d vertex buffer UINT stride = sizeof( Vertex ); UINT offset = 0; d3d11DevCon->IASetVertexBuffers( 0, 1, &d2dVertBuffer, &stride, &offset ); WVP = XMMatrixIdentity(); cbPerObj.WVP = XMMatrixTranspose(WVP); d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 ); d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetShaderResources( 0, 1, &d2dTexture ); d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState ); d3d11DevCon->RSSetState(CWcullMode); d3d11DevCon->DrawIndexed( 6, 0, 0 ); } void DrawScene() { //Clear our render target and depth/stencil view float bgColor[4] = { 0.1f, 0.1f, 0.1f, 1.0f }; d3d11DevCon->ClearRenderTargetView(renderTargetView, bgColor); d3d11DevCon->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH|D3D11_CLEAR_STENCIL, 1.0f, 0); constbuffPerFrame.light = light; d3d11DevCon->UpdateSubresource( cbPerFrameBuffer, 0, NULL, &constbuffPerFrame, 0, 0 ); d3d11DevCon->PSSetConstantBuffers(0, 1, &cbPerFrameBuffer); //Set our Render Target d3d11DevCon->OMSetRenderTargets( 1, &renderTargetView, depthStencilView ); //Set the default blend state (no blending) for opaque objects d3d11DevCon->OMSetBlendState(0, 0, 0xffffffff); //Set Vertex and Pixel Shaders d3d11DevCon->VSSetShader(VS, 0, 0); d3d11DevCon->PSSetShader(PS, 0, 0); UINT stride = sizeof( Vertex ); UINT offset = 0; ///////////////**************new**************//////////////////// ///***Draw MD5 Model***/// for(int i = 0; i < NewMD5Model.numSubsets; i ++) { //Set the grounds index buffer d3d11DevCon->IASetIndexBuffer( NewMD5Model.subsets[i].indexBuff, DXGI_FORMAT_R32_UINT, 0); //Set the grounds vertex buffer d3d11DevCon->IASetVertexBuffers( 0, 1, &NewMD5Model.subsets[i].vertBuff, &stride, &offset ); //Set the WVP matrix and send it to the constant buffer in effect file WVP = smilesWorld * camView * camProjection; cbPerObj.WVP = XMMatrixTranspose(WVP); cbPerObj.World = XMMatrixTranspose(smilesWorld); cbPerObj.hasTexture = true; // We'll assume all md5 subsets have textures cbPerObj.hasNormMap = false; // We'll also assume md5 models have no normal map (easy to change later though) d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 ); d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetConstantBuffers( 1, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetShaderResources( 0, 1, &meshSRV[NewMD5Model.subsets[i].texArrayIndex] ); d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState ); d3d11DevCon->RSSetState(RSCullNone); d3d11DevCon->DrawIndexed( NewMD5Model.subsets[i].indices.size(), 0, 0 ); } ///////////////**************new**************//////////////////// /////Draw our model's NON-transparent subsets///// for(int i = 0; i < meshSubsets; ++i) { //Set the grounds index buffer d3d11DevCon->IASetIndexBuffer( meshIndexBuff, DXGI_FORMAT_R32_UINT, 0); //Set the grounds vertex buffer d3d11DevCon->IASetVertexBuffers( 0, 1, &meshVertBuff, &stride, &offset ); //Set the WVP matrix and send it to the constant buffer in effect file WVP = meshWorld * camView * camProjection; cbPerObj.WVP = 
XMMatrixTranspose(WVP); cbPerObj.World = XMMatrixTranspose(meshWorld); cbPerObj.difColor = material[meshSubsetTexture[i]].difColor; cbPerObj.hasTexture = material[meshSubsetTexture[i]].hasTexture; cbPerObj.hasNormMap = material[meshSubsetTexture[i]].hasNormMap; d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 ); d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetConstantBuffers( 1, 1, &cbPerObjectBuffer ); if(material[meshSubsetTexture[i]].hasTexture) d3d11DevCon->PSSetShaderResources( 0, 1, &meshSRV[material[meshSubsetTexture[i]].texArrayIndex] ); if(material[meshSubsetTexture[i]].hasNormMap) d3d11DevCon->PSSetShaderResources( 1, 1, &meshSRV[material[meshSubsetTexture[i]].normMapTexArrayIndex] ); d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState ); d3d11DevCon->RSSetState(RSCullNone); int indexStart = meshSubsetIndexStart[i]; int indexDrawAmount = meshSubsetIndexStart[i+1] - meshSubsetIndexStart[i]; if(!material[meshSubsetTexture[i]].transparent) d3d11DevCon->DrawIndexed( indexDrawAmount, indexStart, 0 ); } /////Draw the Sky's Sphere////// //Set the spheres index buffer d3d11DevCon->IASetIndexBuffer( sphereIndexBuffer, DXGI_FORMAT_R32_UINT, 0); //Set the spheres vertex buffer d3d11DevCon->IASetVertexBuffers( 0, 1, &sphereVertBuffer, &stride, &offset ); //Set the WVP matrix and send it to the constant buffer in effect file WVP = sphereWorld * camView * camProjection; cbPerObj.WVP = XMMatrixTranspose(WVP); cbPerObj.World = XMMatrixTranspose(sphereWorld); d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 ); d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer ); //Send our skymap resource view to pixel shader d3d11DevCon->PSSetShaderResources( 0, 1, &smrv ); d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState ); //Set the new VS and PS shaders d3d11DevCon->VSSetShader(SKYMAP_VS, 0, 0); d3d11DevCon->PSSetShader(SKYMAP_PS, 0, 0); //Set the new depth/stencil and RS states d3d11DevCon->OMSetDepthStencilState(DSLessEqual, 0); d3d11DevCon->RSSetState(RSCullNone); d3d11DevCon->DrawIndexed( NumSphereFaces * 3, 0, 0 ); //Set the default VS, PS shaders and depth/stencil state d3d11DevCon->VSSetShader(VS, 0, 0); d3d11DevCon->PSSetShader(PS, 0, 0); d3d11DevCon->OMSetDepthStencilState(NULL, 0); /////Draw our model's TRANSPARENT subsets now///// //Set our blend state d3d11DevCon->OMSetBlendState(Transparency, NULL, 0xffffffff); for(int i = 0; i < meshSubsets; ++i) { //Set the grounds index buffer d3d11DevCon->IASetIndexBuffer( meshIndexBuff, DXGI_FORMAT_R32_UINT, 0); //Set the grounds vertex buffer d3d11DevCon->IASetVertexBuffers( 0, 1, &meshVertBuff, &stride, &offset ); //Set the WVP matrix and send it to the constant buffer in effect file WVP = meshWorld * camView * camProjection; cbPerObj.WVP = XMMatrixTranspose(WVP); cbPerObj.World = XMMatrixTranspose(meshWorld); cbPerObj.difColor = material[meshSubsetTexture[i]].difColor; cbPerObj.hasTexture = material[meshSubsetTexture[i]].hasTexture; cbPerObj.hasNormMap = material[meshSubsetTexture[i]].hasNormMap; d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 ); d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetConstantBuffers( 1, 1, &cbPerObjectBuffer ); if(material[meshSubsetTexture[i]].hasTexture) d3d11DevCon->PSSetShaderResources( 0, 1, &meshSRV[material[meshSubsetTexture[i]].texArrayIndex] ); if(material[meshSubsetTexture[i]].hasNormMap) d3d11DevCon->PSSetShaderResources( 1, 1, 
&meshSRV[material[meshSubsetTexture[i]].normMapTexArrayIndex] ); d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState ); d3d11DevCon->RSSetState(RSCullNone); int indexStart = meshSubsetIndexStart[i]; int indexDrawAmount = meshSubsetIndexStart[i+1] - meshSubsetIndexStart[i]; if(material[meshSubsetTexture[i]].transparent) d3d11DevCon->DrawIndexed( indexDrawAmount, indexStart, 0 ); } RenderText(L"FPS: ", fps); //Present the backbuffer to the screen SwapChain->Present(0, 0); } int messageloop(){ MSG msg; ZeroMemory(&msg, sizeof(MSG)); while(true) { BOOL PeekMessageL( LPMSG lpMsg, HWND hWnd, UINT wMsgFilterMin, UINT wMsgFilterMax, UINT wRemoveMsg ); if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) { if (msg.message == WM_QUIT) break; TranslateMessage(&msg); DispatchMessage(&msg); } else{ // run game code frameCount++; if(GetTime() > 1.0f) { fps = frameCount; frameCount = 0; StartTimer(); } frameTime = GetFrameTime(); DetectInput(frameTime); UpdateScene(frameTime); DrawScene(); } } return msg.wParam; } LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) { switch( msg ) { case WM_KEYDOWN: if( wParam == VK_ESCAPE ){ DestroyWindow(hwnd); } return 0; case WM_DESTROY: PostQuitMessage(0); return 0; } return DefWindowProc(hwnd, msg, wParam, lParam); } Effects.fx struct Light { float3 pos; float range; float3 dir; float cone; float3 att; float4 ambient; float4 diffuse; }; cbuffer cbPerFrame { Light light; }; cbuffer cbPerObject { float4x4 WVP; float4x4 World; float4 difColor; bool hasTexture; bool hasNormMap; }; Texture2D ObjTexture; Texture2D ObjNormMap; SamplerState ObjSamplerState; TextureCube SkyMap; struct VS_OUTPUT { float4 Pos : SV_POSITION; float4 worldPos : POSITION; float2 TexCoord : TEXCOORD; float3 normal : NORMAL; float3 tangent : TANGENT; }; struct SKYMAP_VS_OUTPUT //output structure for skymap vertex shader { float4 Pos : SV_POSITION; float3 texCoord : TEXCOORD; }; VS_OUTPUT VS(float4 inPos : POSITION, float2 inTexCoord : TEXCOORD, float3 normal : NORMAL, float3 tangent : TANGENT) { VS_OUTPUT output; output.Pos = mul(inPos, WVP); output.worldPos = mul(inPos, World); output.normal = mul(normal, World); output.tangent = mul(tangent, World); output.TexCoord = inTexCoord; return output; } SKYMAP_VS_OUTPUT SKYMAP_VS(float3 inPos : POSITION, float2 inTexCoord : TEXCOORD, float3 normal : NORMAL, float3 tangent : TANGENT) { SKYMAP_VS_OUTPUT output = (SKYMAP_VS_OUTPUT)0; //Set Pos to xyww instead of xyzw, so that z will always be 1 (furthest from camera) output.Pos = mul(float4(inPos, 1.0f), WVP).xyww; output.texCoord = inPos; return output; } float4 PS(VS_OUTPUT input) : SV_TARGET { input.normal = normalize(input.normal); //Set diffuse color of material float4 diffuse = difColor; //If material has a diffuse texture map, set it now if(hasTexture == true) diffuse = ObjTexture.Sample( ObjSamplerState, input.TexCoord ); //If material has a normal map, we can set it now if(hasNormMap == true) { //Load normal from normal map float4 normalMap = ObjNormMap.Sample( ObjSamplerState, input.TexCoord ); //Change normal map range from [0, 1] to [-1, 1] normalMap = (2.0f*normalMap) - 1.0f; //Make sure tangent is completely orthogonal to normal input.tangent = normalize(input.tangent - dot(input.tangent, input.normal)*input.normal); //Create the biTangent float3 biTangent = cross(input.normal, input.tangent); //Create the "Texture Space" float3x3 texSpace = float3x3(input.tangent, biTangent, input.normal); //Convert normal from normal map to texture space and store in input.normal 
input.normal = normalize(mul(normalMap, texSpace)); } float3 finalColor; finalColor = diffuse * light.ambient; finalColor += saturate(dot(light.dir, input.normal) * light.diffuse * diffuse); return float4(finalColor, diffuse.a); } float4 SKYMAP_PS(SKYMAP_VS_OUTPUT input) : SV_Target { return SkyMap.Sample(ObjSamplerState, input.texCoord); } float4 D2D_PS(VS_OUTPUT input) : SV_TARGET { float4 diffuse = ObjTexture.Sample( ObjSamplerState, input.TexCoord ); return diffuse; }
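The per-face tangent math inside LoadObjModel() above is fairly dense, so here is the same idea pulled out into a small standalone helper. This is only a sketch: the helper name ComputeFaceTangent is made up for illustration, and it assumes the Vertex struct used throughout this lesson (with pos and texCoord members). It solves edge1 = dU1*T + dV1*B and edge2 = dU2*T + dV2*B for the tangent T, which is the same derivation the loop above applies before the per-face tangents are averaged per vertex and normalized.

//Returns the unnormalized tangent of one triangle, given its three vertices.
//edge1 pairs with the texcoord delta (dU1, dV1) and edge2 with (dU2, dV2).
XMFLOAT3 ComputeFaceTangent(const Vertex& v0, const Vertex& v1, const Vertex& v2)
{
    //Position edges of the triangle, both starting at v0
    XMVECTOR edge1 = XMVectorSet(v1.pos.x - v0.pos.x, v1.pos.y - v0.pos.y, v1.pos.z - v0.pos.z, 0.0f);
    XMVECTOR edge2 = XMVectorSet(v2.pos.x - v0.pos.x, v2.pos.y - v0.pos.y, v2.pos.z - v0.pos.z, 0.0f);

    //Texture coordinate deltas along the same two edges
    float dU1 = v1.texCoord.x - v0.texCoord.x;
    float dV1 = v1.texCoord.y - v0.texCoord.y;
    float dU2 = v2.texCoord.x - v0.texCoord.x;
    float dV2 = v2.texCoord.y - v0.texCoord.y;

    //The divisor is the determinant of the 2x2 texcoord matrix; if it is near zero
    //the triangle's texture coordinates are degenerate and the tangent is meaningless
    float r = 1.0f / (dU1 * dV2 - dU2 * dV1);

    XMFLOAT3 tangent;
    tangent.x = (dV2 * XMVectorGetX(edge1) - dV1 * XMVectorGetX(edge2)) * r;
    tangent.y = (dV2 * XMVectorGetY(edge1) - dV1 * XMVectorGetY(edge2)) * r;
    tangent.z = (dV2 * XMVectorGetZ(edge1) - dV1 * XMVectorGetZ(edge2)) * r;
    return tangent;
}

The result is then summed for every face that touches a vertex, divided by facesUsing, and normalized, exactly like the normal averaging in the code above.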
Comments
Can I translate this post on my blog? I will of course put a link to this post at the top of the translated post.
on Jun 02 '16
noblecat
Hi noblecat, You most certainly can!
on Jun 03 '16
iedoc
How can we rotate the model around its axes? Thank you.
on Oct 16 '17
az0634
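Regarding rotating the loaded model: a rotation about an axis can simply be folded into the model's world matrix in UpdateScene(), the same way the ground mesh already uses XMMatrixRotationY. The sketch below is only an illustration, not part of the tutorial's code as written; rotAngle is a made-up variable, and it reuses the smilesWorld, Scale, Rotation and Translation matrices from this lesson together with the frame time passed into UpdateScene().

//Accumulate an angle over time; 'time' is the frame time passed into UpdateScene()
static float rotAngle = 0.0f;
rotAngle += (float)time;                     //roughly one radian per second; pick any speed
if(rotAngle > 6.28f) rotAngle -= 6.28f;      //wrap after a full turn

//Scale, then rotate about the chosen axis, then translate
Scale       = XMMatrixScaling( 0.04f, 0.04f, 0.04f );
Rotation    = XMMatrixRotationY( rotAngle ); //use XMMatrixRotationX / XMMatrixRotationZ for the other axes
Translation = XMMatrixTranslation( 0.0f, 3.0f, 0.0f );
smilesWorld = Scale * Rotation * Translation;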