<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[The Dev Log]]></title><description><![CDATA[Software Development, Arduino, Graphics, Computer Vision]]></description><link>https://brettsem.dev/</link><image><url>https://brettsem.dev/favicon.png</url><title>The Dev Log</title><link>https://brettsem.dev/</link></image><generator>Ghost 4.36</generator><lastBuildDate>Thu, 16 Apr 2026 02:14:29 GMT</lastBuildDate><atom:link href="https://brettsem.dev/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[An Overview of Direct3D 11 Operation and Resources]]></title><description><![CDATA[<p>I had originally intended this post to be a tutorial on initializing Direct3D 11 and rendering a triangle to the screen. Unfortunately, by the time I got past the basic theory behind utilizing Direct3D and resources this post was already 3600 words in length. If I included all of the</p>]]></description><link>https://brettsem.dev/an-overview-of-direct3d-11-operation-and-resources/</link><guid isPermaLink="false">624916f1d984b50f878231a5</guid><dc:creator><![CDATA[Brett Semmler]]></dc:creator><pubDate>Fri, 08 Apr 2022 20:25:16 GMT</pubDate><content:encoded><![CDATA[<p>I had originally intended this post to be a tutorial on initializing Direct3D 11 and rendering a triangle to the screen. Unfortunately, by the time I got past the basic theory behind utilizing Direct3D and resources this post was already 3600 words in length. If I included all of the code snippets and code explanations this post would simply be too long for a single topic. 
Therefore, I am going to limit this post to purely theory on Direct3D 11 operation, resources, resource views and the Swapchain.</p><!--kg-card-begin: markdown--><h2>Intended Audience</h2><!--kg-card-end: markdown--><p>Although this post is only theory, I am creating it as part of a series so I will have the same prerequisites as my <a href="https://brettsem.dev/getting-started-with-d3d11-window/">last post</a>. Please check that you have the required knowledge, hardware, and software to continue on. </p><!--kg-card-begin: markdown--><h2>High Level Overview</h2><!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://brettsem.dev/content/images/2022/04/GraphicsArchitecture.png" class="kg-image" alt loading="lazy" width="370" height="649"></figure><!--kg-card-begin: markdown--><h3>Direct3D and DXGI</h3><!--kg-card-end: markdown--><p>Direct3D is a native API present on Windows for communicating with and controlling video hardware. By native, I mean that it is designed to be utilized from within a C/C++ application. Although it may seem like there is only one layer between our application and the graphics driver/hardware, there are actually numerous layers that comprise a graphics application. At the top of this software stack is our application, which is responsible for sending data and commands to the Direct3D runtime. From there the runtime forwards those commands to the user mode driver of the video hardware, which then interacts with DXGI. 
It also helps us to create frame buffers called Swapchains, which allow us to present to the window. </p><!--kg-card-begin: markdown--><h3>COM</h3><!--kg-card-end: markdown--><p>Both Direct3D and DXGI are implemented as a collection of Component Object Model (COM) interfaces. We can&apos;t go too deep into COM itself as that&apos;s out of scope and a discussion on its own. But the gist of COM is that it provides a set of standard interfaces for each object. </p><p>COM objects aren&apos;t allocated through keywords like &apos;new&apos;; rather, they&apos;re produced by a method that implements a factory pattern. Likewise, we cannot use &apos;delete&apos; with COM objects since we never allocated them. When we utilize one of these factory methods, it returns a reference to us. Each COM object is reference counted, and this count is subsequently used to manage its lifetime. </p><p>Another important concept that we need to know for COM objects is interface querying. This is part of the IUnknown interface, which allows us to discover additional interfaces that a COM object implements. 
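</p><p>To make the lifetime rules concrete, here is a toy sketch of the reference counting contract in portable C++. This is not the real IUnknown interface, just the same idea; the struct and method names simply mirror the COM convention:</p>

```cpp
#include <cassert>

// Toy stand-in for COM lifetime management -- NOT real COM.
// The contract: a factory hands you an object with a reference
// count of 1; AddRef() increments it, Release() decrements it,
// and the object destroys itself when the count reaches zero.
struct RefCounted {
    unsigned long refs = 1;  // the factory's returned reference

    unsigned long AddRef() { return ++refs; }

    unsigned long Release() {
        unsigned long remaining = --refs;
        // A real COM object would 'delete this' when remaining == 0.
        return remaining;
    }
};
```

<p>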
By passing the Universally Unique Identifier (UUID) of our desired interface, along with a pointer to a pointer that will receive the interface reference, to IUnknown::QueryInterface() we can retrieve the desired interface if it is available.</p><!--kg-card-begin: markdown--><h2>The Graphics Pipeline</h2><!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://brettsem.dev/content/images/2022/04/Graphics-Pipeline-Horizontal.png" class="kg-image" alt loading="lazy" width="1573" height="376" srcset="https://brettsem.dev/content/images/size/w600/2022/04/Graphics-Pipeline-Horizontal.png 600w, https://brettsem.dev/content/images/size/w1000/2022/04/Graphics-Pipeline-Horizontal.png 1000w, https://brettsem.dev/content/images/2022/04/Graphics-Pipeline-Horizontal.png 1573w" sizes="(min-width: 720px) 720px"></figure><p>The <a href="https://docs.microsoft.com/en-us/windows/win32/direct3d11/overviews-direct3d-11-graphics-pipeline">graphics pipeline</a> is a conceptual model for the steps/stages that are performed on input data before being written to an output buffer (which may or may not be the display). It&apos;s a mixture of fixed function and programmable stages. The programmable stages of the pipeline are known as shaders. If you&apos;re into video games you&apos;ve probably heard that term before. Shaders are essentially small programs that execute on a set of data. Each shader stage operates on a different type of data, such as vertices or pixels. Additionally, not all programmable stages need to be used. Fixed function stages are usually configurable and are sometimes a dedicated piece of hardware on the GPU. They perform only one function and are usually configured through an exposed state. The graphics pipeline is manipulated through what is called a device context. 
It is encapsulated by the ID3D11DeviceContext interface, which provides methods to update the state of the pipeline by setting resources, shaders, and various state objects for the fixed function stages of the pipeline.</p><!--kg-card-begin: markdown--><h4>Input Assembler</h4><!--kg-card-end: markdown--><p>The Input Assembler stage is a configurable fixed function stage that assembles primitive data (points, lines, triangles) from user-filled buffers into primitive types such as point lists, line lists, triangle lists, triangle strips and numerous other types. Several types contain adjacency data that can be used later on in the pipeline. This stage also attaches system-generated values to primitives to increase the efficiency of shaders. These values are referred to as semantics. The buffers that we bind to the stage contain per-vertex data. In this stage we need to create what&apos;s called an input layout that describes the data elements in the buffer. This input layout is dependent on what the vertex shader expects in the next stage.</p><!--kg-card-begin: markdown--><h4>Vertex Shader</h4><!--kg-card-end: markdown--><p>The Vertex Shader is a programmable stage that processes vertices from the input assembler by performing per-vertex operations like transformations or per-vertex lighting. These shader programs always operate on only one vertex at a time. This stage is always active and as such, if it&apos;s not needed, it must be programmed to pass through the data. Vertex shaders are encapsulated by the ID3D11VertexShader interface.</p><!--kg-card-begin: markdown--><h4>Tessellation Stages</h4><!--kg-card-end: markdown--><p>Although the image above only shows one stage, in reality this is actually three separate stages: the Hull Shader, Tessellator, and Domain Shader. These stages are designed for surface subdivision; essentially, they take the polygons that make up a surface and divide them. 
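</p><p>As a rough illustration of what subdivision means (this is a naive midpoint scheme of my own, not the actual hull/domain shader algorithm), one triangle can be split at its edge midpoints into four smaller triangles:</p>

```cpp
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };
using Triangle = std::array<Vec3, 3>;

// Midpoint of an edge between two vertices.
Vec3 midpoint(const Vec3& a, const Vec3& b) {
    return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
}

// One naive subdivision step: split each edge at its midpoint,
// yielding three corner triangles plus one center triangle.
std::vector<Triangle> subdivide(const Triangle& t) {
    Vec3 m01 = midpoint(t[0], t[1]);
    Vec3 m12 = midpoint(t[1], t[2]);
    Vec3 m20 = midpoint(t[2], t[0]);
    return { Triangle{ t[0], m01, m20 },
             Triangle{ m01, t[1], m12 },
             Triangle{ m20, m12, t[2] },
             Triangle{ m01, m12, m20 } };
}
```

<p>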
This yields a higher fidelity surface than before with a less faceted appearance. We&apos;re only glossing over these stages as they&apos;re fairly advanced and technical.</p><!--kg-card-begin: markdown--><h4>Geometry Shader</h4><!--kg-card-end: markdown--><p>The Geometry Shader is designed to generate new vertices using the adjacency data of some primitive types mentioned earlier in the Input Assembler stage. Again, this is also an advanced topic that is out of scope for this post. Geometry shaders are encapsulated by the ID3D11GeometryShader interface.</p><!--kg-card-begin: markdown--><h4>Stream Output</h4><!--kg-card-end: markdown--><p>The Stream Output stage is fairly simple: it&apos;s designed to output processed primitives before they&apos;re rasterized (transformed into pixels). This can be useful when you&apos;re utilizing multi-pass rendering techniques. This simply means that data is processed in the pipeline and then streamed out; the pipeline will then be reconfigured before that data is sent back into the pipeline for further processing.</p><!--kg-card-begin: markdown--><h4>Rasterizer</h4><!--kg-card-end: markdown--><p>The Rasterizer is a configurable fixed function stage that takes our primitives and turns them into raster images (pixel images). This stage also performs functions such as culling and clipping (removing hidden pixels). This stage can also be configured to change the &quot;fill&quot; of rendered primitives. For example, this stage can be configured to rasterize primitives as wireframes.</p><!--kg-card-begin: markdown--><h4>Pixel Shader</h4><!--kg-card-end: markdown--><p>The Pixel Shader is a programmable stage that is extremely powerful as it allows us to perform techniques like per-pixel lighting, texturing and post processing. It takes in a variety of data like constant variables and texture data to produce per-pixel outputs. The Rasterizer stage invokes the pixel shader once for each pixel covered by a primitive. 
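</p><p>As a taste of the kind of per-pixel work involved, here is a CPU-side sketch of simple Lambertian (N dot L) diffuse lighting, one of the most common per-pixel lighting calculations. This is illustrative C++, not HLSL shader code:</p>

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Lambertian diffuse term: how directly the surface faces the light.
// Clamped at zero so light behind the surface contributes nothing.
// (Assumes both vectors are already normalized.)
float lambert(const Vec3& normal, const Vec3& toLight) {
    return std::max(0.0f, dot(normal, toLight));
}
```

<p>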
Pixel shaders are encapsulated by the ID3D11PixelShader interface.</p><!--kg-card-begin: markdown--><h4>Output-Merger</h4><!--kg-card-end: markdown--><p>The Output-Merger stage is the last step in the pipeline. It&apos;s a fixed function stage that assembles all our pixels into a cohesive image. It can utilize depth data of each pixel to determine which pixels are present (unobstructed) in the final image.</p><!--kg-card-begin: markdown--><h2>Direct3D Resources</h2><!--kg-card-end: markdown--><p>There are generally two groups of resources found within Direct3D: Textures and Buffers. Textures are roughly split into one, two and three dimensional textures, whereas Buffers are more uniform and are generally considered to be one dimensional. Buffers have numerous different types corresponding to the type of data they hold and/or how they&apos;re utilized. Both groups of resources are bound to various points throughout the graphics pipeline. All resources, state objects and shader objects found within Direct3D are created through what is called the Device. This is related to the device context that we discussed earlier. The Device is encapsulated by the ID3D11Device interface; every Direct3D application must have at least one device (though they usually only have one). </p><!--kg-card-begin: markdown--><h3>Textures</h3><!--kg-card-end: markdown--><p>Textures are a structured collection of elements known as texels. You can think of them as a series of cells in a Cartesian coordinate system. Normally we would denote the axes as X, Y and Z, but because those are already used in model/world/camera space (coordinates of objects in a virtual world) we refer to them as U, V and W. They refer to the length, width, and depth respectively (if applicable). Textures are most commonly seen in their two dimensional form as they&apos;re often used to detail a 3D model using a bitmap image. 
You can see this in the image down below: the first tank is untextured whereas the second tank is textured. (Image credit to Wikipedia.) It should be noted, though, that textures can contain more than just color information; for example, they&apos;re often used to store depth information.</p><figure class="kg-card kg-image-card"><a href="https://upload.wikimedia.org/wikipedia/commons/3/30/Texturedm1a2.png"><img src="https://brettsem.dev/content/images/2022/04/Texturedm1a2-1-.png" class="kg-image" alt loading="lazy" width="978" height="718" srcset="https://brettsem.dev/content/images/size/w600/2022/04/Texturedm1a2-1-.png 600w, https://brettsem.dev/content/images/2022/04/Texturedm1a2-1-.png 978w" sizes="(min-width: 720px) 720px"></a></figure><!--kg-card-begin: markdown--><h4>1D Textures</h4><!--kg-card-end: markdown--><p>As the name implies, this type of texture is one dimensional. It&apos;s probably easiest to think of these as an array of colours. Depending on the data format each texel can contain a number of different colour components. These textures are addressed using the U coordinate. These textures are encapsulated by the ID3D11Texture1D interface. </p><figure class="kg-card kg-image-card"><img src="https://brettsem.dev/content/images/2022/04/Texture1D-3.png" class="kg-image" alt loading="lazy" width="301" height="234"></figure><!--kg-card-begin: markdown--><h4>2D Textures</h4><!--kg-card-end: markdown--><p>This is a two dimensional texture that can represent a 2D bitmap image, much like a regular image. These are addressed by the U and V coordinates and are utilized in a process called <a href="https://en.wikipedia.org/wiki/UV_mapping">UV mapping</a>. These textures are encapsulated by the ID3D11Texture2D interface. 
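</p><p>Under the hood, a 2D texture&apos;s texels are typically laid out row by row in one linear allocation, so a pair of integer texel coordinates maps to a single index. A minimal sketch, assuming row-major storage:</p>

```cpp
#include <cstddef>

// Map integer texel coordinates (u, v) in a width-wide 2D texture
// to an index into its linear, row-major storage: skip v full rows,
// then step u texels into the current row.
std::size_t texelIndex(std::size_t u, std::size_t v, std::size_t width) {
    return v * width + u;
}
```

<p>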
</p><figure class="kg-card kg-image-card"><img src="https://brettsem.dev/content/images/2022/04/Texture2D-1.png" class="kg-image" alt loading="lazy" width="291" height="401"></figure><!--kg-card-begin: markdown--><h4>3D Textures</h4><!--kg-card-end: markdown--><p>You can visualize these textures as a cube that is addressed by the U, V, and W coordinates. These are frankly quite weird, but they can be used for really cool effects like volumetric smoke or volumetric light rays. These textures are encapsulated by the ID3D11Texture3D interface.</p><figure class="kg-card kg-image-card"><img src="https://brettsem.dev/content/images/2022/04/Texture1D.png" class="kg-image" alt loading="lazy" width="429" height="454"></figure><!--kg-card-begin: markdown--><h4>Mipmaps</h4><!--kg-card-end: markdown--><p>Mipmaps are progressively lower resolution textures that are calculated from a texture resource and stored along with it. Usually multiple mip &apos;levels&apos; exist, each progressively smaller than the last. Mipmaps are used to increase rendering speed and reduce aliasing artifacts. They&apos;re often used for far away objects in a scene since at longer ranges only so much detail can be seen on an object. You can see the effect of mipmapping in the image below. Notice the banding towards the top of the image. (Image credit to Wikipedia.) Each texture resource you create has the ability to store mipmaps of itself. 
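</p><p>The size of a full mip chain is easy to compute: keep halving the largest dimension until it reaches 1. A small sketch of that calculation:</p>

```cpp
#include <algorithm>
#include <cstdint>

// Number of levels in a full mip chain for a 2D texture:
// one level for the full-size texture, plus one for each halving
// of the largest dimension until it reaches 1. For example, a
// 1024x1024 texture has 11 levels (1024, 512, ..., 2, 1).
std::uint32_t mipLevelCount(std::uint32_t width, std::uint32_t height) {
    std::uint32_t levels = 1;
    for (std::uint32_t d = std::max(width, height); d > 1; d /= 2)
        ++levels;
    return levels;
}
```

<p>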
</p><figure class="kg-card kg-image-card"><a href="https://upload.wikimedia.org/wikipedia/commons/5/59/Mipmap_Aliasing_Comparison.png"><img src="https://brettsem.dev/content/images/2022/04/image.png" class="kg-image" alt loading="lazy" width="1920" height="540" srcset="https://brettsem.dev/content/images/size/w600/2022/04/image.png 600w, https://brettsem.dev/content/images/size/w1000/2022/04/image.png 1000w, https://brettsem.dev/content/images/size/w1600/2022/04/image.png 1600w, https://brettsem.dev/content/images/2022/04/image.png 1920w" sizes="(min-width: 720px) 720px"></a></figure><!--kg-card-begin: markdown--><h4>Texture Arrays</h4><!--kg-card-end: markdown--><p>Both ID3D11Texture1D and ID3D11Texture2D are capable of containing homogeneous arrays. By homogeneous, I mean that each texture of the array has the same data format, dimensions, and mip levels. </p><!--kg-card-begin: markdown--><h3>Buffers</h3><!--kg-card-end: markdown--><p>Buffers are an unstructured resource: a collection of typed data groups. They can store numerous forms of data including, but not limited to: positional vectors, texture coordinates, and indices. Because they&apos;re unstructured, they cannot contain mipmap levels. There are six different types of buffers in Direct3D 11, all of them encapsulated through the ID3D11Buffer interface. In this post we&apos;re going to focus only on the vertex, index, and constant buffers. The others are the structured buffer, the append and consume buffer, and the byte address buffer; these last three buffer types are a bit more advanced so we&apos;re going to skip over them for now.</p><!--kg-card-begin: markdown--><h4>Vertex Buffers</h4><!--kg-card-end: markdown--><p>Vertex buffers are buffers that are designed to hold per-vertex data. This data can vary wildly depending on what your pipeline configuration is expecting. For example, here is a vertex definition for the program that I will go over in the next post. 
</p><pre><code class="language-cpp">// XMFLOAT4 is simply four 32-bit floats packed into one structure
struct Vertex
{
    XMFLOAT4 position;
    XMFLOAT4 color;
};

Vertex triangle[] =
{
    // pos(x, y, z, 1)   color(r, g, b, a) 
    { XMFLOAT4( 1.0f, -1.0f, 0.0f, 1.0f ),  XMFLOAT4( 1.0f, 0.0f, 0.0f, 1.0f ) }, // Bottom right.
    { XMFLOAT4( -1.0f, -1.0f, 0.0f, 1.0f ), XMFLOAT4( 0.0f, 1.0f, 0.0f, 1.0f ) }, // Bottom left.
    { XMFLOAT4( 0.0f, 1.0f, 0.0f, 1.0f ),   XMFLOAT4( 0.0f, 0.0f, 1.0f, 1.0f ) }, // Top.
};</code></pre><p>Here you can see that each of my vertices is comprised of two four-float packs: one pack for the position of the vertex and one pack for the RGBA colour. The array specifies three vertices that make up a triangle. That array would be memory copied to the GPU upon creating the buffer. </p><!--kg-card-begin: markdown--><h4>Index Buffers</h4><!--kg-card-end: markdown--><p>Index buffers are related to vertex buffers: they store indices that determine which vertices make up a primitive. Indices are represented as an array of either 16-bit or 32-bit unsigned integers. Technically, you don&apos;t need to use index buffers at all, because there are draw calls that simply use the order of the vertices for creating primitives; however, this creates the problem that you need to duplicate vertex data in a model, as vertices can&apos;t be shared. Consider a cube: in total there are twelve triangle primitives that comprise it (two per face). If we used indexed rendering then we would only need eight vertices to represent the cube. But if we used vertex order based rendering we would need a total of 36 vertices to draw the cube, because each triangle primitive needs 3 vertices in order to be drawn. </p><!--kg-card-begin: markdown--><h4>Constant Buffers</h4><!--kg-card-end: markdown--><p>Constant buffers are unique in that they&apos;re designed to supply a shader program with constant data. A common use case of constant buffers is the transformational data used in vertex shaders. This would be comprised of a matrix or matrices that transform the vertices within a model to a new position.</p><!--kg-card-begin: markdown--><h3>Resource Usage and CPU Access</h3><!--kg-card-end: markdown--><p>Both textures and buffers have the ability to be read and written to. They also have mutability options as well. When creating buffers and textures there is a D3D11_USAGE enumerator field which dictates how the resource is to be used. 
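</p><p>The cube comparison from the index buffer section is worth making concrete. The sketch below mirrors the earlier Vertex layout with plain float arrays so it stays portable (XMFLOAT4 is likewise four floats), and tallies the memory cost of indexed versus non-indexed rendering:</p>

```cpp
#include <cstddef>
#include <cstdint>

// Same layout as the Vertex struct shown earlier: 8 floats = 32 bytes.
struct Vertex { float position[4]; float color[4]; };

// Indexed cube: 8 shared vertices plus 36 16-bit indices (12 triangles x 3).
constexpr std::size_t indexedBytes    = 8  * sizeof(Vertex) + 36 * sizeof(std::uint16_t);

// Non-indexed cube: 36 duplicated vertices, no index buffer at all.
constexpr std::size_t nonIndexedBytes = 36 * sizeof(Vertex);
```

<p>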
There are a total of four different values: D3D11_USAGE_DEFAULT, D3D11_USAGE_IMMUTABLE, D3D11_USAGE_DYNAMIC, and D3D11_USAGE_STAGING. If that field is set to default then that resource can only be accessed by the GPU for reads and writes. If it&apos;s set to immutable then that resource cannot be accessed by the CPU and the GPU only has read permissions; an immutable resource can only be initialized, it cannot be changed. For dynamic usage the CPU can write to the resource while the GPU can only read. This configuration is often used for constant buffers to update transformation and other per frame or per draw call data. The last configuration is staging, which allows the GPU to write while the CPU reads. This can be used to stream out data from the GPU.</p><p>We also have CPU Access Flags for resources. This is another enumerator field called D3D11_CPU_ACCESS_FLAG. There are two possible values that can be bitwise ORed together: write and read. The usage chosen for the resource determines which of these flags you can use.</p><!--kg-card-begin: markdown--><h3>Resource Views</h3><!--kg-card-end: markdown--><p>Resource views are designed to help the runtime determine how a resource is to be used. Because certain resources can be utilized in different locations around the pipeline, we must use a resource view to tell the runtime how we intend to use the resource. Different resource views allow different types of resources to be used. We have four types of resource views available to us: the Render Target View, Depth Stencil View, Shader Resource View, and the Unordered Access View. There are other resource binding types available which are typically used for resources that don&apos;t have ambiguous usages as they have a single purpose. This would include things like vertex buffers, index buffers, and constant buffers. 
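</p><p>Before going further into resource views, the usage rules from the previous section can be summarized as a small permission table. This is a toy model encoding the behaviour described above; it is not part of the D3D11 API, and the enum and function names are mine:</p>

```cpp
// Toy permission table for the four usage values described above.
// (Real D3D11 staging resources have a few more nuances; this
// encodes only the simplified rules given in the text.)
enum class Usage { Default, Immutable, Dynamic, Staging };

bool gpuCanWrite(Usage u) { return u == Usage::Default || u == Usage::Staging; }
bool gpuCanRead (Usage u) { return u != Usage::Staging; }
bool cpuCanWrite(Usage u) { return u == Usage::Dynamic; }
bool cpuCanRead (Usage u) { return u == Usage::Staging; }
```

<p>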
We will now look at the various resource views in further detail.</p><!--kg-card-begin: markdown--><h4>Render Target View</h4><!--kg-card-end: markdown--><p>A render target view is encapsulated by the ID3D11RenderTargetView interface. This view is bound to the output merger stage and it points to a texture resource on the GPU. This view allows the output merger to write the pixels that it has assembled into a texture. If this texture is one of the back buffers of the Swapchain (which will be discussed soon) then we can present that texture to the screen as our output frame. Additionally, we can use this texture in the pipeline and apply it to objects in the scene. By being able to render to a texture and utilize it later we can create effects like mirrors, see-through portals, in-game displays, mini maps, etc. This effect can be seen in the games Portal and Portal 2 by Valve Corporation. This effect is so delicious and moist.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://brettsem.dev/content/images/2022/04/20220410005611_1.jpg" class="kg-image" alt loading="lazy" width="2000" height="1125" srcset="https://brettsem.dev/content/images/size/w600/2022/04/20220410005611_1.jpg 600w, https://brettsem.dev/content/images/size/w1000/2022/04/20220410005611_1.jpg 1000w, https://brettsem.dev/content/images/size/w1600/2022/04/20220410005611_1.jpg 1600w, https://brettsem.dev/content/images/size/w2400/2022/04/20220410005611_1.jpg 2400w" sizes="(min-width: 720px) 720px"><figcaption>Portal 2</figcaption></figure><!--kg-card-begin: markdown--><h4>Depth Stencil View</h4><!--kg-card-end: markdown--><p>The Depth Stencil View is extremely similar to a Render Target View in that it binds a texture that is &apos;rendered&apos; to. The texture doesn&apos;t get filled with colour data but rather with depth information that can be used for occlusion of geometry among other things. 
(If two primitives occupy the same screen space we check the depth of each of them and draw the primitive that is closer.) This view is encapsulated by the ID3D11DepthStencilView interface.</p><!--kg-card-begin: markdown--><h4>Shader Resource View</h4><!--kg-card-end: markdown--><p>The Shader Resource View, or SRV for short, is used to bind textures and buffers to the shader stages of the pipeline for read only access. For example, if we wanted to texture a model we would have to create a shader resource view that points to that texture resource. We would then bind that view to the pixel shader and read data from that texture. The interface for the shader resource view is ID3D11ShaderResourceView.</p><!--kg-card-begin: markdown--><h4>Unordered Access View</h4><!--kg-card-end: markdown--><p>Unordered Access Views (UAVs) are similar to SRVs in that they bind resources to shaders, but more specifically they bind to pixel shaders and compute shaders. (A compute shader is a general purpose shader program for GPGPU operations.) The difference lies in the fact that UAVs allow for random read and write access to the resource. UAVs are encapsulated by the ID3D11UnorderedAccessView interface.</p><!--kg-card-begin: markdown--><h2>The Swapchain</h2><!--kg-card-end: markdown--><p>The Swapchain is a relatively simple concept: it&apos;s a series of buffers (textures) that are written to and presented to the window. While one buffer is being presented we&apos;re writing to the second buffer, known as the back buffer. Originally, games and other applications only used a single buffer; this had an immersion breaking problem though. The buffer could be drawn to the screen as it was being written to by the GPU. Unfortunately, this meant that the user could see the scene being drawn to the screen piece by piece.</p><p>Currently we use at least two buffers which are constantly swapping, hence the name swapchain. 
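</p><p>The buffer rotation can be modeled in a few lines. This is a toy model of the flipping behaviour only, not the DXGI API, and the names are mine:</p>

```cpp
#include <cstddef>

// Toy model of swapchain buffer rotation: present() flips which
// buffer is on screen and which one we draw into next.
struct SwapChainModel {
    std::size_t bufferCount;
    std::size_t front = 0;  // buffer currently displayed

    // The buffer we are allowed to draw into.
    std::size_t back() const { return (front + 1) % bufferCount; }

    // Swap roles: the freshly drawn back buffer goes on screen.
    void present() { front = back(); }
};
```

<p>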
We&apos;re never writing to the buffer that is currently being displayed on the screen. In Direct3D swapchains are encapsulated by the IDXGISwapChain interface. To draw to the back buffer of the swapchain we utilize a render target view that points to the texture resource of the back buffer. When we&apos;re done rendering we simply call the IDXGISwapChain::Present() method which swaps the buffers.</p><p>Although you don&apos;t technically need a Swapchain for a Direct3D application, you do need to create one if you intend to create any sort of interactive or real-time application. About the only time you don&apos;t utilize a Swapchain is if you intend to never display anything but rather save images to the disk. Normally this is used for generating cinematics. Cinematics in video games are typically created using applications like Autodesk Maya or Autodesk 3DS Max, or occasionally the open source application Blender. These applications often use entirely ray-traced rendering, which produces absolutely stunning results but at a very slow pace (potentially minutes, hours or even days for a single frame). Game developers are starting to utilize their game engines&apos; rendering capabilities to produce cinematics that are closer to what their games look like. In this manner you can also crank the render fidelity since real-time/interactive frame rates are not necessary.</p><!--kg-card-begin: markdown--><h2>Conclusion</h2><!--kg-card-end: markdown--><p>In this post we have discussed the various layers that comprise an application that uses Direct3D. We&apos;ve seen that we have two APIs at our disposal: DXGI and Direct3D 11. We looked at the graphics pipeline and the stages that comprise it. We examined the two primary resource categories, textures and buffers. We discussed how to access resources with resource views. Finally we discussed what the Swapchain is and why we need it to present to the screen. 
</p><p>With this information you can now comprehend how data is allocated, accessed and processed in a rendering application. This includes how each stage in the graphics pipeline affects the outputted image. In my next post we will examine how to program an application that renders a triangle to the screen.</p><!--kg-card-begin: markdown--><h2>Reference Material</h2><!--kg-card-end: markdown--><p>Most of the content I have learned and subsequently presented to you is from three different sources. The first is <em>Practical Rendering and Computation with Direct3D 11 by Jason Zink, Matt Pettineo, and Jack Hoxley</em> (ISBN-13: 978-1568817200). The second source is <em>Introduction to 3D Game Programming with DirectX 11 by Frank D. Luna</em> (ISBN-13: 978-1936420223). The last source is the actual docs for Direct3D and DirectX found on <a href="https://docs.microsoft.com/en-us/windows/win32/direct3d11/atoc-dx-graphics-direct3d-11">MSDN</a>. I&apos;ve included many links to MSDN throughout the post so that you can find more in-depth information than what I provide.</p>]]></content:encoded></item><item><title><![CDATA[Creating a Win32 Window for Direct3D 11 Rendering]]></title><description><![CDATA[<p>What is the first thing you think of when you think of computer graphics and rendering? If you said video games, you wouldn&apos;t be alone as it&apos;s the most prominent example. 
We&apos;ve become accustomed to immersing ourselves in fictional or simulated environments as a source of</p>]]></description><link>https://brettsem.dev/getting-started-with-d3d11-window/</link><guid isPermaLink="false">620d956ed984b50f87822ae7</guid><category><![CDATA[Win32]]></category><category><![CDATA[Windows]]></category><category><![CDATA[Direct3D]]></category><category><![CDATA[D3D11]]></category><category><![CDATA[DirectX]]></category><category><![CDATA[C++]]></category><dc:creator><![CDATA[Brett Semmler]]></dc:creator><pubDate>Thu, 17 Feb 2022 20:47:08 GMT</pubDate><media:content url="https://brettsem.dev/content/images/2022/02/Win32Window.png" medium="image"/><content:encoded><![CDATA[<img src="https://brettsem.dev/content/images/2022/02/Win32Window.png" alt="Creating a Win32 Window for Direct3D 11 Rendering"><p>What is the first thing you think of when you think of computer graphics and rendering? If you said video games, you wouldn&apos;t be alone as it&apos;s the most prominent example. We&apos;ve become accustomed to immersing ourselves in fictional or simulated environments as a source of entertainment to escape from the rigors of life. Game developers and graphics card manufacturers have always been trying to push the envelope in terms of graphical fidelity. We have seen many demos from the likes of Epic and Nvidia featuring jaw-dropping rendering techniques such as real-time ray-tracing, like the video below. </p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/BBhr9oddwR4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>It wasn&apos;t always this way, though; one of the first games ever made used an analogue oscilloscope to render and display its graphics. We quickly moved to a mixture of display adapters and CPUs to display 2D games like those seen on the Atari 2600. 
Eventually we got 3D titles like Wolfenstein 3D, Quake and Doom, as well as 3D Graphics Processing Units (GPUs) to crunch the computations behind them.</p><p>With these GPUs we&apos;re able to create and interact with these lifelike scenes, but how do we interact with a GPU? That question is answered through APIs such as Direct3D 11 and Direct3D 12 from Microsoft or OpenGL and Vulkan from the Khronos Group. These APIs accomplish the same thing: to configure and program the GPU to process a data set and display the results of those computations to the screen.</p><p>Before rendering anything to a screen, we first need to open a window. I intend to create a series of posts that cover the first steps of getting a rendering application up and running. This will be the first post of that series, where we cover how to create a window for a rendering application.</p><h2 id="intended-audience">Intended Audience</h2><p>Before continuing on with this post, you should know some prerequisites. For my classmates, I will give a strong recommendation that you&apos;re in the 4th semester. This post will be utilizing advanced topics that are not taught in the 1st, 2nd, or 3rd semesters. We will be using C++, a far less forgiving language than most. Still, it is a necessary evil when working with graphics APIs. &#xA0;The specific topics that you should have some intermediate knowledge of and experience with are the following:</p><ul><li>Programming and software development with a C style language. Preferably C or C++ but not strictly required for understanding concepts behind utilizing the APIs.</li><li>Object-Oriented Design and Programming</li><li>Pointers and memory management</li><li>Vector and Linear Algebra (At least a vague idea)</li></ul><p>Additionally, it would be best to have a Windows 10 or higher machine with a DirectX 11.1 or higher video card that supports feature level 11_1 or higher. 
We will be using <a href="https://visualstudio.microsoft.com/vs/community/">Visual Studio</a> 2019 Community Edition (2017 and 2022 should work with no issues). </p><p>If you&apos;re unsure of the version and feature level that your card supports, you can use the dxdiag tool bundled with Windows to determine that. Run the tool, click the first Display tab, and in the right-hand pane it should tell you the Direct3D (DDI) version and the Feature Levels. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://brettsem.dev/content/images/2022/02/image.png" class="kg-image" alt="Creating a Win32 Window for Direct3D 11 Rendering" loading="lazy" width="724" height="527" srcset="https://brettsem.dev/content/images/size/w600/2022/02/image.png 600w, https://brettsem.dev/content/images/2022/02/image.png 724w" sizes="(min-width: 720px) 720px"><figcaption>dxdiag tool showing the relevant information of the GPU</figcaption></figure><!--kg-card-begin: markdown--><h2>Outcomes</h2><!--kg-card-end: markdown--><p>By the end of this post, I hope to thoroughly explain the steps and concepts behind opening a Win32 window. Additionally, the window created by the end will be compatible with Direct3D.</p><!--kg-card-begin: markdown--><h2>Setting Up Visual Studio</h2><!--kg-card-end: markdown--><p>The first step to displaying a window is to create a project, so let&apos;s get Visual Studio fired up. 
Select Create New Project, and from there, we will select the &quot;Windows Desktop Application&quot; template and then press Next.</p><figure class="kg-card kg-image-card"><img src="https://brettsem.dev/content/images/2022/02/image-1.png" class="kg-image" alt="Creating a Win32 Window for Direct3D 11 Rendering" loading="lazy" width="1024" height="680" srcset="https://brettsem.dev/content/images/size/w600/2022/02/image-1.png 600w, https://brettsem.dev/content/images/size/w1000/2022/02/image-1.png 1000w, https://brettsem.dev/content/images/2022/02/image-1.png 1024w" sizes="(min-width: 720px) 720px"></figure><p>Give your project a name, select where you would like to place it, and press Create.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://brettsem.dev/content/images/2022/02/image-2.png" class="kg-image" alt="Creating a Win32 Window for Direct3D 11 Rendering" loading="lazy" width="1024" height="680" srcset="https://brettsem.dev/content/images/size/w600/2022/02/image-2.png 600w, https://brettsem.dev/content/images/size/w1000/2022/02/image-2.png 1000w, https://brettsem.dev/content/images/2022/02/image-2.png 1024w" sizes="(min-width: 720px) 720px"><figcaption>Project configuration dialog</figcaption></figure><p>Next, we&apos;ll clean our project of any of the pre-generated files. You will need to navigate to the Solution Explorer to delete any .h and .cpp files and any files in the resource folder (filter). &#xA0;Afterwards, your project and solution should look like so.</p><figure class="kg-card kg-image-card"><img src="https://brettsem.dev/content/images/2022/02/image-4.png" class="kg-image" alt="Creating a Win32 Window for Direct3D 11 Rendering" loading="lazy" width="311" height="220"></figure><p>Now, we add our main .cpp file, which contains the entry point to your application. Right click on the source folder -&gt; Add -&gt; New Item -&gt; C++ File (.cpp). From there, give it the name of main.cpp. 
</p><figure class="kg-card kg-image-card"><img src="https://brettsem.dev/content/images/2022/02/image-7.png" class="kg-image" alt="Creating a Win32 Window for Direct3D 11 Rendering" loading="lazy" width="955" height="670" srcset="https://brettsem.dev/content/images/size/w600/2022/02/image-7.png 600w, https://brettsem.dev/content/images/2022/02/image-7.png 955w" sizes="(min-width: 720px) 720px"></figure><p>If you did everything correctly, there should be a blank document displayed to you and a file sitting within the source files folder. (Filter is the VS terminology for the folders you see in the Solution Explorer, since they&apos;re not actually folders on the file system but a filtering tool within VS.)</p><p>We&apos;re not entirely done yet, though; since we&apos;re preparing this project for D3D11, we still need to configure the linker to use the additional dependencies that Direct3D needs. To do this, we will right click on our project (the one with the ++ icon) and select Properties. Now, select All Configurations and All Platforms at the top. In the Configuration Properties list on the left, you will choose Linker and then All Options. You will then click on Additional Dependencies and then click on the drop-down menu on its right and select &lt;edit&gt;.</p><figure class="kg-card kg-image-card"><img src="https://brettsem.dev/content/images/2022/02/image-8.png" class="kg-image" alt="Creating a Win32 Window for Direct3D 11 Rendering" loading="lazy" width="786" height="544" srcset="https://brettsem.dev/content/images/size/w600/2022/02/image-8.png 600w, https://brettsem.dev/content/images/2022/02/image-8.png 786w" sizes="(min-width: 720px) 720px"></figure><p>Then copy the three lines below, add them to the Additional Dependencies textbox, and press OK.</p><pre><code>d3d11.lib
dxgi.lib
d3dcompiler.lib</code></pre><p>After you&apos;re done, press Apply and then OK. If you were successful, you should see the line below in bold text in the Additional Dependencies field.</p><pre><code>d3d11.lib;dxgi.lib;d3dcompiler.lib;%(AdditionalDependencies)</code></pre><p>One more step is to set our IDE to use an x64 (64-bit) build. The reason for this is so that we don&apos;t have to worry about aligning our data to a 16 byte boundary so that we can use SSE intrinsics in the math library. At the top of your IDE, you should see a dropdown box with the value of x86. Click that box and select x64. </p><figure class="kg-card kg-image-card"><img src="https://brettsem.dev/content/images/2022/02/image-9.png" class="kg-image" alt="Creating a Win32 Window for Direct3D 11 Rendering" loading="lazy" width="641" height="109" srcset="https://brettsem.dev/content/images/size/w600/2022/02/image-9.png 600w, https://brettsem.dev/content/images/2022/02/image-9.png 641w"></figure><p>We&apos;re now ready to begin coding.</p><!--kg-card-begin: markdown--><h2>Creating a Window</h2><!--kg-card-end: markdown--><p>Before we start coding, I will state now that this first post will be using very C-like C++ code written in a functional paradigm. This is so that we can abstract away the complexities that idiomatic C++ and OO introduce and focus simply on what&apos;s important. With that out of the way, let&apos;s write some code! </p><p>In the previous steps we told the linker which libraries we are using, so we now must add the includes to our main.cpp so that we have the code declarations for those libraries. &#xA0;</p><pre><code class="language-cpp">#define WIN32_LEAN_AND_MEAN // Excludes rarely used libraries in Windows.h
#include &lt;Windows.h&gt;     // All of the Win32 stuff.
#include &lt;d3d11_1.h&gt;     // Direct3D library
#include &lt;dxgi.h&gt;        // DirectX Graphics Infrastructure 
#include &lt;d3dcompiler.h&gt; // Shader compiler
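// As an aside: instead of listing the .lib files in the project's linker
// settings as we did above, MSVC can also pull them in directly from source.
// A sketch of an equivalent setup (commented out since we already configured
// the linker through the project properties):
// #pragma comment( lib, &quot;d3d11.lib&quot; )
// #pragma comment( lib, &quot;dxgi.lib&quot; )
// #pragma comment( lib, &quot;d3dcompiler.lib&quot; )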
#include &lt;DirectXMath.h&gt; // SIMD math library utilizing SSE</code></pre><p>We will next declare some globals, these include a HWND which is a Win32 window handle and we will use it to keep a reference to our window. We then have the attributes of the window, these include the position of the window on the desktop and its size. </p><pre><code class="language-cpp">HWND gMainWnd = 0;      // Handle to our window
int gXPos     = 0;      // Window X Position
int gYPos     = 0;      // Window Y Position
int gWidth    = 800;    // Window width
int gHeight   = 600;    // Window height</code></pre><p>You should note that the position, width and height are in pixels. The coordinate system used for the desktop and windows begins at the top left hand of the display. The x+ direction moves to the right of the display while the y+ direction moves to the bottom of the display. With the graphic below you can see this clearly illustrated.</p><figure class="kg-card kg-image-card"><img src="https://brettsem.dev/content/images/2022/02/Win32-Coordinates-1.png" class="kg-image" alt="Creating a Win32 Window for Direct3D 11 Rendering" loading="lazy" width="197" height="196"></figure><!--kg-card-begin: markdown--><h3>WinMain</h3><!--kg-card-end: markdown--><pre><code class="language-cpp">bool InitWindow( HINSTANCE instanceHandle, int show );
LRESULT CALLBACK WndProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam );
void Run();

int WINAPI WinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR pCmdLine, int nShowCmd )
{
    // Try to create the window. If it fails exit the program.
    if ( !InitWindow( hInstance, nShowCmd ) )
    {
        return 1; // Return Error and exit program.
    }

    // Begin executing the event and render loop.
    Run();

    return 0;
}</code></pre><p>We will now declare our function prototypes so that we don&apos;t have to define our functions before main. If you read the program from top to bottom, it will read in order of execution. As we go further into the source code, we will examine the inner workings of these functions. We now move on to our <a href="https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-winmain">WinMain</a>, a modified C++ entry point unique to Windows. This is the entry point of our application; it takes four arguments. The first is the HINSTANCE hInstance, which is a handle to our application in the OS. Next is the same type, but for a previous instance; this argument is a legacy leftover that is no longer used. Next we have LPSTR pCmdLine, a <em>long pointer to string</em>, which is just a type definition of a C string (char*). This is where command line arguments are passed into the entry point. Last is the int nShowCmd, which states how our program should initially open, e.g. window showing, window hidden, etc. Inside of our main we will try to create a window and then call Run(), which takes us into a loop for handling events that are sent to the window. </p><!--kg-card-begin: markdown--><h3>The Window Procedure (WndProc)</h3><!--kg-card-end: markdown--><pre><code class="language-cpp">LRESULT CALLBACK WndProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam )</code></pre><p>Before we look at creating the window, a few concepts need to be explained first. All Win32 windows are event driven, meaning they receive <a href="https://docs.microsoft.com/en-us/windows/win32/learnwin32/window-messages">messages</a> from the OS. There are lots of different messages that can be sent to the window, such as key presses, mouse movement, window movement, window resizing, window destruction, etc. As developers we&apos;re responsible for receiving and handling these messages. 
The way we handle these messages is through the Window Procedure or <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/nc-winuser-wndproc">WndProc </a>as it&apos;s commonly called. The WndProc is capable of handling multiple windows and differentiating between them. The WndProc takes in a window handle (HWND), the message type (UINT msg parameter) as well as the WPARAM and LPARAM which store the details of the message. WPARAM and LPARAM can hold various things, and messages utilize them differently. Hence, you have to read the MSDN docs to know what a message holds. The last part of the signature of a WndProc is the return type, an LRESULT, a simple integer value that&apos;s returned based on the message. </p><pre><code class="language-cpp">LRESULT CALLBACK WndProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam )
{
    switch ( msg )
    {
    case WM_LBUTTONDOWN: // Left mouse click
        MessageBox( 0, L&quot;Hello, World&quot;, L&quot;Hello&quot;, MB_OK );
        return 0;
    case WM_KEYDOWN: // keypress (down position)
        if ( wParam == VK_ESCAPE ) // Escape Key
        {
            DestroyWindow( hWnd );
        }
        return 0;
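    case WM_SIZE: // Optional addition (not in the original sample): track resizes.
        // For WM_SIZE, the low and high words of lParam hold the new width
        // and height of the client area in pixels.
        gWidth  = LOWORD( lParam );
        gHeight = HIWORD( lParam );
        return 0;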
    case WM_DESTROY: // Window is being destroyed by us, the user, or the OS
        // DestroyWindow has already been called by the time we receive this
        // message, so we only clear our handle and post the quit message.
        gMainWnd = 0;
        PostQuitMessage( 0 ); // Send the quit message
        return 0;
    default:
        return DefWindowProc( hWnd, msg, wParam, lParam ); // Send messages back to the OS.
        break;
    }
}</code></pre><p>The guts of a WndProc is a simple switch statement on the message parameter, with each case being a message type. Here we are handling the WM_LBUTTONDOWN message, a left mouse click, and WM_KEYDOWN, a keypress message that fires when a key is in the down state. VK_ESCAPE is a preprocessor definition for the integer value of the virtual escape key code. (<a href="https://docs.microsoft.com/en-us/windows/win32/inputdev/virtual-key-codes">Virtual keys</a> represent real keys.) Next, we have the WM_DESTROY message, which we receive when we call <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-destroywindow">DestroyWindow</a>(), or when the user or OS closes the window. After the window is destroyed, we call <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-postquitmessage">PostQuitMessage</a>(), which tells the OS that we are terminating the application. It takes the exit code that we wish to send, such as 0 for success or 1 for failure. Since there are numerous messages, many of which we don&apos;t care to process, they need to go somewhere. The solution is to send those messages back to the OS through the <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-defwindowprocw">DefWindowProc</a>() function (DefaultWindowProc). Simply pass the parameters to it and call it a day. You should note that the WndProc must be visible in the .cpp file where the WNDCLASS discussed next is filled out, since we take its address there. </p><!--kg-card-begin: markdown--><h3>The Window Class</h3><!--kg-card-end: markdown--><pre><code class="language-cpp">bool InitWindow( HINSTANCE instanceHandle, int show ) 
{
    WNDCLASS wc; // Datastructure that holds the details of the windowclass which describes our window.
    ZeroMemory( &amp;wc, sizeof( WNDCLASS ) ); // Initialize the structure.

    wc.style            = CS_HREDRAW | CS_VREDRAW;                  // Class styling. Allows for additional behaviours of the window.
    wc.lpfnWndProc      = WndProc;                                  // A function pointer to the Window Procedure.
    wc.cbClsExtra       = 0;                                        // Extra bytes to allocate to the window class structure.
    wc.cbWndExtra       = 0;                                        // Extra bytes to allocate to the window instance.
    wc.hInstance        = instanceHandle;                           // The module handle of this application.
    wc.hIcon            = LoadIcon( 0, IDI_APPLICATION );           // Icon of the window.
    wc.hCursor          = LoadCursor( 0, IDC_ARROW );               // Cursor used by the window.
    wc.hbrBackground    = ( HBRUSH ) GetStockObject( WHITE_BRUSH ); // Paints the window white.
    wc.lpszMenuName     = 0;                                        // Name of an associated menu.
    wc.lpszClassName    = L&quot;D3DWindowClass&quot;;                        // Name of the window class this structure will become</code></pre><p>The InitWindow() function is what we&apos;re using to create our window. The process works by filling out a <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/ns-winuser-wndclassa">WNDCLASS </a>structure (or <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/ns-winuser-wndclassexa">WNDCLASSEX</a> if you want more granularity). This structure is how we define the various attributes of a window. &#xA0;We have fields such as style, which can be used to tell how the window is drawn under certain conditions, such as CS_HREDRAW and CS_VREDRAW, which cause the window to be &apos;painted&apos; whenever its width or height changes. There are other effects/styles that can be combined with bitwise OR as well. There is a field for us to pass a function pointer to the Window Procedure. The next two fields we don&apos;t utilize in this post; however, you should be aware of them as they&apos;re extremely useful. They are for allocating extra memory to the class (ClsExtra) or the window instance (WndExtra). </p><p> These fields for extra bytes can be particularly useful when you&apos;re utilizing these windows in an OO manner, since with these extra bytes we can store a pointer to an object that contains a WndProc as one of its methods. This allows us to define a WndProc as a member method rather than a local function. We set up a local WndProc to simply forward the events to the OO WndProc like so. </p><pre><code class="language-cpp">LRESULT CALLBACK InternalWindowProc( HWND hwnd, UINT msg, WPARAM wparam, LPARAM lparam )
{
	LONG_PTR ObjPtr = GetWindowLongPtr(hwnd, 0);

	if (ObjPtr == 0) {
        return( DefWindowProc( hwnd, msg, wparam, lparam ) );
	} else {
	    return( ((MyObject*)ObjPtr)-&gt;WndProc(hwnd, msg, wparam, lparam) );
	}
}</code></pre><pre><code class="language-cpp">	class MyObject // Matches the cast in InternalWindowProc above
	{
	public:
		// Some methods
        // Our WndProc
		virtual LRESULT CALLBACK WndProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam );
		// Some data members.
	};</code></pre><p>We can store the above object inside a window instance (WndExtra). The first argument is the handle (HWND) to the window, then the byte index (offset) into the allocated memory region, and finally a pointer (cast to LONG_PTR) to your object that implements the WndProc function.</p><pre><code class="language-cpp">SetWindowLongPtr( windowHandle, 0, (LONG_PTR)( procObj ) );</code></pre><p>We&apos;re not utilizing this technique (yet) in our example, but it is good to know that it exists.</p><p>Next we have that all too familiar HINSTANCE which, if you recall, is the handle to our application instance. The next two fields are the Icon and Cursor that we will use in the window. We&apos;re loading the OS defaults for our window. The next field is the hbrBackground, which tells the OS how the window background should be drawn, such as its color or other effects. </p><p>lpszMenuName is used if we wish to associate a menu with our window. lpszClassName is an essential field since it&apos;s the name for this WNDCLASS. It&apos;s one of the parameters that we pass to the <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-createwindowa">CreateWindow</a>() function. This is necessary because the WNDCLASS acts as a blueprint for a window. The class can be instantiated multiple times for multiple windows.</p><p>Now that our WNDCLASS is filled out, we have to register the class with the OS. This is done through the <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-registerclassa">RegisterClass</a>() function, which takes a pointer to a WNDCLASS. It returns a BOOL (Windows integer typedef) result that we can use to see if it succeeded or failed. If it failed to register the class, we can call <a href="https://docs.microsoft.com/en-us/windows/win32/api/errhandlingapi/nf-errhandlingapi-getlasterror">GetLastError</a>() to get the error code that corresponds to the issue. 
The list of error codes can be seen <a href="https://docs.microsoft.com/en-us/windows/win32/debug/system-error-codes">here</a>.</p><pre><code class="language-cpp">// Check if the window class failed to register. If it did the function will be a false value.
    // In that case we will print the failure and the error code associated with it.
    if ( !RegisterClass( &amp;wc ) ) 
    {
        // Adding an integer to a string literal is pointer arithmetic, not
        // concatenation, so format the error code into a buffer first.
        wchar_t errMsg[64];
        wsprintf( errMsg, L&quot;RegisterClass FAILED. Error code: %lu&quot;, GetLastError() );
        MessageBox( 0, errMsg, 0, 0 );
        return false;
    }</code></pre><!--kg-card-begin: markdown--><h3>Create Window</h3><!--kg-card-end: markdown--><pre><code class="language-cpp">	gMainWnd = CreateWindow( 
        L&quot;D3DWindowClass&quot;,          // Which window class do we want to instantiate.
        L&quot;Hello, World&quot;,            // title of our window.                         
        WS_OVERLAPPED | WS_SYSMENU, // window style. We&apos;re specifying a window with a title bar and a thin border                     
        gXPos, gYPos,               // Starting position of the window in pixel coordinates.
        gWidth, gHeight,            // Starting size of the window in pixels.
        0,                          // A handle to the parent.
        0,                          // A handle to a menu
        instanceHandle,             // A handle to the instance of this application.
        0 );                        // Extra creation parameters.

    // Check if the CreateWindow function failed. If it did the window handle will be zero.
    // In that case we will print the failure and the error code associated with it.
    if ( gMainWnd == 0 ) 	
    {
        // Adding an integer to a string literal is pointer arithmetic, not
        // concatenation, so format the error code into a buffer first.
        wchar_t errMsg[64];
        wsprintf( errMsg, L&quot;CreateWindow FAILED. Error code: %lu&quot;, GetLastError() );
        MessageBox( 0, errMsg, 0, 0 );
        return false;
    }

    // Display the window and update that state.
    ShowWindow( gMainWnd, show );
    UpdateWindow( gMainWnd );

    return true;</code></pre><p>Here we&apos;re calling the <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-createwindowa">CreateWindow</a>() function, which takes the name of the class from which we wish to create a window. It also takes a string for the window caption, which is the text that appears at the top of a window. Here we&apos;re specifying the style of our window, which tells the OS how it should look. These <a href="https://docs.microsoft.com/en-us/windows/win32/winmsg/window-styles">styles </a>can be bitwise ORed together to combine them. Some styles like WS_OVERLAPPED already contain multiple styles. We also specify the position that the window should start at with gXPos and gYPos, as well as the width and height. The next parameters are handles to a parent window, a menu, and our application instance. The last parameter is for extra creation parameters, which you can read about <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/ns-winuser-createstructa">here</a>.</p><p>The return value of CreateWindow() is an HWND; however, this function can fail. Failure is signified by a null (zero value) HWND. If it fails, we simply open a message box to state the failure; additionally, we use GetLastError() again to retrieve the failure code. </p><p>If CreateWindow() was successful, we will now display the window with <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-showwindow">ShowWindow</a>(). We will also call <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-updatewindow">UpdateWindow</a>() to update the client area of the window.</p><!--kg-card-begin: markdown--><h3>The Message Pump (Loop)</h3><!--kg-card-end: markdown--><pre><code class="language-cpp">void Run()
{
    MSG msg = { 0 };

    while ( true )
    {
        // Check if messages are sitting in the queue. 
        if ( PeekMessage( &amp;msg, 0, 0, 0, PM_REMOVE ) )
        {
            // Translate virtual-key messages into character messages.
            TranslateMessage( &amp;msg );

            // Send the message to the WndProc function.
            DispatchMessage( &amp;msg );

            // If we receive the quit message then it&apos;s time to break out of 
            // the loop and end the application.
            if ( msg.message == WM_QUIT )
            {
                break;
            }
        } else
        {
            // Here is where we will put our loop logic for rendering.
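            // For example, a Direct3D frame will eventually look something
            // like this (hypothetical names; the device, context and swap
            // chain are created when we initialize Direct3D in a later post):
            //   gContext-&gt;ClearRenderTargetView( gRenderTargetView, clearColor );
            //   // ... issue draw calls ...
            //   gSwapChain-&gt;Present( 1, 0 );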
        }
    }
}</code></pre><p>So we can create a window and handle the messages sent to that window, but how do we receive those messages? I introduce to you the message pump. Here we&apos;re using <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-peekmessagea">PeekMessage</a>() to see if there are any messages on the queue. If there are, it removes the next one, stores it inside the msg structure, and returns true. This then causes us to translate and dispatch that message to the WndProc. &#xA0;If there are no messages, we will process our rendering loop. For PeekMessage() we pass the parameters of a MSG struct (which receives the actual message), several null values, and a flag instructing the function to remove messages from the queue as they are retrieved. There are some additional parameters that can be used for filtering; the first of these says which window you want to receive messages from. The other two parameters are for specifying a min and max for the messages you want to receive.</p><p>Another message function that performs a similar task is <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-getmessage">GetMessage</a>(). Unfortunately, this function isn&apos;t ideal for real-time applications since it blocks until it receives a message. </p><!--kg-card-begin: markdown--><h2>Conclusion</h2><!--kg-card-end: markdown--><p>If you successfully implemented the above code, you will see the same window below when you press F5. This is all it takes to create a window using the Win32 libraries. Unfortunately, this is just the tip of the iceberg for the Win32 libraries. 
You could probably spend months or possibly years learning this library in its entirety.</p><figure class="kg-card kg-image-card"><img src="https://brettsem.dev/content/images/2022/02/image-10.png" class="kg-image" alt="Creating a Win32 Window for Direct3D 11 Rendering" loading="lazy" width="786" height="593" srcset="https://brettsem.dev/content/images/size/w600/2022/02/image-10.png 600w, https://brettsem.dev/content/images/2022/02/image-10.png 786w" sizes="(min-width: 720px) 720px"></figure><p>If you wish to learn more about various aspects of Win32, I suggest going to MSDN, which hosts all of Microsoft&apos;s documentation. Additionally, I have provided links for all of the functions and structures used in this demo. I highly suggest reading through them to figure out all of the options available to you. As an exercise, I recommend trying to configure the window to run in borderless mode.</p><p>The link to the code on GitHub can be found <a href="https://github.com/BSemmler/D3D11_GettingStarted/tree/Win32WindowCreation">here</a>.</p>]]></content:encoded></item></channel></rss>