In this blog, we continue our journey of abstraction: we will make the graphics system completely platform-independent, and we will discuss how it gives developers a simple interface for drawing through the Graphics System.
As I mentioned in the last blog, when creating an interface it is best to start by understanding how it will be used and what kind of usage is expected of the system. This helps us find the differences between the two platforms and decide how to shape the interface. We already know how the graphics system is used: we initialize it, render each frame, and clean up. But each graphics API needs different things in each of those phases, which is why we had Graphics.d3d.cpp and Graphics.gl.cpp. Now we want to abstract away these differences and have a single Graphics.cpp that provides the same behaviour on both platforms. And not only these two: it is better to design an interface that can support any graphics API, so that adding support for a new one is easy.
So I created a new GraphicsAPI interface that provides the methods necessary to set up any graphics API with our Graphics System. I added a new GraphicsAPI.h file containing the interface, along with platform-specific GraphicsAPI.d3d.cpp and GraphicsAPI.gl.cpp implementations. Now, to support a new graphics API, all we have to do is create the platform-specific implementation for that API. I also added an enum that identifies the graphics API currently in use.
One of the things the GraphicsAPI interface provides is a way to set the back buffer's clear color. Setting it is now very simple:
//Clearing Image Buffer
GraphicsAPI::SetClearColor(Red,Blue,Green);
The following is the result of the above call. Just to make it fun, I am updating the Red, Blue, and Green values every frame to produce this animation effect.
Just as we gave the user the ability to supply the clear color through the GraphicsAPI interface, we need to update our Effect and Mesh interfaces to accept user input. But first we have to decide what data to accept from the user and what to store. The only input the Effect needs is the two shader file paths; they are used to initialize the effect, so we don't need to store them.
s_Effect.InitializeEffect("data/Shaders/Vertex/standard.shader","data/Shaders/Fragment/MyGame.shader")
The data we store inside the Effect is
eae6320::Graphics::cShader* m_vertexShader = nullptr;
eae6320::Graphics::cShader* m_fragmentShader = nullptr;
eae6320::Graphics::cRenderState* m_renderState = nullptr;
#if defined(EAE6320_PLATFORM_GL)
GLuint m_programId = 0;
#endif
So the size of the Effect is 24 bytes in D3D and 16 bytes in OpenGL. The difference comes from the size of a pointer on the two targets: the D3D build is x64, where a pointer is 8 bytes, while the OpenGL build is x86, where a pointer is 4 bytes. In D3D we have 3 pointers of 8 bytes each, for a total of 24 bytes. In OpenGL the 3 pointers take 12 bytes, and the extra member that D3D doesn't have, GLuint m_programId (a typedef of unsigned int, 4 bytes), brings the total to 16 bytes. It is hard to make the Effect any smaller: all the members are required, so this is the smallest possible size for the Effect.
The data we require to initialize a Mesh is the vertex data and count, and the index data and count. The only input we store is the index count, since it is needed when rendering the mesh.
s_Triangle.InitializeMesh(VertexCount, vertexData, IndexCount, indexData)
The data we store in the Mesh is
#if defined(EAE6320_PLATFORM_GL)
GLuint VertexBufferId = 0;
GLuint IndexBufferId = 0;
GLuint VertexArrayId = 0;
#elif defined(EAE6320_PLATFORM_D3D)
eae6320::Graphics::cVertexFormat* VertexFormat = nullptr;
ID3D11Buffer* VertexBuffer = nullptr;
ID3D11Buffer* IndexBuffer = nullptr;
#endif
uint16_t IndexCount = 0;
The size of the Mesh is 32 bytes in D3D and 16 bytes in OpenGL. Again, the major difference is that pointers take 8 bytes on the x64 platform, which is why D3D is bigger, whereas a GLuint only takes 4 bytes. But wait, there is something else: in OpenGL each GLuint takes 4 bytes and there is an additional uint16_t of 2 bytes, so shouldn't the size be 14 bytes? The same goes for D3D: 3 pointers at 8 bytes each plus 2 bytes should be 26 bytes. Why are both bigger than that?
This is where struct alignment and padding come into play. In general, every type except char has an alignment requirement: objects don't start at arbitrary addresses. A 2-byte short must start on an even address, a 4-byte int or float at an address divisible by 4, and so on. Padding is added between and after members to meet these requirements. A struct's alignment requirement is the same as that of its largest member, so our Mesh in D3D has an alignment of 8, because its largest members (the pointers) have an alignment of 8. The compiler therefore makes its size 32 by adding 6 bytes of padding, which keeps every member self-aligned even when Meshes are placed in an array. Similarly, the Mesh in OpenGL gets 2 bytes of padding, rounding it up to 16.
The Mesh cannot be made any smaller either: all the members are required, so this is the smallest possible size for the Mesh as of now.
Controls:
ESC – Exit.
Up Arrow – Increase the simulation speed.
Down Arrow – Decrease the simulation speed.