Finish creating the delivery report

master
Peder Bergebakken Sundt 2019-04-07 20:29:35 +02:00
parent 349a7934c4
commit 55f9249363
9 changed files with 71 additions and 29 deletions

.gitignore vendored
View File

@ -33,5 +33,6 @@
build/
report/pd-images/
report/report_combined.md
report/report_combined_out.pdf
report/*_combined.md
report/*_combined.md5_hash
report/*_combined_out.pdf

View File

@ -1,11 +1,11 @@
% TDT4230 Final assignment report
% Peder Berbebakken Sundt
% insert date here
% Peder Bergebakken Sundt
% 7th of April 2019
\small
```{.shebang im_out="stdout"}
#!/usr/bin/env bash
printf "time for some intricate graphics surgery!\n" | cowsay -f surgery | head -n -4 | sed -e "s/^/ /"
printf "time for some intricate graphics surgery!\n" | cowsay -f surgery | head -n -5 | sed -e "s/^/ /"
```
\normalsize
@ -15,6 +15,8 @@ printf "time for some intricate graphics surgery!\n" | cowsay -f surgery | head
For this project, we're supposed to investigate a more advanced or complex visualisation method in detail by implementing it ourselves using C++ and OpenGL 4.3+. I'll be working on it by myself.
The idea I have in mind for the scene I want to create is a field of grass with trees spread about it, where a car drives along the ups and downs of the hills. I then plan to throw every effect I can at it to make it look good.
I want to look more into effects one can apply to a scene made of different materials. In detail, I plan to implement:
Phong lighting,
texturing,
@ -25,10 +27,8 @@ I want to look more into effects one can apply to a scene of different materials
fog and
rim backlights.
I also want to implement som post-processing effects:
I also want to implement some post-processing effects:
Chromatic aberration,
Depth of field,
Vignette and
Noise / Grain
The idea i have in mind for the scene i want to create, is a field of grass with trees spread about in it, where a car is driving along the ups and downs of the hills. I then plan to throw all the effect i can at it to make it look good.

View File

@ -7,30 +7,31 @@ Each mesh can be UV mapped. Each vertex has a UV coordinate assigned to it, whic
## Normal mapping
Normals are defined in two places: One normal vector per vertex in the mesh, and an optional tangental normal map texture. The normal vector is combined with it's tangent and bitangent vectors (tangents in the U and V directions respectively) into a TBN transformation matrix, which the tangential normal vector fetched from the normal map can be transformed with. This allows us to define the normal vector along the surfaces of the mesh.
Normals are defined in two places: one normal vector per vertex in the mesh, and an optional tangent-space normal map texture. The normal vector is combined with its tangent and bitangent vectors (tangents in the U and V directions respectively) into a TBN transformation matrix, which the tangent-space normal fetched from the normal map is transformed by. This allows us to define the normal vector along the different surfaces of the mesh.
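As a minimal sketch of that transform, written here in C++ with glm purely for illustration (the real code lives in the shaders, and these names are mine, not the project's):

```cpp
#include <glm/glm.hpp>

// Build the TBN matrix from the per-vertex basis and use it to bring a
// tangent-space normal sampled from the normal map into the mesh's space.
glm::vec3 apply_normal_map(glm::vec3 normal, glm::vec3 tangent, glm::vec3 bitangent,
                           glm::vec3 sampled_texel /* normal map value in [0,1] */) {
    glm::mat3 TBN(glm::normalize(tangent),
                  glm::normalize(bitangent),
                  glm::normalize(normal));
    glm::vec3 tangent_space_normal = sampled_texel * 2.0f - 1.0f; // remap [0,1] -> [-1,1]
    return glm::normalize(TBN * tangent_space_normal);
}
```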
## Displacement mapping
Displacement mapping is done in the vertex shader. A displacement texture is mapped into using the UV coordinates. The texture describes how much to offset the vertex along the normal vector. This is further controlled with a displacement coefficient uniform passed to the vertex shader. See @fig:img-fine-plane and @fig:img-displacement-normals.
Displacement mapping is done in the vertex shader. A displacement texture is sampled using the UV coordinates. The texture describes how much to offset the vertex along the normal vector. This is further controlled with a displacement coefficient uniform passed into the vertex shader. See @fig:img-fine-plane and @fig:img-displacement-normals.
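The vertex offset amounts to something like the sketch below (written in C++ with glm for illustration; `sampled_height` stands in for the displacement-texture lookup and the names are not the project's):

```cpp
#include <glm/glm.hpp>

// Offset the vertex along its normal by the sampled displacement value,
// scaled by the displacement coefficient uniform.
glm::vec3 displace_vertex(glm::vec3 position, glm::vec3 normal,
                          float sampled_height,           // displacement texture lookup, in [0,1]
                          float displacement_coefficient) // uniform
{
    return position + normal * (sampled_height * displacement_coefficient);
}
```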
## Phong lighting
The Phong lighting model is implemented in the fragment shader. The model describes four light components: the diffuse component, the emissive component, the specular component and the ambient component. Each of these components has a color/intensity assigned to it, which is stored in the `SceneNode`/`Material`.
The colors are computed using the normal vector computed as described above. The basecolor is multiplied with sum of the diffuse and the emissive colors, and the specular color is added on top. I chose to combine the ambient and emissive into one single component, since i don't need the distinction in my case. I did however make the small change of multiplying the emissive color with the color of the first light in the scene. This allows me to 'tint' the emissive components.
The colors are computed using the normal vector computed as described above. The base color is multiplied with the sum of the diffuse and the emissive colors, and the specular color is then added on top. I chose to combine the ambient and emissive components into a single one, since I don't need the distinction in my case. I did however make the small change of multiplying the emissive color with the color of the first light in the scene. This allows me to 'tint' the emissive component as well.
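A generic sketch of one light's diffuse and specular terms, written with glm for illustration (vectors assumed normalized; this is not the project's exact combination of components):

```cpp
#include <glm/glm.hpp>

// One light's Phong contribution: diffuse plus specular, scaled by the light
// color and an attenuation factor (see the next paragraph).
glm::vec3 phong_contribution(glm::vec3 N,              // surface normal
                             glm::vec3 L,              // fragment -> light
                             glm::vec3 V,              // fragment -> camera
                             glm::vec3 light_color,
                             glm::vec3 diffuse_color,
                             glm::vec3 specular_color,
                             float shininess, float attenuation) {
    glm::vec3 R    = glm::reflect(-L, N);              // mirrored light direction
    float diffuse  = glm::max(glm::dot(N, L), 0.0f);
    float specular = glm::pow(glm::max(glm::dot(R, V), 0.0f), shininess);
    return attenuation * light_color * (diffuse * diffuse_color + specular * specular_color);
}
```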
I have two types of light nodes in the scene: point lights and spot lights. Each light has a color associated with it, as well as a position and three attenuation factors. The final attenuation is computed from these three factors as $\frac{1}{x + y\cdot |L| + z\cdot |L|^2}$, where $|L|$ is the distance to the light.
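That formula, written directly as code (the parameter names simply follow the symbols above):

```cpp
// Attenuation from the three per-light factors and the distance |L| between
// the fragment and the light: 1 / (x + y*|L| + z*|L|^2).
float attenuation(float x, float y, float z, float light_distance) {
    return 1.0f / (x + y * light_distance + z * light_distance * light_distance);
}
```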
## Loading models
Importing of models is done using the library called `assimp`. It is a huge and bulky library which takes decades to compile, but it gets the job done. Each model file is actually a whole 'scene'. I first traverse the materials defined in this scene and store them into my own material structs. I then traverse the textures in the scene and load them into `PNGImage` structs. I then traverse all the meshes stored in the scene and store the. At last i traverse the nodes in the scene, creating my own nodes. I apply the transformations, materials, textures and meshes referenced. Finally i transform the root node to account for me using a coordinate system where z points skyward.
Importing of models is done using the library called `assimp`. It is a huge and bulky library which takes decades to compile, but it gets the job done. Each model file is actually a whole 'scene'. I first traverse the materials defined in this scene and store them in my own material structs. I then traverse the textures in the scene and load them into `PNGImage` structs. Next I traverse all the meshes stored in the scene and store those. At last I traverse the nodes in the scene, creating my own nodes and applying the transformations, materials, textures and meshes referenced in each node. Finally I rotate the root node to account for me using a coordinate system where z points skyward.
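The import boils down to something like the sketch below; the post-processing flags are an example set, not necessarily the ones the project actually passes, and the traversal into my own structs is omitted:

```cpp
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <cstdio>

// Load a model file with assimp and hand back the imported 'scene'.
const aiScene* load_scene(Assimp::Importer& importer, const char* path) {
    const aiScene* scene = importer.ReadFile(
        path,
        aiProcess_Triangulate | aiProcess_CalcTangentSpace | aiProcess_FlipWindingOrder);
    if (!scene) {
        std::fprintf(stderr, "assimp: %s\n", importer.GetErrorString());
        return nullptr;
    }
    // scene->mMaterials, scene->mTextures, scene->mMeshes and scene->mRootNode
    // are then walked to build the engine's own node hierarchy.
    return scene;
}
```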
## Reflections
Reflections are implemented in the fragment shader, using the vector pointing from the camera to the fragment (F), and the normal vector. I reflect the F vector along the normal vector and normalize the result. Computing the dot product between the normalized reflection and any other unit vector gives my the cosine of the angle between the two. Computing this cosine northward and skyward allows me to map the reflection into a sphere and retrieve the UV coordinates used to fetch a reflection color value from a reflection map texture (see fig:img-reflection and fig:img-reflection-map).
Reflections are implemented in the fragment shader, using the vector pointing from the camera to the fragment (F) and the normal vector. I reflect the F vector about the normal vector and normalize the result. Computing the dot product between the normalized reflection and any other unit vector gives me the cosine of the angle between the two. Computing this cosine northward and skyward allows me to map the reflection into a sphere and retrieve the UV coordinates used to fetch the reflected color value from a reflection texture map (see @fig:img-reflection and @fig:img-reflection-map).
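A rough sketch of that lookup with glm; the axis choice (north = +y, sky = +z) and the remapping from cosines to UV are assumptions here, not necessarily what the actual shader does:

```cpp
#include <glm/glm.hpp>

// Sphere-map lookup coordinates from the view vector and the normal.
glm::vec2 reflection_uv(glm::vec3 F /* camera -> fragment */, glm::vec3 N) {
    glm::vec3 R = glm::normalize(glm::reflect(F, N));
    float cos_north = glm::dot(R, glm::vec3(0.0f, 1.0f, 0.0f));
    float cos_sky   = glm::dot(R, glm::vec3(0.0f, 0.0f, 1.0f));
    return glm::vec2(cos_north, cos_sky) * 0.5f + 0.5f;  // remap [-1,1] -> [0,1]
}
```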
## Fog
TODO
Fog is an easy effect to implement. I originally planned for it to be implemented as a post-processing effect, but, as discussed in @sec:learn, I moved it to the fragment shader instead.
The z component of the fragment position in MVP space is transformed into linear space and then multiplied by a fog strength uniform. This new value is used as the mix factor between the fragment color and the fog color (see @fig:img-fog).
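Expressed with glm, the blend is roughly the following; `linear_depth` is the fragment depth already transformed into a linear [0, 1] range as described above, and the names are illustrative:

```cpp
#include <glm/glm.hpp>

// Blend the shaded fragment color toward the fog color using the linearized
// depth times the fog strength uniform as the mix factor.
glm::vec3 apply_fog(glm::vec3 color, glm::vec3 fog_color,
                    float linear_depth, float fog_strength) {
    float t = glm::clamp(linear_depth * fog_strength, 0.0f, 1.0f);
    return glm::mix(color, fog_color, t);
}
```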
## Rim backlights
@ -38,15 +39,15 @@ To make objects pop a bit more, one can apply a rim backlight color. The effect
## Post processing
Post processing is achieved by rendering the whole scene, not to the window, but to an internal framebuffer instead. This framebuffer is then used as a texture covering a single quad which is then rendered to the window. This in-between step allows me to apply different kinds of effects using the fragment shader, which rely on being able to access neighboring pixel's depth and color values.
Post processing is achieved by rendering the whole scene, not to the window, but to an internal framebuffer instead. This framebuffer is then used as a texture covering a single quad which is rendered to the window. This in-between step allows me to apply different kinds of effects using the separate fragment shader applied to the quad, effects which rely on being able to access neighboring pixels' depth and color values.
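The setup behind this is a standard render-to-texture framebuffer, roughly as sketched below (assuming a glad-style loader; sizes, formats and names are illustrative, not copied from the project):

```cpp
#include <glad/glad.h>

// Create a framebuffer with a color and a depth texture attachment that the
// post-processing quad can later sample.
GLuint create_framebuffer(int width, int height, GLuint& color_tex, GLuint& depth_tex) {
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(1, &color_tex);
    glBindTexture(GL_TEXTURE_2D, color_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color_tex, 0);

    glGenTextures(1, &depth_tex);
    glBindTexture(GL_TEXTURE_2D, depth_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth_tex, 0);

    // The scene is rendered while this FBO is bound; binding framebuffer 0
    // switches back to the window before drawing the fullscreen quad.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return fbo;
}
```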
### Depth of Field / Blur
Using this post processing shader, I could apply blur to the scene. Depth of field is a selective blur, keeping a certain distance in focus. I first transform the depthbuffer (see @fig:img-depth-map) to be 0 around the point of focus and tend towards 1 otherwise. I then use this focus value as the range of my blur. The blur is simply the average of a selection of neighboring pixels. See @fig:img-depth-of-field for results.
Using this post processing shader, I could apply blur to the scene. Depth of field is a selective blur, keeping just a certain distance range in focus. I first transform the depth buffer (see @fig:img-depth-map) to be 0 around the point of focus, tending towards 1 elsewhere. I then use this focus value as the range of my blur. The blur is simply the average of a selection of neighboring pixels. See @fig:img-depth-of-field for results.
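A sketch of the idea, with `sample` standing in for the framebuffer texture fetch and the 5x5 kernel being an arbitrary example size rather than the project's:

```cpp
#include <glm/glm.hpp>
#include <functional>

// Average neighboring pixels, with the kernel radius scaled by the "focus"
// value (0 = in focus, 1 = fully out of focus).
glm::vec3 depth_of_field(const std::function<glm::vec3(glm::vec2)>& sample,
                         glm::vec2 uv, glm::vec2 pixel_size, float focus) {
    glm::vec3 sum(0.0f);
    int count = 0;
    for (int dy = -2; dy <= 2; ++dy)
        for (int dx = -2; dx <= 2; ++dx) {
            glm::vec2 offset = glm::vec2(dx, dy) * pixel_size * focus;
            sum += sample(uv + offset);
            ++count;
        }
    return sum / float(count);
}
```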
### Chromatic aberration
Light refracts differently depending on wavelength. (see @fig:img-what-is). By scaling the tree color components by different amounts, i can recreate this effect. This scaling is further multiplied by the focus value, to avoid aberration near the vertical line in @fig:img-what-is.
Light refracts differently depending on the wavelength (see @fig:img-what-is). By scaling the three color components by different amounts, I can recreate this effect. This scaling is further multiplied by the focus value computed above, to avoid aberration near the vertical line in @fig:img-what-is.
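A sketch of the per-channel scaling, with made-up scale factors and `sample` again standing in for the framebuffer fetch:

```cpp
#include <glm/glm.hpp>
#include <functional>

// Sample each color channel with a slightly different scale around the image
// center, modulated by the focus value.
glm::vec3 chromatic_aberration(const std::function<glm::vec3(glm::vec2)>& sample,
                               glm::vec2 uv, float focus) {
    glm::vec2 from_center = uv - glm::vec2(0.5f);
    float r = sample(glm::vec2(0.5f) + from_center * (1.0f + 0.010f * focus)).r;
    float g = sample(glm::vec2(0.5f) + from_center * (1.0f + 0.005f * focus)).g;
    float b = sample(glm::vec2(0.5f) + from_center * (1.0f - 0.005f * focus)).b;
    return glm::vec3(r, g, b);
}
```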
### Vignette

View File

@ -2,26 +2,26 @@
## General difficulties
A lot of time was spent cleaning up and modifying the gloom base project. A lot of time was also spent working with `assimp` and getting the internal framebuffer to render correctly. `assimp` and `OpenGL` aren't the most verbose companion out there.
A lot of time was spent cleaning up and modifying the gloom base project. A lot of time was also spent working with `assimp` and getting the internal framebuffer to render correctly. `assimp` and `OpenGL` aren't the most verbose debugging companions out there.
I learned that the handedness of face culling and normal maps isn't the same everywhere. Luckily `assimp` supports flipping faces. When reading the grass texture, I had to flip the R and G color components of the normal map to make it look right. See @fig:img-wrong-handedness and @fig:img-flipped-handedness.
## The slope of the displacement map
The scrolling field of grass is actually just a static plane mesh of 100x100 vertices, with a perlin noise displacement map applied to it (I use an UV offset uniform to make the field scroll). You can however see in @fig:img-fine-plane that the old normals doesn't mesh with the now displaced geometry. I therefore had to recalculate the normals using the slope of the displacement. I rotate the TBN matrix and normal vectors in the shader to make it behave nice with the lighting. Luckily i have both the tangent and bitangen vector pointing in the U and V direction. calculating the slope of the displacment in both of these directions allows me to add the normal vector times the slope to the tangent and the bitangent. after normalizing the tangens, i can compute the new normal vector using the cross product of the two. From these i construct the TBN matrix. See @lst:new-tbn for the code.
The scrolling field of grass is actually just a static plane mesh of 100x100 vertices, with a Perlin noise displacement map applied to it (I use a UV offset uniform to make the field scroll; the map is mirrored on repeat to avoid sharp edges at the texture borders, see @fig:img-gl-mirror). You can however see in @fig:img-fine-plane that the old normals don't mesh with the now displaced geometry. I therefore had to recalculate the normals using the slope of the displacement. I rotate the TBN matrix and normal vectors in the shader to make them behave nicely with the lighting. Luckily I have both the tangent and bitangent vectors pointing in the U and V directions. Calculating the slope of the displacement in both of these directions allows me to add the normal vector times the slope to the tangent and the bitangent. After normalizing the tangents, I can compute the new normal vector using the cross product of the two. From these I construct the TBN matrix. See @lst:new-tbn for the code.
This did however give me a pretty coarse image, so I moved the computation of the TBN matrix from the vertex shader to the fragement shader. This will give me a slight performance penalty, but I can undo the change in a simplified shader should I need the performance boost. See @fig-img-displacement-normals for results.
This did however give me a pretty coarse image, so I moved the computation of the TBN matrix from the vertex shader to the fragment shader. This gives me a slight performance penalty, but I can undo the change in a simplified shader should I need the performance boost. See @fig:img-displacement-normals for results.
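Paraphrased in C++ with glm (this is only my paraphrase of the idea; see @lst:new-tbn in the appendix for the actual shader code), the construction is roughly:

```cpp
#include <glm/glm.hpp>

// Rebuild the TBN basis from the displacement slopes in the U and V directions.
glm::mat3 displaced_tbn(glm::vec3 T, glm::vec3 B, glm::vec3 N,
                        float slope_u, float slope_v) {
    glm::vec3 t = glm::normalize(T + N * slope_u);  // tilt tangent by the U slope
    glm::vec3 b = glm::normalize(B + N * slope_v);  // tilt bitangent by the V slope
    glm::vec3 n = glm::normalize(glm::cross(t, b)); // new normal from the cross product
    return glm::mat3(t, b, n);
}
```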
## Transparent objects {#sec:trans}
When rendering transparent objects with depth testing enabled, we run into issues as seen in @fig:img-tree-alpha. The depth test is simply a comparison against the depth buffer, which determines if a fragment should be rendered or not. When a fragment is rendered, the depth buffer is updated with the depth of the rendered fragment. Only fragment which will appear behind already rendered fragments will be skipped. But non-opaque objects should allow objects behind to still be visible.
When rendering transparent objects with depth testing enabled, we run into issues as seen in @fig:img-tree-alpha. The depth test is simply a comparison against the depth buffer, which determines if a fragment should be rendered or not. When a fragment is rendered, the depth buffer is updated with the depth of the rendered fragment. Fragments which would appear behind already rendered fragments are skipped. But non-opaque objects should allow objects behind them to still be visible.
As a first step to try to fix this issue, i split the rendering of the scene into two stages: opaque nodes and transparent nodes. The first stage will traverse the scene graph and store all transparent nodes in a list. Afterwards the list is sorted by distance from camera, then rendered back to front. This will ensure that the transparent meshes furthest away are rendered before the ones in front, which won't trip up the depth test. The results of this can be viewed in @fig:img-tree-sorted.
As a first step to try to fix this issue, I split the rendering of the scene into two stages: opaque nodes and transparent nodes. The first stage traverses the scene graph and stores all transparent nodes in a list. Afterwards the list is sorted by distance from the camera, then rendered back to front. This ensures that the transparent meshes furthest away are rendered before the ones in front, which won't trip up the depth test. The results of this can be viewed in @fig:img-tree-sorted.
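The sorting step is a plain `std::sort` with a distance comparator, roughly as below; `SceneNode` and `world_position` are stand-ins for the project's actual types, and the positions are assumed to be in world space:

```cpp
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

struct SceneNode { glm::vec3 world_position; /* ... */ };

// Sort the collected transparent nodes so the ones furthest from the camera
// come first, i.e. back-to-front rendering order.
void sort_back_to_front(std::vector<SceneNode*>& transparent_nodes, glm::vec3 camera_position) {
    std::sort(transparent_nodes.begin(), transparent_nodes.end(),
              [&](const SceneNode* a, const SceneNode* b) {
                  return glm::distance(a->world_position, camera_position)
                       > glm::distance(b->world_position, camera_position);
              });
}
```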
We still have issues here however. Faces within the same mesh aren't sorted and could be rendered in the wrong order. This is visible near the top of the tree in @fig:img-tree-sorted. To fix one could sort all the faces, but this isn't feasible in real time rendering applications. I then had the idea to try to disable the depth test. This look *better* in this case, but it would mean that opaque objects would always be beneath transparent ones, since the transparent ones are rendered in a second pass afterwards.
We still have issues here however. Faces within the same mesh aren't sorted and could be rendered in the wrong order. This is visible near the top of the tree in @fig:img-tree-sorted. To fix this, one could sort all the faces, but this isn't feasible in real-time rendering applications. I then had the idea of trying to disable the depth test. This looks *better* in this case, but it would mean that opaque objects would always be beneath transparent ones, since the transparent ones are rendered in a second pass afterwards.
I then arrived at the solution of setting `glDepthMask(GL_FALSE);`, which makes the depth buffer read only. All writes to the depth buffer is ignored. Using this, the depth buffer created by the opaque objects can be used while rendering the transparent ones, and since the transparent ones are rendered in sorted order, they *kinda* work out as well. See @fig:img-tree-depth-readonly for the result. The new rendering pipeline is visualized in @fig:render-pipeline.
I then arrived at the solution of setting `glDepthMask(GL_FALSE)`, which makes the depth buffer read only. All writes to the depth buffer are ignored. Using this, the depth buffer created by the opaque objects can be used while rendering the transparent ones, and since the transparent ones are rendered in sorted order, they *kinda* work out as well. See @fig:img-tree-depth-readonly for the result. The new rendering pipeline is visualized in @fig:render-pipeline.
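The second pass then looks roughly like this; `renderNode` and `SceneNode` stand in for the project's own draw call and node type, and the blending setup is shown only for context:

```cpp
#include <glad/glad.h>
#include <vector>

struct SceneNode;                 // the project's node type (declaration only)
void renderNode(SceneNode* node); // the project's draw call (declaration only)

// Draw the sorted transparent nodes with the depth buffer made read-only, so
// they are still occluded by opaque geometry but never block each other out.
void render_transparent(const std::vector<SceneNode*>& sorted_back_to_front) {
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);                       // depth test stays on, writes are ignored
    for (SceneNode* node : sorted_back_to_front)
        renderNode(node);
    glDepthMask(GL_TRUE);                        // restore depth writes for the next frame
}
```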
## Need for optimizations

View File

@ -1,9 +1,9 @@
# What i learned about the methods in terms of advantages, limitations, and how to use it effectively
# What I learned about the methods in terms of advantages, limitations, and how to use them effectively {#sec:learn}
Post-processing is a great tool, but it adds complexity to the rendering pipeline. Debugging issues with the framebuffer isn't easy. It does have the advantage of allowing me to skew the window along a sine curve should I want to.
Post-processing is also a cost-saving measure in terms of performance. It allows me to compute some values only once per pixel instead of once per fragment, since fragments may cover one another. The grain and vignette effects are both possible to implement in the scene shader, but doing them in the post-processing step spares computation.
The method i used to render transparent objects works *okay*, as described in @sec:trans, but it does have consequences for the post-processing step later in the pipeline. I now have an incomplete depth buffer to work with, as seen in @fig:img-depth-map. This makes adding a fog effect in post create many artifacts. Fog can however be done in the fragment shader for the scene anyway, with only a slight performance penalty due to overlapping fragments.
The method I used to render transparent objects works *okay*, as described in @sec:trans, but it does have consequences for the post-processing step later in the pipeline. I now have an incomplete depth buffer to work with, as seen in @fig:img-depth-map, where no grass or leaves show up. This makes a fog effect added in post produce weird artifacts. Fog can however be done in the fragment shader for the scene anyway, with only a slight performance penalty due to overlapping fragments.
One other weakness with the way i render transparent objects is that transparent meshes which cut into eachother will be render incorrectly. The whole mesh is sorted and rendered, not each face. If i had two transparent ice cubes inside one another *(kinda like a Venn diagram)* then one cube would be rendered on top of the other one. This doesn't matter for grass, but more complex and central objects in the scene may suffer from this.
One other weakness with the way I render transparent objects is that transparent meshes which cut into each other will be rendered incorrectly. The whole mesh is sorted and rendered, not each face. If I had two transparent ice cubes intersecting one another *(kinda like a Venn diagram)*, then one cube would be rendered on top of the other one. This doesn't matter for grass, but more complex and central objects in the scene may suffer from this.

View File

@ -1,7 +1,7 @@
# Appendix
![
The seqmented pane with the cobble texture and normal map
The segmented plane with the cobble texture, normal map and lighting applied to it.
](images/0-base.png){#fig:img-base}
![
@ -174,3 +174,10 @@ bool shader_changed = s != prev_s;
![
The same scene, during the day. Spotlights have been turned off.
](images/26-day.png){#fig:img-day}
![
The early-morning scene with some strong fog applied. The code was later changed to have the fog affect the background color as well.
](images/27-fog.png){#fig:img-fog}
```{.dot include="images/effect-order.dot" caption="A high-level graph representing the fragment shader for the scene" #fig:effect-order}
```

BIN
report/images/27-fog.png Normal file

Binary file not shown.

Size: 3.9 MiB

View File

@ -0,0 +1,33 @@
digraph asd {
//rankdir=LR;
dpi=600;
ratio=0.4;
node [fontname=arial, shape=rectangle, style=filled, fillcolor="#ddddee"]
normal [ label="compute_normal()" ]
base [ label="vec4(1, 1, 1, object_opacity);"]
vertex_color [ label="vertex_color" ]
texture [ label="texture(diffuse_texture, UV)" ]
invert [ label="if (inverted)\l color.rgb = 1 - color.rgb" ]
phong [ label="color = phong(color)" ]
reflection [ label="reflection()" ]
fog [ label="linearDepth() * fog_color" ]
rim [ label="compute_rim_light()" ]
multiply [shape=ellipse, fillcolor = "#ffccaa"]
out [shape=ellipse, fillcolor = "#ccffaa"]
normal -> phong;
normal -> reflection;
normal -> rim;
base -> multiply;
vertex_color -> multiply [label="if(has_vert_colors)"];
texture -> multiply [label="if(has_texture)"];
multiply -> invert;
invert -> phong;
rim -> out [label="mix"];
phong -> out [label="mix"];
reflection -> out [label="mix"];
fog -> out [label="mix"];
}

View File

@ -2,7 +2,7 @@ digraph asd {
//rankdir=LR;
dpi=600;
ratio=0.55;
node [fontname=arial, shape=rectangle, style=filled, fillcolor="#dddddd"]
node [fontname=arial, shape=rectangle, style=filled, fillcolor="#ddddff"]
null [ label="updateNodes(rootNode);" ]
0 [ label="renderNodes(rootNode, only_opaque=true);" ]
1 [ label="std::sort(transparent_nodes);" ]