
What are Shaders?


I’ve started working on some projects related to graphics programming. As I mentioned in a previous article, I have been exploring the bgfx library, and with it I started developing my own stack called BIG2. In this article I will walk through shaders with you and share what I know, so that you can follow along with other articles around the blog.

Shaders are programs that run on the GPU. The graphics card has its own processing unit that specializes in parallelizing computation. This is why its programs are quite special and are not meant for general-purpose work. Instead, the GPU loads a program and executes that same program on many threads at once, passing different data to each and collecting the outputs. Drawing the screen means running complex computations for roughly 2,073,600 pixels on a typical Full HD (1920×1080) monitor, so that job is perfect for the GPU.
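As a rough CPU-side analogy (this is plain JavaScript, not GPU code, and the brightness function is made up for illustration), you can picture the GPU as applying one small program to a whole array of inputs and collecting the results:

```javascript
// A rough analogy of the GPU model: one small program (the "shader")
// is applied to many inputs, and the outputs are collected.
// Here the "shader" brightens a pixel value, clamped to 255.
const shade = (pixel) => Math.min(255, pixel + 50);

// Pretend these are per-pixel input values.
const pixels = [0, 100, 250];

// The GPU would run these in parallel; map() runs them one by one.
const output = pixels.map(shade);
console.log(output); // [ 50, 150, 255 ]
```

The key point is that `shade` itself never changes; only the data fed to each invocation does.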

Different operating systems provide different ways of writing shader programs so we have a few programming languages for them:

  • GLSL – The shading language of OpenGL, specified by Khronos. Khronos is behind the major open graphics specifications like OpenGL, Vulkan, WebGL, OpenCL, etc.
  • HLSL – Microsoft’s shading language for DirectX on platforms like Windows or Xbox.
  • MSL – Apple’s shading language for Metal, used on anything with an Apple OS.
  • SPIR-V – A Khronos-specified intermediate language in binary form that GLSL or HLSL can be compiled to. This language works well with Vulkan.

There is also a set of tools that, with the help of SPIR-V, lets us transpile from one language to another. This is useful when talking about a library like bgfx or some game engines, since you will often see that they have their own little spin-off of some shading language. I am still amazed that every tool has a slightly different syntax for shading.

The Shader Pipeline

The shader pipeline consists of several stages. There are two major, universal stages that you need to know about, since they are programmable in each of the shading languages. Some graphics APIs let you customize more stages of the pipeline, but we will not discuss those here.

Now, when we talk to the GPU, we usually transfer some data that we want to process. In the example later in the article you will see how we pass a triangle to be drawn. A triangle consists of 3 points in 3D space, each of them a vector of 3 values. We load this data onto the GPU and then start the shading program.
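To make that concrete, here is a sketch in plain JavaScript (with hypothetical coordinate values) of how such triangle data is typically laid out before being uploaded to the GPU:

```javascript
// Three vertices, each with x, y, z — WebGL wants a flat typed array.
const triangleVertices = new Float32Array([
  -0.5, -0.5, 0.0, // bottom-left
   0.5, -0.5, 0.0, // bottom-right
   0.0,  0.5, 0.0, // top
]);

// 3 vertices * 3 components each = 9 floats in total.
console.log(triangleVertices.length); // 9
```

In a real WebGL program this buffer would then be handed to the GPU with `gl.bufferData(gl.ARRAY_BUFFER, triangleVertices, gl.STATIC_DRAW)`.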

The Vertex Shader

The first stage that we can control is the vertex shader. This shader deals with vertices. In our case they represent the 3 positions of the triangle. They are called vertices, though, because they can carry additional information linked to each position. For example, you could assign a color to each of those points and then color the pixels around that position with it.

The vertex shader can define what incoming data it expects and what output data it will hand to the next stage of the process. It has only one mandatory output: the resulting position on the screen. This is done by setting the built-in variable gl_Position. You can also define other outputs, like the above-mentioned color, to pass to the following shading stages.

The position will then be used by the next steps of the shading pipeline to enclose a space that will be drawn.

The Fragment Shader

This shader is responsible for drawing each pixel. The vertex shader defines the triangle that encloses an area on screen; the fragment shader then sets the values of the pixels inside that enclosed space. It runs once for each pixel (fragment) covered by the triangle.

It can also take any input parameter coming from the vertex shader.


Enough talking. Let’s take a look at an example. The browser only allows for WebGL, so for this article I’ve prepared the following case. We have one vertex shader and one fragment shader. The vertex shader takes the positions of a triangle, passes each position through directly, and also derives a color value from it by adding 0.5 to each of its components. The fragment shader takes the color passed from the vertex shader and forwards it to the next step (which in this case is the renderer). The result is the triangle below.


#version 300 es

// This comes from JavaScript
in vec4 vertexPosition;

// This will be passed to the fragment shader
out vec4 colorForFragment;

// The logic itself
void main() {
    colorForFragment = vertexPosition + 0.5;
    gl_Position = vertexPosition;
}


#version 300 es
precision highp float;

// This comes from the vertex shader
in vec4 colorForFragment;

// This is passed to the renderer
out vec4 fragColor;

// The logic itself
void main() {
  fragColor = colorForFragment;
}

You might wonder why the triangle has a gradient. Aren’t we only working with 3 positions for the triangle’s points? Well, there is some magic between the vertex and fragment shaders: every value output by the vertex shader gets interpolated (mixed into an intermediate value) across the triangle. So not only is the position moved (duh!), but the color also changes a little from fragment to fragment, blending nicely from one point to the next.
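For a point on the edge between two vertices, that blend is a simple linear interpolation. A small sketch in plain JavaScript (with made-up red and blue vertex colors) shows the idea:

```javascript
// Linear interpolation: t = 0 gives a, t = 1 gives b.
const lerp = (a, b, t) => a + (b - a) * t;

// Hypothetical colors output by two vertices, as [r, g, b].
const red = [1, 0, 0];
const blue = [0, 0, 1];

// A fragment halfway along the edge gets an even mix: purple.
const halfway = red.map((c, i) => lerp(c, blue[i], 0.5));
console.log(halfway); // [ 0.5, 0, 0.5 ]
```

Inside the triangle the GPU actually blends all three vertex outputs using weights based on how close the fragment is to each corner; the edge above is just the two-vertex special case.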

You can try and play around with this example. Try to scale or mirror the triangle for example.


I hope that with this article I managed to shed some light on the world of shaders and native development. If that piqued your interest, then subscribe to be notified about more content like this!

If you want more information on this topic, check out the excellent article on the Learn OpenGL website, which explains the whole process in even more detail.

