= Introduction =

In the first graph tutorial, we plotted a function by creating 2-dimensional vertices for all the data points we had. This is straightforward and works very well, as long as we do not have too much data. In this tutorial, we will approach the problem of plotting data points very differently.

First, we are usually more interested in the y coordinates; the x coordinates are just equally spaced points in the domain of interest. We don't want to store the x coordinates in memory if they can easily be recovered programmatically. In fact, when taking data from an ADC (such as from a microphone connected to a sound card), we only get a stream of y coordinates. It would be very nice if we could put that stream into a buffer without any further processing, and have the graphics card do something useful with it.

Secondly, if we have thousands of data points, it is useless to plot all of them in a window that may not even be a thousand pixels wide. So it would be nice if we could separate the number of vertices that we draw from the number of data points that we have. We also do not want to change the vertices as we pan around or zoom in and out of the graph.

The solution is simple: we put fixed x coordinates in a 1-dimensional vertex buffer object (VBO), put the y coordinates in a 1-dimensional texture, and have the vertex shader combine the two.

NOTE: although core OpenGL ES 2.0 supports texture lookups in the vertex shader, graphics cards are allowed to have zero texture units available in the vertex shader. It is therefore possible that this technique does not work on your card. To check the number of vertex texture units available, query GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS with glGetIntegerv().

= 1-dimensional VBO =

Although vertices are usually two- or three-dimensional, OpenGL has nothing against using one-dimensional vertices. Remember that by default, the x coordinates of the window go from -1 to 1. So we will create a VBO with 101 x coordinates that go from -1 to 1.
We also rename our vertex attribute to "coord1d". Then we can draw our "line" almost exactly as we would if we had 2D vertices. In our vertex shader, we have to come up with the y coordinates on our own.

Exercise: Try various ways of calculating y values in the vertex shader. In fact, you can even let OpenGL evaluate the function we used in the first tutorial!

= Putting y-values in a texture =

Depending on the graphics card and drivers that you have, textures can be either very flexible or very restricted. Some cards allow textures in a wide range of formats, including 16-bit integer, floating-point, or even fixed-point formats. If your input data matches a format supported by the card, you don't have to do any conversions and rendering will be very fast. If you want to be OpenGL ES 2.0 compliant, however, there is the restriction that it only supports 8-bit integers for texture data. However, that might just be enough. Take for example the function we used in the previous tutorial, and map the y coordinates from -1..1 to 0..255.

Now we can create a one-dimensional texture. Again, OpenGL ES has a limitation: it does not explicitly support 1-dimensional textures. However, nothing prevents us from making a texture that is very wide but only one pixel high. Here we use the GL_LUMINANCE format to indicate that we only have one color component.

Exercises:
* Try to find out which texture formats your card supports.
* Is there a limit to the size of a one-dimensional texture?
* Try changing part of the graph using glTexSubImage2D().
* OpenGL ES also supports the GL_RGBA format, essentially giving us 32 bits per pixel. Could we use that to get better accuracy for our y values?

= The vertex shader =

Now that we have our VBO with x coordinates and our texture with y coordinates, we will combine them in our vertex shader. Remember that texture coordinates go from 0 to 1, while our x coordinates go from -1 to 1.
Also, we want to pan and zoom, so we will use the offset_x and scale_x variables from the previous tutorial. In this case, however, since we do not change our x coordinates, we need to apply the offset and scale transformations in reverse to get the texture coordinates! Once we have all the coordinates, we can also use them to color the graph, similar to how we did that in the previous tutorial.

As you can see, nothing prevents you from using textures in the vertex shader (although on some graphics cards, especially older ones, it might be slower to access them from the vertex shader than from the fragment shader). Since we have a GL_LUMINANCE format texture, we have to read the red component; the other components are undefined. Also note that the texture2D() function returns floating-point values in the range 0..1, not integers in the range 0..255. The fragment shader is the same as in the previous tutorial, simply setting gl_FragColor to f_color.

= Interpolation and wrapping =

If you zoom in very far, you will notice that the lines are not smooth anymore, but look like a staircase. You might think that this is because of the low accuracy of our 8-bit integer y values. However, the height of the steps varies, and in the steepest parts of the function, it is much greater than can be explained by 8-bit quantization. Instead, the problem is caused by the fact that there are now more vertices horizontally than pixels in the texture. Nearest-neighbor interpolation causes clusters of vertices to all get the same y value. To get our smooth curve back, we should enable linear interpolation.

If you pan or zoom out very far, you will notice something very interesting: the function repeats itself! This is because, by default, OpenGL wraps the texture coordinates.
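Putting the pieces together, the full vertex shader described above might look roughly like the sketch below. It assumes the previous tutorial's forward transform was (x + offset_x) * scale_x, so the texture coordinate is recovered by applying it in reverse; the names coord1d, offset_x, scale_x, and mytexture follow this tutorial series, but your own code may differ:

```glsl
attribute float coord1d;
varying vec4 f_color;
uniform float offset_x;
uniform float scale_x;
uniform sampler2D mytexture;

void main(void) {
  /* Reverse the pan/zoom transform, then map x from -1..1 to
     texture coordinates 0..1 */
  float t = (coord1d / scale_x - offset_x + 1.0) / 2.0;

  /* GL_LUMINANCE: only the red component is defined; remap the
     0..1 texture value back to a y coordinate in -1..1 */
  float y = (texture2D(mytexture, vec2(t, 0.0)).r - 0.5) * 2.0;

  gl_Position = vec4(coord1d, y, 0.0, 1.0);
  /* Color the graph based on position, as in the previous tutorial */
  f_color = vec4(coord1d / 2.0 + 0.5, y / 2.0 + 0.5, 1.0, 1.0);
}
```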
We could clamp the texture coordinates ourselves in the vertex shader, but we can also tell OpenGL to do that automatically.

Exercises:
* Make it so you can toggle the interpolation and wrapping modes by pressing F1 and F2.
* Make it so that pressing F3 draws the graph twice, once with GL_LINE_STRIP and once with GL_POINTS, using a point size of 5 pixels.
* The MIN_FILTER does not seem to be doing very much. Research how GL_LINEAR works for both MIN_FILTER and MAG_FILTER. Would mipmaps be useful?
* Think again about using GL_RGBA to get 32-bit accuracy.
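The interpolation and clamping fixes discussed in this section boil down to a few glTexParameteri() calls made while the graph texture is bound; a sketch:

```c
/* Smooth the staircase: interpolate linearly between neighboring texels */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Stop the graph from repeating: clamp texture coordinates to [0, 1] */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
```

GL_CLAMP_TO_EDGE makes all coordinates outside 0..1 sample the first or last texel, so the graph simply extends its edge values instead of repeating.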