# 11.1 - Introduction to Advanced Rendering¶

This section discusses the following, more advanced, rendering topics:

• Hidden Surface Removal

Given a scene and a camera location, how can you render only the graphic primitives that are visible to the camera?

• Object Selection

Given the rendering of a scene that contains multiple objects, how can a user select an individual object?

• Transparency / Alpha Blending

Given that transparent surfaces allow light to pass through, how can you render them and still partially see the objects that are behind?

• Shadows

When light traveling toward an object is blocked by another object, the missing reflection is called a “shadow”. How can shadows be rendered in a scene?

• Particle Systems

Many phenomena, such as clouds, smoke, fire, water, dust, and stars, are composed of many small particles, and modeling these physical things as a set of triangles is problematic for many reasons. How can we render such things and get believable results?

Before we discuss these advanced rendering topics, let’s discuss the general concept of buffers and how buffers are used in the rendering process.

## Buffers¶

A buffer is a contiguous block of memory for storing a set of related values. In previous lessons we have referred to three types of buffers:

• buffer object: A set of attributes related to graphic primitives. The most common attribute is geometric location using an (x,y,z) value. Other attributes include colors, normal vectors, and texture coordinates. Any data stored on a “per vertex” basis is stored in a buffer object. (The official name is vertex buffer object, or VBO.)
• texture object: A set of values that store the rendering parameters of a texture map and its 2D image.
• frame buffer: A 2D image that contains the rendered output of a scene.

We need to be more precise in our terminology for the lessons in this section. Technically, a frame buffer is a collection of buffers used for rendering. In a WebGL program, a frame buffer is composed of the following three 2D arrays:

• color buffer: a 2D array of color values. Each element of a color buffer defines a pixel color using either three values, RGB, or four values, RGBA. The minimum memory for each color component value is 8 bits.
• depth buffer: a 2D array of values that represent a “distance from the camera.” A depth buffer is used for hidden surface removal. The minimum memory for each element is 16 bits.
• stencil buffer: a 2D array of values that represent locations in a color buffer that can be changed. The minimum memory for each element is 8 bits. This buffer defines a “stencil mask” that determines which elements in the color buffer and the depth buffer can be modified.
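As a concrete sketch, these three buffers can be modeled in JavaScript with typed arrays that match the minimum precisions listed above. The buffer names and the `setPixel` helper below are illustrative, not part of the WebGL API, and the stencil test is simplified to a writable/not-writable flag:

```javascript
// Model a frame buffer's three component buffers for a width x height canvas,
// using typed arrays that match the minimum precisions described above.
const width = 256, height = 256;

const colorBuffer   = new Uint8Array(width * height * 4);  // RGBA, 8 bits per component
const depthBuffer   = new Uint16Array(width * height);     // 16 bits per depth value
const stencilBuffer = new Uint8Array(width * height);      // 8 bits per stencil value

// Write a pixel only if it passes the depth test (smaller value means closer
// to the camera) and the stencil mask allows that location to be modified.
// This is an illustrative helper, not a WebGL function.
function setPixel(x, y, rgba, depth) {
  const i = y * width + x;
  if (stencilBuffer[i] === 0) return false;   // masked out by the stencil buffer
  if (depth >= depthBuffer[i]) return false;  // hidden behind a closer pixel
  depthBuffer[i] = depth;
  colorBuffer.set(rgba, i * 4);
  return true;
}

// Initialize: every location writable, every depth "as far away as possible".
stencilBuffer.fill(1);
depthBuffer.fill(0xFFFF);
```

Note how the depth test in `setPixel` is exactly the mechanism behind hidden surface removal: a pixel from a farther surface never overwrites a pixel from a closer one.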

Again, these three buffers compose a frame buffer. A WebGL frame buffer must have a color buffer and a depth buffer. The stencil buffer is optional. A statement like “this updates the frame buffer” actually means that multiple buffers are being updated. WebGL allows you to create multiple frame buffers and manipulate them in a variety of ways. We will explain the details of frame buffer objects as we encounter the need for them in the coming lessons.
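As a preview of what creating a frame buffer looks like in code, the sketch below builds a WebGL frame buffer object with a texture as its color buffer and a renderbuffer as its depth buffer. The function name, the texture-based color attachment, and the 16-bit depth format are illustrative choices for this sketch, not requirements:

```javascript
// Create an off-screen frame buffer with a color buffer (a texture that can
// later be sampled) and a depth buffer. Returns null if the frame buffer is
// incomplete. `gl` is assumed to be a WebGLRenderingContext.
function createFrameBufferObject(gl, width, height) {
  const frameBuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);

  // Color buffer: an empty RGBA texture, 8 bits per component.
  const colorTexture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, colorTexture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, colorTexture, 0);

  // Depth buffer: a renderbuffer with 16 bits per element.
  const depthBuffer = gl.createRenderbuffer();
  gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuffer);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, width, height);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                             gl.RENDERBUFFER, depthBuffer);

  const complete =
    gl.checkFramebufferStatus(gl.FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE;
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);  // restore the default frame buffer
  return complete ? { frameBuffer, colorTexture, depthBuffer } : null;
}
```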

## Double Buffering and Canvas Updates¶

Double buffering means you have two buffers for rendering. One buffer, the off-screen frame buffer, is used to create a new rendering. This buffer is modified as you issue gl.clear() and gl.drawArrays() commands. A rendering is not created instantaneously, but rather incrementally as various pixels are set to their appropriate color. We don’t typically want a user to see this process, so it is done “off-screen”. The other, on-screen buffer holds the image that is currently visible to the user. When all rendering for an image is complete in the off-screen frame buffer, its color buffer is automatically copied to the on-screen buffer so that it is visible to the user.

WebGL calls the off-screen frame buffer the “drawing buffer”.

How does the browser know when rendering is complete? All processing on a web page is event driven. When an event happens, a JavaScript “event handler” is called to perform some action. When an event handler finishes, the browser can detect whether a new rendering exists in the “off-screen buffer”. If a new rendering exists, the browser copies it to the on-screen buffer before the next refresh of the screen. The implication is that a rendering must be performed in a single execution of an event handler. There are ways to create a rendering from the handling of multiple events, but that is not the design intent. We will assume that one render of a canvas happens during the handling of a single event.

## Glossary¶

hidden surface removal
The determination of which graphic primitives in a scene are visible from the current virtual camera.
transparent
A surface that allows light to pass through it.
opaque
A surface that reflects or absorbs all of the light that strikes it.
shadow
The area of a surface that does not receive direct light from a light source.
particle system
A model of a physical phenomenon that is composed of many small particles.
buffer
A set of contiguous memory locations that store a set of related values.
color buffer
A buffer containing color values.
depth buffer
A buffer containing “depth” (distance from the camera) values.
stencil buffer
A buffer containing values that determine which locations in the color buffer and depth buffer can be modified.