The concept of this algorithm is extremely simple and straightforward. For a given screen resolution, every pixel on the screen can represent a point on one (and only one) object, or it may be set to the background if it does not form a part of any object. That is, irrespective of the number of objects in line with the pixel, it should represent the object nearest to the viewer. The algorithm decides, for every pixel, the object whose features that pixel should represent. The depth-buffer algorithm uses two arrays, one for depth and one for intensity, each with as many elements as there are pixels. As can be expected, the corresponding elements of the arrays store the depth and intensity represented by each pixel.
The algorithm itself proceeds as follows.
Algorithm Depth Buffer:
a. For every pixel, set its depth and intensity entries to the background values, i.e. if, at the end of the algorithm, the pixel has not become a part of any object, it represents the background.
b. For each polygon in the scene, find the pixels that lie within it (which is simply the set of pixels that would be chosen if this polygon were to be displayed completely).
For each of these pixels:
i) Calculate the depth Z of the polygon at that point (note that a polygon inclined to the plane of the screen has different depths at different points).
ii) If this Z is less than the depth previously stored for the pixel, the new polygon is closer to the viewer than the polygon the pixel was representing, so the new value of Z should be stored (i.e. from now on the pixel represents the new polygon), and the corresponding intensity is stored in the intensity array.
If the new Z is greater than the previously stored value, the new polygon is at a farther distance than the earlier one and no changes need be made; the pixel continues to represent the previous polygon.
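The text does not say how the depth in step (i) is obtained; one common technique (an assumption here, not stated above) is to evaluate the polygon's plane equation Ax + By + Cz + D = 0 at each screen point. A hedged sketch, with arbitrary illustrative coefficients:

```python
def plane_depth(A, B, C, D, x, y):
    """Depth of the plane Ax + By + Cz + D = 0 at screen point (x, y).

    Requires C != 0, i.e. the polygon is not edge-on to the screen.
    """
    return -(A * x + B * y + D) / C

# An arbitrary example plane (these coefficients are illustrative).
A, B, C, D = 1.0, 2.0, 4.0, -8.0

z0 = plane_depth(A, B, C, D, 3, 3)
z1 = plane_depth(A, B, C, D, 4, 3)   # one pixel to the right

# Moving one pixel along a scan line changes z by the constant -A/C,
# so an implementation can update the depth with a single addition
# per pixel instead of re-evaluating the whole expression.
assert abs((z1 - z0) - (-A / C)) < 1e-12
```

This constant per-pixel increment is one of the geometric properties a more refined implementation could exploit.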
One may note that at the end of the processing of all the polygons, every pixel holds, in its intensity location, the intensity value of the object it should display, and this can be displayed directly.
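The steps above can be sketched in Python. This is a minimal illustration, not a full renderer: the Rect class and its pixels method are illustrative stand-ins for a real scan-converter, and axis-aligned rectangles at constant depth stand in for arbitrary polygons.

```python
BACKGROUND_DEPTH = float("inf")   # farthest possible depth
BACKGROUND_INTENSITY = 0          # value meaning "no object here"

class Rect:
    """Axis-aligned rectangle at constant depth: a toy stand-in for
    a scan-converted polygon (names here are illustrative)."""

    def __init__(self, x0, y0, x1, y1, z, intensity):
        self.x0, self.y0, self.x1, self.y1 = x0, y0, x1, y1
        self.z, self.intensity = z, intensity

    def pixels(self, width, height):
        # Enumerate the pixels this "polygon" covers, clipped to screen.
        for y in range(max(self.y0, 0), min(self.y1, height)):
            for x in range(max(self.x0, 0), min(self.x1, width)):
                yield x, y, self.z   # constant depth for a rectangle

def depth_buffer(width, height, polygons):
    # Step a: initialise every pixel to the background values.
    depth = [[BACKGROUND_DEPTH] * width for _ in range(height)]
    intensity = [[BACKGROUND_INTENSITY] * width for _ in range(height)]

    # Step b: for each polygon, visit every pixel it covers.
    for poly in polygons:
        for x, y, z in poly.pixels(width, height):
            if z < depth[y][x]:            # new polygon is nearer
                depth[y][x] = z
                intensity[y][x] = poly.intensity
            # else: farther away, the pixel keeps its old values
    return intensity
```

For example, with a near rectangle overlapping a far one, the pixels in the overlap take the near rectangle's intensity, while pixels covered by neither keep the background value.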
This simple algorithm, as can be expected, works in image space. The scene should have been properly projected and clipped before the algorithm is used.
The basic limitation of the algorithm is its computational intensiveness. On a 1024 × 1024 screen, it has to evaluate the status of each of these pixels (over a million of them) in the limiting case. In its present form, it does not use coherence or other geometric properties to reduce the computational effort.
To reduce the storage, the screen is sometimes divided into smaller regions of, say, 50 × 50 or 100 × 100 pixels; the computations are made for each of these regions, the result is displayed on the screen, and then the next region is taken up. However, this can be both advantageous and disadvantageous. It is obvious that such a division of the screen requires each polygon to be processed once for each region, thereby increasing the computational effort; this is a disadvantage. But when smaller regions are being considered, it is possible to make use of various coherence tests, thereby reducing the number of pixels to be handled explicitly.
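The region-by-region variant can be sketched as follows. As before, rectangles given as (x0, y0, x1, y1, z, intensity) tuples stand in for scan-converted polygons; the bounding-box rejection inside the tile loop is a simple example of the kind of coherence test the text mentions, and the tile size is illustrative.

```python
BG_DEPTH, BG_INTENSITY = float("inf"), 0

def render_tiled(width, height, rects, tile=50):
    """Depth-buffer rendering one tile-sized region at a time, so only
    tile*tile depth entries are live at any moment (a storage saving,
    at the cost of re-examining every polygon for every tile)."""
    image = [[BG_INTENSITY] * width for _ in range(height)]

    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            th = min(tile, height - ty)
            tw = min(tile, width - tx)
            # A fresh, small depth buffer for just this region.
            depth = [[BG_DEPTH] * tw for _ in range(th)]

            for (x0, y0, x1, y1, z, intensity) in rects:
                # Coherence test: skip polygons that miss this tile
                # entirely, so their pixels are never visited here.
                if x1 <= tx or x0 >= tx + tw or y1 <= ty or y0 >= ty + th:
                    continue
                for y in range(max(y0, ty), min(y1, ty + th)):
                    for x in range(max(x0, tx), min(x1, tx + tw)):
                        if z < depth[y - ty][x - tx]:
                            depth[y - ty][x - tx] = z
                            image[y][x] = intensity
    return image
```

Because the tiles are disjoint, the per-tile depth buffer can be discarded (and its storage reused) as soon as the tile's pixels have been written to the image.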