The purpose of this course is to give the student an introduction to modern data visualization. The course starts with an introduction to basic computer graphics. Later, visualization terminology, data representation and algorithms for geometry generation are discussed. In volume visualization, texture-based rendering techniques are often used. Here the "voxel", the three-dimensional analogue of the two-dimensional pixel, is a key building block of the volume. Where this technique applies, you may imagine the entire volume as built up of such voxels.
The goal of the course is to give the student a sufficient background to handle practical visualization problems. During the course, data from computational fluid dynamics (CFD), medical data and seismological data will be used as examples. For practical training, a case study will be given, visualizing and interpreting data from fluid/gas dynamical simulations. It is also possible for the student to bring in his/her own data as examples.
At Forsvarets Forskningsinstitutt, we have for several years used advanced visualization techniques in analyzing and interpreting data from numerical simulations. Through the use of modern visualization techniques, we have discovered important dynamical details in our data sets that would otherwise have remained undiscovered. Visualization has given us new physical insight into complex fluid mechanical processes. For scientists working with huge data sets, the use of modern visualization techniques can be compared with the astronomer's use of the telescope.
Over the last decades, the evolution of the capacity and power of computers for numerical simulations and data storage has been enormous. Before the introduction of the Cyber 205 and the Cray-1 supercomputer in 1976 (peak performance 250 MFlops at 80 MHz, price 8 M$), most simulations were one-dimensional, and the results were drawn on graph paper or printed out as a series of numbers on a line printer. If more than one data field was involved, they could be plotted on the same sheet of paper and easily compared with each other. As the computing power increased, 2-dimensional simulations became affordable.
Up to about 1990, most simulations were of a one- or two-dimensional nature, and the three-dimensional simulations carried out were of low resolution (~50x50x50 points). Physical systems where the symmetry was broken by a directional force could be studied realistically only in two- or three-dimensional numerical models. Examples of 2D problems are found in geophysical fluid dynamics where Coriolis forces occur, in fluids influenced by gravity, or in magneto-fluid dynamics with magnetic fields of very simple topology. To interpret the data from such systems, line printer output became inadequate. Two-dimensional images, color coded and contoured according to the magnitude of the physical field, were very suitable, giving a clear expression of the data. Images can easily be used for comparison of various data fields.
A turning point in the physical realism of simulations occurred around 1990. With the introduction of the Cray Y-MP (1988, 330 MFlops/processor) and the Cray-2 supercomputers, the computing power and memory capacity became sufficient for "near-realistic" 3-dimensional simulations. The interpretation of the huge datasets produced by these systems introduced a new challenge, and new techniques had to be developed. Dedicated graphics hardware, commonly used by the simulator and entertainment industries, combined with dedicated graphics software such as GL (Graphics Library) or OpenGL, was taken into use for data visualization and interpretation.
Since then, the evolution in "computer realism" has been incredible. As an example, turbulence is by nature a 3-dimensional phenomenon that cannot be adequately expressed by one- or two-dimensional models. Around 1995, with massively parallel computers like the SGI Origin series, turbulence simulations with Reynolds numbers approaching Re = 1000 could be carried out, clarifying some of the fundamental fluid dynamical questions raised many years ago. Today, simulations at moderate Reynolds numbers (~10^4) are feasible. Such simulations involve more than 1000^3 data points in each field, often with at least 4 fields in a single scene. That is more than 4 billion data points per scene, expressing only one instant in a time sequence. When interpreting and analyzing such data, several instants are needed to understand the time evolution and dynamics. The sizes of such data sets are enormous, involving terabytes of data, and it is necessary to utilize advanced techniques like volume visualization and animation for analysis and interpretation purposes. Such techniques are presented and discussed during this course.
By the end of the year 2008, the most powerful computers are able to perform close to 500 teraflops (500x10^12 floating point operations per second); we are approaching the milestone of 1 petaflops. They have a storage space (RAM) of about 100 terabytes, that is, 12.5x10^12 floating point numbers kept in distributed main memory. To solve real problems without modeling, for example in aerodynamics, the data storage space has to be far greater than this. The computer industry is planning systems that are 100-1000 times larger than current supercomputers. With such systems, reliable numerical simulations of the aerodynamics of a jet plane or the detailed fluid dynamics of a submarine can be done with fairly good accuracy. There is a huge potential for industrial applications. For direct numerical simulations, though, solving the Navier-Stokes equations, much larger systems are needed. In 2008, using the largest computers on the planet, we can reach perhaps Re~5x10^5 in a direct numerical simulation. For comparison, a simulation of the fluid field surrounding a jumbo jet or an average submarine at cruising speed requires that all scales from a fraction of 1 mm up to 100 m are resolved (not using turbulence models, which are all inaccurate). The Reynolds numbers involved are typically 10^8. The number of degrees of freedom in a direct numerical simulation grows like N~Re^(9/4), which requires computers that are 10^7 times larger than current supercomputers. If the evolution continues at the same pace as today, such systems may be available within perhaps 30 years. Then we can perhaps solve engineering problems like the aerodynamics of aircraft without the need for turbulence models. Still, there are many important problems that are far out of reach even for those computers to solve accurately, for example long term weather prediction and climate modeling, or problems related to chemistry, bio-informatics and the engineering of new drugs.
Parallel to the evolution of computing hardware, algorithms and software, there should be a matching development of hardware, algorithms and software for visualization and interpretation of huge digital data sets. Due to the game industry, there has recently been a tremendous boost in the functionality and power of dedicated graphics boards, and there is no indication that this will not continue. Parallel to the development of graphics hardware, high level software is also being developed, making the tremendous graphics power available to developers and users. This is utilized by the visualization community as well.
There has also been a huge evolution in a variety of imaging sensors, which has led to an increasing demand for more efficient and powerful tools for visualization and interpretation of observed data. There has been a revolution in the field of medical imaging: MRI (magnetic resonance imaging), CT (computed tomography) and PET (positron emission tomography) scanners play a key role as diagnostic tools for a variety of diseases. In the field of oil and gas exploration, the seismic data sets are of a 3-dimensional nature, and modern visualization techniques are used for interpretation and analysis. This has been of great national benefit for Norway. Military intelligence uses equipment comparable to that utilized for oil exploration for submarine hunting.
Several other areas could also be mentioned, but during this course, we will limit our discussions to the topics mentioned above.
The key goal of scientific visualization is to "turn" numerical data into a visual form that enables us to reveal relevant information about the data. When data are intelligently "converted" to visual information, the human visual system, eyes and brain, has a tremendously powerful ability to identify complex objects and simultaneously compress and extract key information. When the visual information is presented in a clever form, the key information is retrieved almost instantly. This should be kept in mind when we design, develop and use visualization systems, including hardware and software.
When dealing with data visualization, it is an advantage to utilize dedicated computer graphics hardware and software to generate adequate visual information. There are several aspects that should be considered to achieve effective visual information. We will consider some of them below. Most important is perhaps:
High degree of interactivity and fast response, that is, being able to quickly translate, rotate and zoom the scene. This is important for teaching our brain the spatial relations between the objects comprising the scene. If the particular scene is too complicated, or the transformations are carried out too slowly, our visual memory, which is very short-lived, will forget what was displayed before the next image is rendered. In that situation we lose track of the information. A high degree of interactivity and fast system response is crucial to the visualization process. In addition to the ability to translate, zoom and rotate the scene in real time, an almost instant response when moving cut planes around is also very useful. Fast responses when changing given parameters, like altering light sources and colors, are beneficial. Within voxel graphics, instant response to changes in color and opacity is of great importance for the interpretation of large datasets.
Proper and careful use of colors can greatly enhance the visual expression of a scene. When dealing with colors, it may be of help to know something about the physiology of vision. There are visual receptors with sensitivity to red, green and blue light, and every color can be generated by a certain mixture of these three primary colors. For example, it can be difficult or even unpleasant to view or focus on a picture containing objects with highly saturated red and blue colors; this effect can be utilized to separate objects from each other. A more pleasant impression is given by colors of lower saturation that are closer to each other in the spectrum, like red and yellow. In many cases the use of gray tones will be a good choice. We will continue this discussion in a later chapter.
Proper use of color and opacity is of great importance in voxel visualization. As we will see later, it is difficult to implement light and reflection models in voxel visualization; the voxels appear as a glowing continuum of data, emitting their own light. By careful use of colors and opacity, the "halo" effect, or "limb darkening/limb brightening" as it is called by astronomers studying the sun and outer planets, can be implemented simply through proper use of the color and opacity tables. We will continue this discussion in later chapters.
To reach the goal of a powerful visualization system dealing smoothly with large datasets, a computer system with high data throughput is needed. Presently, microprocessors are very powerful, but not sufficiently fast to give the necessary interactivity. During the last 10-15 years, dedicated graphics hardware has been developed. This is feasible since computer graphics consists of a relatively small number of simple operations that are conducted repeatedly. The graphical information consists, among other things, of polygon vertex data, color tables and images/textures. The data is pipelined in real time through the graphics processor, which is designed as a pipeline with different processing functionality at different stages. In the lower part of the pipe, the information is rasterized to fit a video or display system such as a computer screen.
Graphics software is designed together with the hardware for optimal utilization and speed. Lately, the graphics library OpenGL (Open Graphics Library, first developed by Silicon Graphics Inc.) has gained the role of leading standard for basic graphics programming. OpenGL is a powerful software library, utilizing dedicated graphics hardware to give high graphics performance. From OpenGL most of the graphics hardware functionality can be accessed and utilized. Several high level graphics software systems like Open Inventor are based on OpenGL. For data visualization, OpenGL can be used, but it is a low level library, requiring a lot of knowledge at a very basic level. As with assembly programming, everything can in principle be done using OpenGL, but it is a rather tedious process to create a high level application with it. During this course, we will just mention it and give a very simple example. There are high level object oriented systems like Open Inventor that are better suited for making graphics applications. For visualization, there are custom designed high level APIs and libraries that should be used; we can mention systems like vtk, AVS and NAG Explorer. Some of these will be discussed later.
These examples show voxel renderings of: 1) Simulated temperature (blue and red) and enstrophy (squared vorticity, green) of stellar convection close to the photospheric level. 2) Data from seismic exploration of the sea bottom. 3) Computed tomography (CT) data showing a lung. 4) Magnetic resonance imaging (MRI) data of a brain tumor. 5) Simulated data of a cylindrical shock wave hitting a perpendicularly oriented cylinder containing matter of low density compared to the surroundings.
In the process of mapping a 3D scene onto the 2D computer screen, different coordinate systems, transformations between these systems and projections that map the objects onto a 2D display are involved. Typical coordinate systems involved are the local (modeling) coordinates in which each object is defined, the world coordinates of the assembled scene, the eye (camera) coordinates, and finally the normalized and physical device coordinates of the display.
Between the coordinate systems are the following transforms: the modeling transform (local to world coordinates), the viewing transform (world to eye coordinates), the projection (eye to normalized device coordinates) and the viewport mapping (normalized to physical device coordinates).
The coordinates and transforms involved, from 3D objects modeled in local coordinate systems to the 2D device coordinates defining the output picture on the computer screen, are shown in the figure below:
In addition to the positions of the geometrical objects, the light and camera positions are defined. How a scene is seen from the camera depends on these transformations. A projection and a clipping transformation are applied to generate a 2D picture of the scene that can be mapped to the computer screen.
We will briefly discuss some of the transformations involved. The basic operations are translation by a vector T, scaling by a matrix S and rotation by a matrix R. Their 2D representations are given below.
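In standard textbook form (cf. Foley et al.), a point P = (x, y)^T transforms as:

$$P' = P + T, \qquad T = \begin{pmatrix} t_x \\ t_y \end{pmatrix}$$

$$P' = S\,P, \qquad S = \begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix}$$

$$P' = R\,P, \qquad R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$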
To obtain a single matrix operation for all three transformations, a 3-component homogeneous vector formulation is used, with 3x3 matrix notation for the 2D operations. Note that the operations generally do not commute. Using this formulation, translation, scaling and rotation can be expressed as:
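In homogeneous coordinates, with the point written P = (x, y, 1)^T, the standard matrices are:

$$T(t_x,t_y)=\begin{pmatrix}1&0&t_x\\0&1&t_y\\0&0&1\end{pmatrix},\qquad S(s_x,s_y)=\begin{pmatrix}s_x&0&0\\0&s_y&0\\0&0&1\end{pmatrix},\qquad R(\theta)=\begin{pmatrix}\cos\theta&-\sin\theta&0\\\sin\theta&\cos\theta&0\\0&0&1\end{pmatrix}$$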
When executed on ordinary CPUs, the 3x3 matrix formulation given above offers no advantage; in fact, the number of arithmetic operations is higher than for the 2x2 matrix plus vector add formulation. The advantage of the 3x3 formulation becomes evident when custom designed graphics hardware is used. In such hardware the arithmetic operations are pipelined in multiple, parallel pipes. Each pipe may perform the multiply-add operations used in the 3x3 matrix formulation, and a certain number of polygon vertices are processed in parallel. Typically, in computer graphics and visualization, polygons with a huge number of vertices are present. Thanks to the custom built pipelined/parallel graphics processors, operations like clipping, pan, zoom and rotation can be performed in real time. Dedicated hardware for computer graphics as utilized in visualization is treated in a later section. For some of the more powerful systems, the arithmetic capacity is in the supercomputer range. In computer graphics, the same operations are repeated again and again, often on millions of data points, so it is beneficial to use designs dedicated to just this particular problem.
In contrast, in high performance computing (HPC) a great variety of problems are encountered, so it is hard to predict which hardware design is best. Also, in computer graphics the "visual accuracy" required is low, 24-32 bits or less, compared to the needs of HPC, where 64-128 bits are used, which makes the hardware more complicated. The graphics pipeline referred to in this section is often called the geometry engine. It is only a part of the complete graphics pipeline; we will discuss this in greater detail later.
In 2D, a rotation of an object about a point P=(x1,y1) can be composed by first translating the object to the origin with T(-x1,-y1), then performing the rotation, and finally translating the object back with T(x1,y1), as shown by the expression below. This can be performed as a single matrix operation after multiplying the three matrices together analytically.
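Written out, the composition and its analytic product are:

$$P' = T(x_1,y_1)\,R(\theta)\,T(-x_1,-y_1)\,P = \begin{pmatrix} \cos\theta & -\sin\theta & x_1(1-\cos\theta)+y_1\sin\theta \\ \sin\theta & \cos\theta & y_1(1-\cos\theta)-x_1\sin\theta \\ 0 & 0 & 1 \end{pmatrix} P$$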
In 2D, the general composition of rotation, scaling and translation can be represented by a matrix of the form:
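$$M = \begin{pmatrix} r_{11} & r_{12} & t_x \\ r_{21} & r_{22} & t_y \\ 0 & 0 & 1 \end{pmatrix}$$

Here the upper left 2x2 submatrix carries the combined rotation and scaling, and the last column carries the translation.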
This is easily extended to 3D space. In 3D, scaling is a 4x4 diagonal matrix S(sx,sy,sz). Rotations can be composed from rotations about the x, y and z axes, Rx, Ry and Rz, respectively. In a left handed coordinate system with the x-y plane in the display screen, the rotation matrices can be written in the form:
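The standard forms are given below in the common right handed convention; the sign of the sine terms depends on the handedness of the system and on which rotation direction is taken as positive.

$$R_x(\theta)=\begin{pmatrix}1&0&0&0\\0&\cos\theta&-\sin\theta&0\\0&\sin\theta&\cos\theta&0\\0&0&0&1\end{pmatrix},\quad R_y(\theta)=\begin{pmatrix}\cos\theta&0&\sin\theta&0\\0&1&0&0\\-\sin\theta&0&\cos\theta&0\\0&0&0&1\end{pmatrix},\quad R_z(\theta)=\begin{pmatrix}\cos\theta&-\sin\theta&0&0\\\sin\theta&\cos\theta&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}$$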
Generally, in 3D, scaling S(sx,sy,sz), rotations Rz, Rx and Ry, and translation T(tx,ty,tz) can be combined into a single 4x4 matrix:
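As a minimal sketch (the composition order below is one common choice, not prescribed by the text), the 4x4 matrices can be built and combined with NumPy:

```python
import numpy as np

def translate(tx, ty, tz):
    """Homogeneous 4x4 translation matrix T(tx, ty, tz)."""
    M = np.eye(4)
    M[:3, 3] = [tx, ty, tz]
    return M

def scale(sx, sy, sz):
    """Homogeneous 4x4 scaling matrix S(sx, sy, sz)."""
    return np.diag([sx, sy, sz, 1.0])

def rotate_z(theta):
    """Homogeneous 4x4 rotation about the z-axis (right handed convention)."""
    c, s = np.cos(theta), np.sin(theta)
    M = np.eye(4)
    M[0, 0], M[0, 1] = c, -s
    M[1, 0], M[1, 1] = s, c
    return M

# Scale first, then rotate, then translate (the matrices act right to left).
M = translate(1.0, 2.0, 0.0) @ rotate_z(np.pi / 4) @ scale(2.0, 2.0, 2.0)

P = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous point (x, y, z, 1)
print(M @ P)
```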
In 3D viewing, a view volume is specified. Objects in the 3D world are clipped against the 3D view volume and are then projected. The contents of the projection of the view volume onto the projection plane are mapped into the viewport for display.
Generally, projections transform points in an n-dimensional space into a space of dimension less than n. We shall limit ourselves to projecting points in 3D space onto a 2D plane. The projection of a 3D object is defined by straight projection rays emanating from a center of projection, passing through each point of the object and intersecting a projection plane to form the projection, as shown in the figure below.
On the left is shown a perspective projection of the line AB onto A'B'; on the right is shown a parallel projection, where the projectors AA' and BB' are parallel.
The perspective projection of any set of parallel lines that are not parallel to the projection plane converges to a vanishing point, as shown in the figure below. In 3D, parallel lines do not meet each other within a finite distance; their vanishing point can be thought of as the projection of a point at infinity. There is a vanishing point for each of the infinitely many directions in which lines can be oriented.
Perspective projections are categorized by their number of principal vanishing points and thereby by the number of axes the projection plane cuts. There can be one, two or three principal vanishing points. Most commonly, one or two vanishing points are used; the use of three vanishing points is considered to add little realism beyond the two-point perspective.
A one-point perspective projection of a cube onto a plane cutting the z-axis. It is clear that this is a one-point projection because the lines parallel to the x and y axes do not converge; only lines parallel to the z-axis do.
There are a variety of planar geometric projections, the more important ones can be classified as shown in the figure below:
The mathematics of planar geometric projections: for simplicity, let us first assume that the plane of projection is perpendicular to the z-axis, crossing it at the point z=d. The projection can be defined by a 4x4 matrix. Given a point P(x,y,z), it is projected into the point P'(x',y',z') at z'=d, as shown in the figure below.
From the figure it is evident that x'/d = x/z and y'/d = y/z, which can be written x' = x/(z/d) and y' = y/(z/d). The transformation can be expressed using the 4x4 matrix given below:
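$$M_{per}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&1/d&0\end{pmatrix},\qquad \begin{pmatrix}X\\Y\\Z\\W\end{pmatrix}=M_{per}\begin{pmatrix}x\\y\\z\\1\end{pmatrix}$$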
This yields (X,Y,Z,W) = (x,y,z,z/d) transposed. Dividing by W then gives the projected coordinates in 3D space: (X/W, Y/W, Z/W) = (x',y',z') = (x/(z/d), y/(z/d), d), as obtained above.
Alternatively, by moving the projection plane to the origin and the center of projection to z = -d, the following results are obtained: x' = x/((z/d) + 1), y' = y/((z/d) + 1). In this case, find the perspective matrix! Letting d -> infinity gives a parallel projection.
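A minimal numerical sketch of the projection derived above, with an illustrative choice of d and of the point:

```python
import numpy as np

def perspective_matrix(d):
    """4x4 projection onto the plane z = d with the center of projection
    at the origin, as derived above."""
    M = np.eye(4)
    M[3, 3] = 0.0
    M[3, 2] = 1.0 / d
    return M

d = 2.0
P = np.array([4.0, 2.0, 8.0, 1.0])      # homogeneous point (x, y, z, 1)
X, Y, Z, W = perspective_matrix(d) @ P
print(X / W, Y / W, Z / W)              # 1.0 0.5 2.0 = x/(z/d), y/(z/d), d
```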
This is sufficient background material on transformations and projections for visualization. If you are interested in more detail, see Foley et al.
To obtain a realistic view of a scene containing objects with material surfaces, these objects must be rendered in such a way that the reflection characteristics of the materials are taken into account. A good reflection model can give a visible impression of surfaces made of rubber, metal, glass, etc. The basis for reflection is light. In computer graphics, two kinds of light sources are commonly used. One is a point source located at infinity, where the intensity is constant with distance and the light rays are parallel. The other is a local light source, where the intensity varies as 1/d^2, d being the distance between the light source and the object.
Three reflection models are commonly used, and all three are often combined in the same scene to give a realistic visual impression. Ambient light or reflection models the background light; its cause can be unspecified, multiple reflections. Diffuse reflection scatters light isotropically in all directions, like cardboard or chalk. Specular reflection is used to render shiny objects like steel balls.
Ambient reflection: Let an object have a color given by the vector Oc = (R1,G1,B1) (see the chapter on color models). Let the "intensity" of the homogeneous light from the surroundings be given by Lc = (R2,G2,B2). The resulting light has an intensity given by the componentwise product Rc = Oc*Lc = (R1R2, G1G2, B1B2).
Diffuse reflection: This form of reflection is also called Lambertian reflection. Every point on the surface has the same intensity, independent of viewing angle. This property can be explained as follows. Let Ln be the direction of the light and On the outward unit normal of the object's surface. The area lighted by a light beam of fixed cross-section is inversely proportional to cos(θ), where θ is the angle between -Ln and On.
Lambert's law states that the intensity of light reflected from a surface is proportional to cos(θ). Since these two effects cancel each other, the resulting light is constant, independent of viewing angle. Such surfaces are called Lambertian surfaces; examples are cardboard and chalk.
The expression for the Lambertian reflection is Rc=Lc*Oc(-Ln.On).
Specular reflection is used to render shiny surfaces. If the surface is smooth, the angle of incidence equals the angle of reflection. Let S denote the direction of the reflected ray; it lies in the plane of Ln and On, as shown in the figure below. If the eye is not in the direction of reflection, we will not see this reflected ray. Let the camera point in the direction Cn. In the real world, the highlight is not a point-shaped high intensity area; it is somewhat diffused around its center. The smaller the diffuse spot, the stronger the illusion of a surface of high smoothness and reflectivity. The mathematical expression for the reflected light is given by
Rc = Lc*Oc(-Cn.S)^k, where
S= 2(-Ln.On)On + Ln
and k is a positive constant called the specular exponent. The larger k is, the more concentrated the reflection. For even more realistic visual modeling, the specular reflection should vary with the viewing angle: glass has no specular reflection when the viewing angle is small, while the specularity is large when viewed perpendicularly.
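The three reflection terms can be combined in a few lines of code. The following is a minimal sketch following the formulas above; the weights of the three terms and all the vectors are illustrative values, not taken from the text:

```python
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def shade(Oc, Lc, On, Ln, Cn, k):
    """Ambient + diffuse + specular reflection with the text's conventions:
    Oc, Lc -- object and light color (R,G,B); On -- unit surface normal;
    Ln -- unit direction of the incoming light; Cn -- unit camera direction;
    k -- specular exponent."""
    Oc, Lc = np.asarray(Oc, float), np.asarray(Lc, float)
    ambient = Lc * Oc                                   # Rc = Lc*Oc
    diffuse = Lc * Oc * max(np.dot(-Ln, On), 0.0)       # Rc = Lc*Oc(-Ln.On)
    S = 2.0 * np.dot(-Ln, On) * On + Ln                 # mirror direction
    specular = Lc * Oc * max(np.dot(-Cn, S), 0.0) ** k  # Rc = Lc*Oc(-Cn.S)^k
    return 0.1 * ambient + 0.6 * diffuse + 0.3 * specular  # illustrative weights

On = normalize([0.0, 0.0, 1.0])    # surface normal
Ln = normalize([0.0, -1.0, -1.0])  # incoming light direction
Cn = normalize([0.0, 1.0, -1.0])   # camera pointing into the mirror direction
print(shade([1.0, 0.0, 0.0], [1.0, 1.0, 1.0], On, Ln, Cn, k=20))
```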
By using modern computer graphics hardware, including textures, very realistic rendering can be obtained. Below follows an example using Matlab to demonstrate the effect of varying the ambient, diffuse and specular reflection.
The process that starts with a collection of 3D objects and finally produces a 2D picture on the computer screen is called rendering. The word rendering is frequently used in computer graphics; among its many meanings are: to translate, to give in return, to make clear, to give a performance. In computer graphics, the renderer is the system of software, supported by hardware, that makes the 3D scene visible on a 2D device like the computer screen. In data visualization, the rendering process is just one of many processes: first the data must be converted into some visual form before the rendering process can be executed. A variety of different techniques can be used in the rendering process.
Ray-tracing is one example. The degree of realism is high in ray-tracing, but the process is very compute intensive, and it is not trivial to implement on dedicated hardware. On most systems, ray-tracing is a slow process, used when a high degree of interactivity is not needed and the priority is a high degree of realism. Ray-tracing is not commonly used in data visualization. In the ray-tracing process, a ray is traced from each pixel on the screen into the 3D scene, hitting objects on its way to the light source. The result is a recursive algorithm with a tree structure. It is very well suited to model specular reflection and refraction of light; local light sources are needed to bring in diffuse reflections. The figure below shows the path of a single ray emerging from a specific pixel.
Radiosity is another rendering algorithm. It gives a good description of ambient and diffuse reflection. Its analogy with radiative transfer is striking, taking into account the conservation of energy of light reflected and scattered by objects. In this algorithm, the radiosity Bi is the energy per unit time and unit area of a surface patch i (the surfaces are divided into a number of distinct patches). Ei is the emitted energy per unit time and unit area from the surface at patch i. Ri is the diffuse reflection (and scattering) coefficient, redistributing the energy through reflection and scattering from remote sources. Fij is the fraction of radiated energy from a remote patch j hitting patch i. Ai is the area of patch i of the given surface A. The emerging energy can be computed using the following expression:
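With the definitions above, the classical radiosity equation takes the form (the exact notation varies between texts):

$$B_i\,A_i = E_i\,A_i + R_i \sum_j B_j\,F_{ij}\,A_j$$

Dividing by $A_i$ gives the radiosity $B_i$ of patch $i$ itself.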
Neither ray-tracing nor radiosity is well suited to utilize dedicated graphics hardware, so they are not suited for real time rendering. Ray-tracing and radiosity can be combined to yield very realistic renderings. For further information about these topics, see Manchester Visualization Center: Lighting and Shading.
Direct rendering. Most visualization is done through the use of geometric primitives like points, lines and polygons, the latter with surface attributes. This is adequate for applications used by architects, for terrain visualization, or in 3D games. In data visualization where contiguous data fill a finite 3D space (such data are called volumetric), points, lines and polygons are often inadequate. In the volumetric description, 3D pixels called voxels can be used as building bricks representing the entire volume. In voxel based visualization, a front-to-back or back-to-front rendering is used. This is often called direct rendering and will be discussed in detail in a later section. The technique is suited for very efficient utilization of texture hardware, but is limited by the requirement that data must be on a regular Cartesian grid. Some vendors have developed APIs that use tetrahedra as building bricks instead of voxels; then data given on more complex geometries can be visualized through the texture hardware.
Rasterization: After a number of transformations, the model is projected into 2D space, and the final task is to map it to the screen. Typically the model is made up of polygons that can be generated from simple triangles. Each corner is represented by a coordinate (x,y), and a color value is expressed by a vector (R,G,B). In the process called rasterization, also known as scan-conversion, the geometry is converted into a raster image. This process is made up of two parts:
1) Color: To give each pixel within the polygon an (R,G,B) value, and
2) Reflection: To compute the reflection from the surface. Both processes are based on interpolation from values defined at the polygon corners, also called vertices.
For the computation of colors, the pixels along the edges get values interpolated from the vertex values, and the internal pixels then get their values by "horizontal" interpolation between the edge values along each scan line. For 2), reflections, there are three possibilities. In each vertex, a local surface normal is used. The vertex normals do not necessarily point in the same direction as the triangle normal; this implies that polygon based surfaces can be rendered with a reflection model that approximates a smoother surface. The possibilities are (contrasted in the sketch after this list):
1) Flat shading. Here the local surface (triangle) normal is used.
2) Gouraud shading, also called intensity interpolation shading or color interpolation shading. In this case average vertex normals are obtained from the normals of all polygonal facets sharing each vertex. The vertex intensities are determined by using the vertex normals with any desired illumination model. Finally the shading is done by linear interpolation of the vertex intensities along each edge, and then between edges along each scan line; see the figure below.
3) Phong shading, also known as normal-vector interpolation shading. Here surface normal vectors are interpolated rather than intensities (a vector interpolation rather than the scalar interpolation used in Gouraud shading). Interpolation occurs across a polygon span on a scan line, between the starting and ending normals of the span. These normals are themselves interpolated along polygon edges from vertex normals that are computed, if necessary, just as in Gouraud shading. The interpolation along edges is done with all three components of the normal vector, scan line by scan line. Each normal is used together with the illumination model to compute the intensity. Phong shading is the most compute intensive, but gives the best result compared to flat and Gouraud shading.
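To make the difference concrete, the following small sketch (with illustrative normals and a simple Lambertian model) shades a single scan-line span both ways: Gouraud interpolates the endpoint intensities and misses the highlight in the middle of the span, while Phong interpolates the normals and catches it.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def lambert(n, light):
    """Simple Lambertian intensity for a unit normal n."""
    return max(np.dot(n, light), 0.0)

light = normalize(np.array([0.0, 0.0, 1.0]))
n0 = normalize(np.array([0.0, 0.8, 0.6]))    # normal at one end of the span
n1 = normalize(np.array([0.0, -0.8, 0.6]))   # normal at the other end

for t in np.linspace(0.0, 1.0, 5):
    # Gouraud: interpolate the intensities computed at the endpoints.
    gouraud = (1 - t) * lambert(n0, light) + t * lambert(n1, light)
    # Phong: interpolate the normals, renormalize, then shade.
    phong = lambert(normalize((1 - t) * n0 + t * n1), light)
    print(f"t={t:.2f}  Gouraud={gouraud:.3f}  Phong={phong:.3f}")
```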
The following figure shows flat, Gouraud and Phong shading applied to a sphere composed of a finite number of polygons.
The use of colors is important in imaging. In visualization, colors should be used to emphasize the relevant information, the information you want to communicate. A particular selection of colors can be used to communicate the behavior of a physical field like the temperature, and contrast can be used to make differences visible. Color is a sophisticated instrument in visualization, although black and white is often used with good results for illustration purposes when the use of high quality color plates is considered too expensive. To master the use of colors, practical experience is necessary. Before we start experimenting, some very basic physics of light should be mentioned.
We have light sources that emit light of a certain quality. The light is transferred through an optical medium that can emit, absorb and scatter/reflect the light. We have objects that absorb, transmit, emit and reflect light. The objects are defined in our data domain and the eye finally receives the resulting light.
An example taken from computer graphics is the creation of effects by the use of fog. The visual effect of fog is to occlude objects far away; the fog does not help us improve the visualization process. In visualization, the goal is to map, through different techniques, the information about a physical field to a screen such that a particular value of the field is represented as uniquely to the observer as possible. In these, the simplest, cases, emission and scattering of light are neglected outside the physical domain where our data are located.
Physically, visible light consists of electromagnetic waves with wavelengths between 400 and 700 nanometers. The light that reaches the eye has a characteristic spectrum (the intensity depends on the wavelength). Blue light has most of its intensity in the short wavelength part of the spectrum, while red light has the bulk of its intensity in the long wavelength part. The eye has three types of receptors, each with a different sensitivity profile as shown in the figure. The maximum sensitivities lie in the red, green and blue (R,G,B) regions of the spectrum (red, green and blue are called primary colors). It is therefore possible to visually represent any color by adding certain "portions" of red, green and blue light. This is utilized in color monitors, where the pixels are made up of the three colors red, green and blue.
There are several color models. The most intuitive is the RGB model used in color raster graphics systems, which employs a Cartesian coordinate system. The YIQ model is used in broadcast TV color systems (Y is not yellow but luminance, the only component visible on black and white televisions; the chromaticity is encoded in I and Q), and CMY (cyan, magenta, yellow) is used for some color printing devices. In voxel visualization, the best control over the visual expression is obtained through the HSV color system described later. The RGB primaries are additive primaries, that is, the individual contributions of each primary are added together to yield the result, as suggested in figure-1.1. The subset of interest is the unit cube shown in figure-1.2.
Figure-1.1: Additive colors. Adding red, green and blue forms white. Blue and red form magenta, red and green form yellow, blue and green form cyan.
Figure-1.2: The RGB cube. The back planes of the cube are colored. The grays lie on the diagonal connecting (0,0,0) and (1,1,1), black and white. Looking through the front walls, the back walls and their colors are visible in the figure.
Figure-1.3: The facing planes of the RGB cube projected into a plane perpendicular to the gray diagonal (0,0,0) - (1,1,1).
The CMY model: CMY is the abbreviation for the colors Cyan, Magenta and Yellow, which are the complementary colors to Red, Green and Blue. Complementary colors are known to give white/gray when added. Draw a line through the center of figure-1.3 out to each rim; the complementary colors are located at opposite rims on that line. Contrary to RGB, where colors are added to black, the CMY model is subtractive, that is, color is removed from white light, as when applying filters. There is a mathematical reason for the name complementary colors: we can simply write the vector identity (C,M,Y) = (1,1,1) - (R,G,B). White is (1,1,1) in the RGB system, while white is (0,0,0) in the CMY system.
Figure-1.4: Subtractive colors. Magenta and yellow subtracted from white form red; yellow and cyan subtracted from white form green... The CMY model is used in color printers, where thin sheets of transparent colored material (CMY, or CMYK where K is a black ribbon) are added to white paper.
As kids we experienced that mixing paint, for example cyan and yellow on a sheet of paper, gives green. According to the subtractive model, see figure-1.4, Cyan is formed from White by subtracting Red: (1,1,1) - (1,0,0) = (0,1,1), and Yellow is formed from White by subtracting Blue: (1,1,1) - (0,0,1) = (1,1,0). The result of the filters Yellow and Cyan together is equivalent to removing Red and Blue from White: (1,1,1) - (1,0,0) - (0,0,1) = (0,1,0), which is Green in the RGB system; a little complicated, though. On the other hand, in the CMY model the effect of applying the two filters Cyan and Yellow is: (C,M,Y) = (1,0,0) + (0,0,1) = (1,0,1), that is, green in the CMY system. In the RGB system this yields (R,G,B) = (1,1,1) - (1,0,1) = (0,1,0), which is Green.
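The same arithmetic in a few lines of code (a small sketch; the clamping to 1 for stacked filters is an assumption):

```python
def rgb_to_cmy(color):
    """(C,M,Y) = (1,1,1) - (R,G,B); the function is its own inverse."""
    return tuple(1.0 - c for c in color)

def apply_filters(*filters):
    """Stack subtractive CMY filters: the removed amounts add up (clamped to 1)."""
    return tuple(min(sum(f[i] for f in filters), 1.0) for i in range(3))

cyan, yellow = (1, 0, 0), (0, 0, 1)        # the filters, given in CMY
green_cmy = apply_filters(cyan, yellow)     # (1, 0, 1) in CMY
print(green_cmy, rgb_to_cmy(green_cmy))     # (1, 0, 1) and (0.0, 1.0, 0.0) = green in RGB
```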
Experience has taught us that Smith's HSV color model gives the best control over colors in visualization. HSV stands for Hue, Saturation and Value. In this case a cylindrical coordinate system is useful: imagine a hexcone as in figure-1.5. The color or Hue is given by the azimuthal angle relative to a fixed axis in a plane perpendicular to the Value axis V. At the lower vertex of the cone, V=0; this is where black is. It is a singular point in the sense that it gives black regardless of H and S, and when transforming from HSV to RGB this point may cause some problems. At V=1 the light intensity, or brightness, is at its maximum. The radial distance from the V axis defines the color Saturation S. On the axis the saturation is zero (S=0); here are the grays, from black at the hexcone vertex to white at the other end. Going radially out from the cone axis, the saturation increases up to its maximum value at the widest surface of the hexcone. Here the colors are at their purest.
Figure-1.5: Single-hexcone HSV color model. The plane V=1 is the projection of the RGB color cube viewed along the principal diagonal. The left triangle is a cut through the hexcone passing through the points (180, 0, 1), (180, 1, 1) and (180, 0, 0), that is, white, cyan and black.
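The standard hexcone conversion from HSV to RGB (Smith's algorithm, as given in e.g. Foley et al.) can be written as a short function; note how V=0 gives black and S=0 gives the grays, regardless of H:

```python
def hsv_to_rgb(h, s, v):
    """Hexcone HSV -> RGB; h in degrees [0, 360), s and v in [0, 1]."""
    if s == 0.0:
        return (v, v, v)           # on the gray axis the hue is irrelevant
    h = (h % 360.0) / 60.0         # which of the six hexcone sectors
    i = int(h)                     # integer sector number 0..5
    f = h - i                      # position within the sector
    p = v * (1.0 - s)
    q = v * (1.0 - s * f)
    t = v * (1.0 - s * (1.0 - f))
    return [(v, t, p), (q, v, p), (p, v, t),
            (p, q, v), (t, p, v), (v, p, q)][i]

print(hsv_to_rgb(180.0, 1.0, 1.0))  # (0.0, 1.0, 1.0) = cyan
print(hsv_to_rgb(180.0, 0.0, 0.0))  # (0.0, 0.0, 0.0) = black, the singular apex
```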
NCS (natural color system) is used by manufacturers of paint and by architects. It is practical to know how this system works the next time you are painting your home. A code looks like 1040-R90B. Here the first number, 10, indicates blackness, and the next number, 40, color saturation. R means red, B means blue, so the hue is a mixture of red and blue; how much of each is indicated by the number 90, which here means 90% blue and the remaining 10% red. Not very logical, perhaps, but possible to learn, and maybe useful. What colors do you get in the cases 1040-R90B, 1070-G50Y, 3030-Y40G?
It is difficult to make a color map that efficiently communicates your message or ideas. The proper use of colors is not a theoretical topic where you can read all you need in a book; it is a practical topic, where experimenting and training are the keys to success. Anyway, we can give some good advice, some of it taken from Keller & Keller 19xx.
J. D. Foley, A. van Dam, S. K. Feiner and J. F. Hughes: Computer Graphics, Principles and Practice, 2nd edition, Addison-Wesley, 1990.