
Title: Color Formation System
Course: Remote Sensing
Institution: Red Rocks Community College




COLOR FORMATION SYSTEM

Several theories have been proposed to explain color vision. According to Young's theory, the retina contains three types of cones, each holding a light-sensitive substance that responds to a different range of wavelengths: one type is stimulated by red light, another by green, and a third by blue/violet. Yellow is perceived when the red- and green-sensitive elements are stimulated in equal measure and the blue-sensitive elements hardly at all. When the cones sensitive to red, green, or blue are strongly excited, we see red, green, or blue, respectively. The peripheral part of the retina, which contains only rods and no cones, is insensitive to color and conveys only impressions of white, gray, and black. The central part of the retina is responsible for color vision, because most of the cones are concentrated there.

The process of forming colors by combining the primary colors blue (B), green (G), and red (R) is called the additive process of color formation. The colors resulting from this process are called secondary colors: yellow (Y), magenta (M), and cyan (C), produced respectively by the addition of green and red, of blue and red, and of blue and green, as can be seen in Fig. 6. White results from the addition of all three primary colors.

Figure 6 - Color formation processes
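The additive mixing rules described above can be sketched in a few lines of code. This is only an illustration: the primaries are represented as (R, G, B) intensity triples in the conventional 0-255 range, and the `add_colors` helper is an assumption of this sketch, not something from the text.

```python
# Additive mixing of the primary colors, represented as (R, G, B)
# intensity triples in the 0-255 range. Channel values are summed and
# capped at 255, so equal amounts of two primaries give a secondary color.

def add_colors(*colors):
    """Additively mix RGB triples, clipping each channel at 255."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

RED   = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE  = (0, 0, 255)

yellow  = add_colors(GREEN, RED)        # green + red
magenta = add_colors(BLUE, RED)         # blue + red
cyan    = add_colors(BLUE, GREEN)       # blue + green
white   = add_colors(RED, GREEN, BLUE)  # all three primaries
```

Each secondary color comes out with two channels at full intensity and one at zero, which is exactly the pattern shown in Fig. 6.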

In the subtractive process, the secondary colors are used: yellow, magenta, and cyan. The overlap of yellow with magenta produces red; the overlaps of cyan with magenta and of cyan with yellow produce blue and green, respectively. Black results from the overlap of cyan, yellow, and magenta.

INFORMATION EXTRACTION TECHNIQUES

In general, work that uses aerial photographs or satellite images for survey, monitoring, or mapping purposes, in any area of knowledge, should follow these steps: definition of objectives, choice of the study area, acquisition of products (images or aerial photographs), choice of the information-extraction technique (visual interpretation and/or image classification), fieldwork, validation of the mapping, and reporting.

Across the different types of application (geomorphology, geology, pedology, vegetation, agriculture, and land use), one begins by choosing the scale at which to work, which depends on the required accuracy of the results and on the objectives of the research (Table 1). Next, the band or set of bands is defined (if the work uses satellite images), according to the characteristics of the study targets; Figure 7 can help in selecting the most suitable spectral bands. The acquisition period of the images is chosen according to the phenological variation of the targets, the lighting conditions, and the atmospheric conditions.

Figure 7 - Spectral behavior of some targets

Table 1 - Sensors vs. largest work scale

Satellite      Sensor          Largest scale
Landsat        MSS             1:150,000
Landsat        TM              1:50,000
Landsat        ETM+            1:25,000
Landsat        Pan             1:20,000
SPOT           HRVIR           1:40,000
SPOT           HRG             1:10,000
CBERS          WFI             1:600,000
CBERS          IR-MSS          1:150,000
CBERS          CCD             1:40,000
Terra/ASTER    VNIR            1:30,000
Terra/ASTER    SWIR            1:50,000
Terra/ASTER    TIR             1:200,000
IKONOS         Pan             1:2,000
IKONOS         Multispectral   1:8,000
QuickBird      Pan             1:1,500
QuickBird      Multispectral   1:5,000
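As an illustration of how Table 1 might be used in practice, the sketch below stores each sensor's largest work scale as a denominator and filters sensors by a requested scale. The pairing of sensors with scales follows the order in which they appear in the table, and the reading of "EARTH/ASTER" as the Terra/ASTER instrument (with its TIR subsystem) is an assumption of this sketch; treat the dictionary as transcribed data, not an authoritative source.

```python
# Largest usable work scale per sensor (after Table 1), stored as the
# scale denominator: a smaller denominator means a larger, more detailed
# scale (1:1,500 is more detailed than 1:150,000).
LARGEST_SCALE = {
    ("Landsat", "MSS"): 150_000,
    ("Landsat", "TM"): 50_000,
    ("Landsat", "ETM+"): 25_000,
    ("Landsat", "Pan"): 20_000,
    ("SPOT", "HRVIR"): 40_000,
    ("SPOT", "HRG"): 10_000,
    ("CBERS", "WFI"): 600_000,
    ("CBERS", "IR-MSS"): 150_000,
    ("CBERS", "CCD"): 40_000,
    ("Terra/ASTER", "VNIR"): 30_000,
    ("Terra/ASTER", "SWIR"): 50_000,
    ("Terra/ASTER", "TIR"): 200_000,
    ("IKONOS", "Pan"): 2_000,
    ("IKONOS", "Multispectral"): 8_000,
    ("QuickBird", "Pan"): 1_500,
    ("QuickBird", "Multispectral"): 5_000,
}

def sensors_for_scale(denominator):
    """Return the sensors whose largest work scale is at least as
    detailed as the requested scale (denominator <= requested)."""
    return sorted(k for k, v in LARGEST_SCALE.items() if v <= denominator)
```

For example, a 1:10,000 mapping project would exclude all the Landsat and CBERS sensors but keep SPOT HRG and the IKONOS/QuickBird sensors.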

VISUAL INTERPRETATION TECHNIQUES

Once the aerial photographs and/or satellite images have been obtained and the objectives of the work defined, the interpretation process begins. It involves three stages: photo-reading, photo-analysis, and photo-interpretation proper.

Photo-reading consists essentially of identifying features or objects in photographic images. It is a superficial and very simple interpretation in which only qualitative aspects are considered: this is a tree, that is a house, and so on.

Photo-analysis is the study of the features or objects present in the photograph or image, that is, the evaluation and ordering of the parts that make up the photograph/image. It is a more accurate interpretation than photo-reading, because semi-quantitative aspects of what is interpreted are addressed, and the photo-interpreter begins to apply technical knowledge and practical experience from his or her field.

Photo-interpretation is the process that uses logical, deductive, and inductive reasoning to understand and explain the objects, features, or conditions examined in the two previous stages.

For the photo-interpreter, the most important elements in the interpretation of photographic images are tone/color, texture, shape, size, shadow, and pattern.

Tone/color - Tone is related to the intensity of the electromagnetic radiation reflected and/or emitted by the targets, or to the signal return in the case of active systems (radar). Tone is simply a graduation of gray, ranging from white to black, and is an essential element in the interpretation of aerial photographs and satellite images. The gray levels of an image depend on the characteristics of the emulsion, the photographic processing, and the physical-chemical properties of the photographed or imaged objects/targets, as well as on illumination, topography, and atmospheric conditions.
Thus latitude, month, and time of day are variables that interfere, and the same type of cover may appear in different tones depending on the place, the time of day, and the time of year. The tone in an aerial photograph or satellite image is directly proportional to the radiance of the surface targets. The different shades of gray found for the same target, at the same date and time of acquisition, in an aerial photograph or in a given wavelength range (satellite image) are explained by variations of irradiance at the surface. Irradiance depends on latitude, the inclination of the Sun (time of year), the Earth-Sun distance, the orientation and slope of the topographic surface, and the time of data collection. Color, in turn, depends on the wavelength of the electromagnetic radiation and on the sensitivity of the film (for aerial photographs) or on the bands used to generate the color composition (for satellite images). One advantage of color is that the human eye can distinguish many more colors than shades of gray.

Texture - Texture is the spatial arrangement pattern of textural elements. A textural element is the smallest continuous and homogeneous feature distinguishable in an aerial photograph and/or satellite image that is subject to repetition. Texture depends on the scale and spatial resolution of the sensor system, as well as on the contrast between the objects or features of the surface, and it varies from smooth to coarse according to the characteristics of the targets, the resolution, and the scale. Tone and texture are interrelated visual concepts that aid the perception and recognition of surface characteristics in aerial photographs and satellite images.

Shape - Natural features generally have irregular shapes, whereas features worked by man, such as crops, reforestation, and roads, have geometric shapes.

Size - Size can be used to identify individual features, depending on the scale used. The size of a feature can indicate the type of occupancy, the type of use, the size of the property, and so on.

Shadow - Shadows are common in satellite images obtained in winter. They result from oblique solar illumination or, in the case of data obtained by active sensors, from the absence of signal return. In large-scale photographs and images, shadow can support the recognition and height measurement of buildings, trees, reforestation, and the like; often, however, the shadow effect masks important details. Images obtained at low solar elevation angles (winter) favor geomorphological studies, because the shadows allow the topography to be inferred, but they are not suitable for soil studies, since the shadow effect may hide targets or features of interest.

Pattern - In satellite images, the visual extraction of information basically consists of inspecting and identifying different tonal and textural patterns in each spectral band, and of comparing them across spectral bands and dates. Because of the repeatability of the imaging, one can analyze temporal variations in the tonal and textural patterns of the targets. The pattern, or spatial arrangement, of farms, fields, crops, and other targets is usually an important interpretation element.
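The texture element described above can be quantified in many ways; a minimal sketch is to use the variance of pixel values in a small moving window, so that smooth regions score low and coarse regions score high. The window size and the toy image below are illustrative assumptions, not something prescribed by the text.

```python
import numpy as np

def local_variance(image, window=3):
    """Simple texture proxy: the variance of pixel values in a
    window x window neighborhood around each interior pixel.
    Smooth regions give values near zero; coarse regions give
    high values."""
    h, w = image.shape
    r = window // 2
    out = np.zeros((h - 2 * r, w - 2 * r))
    for i in range(r, h - r):
        for j in range(r, w - r):
            out[i - r, j - r] = image[i - r:i + r + 1, j - r:j + r + 1].var()
    return out

# Toy image: a flat (smooth) half next to a checkerboard (coarse) half.
smooth = np.full((6, 6), 100.0)
coarse = np.indices((6, 6)).sum(axis=0) % 2 * 100.0
img = np.hstack([smooth, coarse])
tex = local_variance(img)
```

In the result, the columns over the flat half are exactly zero while the columns over the checkerboard are strictly positive, mirroring the smooth-to-coarse scale the text describes.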
IMAGE CLASSIFICATION TECHNIQUES

Image classification refers to the computer-aided interpretation of remote sensing images. Although some procedures can exploit information about image characteristics such as texture and context, most image classification is based exclusively on the detection of spectral signatures (spectral response patterns) of land cover classes. In remote sensing, classification means associating the points of an image with a class or group (for example, water, crops, urban area, reforestation, cerrado), or the process of recognizing classes or groups whose members share common characteristics. When classifying an image, it is assumed that different objects/targets have different spectral properties and that each point belongs to a single class. In addition, the points representing a given class must show similar patterns of tone and texture.

Image classification can be subdivided into supervised and unsupervised, according to how the classification is conducted. In unsupervised classification, the analyst seeks to define all the land cover categories present in the image at a given level of generalization, while in supervised classification the task is to detect specific, already known types of land cover.

Unsupervised classification - This type of classification requires no prior information about the classes of interest. It examines the data and divides it into the predominant natural spectral clusters present in the image. The analyst then identifies these clusters as land cover classes, through a combination of familiarity with the studied region and ground-truth field visits. The logic behind unsupervised classification is known as cluster analysis. It is important to recognize that the clusters produced in this case are not information classes but spectral categories (that is, groupings of similar reflectance patterns); the analyst generally needs to relabel the spectral classes as information classes. Unsupervised classification is useful when little is known about the imaged area, for example when no prior data on the number of classes are available: the classes are defined by the clustering algorithm itself.

Supervised classification - Supervised classification is used when there is some knowledge about the classes in the image: their number and points in the image that represent them. Before the classification phase itself, in a phase called training, the analyst obtains the characteristics of the classes, for example the mean and variance of each class, which will serve as terms of comparison during classification. In this type of classification, examples of the information classes (types of land cover) present in the image are identified; these examples are called training areas.
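The cluster-analysis logic behind unsupervised classification can be sketched with a bare-bones k-means implemented in NumPy (no classification library assumed). The number of clusters, the two spectral bands, and the synthetic "water-like" and "vegetation-like" pixel values are all assumptions made for illustration.

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Bare-bones k-means: pixels is an (n, bands) array. Each pixel
    is assigned to its nearest spectral cluster center, and centers
    are recomputed as cluster means. Returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Distance from every pixel to every center, shape (n, k).
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean(axis=0)
    return centers, labels

# Synthetic two-band pixels: a dark cluster and a bright cluster
# (purely illustrative values, not real reflectances).
rng = np.random.default_rng(1)
dark = rng.normal([20, 10], 2, size=(50, 2))
bright = rng.normal([60, 120], 2, size=(50, 2))
pixels = np.vstack([dark, bright])
centers, labels = kmeans(pixels, k=2)
```

Note that the algorithm only recovers the two spectral clusters; deciding that one is "water" and the other "vegetation" is exactly the relabeling step the analyst performs with field knowledge.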
The image processing system is then used to develop a statistical characterization of the reflectances of each information class. This stage, often called signature analysis, can involve a characterization as simple as the mean or range of reflectances in each band, or as complex as a detailed analysis of the means, variances, and covariances across all bands. There are several algorithms for image classification, among which the parallelepiped, maximum likelihood, and k-means classifiers deserve mention.
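As a minimal sketch of the training and classification steps, the code below builds a parallelepiped classifier: signature analysis records the per-band range of each class's training pixels, and a pixel is assigned to a class when it falls inside that class's box in spectral space. The two-band training samples are synthetic, illustrative values, and real parallelepiped implementations must also decide how to handle overlapping boxes.

```python
import numpy as np

def train(samples_by_class):
    """Signature analysis for the parallelepiped classifier: for each
    class, record the per-band min and max of its training pixels,
    defining a box (parallelepiped) in spectral space."""
    return {name: (s.min(axis=0), s.max(axis=0))
            for name, s in samples_by_class.items()}

def classify(pixel, signatures):
    """Assign the pixel to the first class whose box contains it,
    or 'unclassified' if it falls outside every box."""
    for name, (lo, hi) in signatures.items():
        if np.all(pixel >= lo) and np.all(pixel <= hi):
            return name
    return "unclassified"

# Synthetic training areas in two bands (illustrative values only).
training = {
    "water":      np.array([[18.0, 8.0], [22.0, 12.0], [20.0, 10.0]]),
    "vegetation": np.array([[58.0, 118.0], [62.0, 122.0], [60.0, 120.0]]),
}
sigs = train(training)
```

A pixel such as (19, 9) lands inside the water box, while one far from both boxes is left unclassified, which is the characteristic behavior (and limitation) of the parallelepiped approach.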

