1.8. Drawing Images



As mentioned previously, OpenGL has a great deal of support for drawing images in addition to its support for drawing 3D geometry. In OpenGL parlance, images are called PIXEL RECTANGLES. The values that define a pixel rectangle start out in application-controlled memory as shown in Figure 1.1 (11). Color or grayscale pixel rectangles are rendered into the frame buffer with glDrawPixels, and bitmaps are rendered into the frame buffer with glBitmap. Images that are destined for texture memory are specified with glTexImage or glTexSubImage. Up to a point, the same basic processing is applied to the image data supplied with each of these commands.
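
For orientation, here is a minimal sketch of how these entry points look in application code; the image pointers and dimensions (imageData, maskBits, width, height) are hypothetical placeholders, not part of the original text.

```c
#include <GL/gl.h>

/* Hypothetical sketch of the three image entry points mentioned above;
 * the data pointers and sizes are placeholders supplied by the application. */
void drawImages(const GLubyte *imageData, const GLubyte *maskBits,
                GLsizei width, GLsizei height)
{
    /* Render a color pixel rectangle directly into the frame buffer. */
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, imageData);

    /* Render a 16x16 bitmap (1 bit per pixel) at the current raster position. */
    glBitmap(16, 16, 0.0f, 0.0f, 16.0f, 0.0f, maskBits);

    /* Send the same image to texture memory instead of the frame buffer. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, imageData);
}
```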

1.8.1. Pixel Unpacking

OpenGL reads image data provided by the application in a variety of formats. Parameters that define how the image data is stored in memory (length of each pixel row, number of rows to skip before the first one, number of pixels to skip before the first one in each row, etc.) can be specified with glPixelStore. So that operations on pixel data can be defined more precisely, pixels read from application memory are converted into a coherent stream of pixels by an operation referred to as PIXEL UNPACKING (12). When a pixel rectangle is transferred to OpenGL by a call like glDrawPixels, this operation applies the current set of pixel unpacking parameters to determine how the image data should be read and interpreted. As each pixel is read from memory, it is converted to a PIXEL GROUP that contains either a color, a depth, or a stencil value. If the pixel group consists of a color, the image data is destined for the color buffer in the frame buffer. If the pixel group consists of a depth value, the image data is destined for the depth buffer. If the pixel group consists of a stencil value, the image data is destined for the stencil buffer. Color values are made up of a red, a green, a blue, and an alpha component (i.e., RGBA) and are constructed from the input image data according to a set of rules defined by OpenGL. The result is a stream of RGBA values that are sent to OpenGL for further processing.
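
As a concrete illustration of pixel unpacking, the following hedged sketch draws a sub-rectangle taken from inside a larger image stored in application memory; the image pointer, row length, and offsets are illustrative assumptions.

```c
#include <GL/gl.h>

/* Hypothetical sketch: draw a 128x128 sub-rectangle taken from inside a
 * larger 512-pixel-wide RGBA image in application memory.  The unpacking
 * parameters tell OpenGL how the rows of the source image are laid out. */
void drawSubRectangle(const GLubyte *bigImage)
{
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 512);  /* length of each pixel row      */
    glPixelStorei(GL_UNPACK_SKIP_ROWS,   64);  /* rows to skip before the first */
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 32);  /* pixels to skip in each row    */
    glPixelStorei(GL_UNPACK_ALIGNMENT,    1);  /* rows are tightly packed       */

    glDrawPixels(128, 128, GL_RGBA, GL_UNSIGNED_BYTE, bigImage);
}
```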

1.8.2. Pixel Transfer

After a coherent stream of image pixels is created, pixel rectangles undergo a series of operations called PIXEL TRANSFER (13). These operations are applied whenever pixel rectangles are transferred from the application to OpenGL (glDrawPixels, glTexImage, glTexSubImage), from OpenGL back to the application (glReadPixels), or when they are copied within OpenGL (glCopyPixels, glCopyTexImage, glCopyTexSubImage).

The behavior of the pixel transfer stage is modified with glPixelTransfer. This command sets state that controls whether red, green, blue, alpha, and depth values are scaled and biased. It can also set state that determines whether incoming color or stencil values are mapped to different color or stencil values through the use of a lookup table. The lookup tables used for these operations are specified with the glPixelMap command.
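
A small sketch of how these commands might be used together; the scale, bias, and lookup-table values below are illustrative assumptions rather than values from the text.

```c
#include <GL/gl.h>

/* Hypothetical sketch: brighten incoming pixels with a scale and bias, and
 * invert the red channel through a two-entry lookup table.  These settings
 * affect subsequent glDrawPixels/glTexImage/glReadPixels/glCopyPixels calls. */
void setupPixelTransfer(void)
{
    GLfloat invert[2] = { 1.0f, 0.0f };   /* maps 0.0 -> 1.0 and 1.0 -> 0.0 */

    /* Scale and bias applied to each color component. */
    glPixelTransferf(GL_RED_SCALE,   1.2f);
    glPixelTransferf(GL_GREEN_SCALE, 1.2f);
    glPixelTransferf(GL_BLUE_SCALE,  1.2f);
    glPixelTransferf(GL_RED_BIAS,    0.05f);

    /* Route the red component through a lookup table and enable the mapping. */
    glPixelMapfv(GL_PIXEL_MAP_R_TO_R, 2, invert);
    glPixelTransferi(GL_MAP_COLOR, GL_TRUE);
}
```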

Some additional operations that occur at this stage are part of the OpenGL IMAGING SUBSET, which is an optional part of OpenGL. Hardware vendors that find it important to support advanced imaging capabilities will support the imaging subset in their OpenGL implementations, and other vendors will not support it. To determine whether the imaging subset is supported, applications need to call glGetString with the symbolic constant GL_EXTENSIONS. This returns the list of extensions supported by the implementation; the application should check for the presence of the string "ARB_imaging" within the returned extension string.

The pixel transfer operations that are defined to be part of the imaging subset are convolution, color matrix, histogram, min/max, and additional color lookup tables. Together, they provide powerful image processing and color correction operations on image data as it is being transferred to, from, or within OpenGL.
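
If the subset is present, two of these operations might be configured as in the hypothetical sketch below. On most platforms the imaging-subset entry points must be obtained through the extension-loading mechanism, and the kernel size and histogram format shown here are arbitrary choices.

```c
#include <GL/gl.h>
#include <GL/glext.h>

/* Hypothetical sketch of two imaging-subset operations; these calls are
 * available only when ARB_imaging is supported, so guard them with the
 * extension check shown earlier. */
void setupImagingOps(const GLfloat *blurKernel /* 3x3 RGB filter kernel */)
{
    /* Convolve incoming pixel rectangles with a 3x3 filter. */
    glConvolutionFilter2D(GL_CONVOLUTION_2D, GL_RGB, 3, 3,
                          GL_RGB, GL_FLOAT, blurKernel);
    glEnable(GL_CONVOLUTION_2D);

    /* Collect a 256-bin luminance histogram as pixels pass through. */
    glHistogram(GL_HISTOGRAM, 256, GL_LUMINANCE, GL_FALSE);
    glEnable(GL_HISTOGRAM);
}
```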

1.8.3. Rasterization and Back-End Processing

Following the pixel transfer stage, fragments are generated through rasterization of pixel rectangles in much the same way as they are generated from 3D geometry (14). This process, along with the current OpenGL state, determines where the image will be drawn in the frame buffer. Rasterization takes into account the current RASTER POSITION, which can be set with glRasterPos or glWindowPos, and the current zoom factor, which can be set with glPixelZoom and which causes an image to be magnified or reduced in size as it is drawn.
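
A brief sketch of positioning and zooming an image before drawing it; the coordinates, zoom factor, and image parameters are illustrative assumptions, and glWindowPos requires OpenGL 1.4 or the ARB_window_pos extension.

```c
#include <GL/gl.h>

/* Hypothetical sketch: place an image at window coordinates (100, 100),
 * enlarge it by a factor of two in both directions, and draw it. */
void drawZoomedImage(const GLubyte *imageData, GLsizei width, GLsizei height)
{
    glWindowPos2i(100, 100);   /* set the current raster position        */
    glPixelZoom(2.0f, 2.0f);   /* magnify the image as it is rasterized  */
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
}
```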

After fragments have been generated from pixel rectangles, they undergo the same set of fragment processing operations as geometric primitives (6) and then go on to the remainder of the OpenGL pipeline in exactly the same manner as geometric primitives, all the way until pixels are deposited in the frame buffer (8, 9, 10).

Pixel values provided through a call to glTexImage or glTexSubImage do not go through rasterization or the subsequent fragment processing but directly update the appropriate portion of texture memory (15).
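
For example, a direct texture update might look like the following hypothetical sketch, where the texture object, offsets, and region size are made-up values.

```c
#include <GL/gl.h>

/* Hypothetical sketch: replace a 64x64 region of an existing texture.
 * The data bypasses rasterization and fragment processing and goes
 * straight to texture memory. */
void updateTextureRegion(GLuint texture, const GLubyte *regionData)
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0,   /* target, mipmap level         */
                    16, 16,             /* x and y offset into texture  */
                    64, 64,             /* width and height of region   */
                    GL_RGBA, GL_UNSIGNED_BYTE, regionData);
}
```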

1.8.4. Read Control

Pixel rectangles are read from the frame buffer and returned to application memory with glReadPixels. They can also be read from the frame buffer and written to another portion of the frame buffer with glCopyPixels, or they can be read from the frame buffer and written into texture memory with glCopyTexImage or glCopyTexSubImage. In all of these cases, the portion of the frame buffer that is to be read is controlled by the READ CONTROL stage of OpenGL and set with the glReadBuffer command (16).
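
A hedged sketch of selecting a read source and copying from it; the buffer choice, region, and texture format are illustrative assumptions.

```c
#include <GL/gl.h>

/* Hypothetical sketch: select the back buffer as the read source, copy a
 * 256x256 color region to the current raster position, and also capture the
 * same region into the currently bound 2D texture. */
void copyFromBackBuffer(void)
{
    glReadBuffer(GL_BACK);                        /* READ CONTROL stage           */
    glCopyPixels(0, 0, 256, 256, GL_COLOR);       /* frame buffer -> frame buffer */
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,  /* frame buffer -> texture      */
                     0, 0, 256, 256, 0);
}
```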

The values read from the frame buffer are sent through the pixel transfer stage (13), in which various image processing operations can be performed. For copy operations, the resulting pixels are sent to texture memory or back into the frame buffer, depending on the command that initiated the transfer. For read operations, the pixels are formatted for storage in application memory under the control of the PIXEL PACKING stage (17). This stage is the mirror of the pixel unpacking stage (12), in that parameters that define how the image data is to be stored in memory (length of each pixel row, number of rows to skip before the first one, number of pixels to skip before the first one in each row, etc.) can be specified with glPixelStore. Thus, application developers enjoy a lot of flexibility in determining how the image data is returned from OpenGL into application memory.
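
To round out the picture, here is a hypothetical sketch of a read-back that sets the packing parameters before calling glReadPixels; the region size and format are arbitrary choices.

```c
#include <GL/gl.h>
#include <stdlib.h>

/* Hypothetical sketch: read a 256x256 RGBA region of the frame buffer into
 * application memory.  The packing parameters control how the rows are
 * written; the caller owns the returned buffer. */
GLubyte *readRegion(void)
{
    GLubyte *pixels = (GLubyte *) malloc(256 * 256 * 4);

    glPixelStorei(GL_PACK_ALIGNMENT,  1);   /* pack rows with no padding          */
    glPixelStorei(GL_PACK_ROW_LENGTH, 0);   /* 0 means rows match the read width  */
    glReadPixels(0, 0, 256, 256, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    return pixels;
}
```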
