1.3. Execution Model

The OpenGL API is focused on drawing graphics into frame buffer memory and, to a lesser extent, on reading back values stored in that frame buffer. It is somewhat unique in that its design includes support for drawing three-dimensional geometry (such as points, lines, and polygons, collectively referred to as PRIMITIVES) as well as for drawing images and bitmaps.
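
The following minimal sketch illustrates those two kinds of drawing. It assumes a current legacy (compatibility-profile) OpenGL context and a window created elsewhere (for example with GLUT or GLFW); the coordinates and pixel values are arbitrary illustrations.

```c
#include <GL/gl.h>

void draw_scene(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    /* A primitive: one triangle specified vertex by vertex. */
    glBegin(GL_TRIANGLES);
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
    glEnd();

    /* An image: a 2x2 block of RGB pixels written into the frame buffer
     * at the current raster position. */
    static const unsigned char pixels[2 * 2 * 3] = {
        255, 0, 0,     0, 255, 0,
        0,   0, 255,   255, 255, 255
    };
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  /* rows are tightly packed */
    glRasterPos2f(-0.9f, -0.9f);
    glDrawPixels(2, 2, GL_RGB, GL_UNSIGNED_BYTE, pixels);
}
```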

The execution model for OpenGL can be described as client-server. An application program (the client) issues OpenGL commands that are interpreted and processed by an OpenGL implementation (the server). The application program and the OpenGL implementation can execute on a single computer or on two different computers. Some OpenGL state is stored in the address space of the application (client state), but the majority of it is stored in the address space of the OpenGL implementation (server state).
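
A small sketch of the distinction, again assuming a current compatibility-profile context: enabling blending changes state held by the GL server, while a vertex array pointer is client state that refers to memory in the application's own address space.

```c
#include <GL/gl.h>

static const GLfloat verts[] = { -0.5f, -0.5f,  0.5f, -0.5f,  0.0f, 0.5f };

void setup(void)
{
    /* Server state: stored and tracked by the OpenGL implementation. */
    glEnable(GL_BLEND);
    GLboolean blending = glIsEnabled(GL_BLEND);   /* query server state */

    /* Client state: the vertex array pointer refers to application memory. */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts);

    (void)blending;
}
```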

OpenGL commands are always processed in the order in which they are received by the server, although command completion may be delayed due to intermediate operations that cause OpenGL commands to be buffered. Out-of-order execution of OpenGL commands is not permitted. This means, for example, that a primitive will not be drawn until the previous primitive has been completely drawn. This in-order execution also applies to queries of state and frame buffer read operations. These commands return results that are consistent with complete execution of all previous commands.
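
A sketch of that guarantee (assuming a current GL context with a drawable frame buffer): even though the draw commands may still be buffered, the read-back below returns results consistent with the triangle having been fully drawn.

```c
#include <GL/gl.h>

void draw_and_read(void)
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glColor3f(1.0f, 0.0f, 0.0f);
    glBegin(GL_TRIANGLES);
        glVertex2f(-1.0f, -1.0f);
        glVertex2f( 1.0f, -1.0f);
        glVertex2f( 0.0f,  1.0f);
    glEnd();

    /* Read a pixel well inside the triangle. The result reflects every
     * command issued before the read, so the pixel is the triangle's color. */
    GLint vp[4];
    glGetIntegerv(GL_VIEWPORT, vp);
    unsigned char rgba[4];
    glReadPixels(vp[0] + vp[2] / 2, vp[1] + vp[3] / 4, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);
}
```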

Data binding for OpenGL occurs when commands are issued, not when they are executed. Data passed to an OpenGL command is interpreted when the command is issued and copied into OpenGL memory if needed. Subsequent changes to this data by the application have no effect on the data that is now stored within OpenGL.
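
A minimal sketch of issue-time data binding (assuming a current compatibility-profile context): the array passed to glLightfv is copied into OpenGL state when the command is issued, so modifying the application's array afterwards has no effect on the stored light position.

```c
#include <GL/gl.h>

void set_light(void)
{
    GLfloat position[4] = { 1.0f, 2.0f, 3.0f, 1.0f };

    /* The four floats are interpreted and copied at issue time. */
    glLightfv(GL_LIGHT0, GL_POSITION, position);

    /* This later change does NOT alter the position OpenGL already stored. */
    position[0] = -1.0f;
}
```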
