1.3. Execution Model



The OpenGL API is focused on drawing graphics into frame buffer memory and, to a lesser extent, on reading back values stored in that frame buffer. It is somewhat unique in that its design includes support for drawing three-dimensional geometry (such as points, lines, and polygons, collectively referred to as PRIMITIVES) as well as for drawing images and bitmaps.

The execution model for OpenGL can be described as client-server. An application program (the client) issues OpenGL commands that are interpreted and processed by an OpenGL implementation (the server). The application program and the OpenGL implementation can execute on a single computer or on two different computers. Some OpenGL state is stored in the address space of the application (client state), but the majority of it is stored in the address space of the OpenGL implementation (server state).
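
The client-server split can be sketched with a toy model (illustrative only; the class and command names here are not real OpenGL API, though the state they stand in for, such as glEnable(GL_BLEND) and glPixelStorei, is):

```python
# Toy model of the OpenGL client-server execution model (illustrative only).

class Server:
    """Stands in for the OpenGL implementation: it holds server state."""
    def __init__(self):
        # Analog of server state, e.g. the enable set by glEnable(GL_BLEND).
        self.state = {"BLEND": False}

    def process(self, command, *args):
        # The server interprets and processes commands issued by the client.
        if command == "ENABLE":
            self.state[args[0]] = True

class Client:
    """Stands in for the application: it issues commands and holds client state."""
    def __init__(self, server):
        self.server = server
        # Analog of client state, e.g. unpack alignment set by glPixelStorei.
        self.unpack_alignment = 4

    def issue(self, command, *args):
        self.server.process(command, *args)

server = Server()
client = Client(server)
client.issue("ENABLE", "BLEND")
print(server.state["BLEND"])   # True: the enable lives in server state
```

Note that the enable ends up in the server's address space, while the alignment setting stays on the client side; in real OpenGL the two may even live on different machines.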

OpenGL commands are always processed in the order in which they are received by the server, although command completion may be delayed due to intermediate operations that cause OpenGL commands to be buffered. Out-of-order execution of OpenGL commands is not permitted. This means, for example, that a primitive will not be drawn until the previous primitive has been completely drawn. This in-order execution also applies to queries of state and frame buffer read operations. These commands return results that are consistent with complete execution of all previous commands.
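
The in-order guarantee can be modeled as a FIFO command queue: commands may be buffered, but a read returns results consistent with every command issued before it (a toy sketch, not real OpenGL):

```python
from collections import deque

class InOrderServer:
    """Toy model: commands may be buffered, but execution is strictly FIFO."""
    def __init__(self):
        self.queue = deque()   # buffered, not-yet-completed commands
        self.drawn = []        # primitives fully drawn so far

    def submit(self, primitive):
        self.queue.append(primitive)   # completion may be delayed by buffering...

    def flush(self):
        while self.queue:              # ...but order of execution is preserved
            self.drawn.append(self.queue.popleft())

    def read_framebuffer(self):
        # A read returns results consistent with complete execution
        # of all previously issued commands.
        self.flush()
        return list(self.drawn)

srv = InOrderServer()
srv.submit("triangle-1")
srv.submit("triangle-2")
print(srv.read_framebuffer())   # ['triangle-1', 'triangle-2'], in submission order
```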

Data binding for OpenGL occurs when commands are issued, not when they are executed. Data passed to an OpenGL command is interpreted when the command is issued and copied into OpenGL memory if needed. Subsequent changes to this data by the application have no effect on the data that is now stored within OpenGL.
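
This copy-at-issue behavior can be sketched as follows (a toy model; the class and method names are illustrative, not real OpenGL, but commands such as glColor3fv behave this way):

```python
class BindingServer:
    """Toy model: data is interpreted and copied when the command is issued."""
    def __init__(self):
        self.stored = None

    def issue_color(self, color):
        # The data is copied into "OpenGL memory" at issue time.
        self.stored = list(color)

srv = BindingServer()
color = [1.0, 0.0, 0.0]
srv.issue_color(color)   # data is bound now
color[0] = 0.5           # a later change by the application...
print(srv.stored)        # [1.0, 0.0, 0.0]: ...has no effect on the stored copy
```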
