ceph GLOSSARY

Introduction:
The Ceph documentation uses a lot of terminology, so to make it easier to follow it is best to get familiar with the Ceph terms first.
The following is excerpted from the Ceph docs; since that glossary lacks a PG entry, one is added here first.

PG (Placement Group)
     A PG is a logical group in which objects are stored. PGs are stored on OSDs. An OSD holds a journal and data; once a write lands in the journal, an ack is returned to confirm the data is safe.
     The journal is usually placed on an SSD because it needs fast response times (similar to the PostgreSQL xlog).
      Ceph stores a client’s data as objects within storage pools. Using the CRUSH algorithm, Ceph calculates which placement group should contain the object, and further calculates which Ceph OSD Daemon should store the placement group. The CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and recover dynamically.
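
      To see this mapping on a live cluster, the ceph CLI can report which PG and which OSDs a given object maps to. A minimal sketch, assuming a pool named rbd and an object named myobject (both names are just examples; the exact output format varies between Ceph versions):

        # Ask the cluster where an object would be placed (pool/object names are examples)
        ceph osd map rbd myobject
        # Typical output: the object hashes into a PG, and CRUSH maps that PG to an
        # up/acting set of OSDs, e.g.
        #   osdmap e537 pool 'rbd' (0) object 'myobject' -> pg 0.d2e0d6c4 (0.4) -> up [1,0] acting [1,0]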

CEPH GLOSSARY

Ceph is growing rapidly. As firms deploy Ceph, the technical terms such as “RADOS”, “RBD,” “RGW” and so forth require corresponding marketing terms that explain what each component does. The terms in this glossary are intended to complement the existing technical terminology.

Sometimes more than one term applies to a definition. Generally, the first term reflects a term consistent with Ceph’s marketing, and secondary terms reflect either technical terms or legacy ways of referring to Ceph systems.

Ceph Project
The aggregate term for the people, software, mission and infrastructure of Ceph.
cephx
The Ceph authentication protocol. Cephx operates like Kerberos, but it has no single point of failure.
Ceph
Ceph Platform
All Ceph software, which includes any piece of code hosted at  http://github.com/ceph.
Ceph System
Ceph Stack
A collection of two or more components of Ceph.
Ceph Node
Node
Host
Any single machine or server in a Ceph System.
Ceph Storage Cluster
Ceph Object Store
RADOS
RADOS Cluster
Reliable Autonomic Distributed Object Store
The core set of storage software which stores the user’s data (MON+OSD).
Ceph Cluster Map
cluster map
The set of maps comprising the monitor map, OSD map, PG map, MDS map and CRUSH map. See  Cluster Map for details.
Ceph Object Storage
The object storage “product”, service or capabilities, which consists essentially of a Ceph Storage Cluster and a Ceph Object Gateway.
Ceph Object Gateway
RADOS Gateway
RGW
The S3/Swift gateway component of Ceph.
Ceph Block Device
RBD
The block storage component of Ceph.
Ceph Block Storage
The block storage “product,” service or capabilities when used in conjunction with  librbd, a hypervisor such as QEMU or Xen, and a hypervisor abstraction layer such as  libvirt.
Ceph Filesystem
CephFS
Ceph FS
The POSIX filesystem components of Ceph.
Cloud Platforms
Cloud Stacks
Third party cloud provisioning platforms such as OpenStack, CloudStack, OpenNebula, ProxMox, etc.
Object Storage Device
OSD
A physical or logical storage unit (e.g., LUN). Sometimes, Ceph users use the term “OSD” to refer to Ceph OSD Daemon, though the proper term is “Ceph OSD”.
Ceph OSD Daemon
Ceph OSD
The Ceph OSD software, which interacts with a logical disk (OSD). Sometimes, Ceph users use the term “OSD” to refer to “Ceph OSD Daemon”, though the proper term is “Ceph OSD”.
Ceph Monitor
MON
The Ceph monitor software.
Ceph Metadata Server
MDS
The Ceph metadata software.
Ceph Clients
Ceph Client
The collection of Ceph components which can access a Ceph Storage Cluster. These include the Ceph Object Gateway, the Ceph Block Device, the Ceph Filesystem, and their corresponding libraries, kernel modules, and FUSEs.
Ceph Kernel Modules
The collection of kernel modules which can be used to interact with the Ceph System (e.g., ceph.ko, rbd.ko).
Ceph Client Libraries
The collection of libraries that can be used to interact with components of the Ceph System.
Ceph Release
Any distinct numbered version of Ceph.
Ceph Point Release
Any ad-hoc release that includes only bug or security fixes.
Ceph Interim Release
Versions of Ceph that have not yet been put through quality assurance testing, but may contain new features.
Ceph Release Candidate
A major version of Ceph that has undergone initial quality assurance testing and is ready for beta testers.
Ceph Stable Release
A major version of Ceph where all features from the preceding interim releases have been put through quality assurance testing successfully.
Ceph Test Framework
Teuthology
The collection of software that performs scripted tests on Ceph.
CRUSH
Controlled Replication Under Scalable Hashing. It is the algorithm Ceph uses to compute object storage locations.
ruleset
A set of CRUSH data placement rules that applies to one or more particular pools.
Pool
Pools
Pools are logical partitions for storing objects.
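
As a concrete illustration of the Pool and PG terms together, the commands below create a pool with an explicit number of placement groups and then list the pools. This is a minimal sketch: the pool name testpool and the PG count 128 are arbitrary examples, and a suitable pg_num depends on the number of OSDs in your cluster.

  # Create a pool named 'testpool' with 128 placement groups (name and count are examples)
  ceph osd pool create testpool 128
  # List pools to confirm the new pool exists
  ceph osd lspools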

CLUSTER MAP

Ceph depends upon Ceph Clients and Ceph OSD Daemons having knowledge of the cluster topology, which includes five maps collectively referred to as the “Cluster Map” (the commands for viewing each map are collected after the list):

  1. The Monitor Map: Contains the cluster fsid, the position, name, address and port of each monitor. It also indicates the current epoch, when the map was created, and the last time it changed. To view a monitor map, execute ceph mon dump.
  2. The OSD Map: Contains the cluster fsid, when the map was created and last modified, a list of pools, replica sizes, PG numbers, a list of OSDs and their status (e.g., up, in). To view an OSD map, execute ceph osd dump.
  3. The PG Map: Contains the PG version, its time stamp, the last OSD map epoch, the full ratios, and details on each placement group such as the PG ID, the Up Set, the Acting Set, the state of the PG (e.g., active + clean), and data usage statistics for each pool.
  4. The CRUSH Map: Contains a list of storage devices, the failure domain hierarchy (e.g., device, host, rack, row, room, etc.), and rules for traversing the hierarchy when storing data. To view a CRUSH map, execute ceph osd getcrushmap -o {filename}; then, decompile it by executing crushtool -d {comp-crushmap-filename} -o {decomp-crushmap-filename}. You can view the decompiled map in a text editor or with cat.
  5. The MDS Map: Contains the current MDS map epoch, when the map was created, and the last time it changed. It also contains the pool for storing metadata, a list of metadata servers, and which metadata servers are up and in. To view an MDS map, execute ceph mds dump.
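
For quick reference, the map-viewing commands mentioned in the list above can be run as follows. The CRUSH map is stored in compiled form, so it must be decompiled before it is readable; the filenames below are placeholders, and ceph pg dump (not named in the list) is a common way to print the PG map statistics:

  ceph mon dump                            # monitor map
  ceph osd dump                            # OSD map
  ceph pg dump                             # PG map (assumption: prints PG versions, states and usage)
  ceph mds dump                            # MDS map
  # CRUSH map: extract the compiled map, then decompile it to text
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  cat crushmap.txt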

Each map maintains an iterative history of its operating state changes. Ceph Monitors maintain a master copy of the cluster map including the cluster members, state, changes, and the overall health of the Ceph Storage Cluster.


[References]
1. http://docs.ceph.com/docs/master/architecture/#cluster-map
2. http://ceph.com/