Cloudera Certified Administrator for Apache Hadoop (CCAH Certification)

Exam Sections and Blueprint

1. HDFS (17%)

  • Describe the function of HDFS daemons
  • Describe the normal operation of an Apache Hadoop cluster, both in data storage and in data processing
  • Identify current features of computing systems that motivate a system like Apache Hadoop
  • Classify major goals of HDFS Design
  • Given a scenario, identify appropriate use case for HDFS Federation
  • Identify the components and daemons of an HDFS HA-Quorum cluster
  • Analyze the role of HDFS security (Kerberos)
  • Determine the best data serialization choice for a given scenario
  • Describe file read and write paths
  • Identify the commands to manipulate files in the Hadoop File System Shell
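
For the File System Shell objective above, a few representative commands are sketched below. The paths and file names (such as /user/alice and data.txt) are placeholders chosen for illustration, not part of the blueprint.

    # List a directory and check space usage in HDFS
    hdfs dfs -ls /user/alice
    hdfs dfs -du -h /user/alice

    # Copy a local file into HDFS and read it back
    hdfs dfs -put data.txt /user/alice/data.txt
    hdfs dfs -cat /user/alice/data.txt

    # Create a directory, change a file's replication factor, and delete the file
    hdfs dfs -mkdir -p /user/alice/archive
    hdfs dfs -setrep -w 2 /user/alice/data.txt
    hdfs dfs -rm /user/alice/data.txt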

2. YARN and MapReduce version 2 (MRv2) (17%)

  • Understand how upgrading a cluster from Hadoop 1 to Hadoop 2 affects cluster settings
  • Understand how to deploy MapReduce v2 (MRv2 / YARN), including all YARN daemons
  • Understand basic design strategy for MapReduce v2 (MRv2)
  • Determine how YARN handles resource allocations
  • Identify the workflow of a MapReduce job running on YARN
  • Determine which files you must change and how in order to migrate a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) running on YARN
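
To make the migration bullet above concrete, the minimal sketch below shows two of the property changes commonly involved in moving a cluster from MRv1 to MRv2 on YARN. It is not a complete migration checklist, and the surrounding XML of the real configuration files is omitted.

    <!-- mapred-site.xml: run MapReduce jobs on YARN instead of the MRv1 JobTracker -->
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>

    <!-- yarn-site.xml: enable the shuffle auxiliary service that MRv2 requires on each NodeManager -->
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>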

3. Hadoop Cluster Planning (16%)

  • Principal points to consider in choosing the hardware and operating systems to host an Apache Hadoop cluster
  • Analyze the choices in selecting an OS
  • Understand kernel tuning and disk swapping
  • Given a scenario and workload pattern, identify a hardware configuration appropriate to the scenario
  • Given a scenario, determine the ecosystem components your cluster needs to run in order to fulfill the SLA
  • Cluster sizing: given a scenario and frequency of execution, identify the specifics for the workload, including CPU, memory, storage, disk I/O (a rough sizing calculation is sketched after this list)
  • Disk Sizing and Configuration, including JBOD versus RAID, SANs, virtualization, and disk sizing requirements in a cluster
  • Network Topologies: understand network usage in Hadoop (for both HDFS and MapReduce) and propose or identify key network design components for a given scenario
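
As a rough illustration of the cluster-sizing bullet, the back-of-the-envelope calculation below uses assumed figures (daily ingest, retention, overhead, and drive configuration are invented for the example), not numbers from the blueprint.

    Assumptions: 2 TB/day ingest, 1 year retention, 3x HDFS replication, 25% extra for temporary/intermediate data
    Raw storage  ≈ 2 TB/day × 365 days × 3 × 1.25  ≈ 2,700 TB
    Per worker   ≈ 12 drives × 4 TB (JBOD)          = 48 TB raw
    Worker nodes ≈ 2,700 TB ÷ 48 TB                 ≈ 57 nodes (before allowing for growth and failures)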

4. Hadoop Cluster Installation and Administration (25%)

  • Given a scenario, identify how the cluster will handle disk and machine failures
  • Analyze a logging configuration and logging configuration file format (a sample log4j.properties fragment is sketched after this list)
  • Understand the basics of Hadoop metrics and cluster health monitoring
  • Identify the function and purpose of available tools for cluster monitoring
  • Be able to install all the ecosystem components in CDH 5, including (but not limited to): Impala, Flume, Oozie, Hue, Cloudera Manager, Sqoop, Hive, and Pig
  • Identify the function and purpose of available tools for managing the Apache Hadoop file system
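
Hadoop daemons are configured for logging through log4j.properties; the fragment below is a minimal sketch of the kind of settings that appear there. The file size and backup count shown are assumptions, not recommended values.

    # Default log level and appender for the Hadoop daemons
    hadoop.root.logger=INFO,RFA
    log4j.rootLogger=${hadoop.root.logger}

    # Rolling file appender writing to the daemon's log directory
    log4j.appender.RFA=org.apache.log4j.RollingFileAppender
    log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
    log4j.appender.RFA.MaxFileSize=256MB
    log4j.appender.RFA.MaxBackupIndex=20
    log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
    log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n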

5. Resource Management (10%)

  • Understand the overall design goals of each of the Hadoop schedulers
  • Given a scenario, determine how the FIFO Scheduler allocates cluster resources
  • Given a scenario, determine how the Fair Scheduler allocates cluster resources under YARN
  • Given a scenario, determine how the Capacity Scheduler allocates cluster resources
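
To ground the Capacity Scheduler bullet, the fragment below sketches a capacity-scheduler.xml that splits the root queue into two child queues; the queue names and percentages are assumptions chosen for illustration.

    <!-- capacity-scheduler.xml: two child queues splitting the cluster 70/30 -->
    <property>
      <name>yarn.scheduler.capacity.root.queues</name>
      <value>prod,dev</value>
    </property>
    <property>
      <name>yarn.scheduler.capacity.root.prod.capacity</name>
      <value>70</value>
    </property>
    <property>
      <name>yarn.scheduler.capacity.root.dev.capacity</name>
      <value>30</value>
    </property>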

6. Monitoring and Logging (15%)

  • Understand the functions and features of Hadoop’s metric collection abilities
  • Analyze the NameNode and JobTracker Web UIs
  • Understand how to monitor cluster daemons
  • Identify and monitor CPU usage on master nodes
  • Describe how to monitor swap and memory allocation on all nodes
  • Identify how to view and manage Hadoop’s log files
  • Interpret a log file
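
A few commands commonly used for the monitoring and log-related objectives are sketched below. The host name, log path, and application ID are placeholders; exact log locations depend on how the cluster was installed.

    # Confirm which Hadoop daemons are running on a node
    jps

    # Cluster-wide HDFS capacity, DataNode status, and under-replicated block counts
    hdfs dfsadmin -report

    # Daemon metrics exposed over HTTP (NameNode web UI on port 50070, ResourceManager on 8088 in Hadoop 2)
    curl http://namenode-host:50070/jmx

    # Aggregated logs for a finished YARN application
    yarn logs -applicationId <application_id>

    # Follow a daemon log and check memory/swap on the node
    tail -f /var/log/hadoop-hdfs/*namenode*.log
    free -m
    vmstat 5
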
Source: http://www.cloudera.com/training/certification/ccah.html