Hue Configuration

Introduction:

Where is my hue.ini?

  • CDH package: /etc/hue/conf/hue.ini

  • A tarball release: /usr/share/desktop/conf/hue.ini

  • Development version: desktop/conf/pseudo-distributed.ini

  • Cloudera Manager: CM generates the hue.ini for you, so no hassle: /var/run/cloudera-scm-agent/process/`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'`/hue.ini
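Whichever location applies, hue.ini is a plain ini-style file. As a rough illustration, a flat section like [beeswax] can be read with Python's standard-library configparser (Hue itself parses the file with configobj, which also handles the nested [[...]] sections shown later; the sample string below is a made-up stand-in for a real hue.ini):

```python
# Sketch: reading one setting out of a hue.ini-style file with the stdlib.
# The sample text stands in for a real hue.ini; 'hiveserver.ent.com' is the
# example host used later in this article.
import configparser

sample_ini = """
[beeswax]
  # Host where HiveServer2 is running.
  hive_server_host=hiveserver.ent.com
"""

config = configparser.ConfigParser()
config.read_string(sample_ini)
print(config.get("beeswax", "hive_server_host"))  # hiveserver.ent.com
```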

[beeswax]

  # Host where HiveServer2 is running.

  hive_server_host=localhost

To point to another server, just replace the host value with 'hiveserver.ent.com':

[beeswax]

  # Host where HiveServer2 is running.

  hive_server_host=hiveserver.ent.com

Note: Any line starting with a # is treated as a comment and is ignored.

Note: Misconfigured services are listed on the /about/admin_wizard page.

Note: After each change in the ini file, Hue should be restarted to pick it up.

Note: In some cases, as explained in the "How to configure Hadoop for Hue" documentation, the APIs of these services need to be turned on and Hue must be set as a proxy user.

Here are the main sections that you will need to update in order to have each service accessible in Hue:

HDFS

This is required for listing or creating files. Replace localhost with the real address of the NameNode (usually http://localhost:50070).

Enter this in hdfs-site.xml to enable WebHDFS in the NameNode and DataNodes:

<< code="">property>

  << code="">name>dfs.webhdfs.enabledname>

  << code="">value>truevalue>

property>
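To sanity-check the property above without restarting anything, the site file can be parsed mechanically. A minimal sketch with the stdlib XML parser, using an inline sample in place of a real /etc/hadoop/conf/hdfs-site.xml:

```python
# Sketch: verify that dfs.webhdfs.enabled is set to true in a Hadoop
# *-site.xml file. The sample string stands in for a real hdfs-site.xml.
import xml.etree.ElementTree as ET

sample = """
<configuration>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
"""

root = ET.fromstring(sample)
# Hadoop site files are flat <name>/<value> pairs, so a dict is enough.
props = {p.findtext("name"): p.findtext("value") for p in root.findall("property")}
assert props.get("dfs.webhdfs.enabled") == "true"
print("WebHDFS is enabled")
```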

Configure Hue as a proxy user for all other users and groups, meaning it may submit a request on behalf of any other user. Add to core-site.xml:

<< code="">property>

  << code="">name>hadoop.proxyuser.hue.hostsname>

  << code="">value>*value>

property>

<< code="">property>

  << code="">name>hadoop.proxyuser.hue.groupsname>

  << code="">value>*value>

property>

Then, if the NameNode is on a different host than Hue, don't forget to update it in the hue.ini:

[hadoop]

  [[hdfs_clusters]]

    [[[default]]]

      # Enter the filesystem uri
      fs_defaultfs=hdfs://localhost:8020

      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      webhdfs_url=http://localhost:50070/webhdfs/v1
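The webhdfs_url above is the base of the WebHDFS REST API; Hue appends the file path and an operation, and impersonates the logged-in user via the doas parameter — which is exactly what the hadoop.proxyuser.hue.* entries in core-site.xml authorize. A sketch of the kind of URL this produces ('alice' is a hypothetical end user, and webhdfs_list_url is an illustrative helper, not part of Hue):

```python
# Sketch: the shape of a WebHDFS REST request issued against webhdfs_url.
# Hue authenticates as 'hue' (user.name) and impersonates the end user
# ('alice', hypothetical) via doas.
from urllib.parse import urlencode

def webhdfs_list_url(base, path, proxy_user, end_user):
    """Build a LISTSTATUS request URL for `path`, impersonating `end_user`."""
    query = urlencode({"op": "LISTSTATUS", "user.name": proxy_user, "doas": end_user})
    return "%s%s?%s" % (base.rstrip("/"), path, query)

url = webhdfs_list_url("http://localhost:50070/webhdfs/v1", "/user/alice",
                       "hue", "alice")
print(url)
# http://localhost:50070/webhdfs/v1/user/alice?op=LISTSTATUS&user.name=hue&doas=alice
```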

YARN

The ResourceManager is often on http://localhost:8088 by default. The ProxyServer and Job History server also need to be specified. Job Browser will then let you list and kill running applications and get their logs.

[hadoop]

  [[yarn_clusters]]

    [[[default]]]

      # Enter the host on which you are running the ResourceManager
      resourcemanager_host=localhost

      # Whether to submit jobs to this cluster
      submit_to=True

      # URL of the ResourceManager API
      resourcemanager_api_url=http://localhost:8088

      # URL of the ProxyServer API
      proxy_api_url=http://localhost:8088

      # URL of the HistoryServer API
      history_server_api_url=http://localhost:19888
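Note that the three URLs usually follow from the hosts alone, using the stock Hadoop ports: 8088 for the ResourceManager (and its web proxy, when colocated) and 19888 for the MapReduce JobHistory server. A small illustrative helper (yarn_urls is hypothetical, not a Hue function) makes that derivation explicit:

```python
# Sketch: derive the three YARN API URLs from the host names, assuming the
# default Hadoop ports (8088 for the RM/proxy, 19888 for the JobHistory
# server). yarn_urls is a hypothetical helper for illustration only.
def yarn_urls(rm_host, history_host=None):
    history_host = history_host or rm_host  # JHS often shares the RM host
    return {
        "resourcemanager_api_url": "http://%s:8088" % rm_host,
        "proxy_api_url": "http://%s:8088" % rm_host,
        "history_server_api_url": "http://%s:19888" % history_host,
    }

urls = yarn_urls("localhost")
print(urls["history_server_api_url"])  # http://localhost:19888
```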

Hive

Here we need a running HiveServer2 in order to send SQL queries.

[beeswax]

 

  # Host where HiveServer2 is running.

  hive_server_host=localhost

Note:
If HiveServer2 is on another machine and you are using security or a customized HiveServer2 configuration, you will need to copy hive-site.xml to the Hue machine as well:

[beeswax]

 

  # Host where HiveServer2 is running.

  hive_server_host=localhost

 

  # Hive configuration directory, where hive-site.xml is located

  hive_conf_dir=/etc/hive/conf

Solr Search

We just need to specify the address of a SolrCloud (or non-Cloud Solr) server, and the interactive dashboard capabilities are unleashed!

[search]

 

  # URL of the Solr Server

  solr_url=http://localhost:8983/solr/

Oozie

An Oozie server should be up and running before submitting or monitoring workflows.

[liboozie]

 

  # The URL where the Oozie service runs on.

  oozie_url=http://localhost:11000/oozie

HBase

The HBase app works with an HBase Thrift Server version 1. It lets you browse, query and edit HBase tables.

[hbase]

  # Comma-separated list of HBase Thrift server 1 for clusters in the format of '(name|host:port)'.
  hbase_clusters=(Cluster|localhost:9090)
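The '(name|host:port)' entries can be split mechanically, which also makes the format easy to validate. A minimal sketch (parse_hbase_clusters is a hypothetical helper written for this illustration, not Hue code):

```python
# Sketch: split the '(name|host:port)' entries accepted by hbase_clusters.
# parse_hbase_clusters is a hypothetical helper, shown for illustration.
import re

def parse_hbase_clusters(value):
    """Yield (name, host, port) tuples from a comma-separated cluster list."""
    for name, host, port in re.findall(r"\(([^|]+)\|([^:]+):(\d+)\)", value):
        yield name, host, int(port)

print(list(parse_hbase_clusters("(Cluster|localhost:9090)")))
# [('Cluster', 'localhost', 9090)]
```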

This article was reposted from the yntmdr 51CTO blog. Original link: http://blog.51cto.com/yntmdr/1743223. Please contact the original author if you wish to republish it.