Installing Spark in standalone mode (spark-1.6.1-bin-hadoop2.6.tgz) (master, slave1, and slave2)

Overview:

Things to understand up front:

  (1) spark-env.sh is the environment-variable configuration file.

  (2) spark-defaults.conf is the default configuration used when submitting jobs.

  (3) slaves is the configuration file listing the worker (slave) nodes.

  (4) metrics.properties configures monitoring.

  (5) log4j.properties configures logging.

  (6) fairscheduler.xml configures fair scheduling.

  (7) docker.properties holds the Docker-related settings.

  (8) My Spark standalone installation here uses master, slave1, and slave2.

  (9) A Spark standalone installation does not actually require Hadoop. (I have not installed Hadoop here; some people blog about this without installing it, others install it.)

  (10) For management, ZooKeeper is installed (that is, to manage master, slave1, and slave2).

First, let me describe the Spark standalone installation setup covered in this post.

My disk partitioning is as follows, identical on every node.

On disabling the firewall

  I won't go into detail here; see:

Troubleshooting roundup: Hadoop port 50070 not accessible
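The linked post has the details; as a minimal sketch, on CentOS 6.5 this usually comes down to the following (assuming the stock iptables service):

# stop the firewall now (run as root)
service iptables stop

# keep it from starting at boot
chkconfig iptables off

# confirm it is off
service iptables status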

On configuring a static IP and networking

  I won't go into detail here; mine is shown below. For the walkthrough, see:

Setting a static IP on CentOS 6.5 (applies to both NAT and bridged networking)

 

On master (192.168.80.10), /etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
HWADDR=00:0C:29:A9:45:18
TYPE=Ethernet
UUID=50fc177a-f282-4c83-bfbc-cb0f00b92507
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static

DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"

IPADDR=192.168.80.10
BCAST=192.168.80.255
GATEWAY=192.168.80.2
NETMASK=255.255.255.0

DNS1=192.168.80.2
DNS2=8.8.8.8

On slave1 (192.168.80.11), /etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
HWADDR=00:0C:29:18:ED:4A
TYPE=Ethernet
UUID=b5d059e4-3b92-41ef-889b-68f2f5684fac
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static

DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
IPADDR=192.168.80.11
BCAST=192.168.80.255
GATEWAY=192.168.80.2
NETMASK=255.255.255.0

DNS1=192.168.80.2
DNS2=8.8.8.8

On slave2 (192.168.80.12), /etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
HWADDR=00:0C:29:8B:DE:B0
TYPE=Ethernet
UUID=1ba7be29-2c80-4875-8c11-1ed2a47c0a67
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static

DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
IPADDR=192.168.80.12
BCAST=192.168.80.255
GATEWAY=192.168.80.2
NETMASK=255.255.255.0

DNS1=192.168.80.2
DNS2=8.8.8.8

On creating the user group and user

  I won't go into detail here; my user is spark. See:

Creating user groups, users, and passwords, and deleting user groups and users (works on CentOS and Ubuntu)
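For reference, a minimal sketch of the commands (run as root; the group and user name spark match this post's choice):

groupadd spark
useradd -g spark -m spark
passwd spark        # set the password interactively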


On installing SSH, passwordless login within each machine and between machines, and time synchronization

  I won't go into much detail here; see the posts below. From hard-won experience: this step is very easy to get wrong, so I strongly recommend taking a VM snapshot first!

  Within each machine means master to master, slave1 to slave1, and slave2 to slave2.

  Between machines means master to slave1, master to slave2, and slave1 to slave2.

  A sketch of the key-distribution commands follows the links below.

hadoop-2.6.0.tar.gz + spark-1.5.2-bin-hadoop2.6.tgz cluster setup (works for both 3 and 5 nodes)

hadoop-2.6.0.tar.gz cluster setup (5 nodes)
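A minimal sketch of the passwordless-SSH part, assuming the spark user exists on every node and the hostnames master, slave1, and slave2 resolve (e.g., via /etc/hosts):

# on every node, as the spark user: generate a key pair (accept the defaults)
ssh-keygen -t rsa

# authorize the node's own key (machine-to-itself login)
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# push each node's public key to the other nodes (machine-to-machine login)
ssh-copy-id spark@master
ssh-copy-id spark@slave1
ssh-copy-id spark@slave2

# verify: this should no longer prompt for a password
ssh spark@slave1 hostname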


 On uninstalling the bundled OpenJDK and installing the Oracle JDK

  I won't go into detail here; I use jdk-8u60-linux-x64.tar.gz. See:

  My JDK is installed under /usr/local/jdk; remember to set ownership: chown -R spark:spark jdk

Uninstalling OpenJDK and installing the Sun JDK on CentOS 6.5, with environment-variable configuration

 

#java
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_60
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
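The exports above go into the spark user's environment (for example ~/.bashrc, or /etc/profile). As a hedged sketch, the uninstall-and-install steps usually look like this on CentOS 6.5 (the OpenJDK package name below is only an example; check rpm -qa on your own machine):

# as root: find and remove the bundled OpenJDK
rpm -qa | grep -i openjdk
rpm -e --nodeps java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64   # example name, use yours

# unpack the Oracle JDK and hand it to the spark user
mkdir -p /usr/local/jdk
tar -zxvf jdk-8u60-linux-x64.tar.gz -C /usr/local/jdk
chown -R spark:spark /usr/local/jdk

# reload the environment and verify
source /etc/profile
java -version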

 On installing Scala

  Not much to say here; I use scala-2.10.5.tgz. See:

  My Scala is installed under /usr/local/scala; remember to set ownership: chown -R spark:spark scala

 

hadoop-2.6.0.tar.gz + spark-1.6.1-bin-hadoop2.6.tgz cluster setup (single node) (CentOS)

#scala
export SCALA_HOME=/usr/local/scala/scala-2.10.5
export PATH=$PATH:$SCALA_HOME/bin
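A quick hedged sketch of unpacking and verifying, assuming the tarball sits in the current directory:

mkdir -p /usr/local/scala
tar -zxvf scala-2.10.5.tgz -C /usr/local/scala
chown -R spark:spark /usr/local/scala
source /etc/profile
scala -version    # should report Scala 2.10.5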

 On installing Spark

  I won't go into detail here; see:

  My Spark installation directory is /usr/local/spark/; remember to set ownership: chown -R spark:spark spark

    Just use the blog below for the installation itself; for how to configure Spark, see the walkthrough of the standalone-mode configuration files later in this post.

hadoop-2.6.0.tar.gz + spark-1.6.1-bin-hadoop2.6.tgz cluster setup (single node) (CentOS)

#spark
export SPARK_HOME=/usr/local/spark/spark-1.6.1-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
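A hedged sketch of the unpacking step, assuming the same layout as above:

mkdir -p /usr/local/spark
tar -zxvf spark-1.6.1-bin-hadoop2.6.tgz -C /usr/local/spark
chown -R spark:spark /usr/local/spark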


On installing ZooKeeper

  I won't go into detail here; see:

hadoop-2.6.0-cdh5.4.5.tar.gz (CDH) 3-node cluster setup (including ZooKeeper cluster installation)

 And, for how to configure ZooKeeper in Spark afterwards, see:

Spark standalone overview and running wordcount (master, slave1, and slave2)
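For reference, a standalone master can use a ZooKeeper ensemble for high availability. A minimal sketch of the spark-env.sh addition, assuming ZooKeeper runs on all three nodes on its default port 2181 (these are the standard spark.deploy.* recovery properties from the Spark standalone docs):

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
-Dspark.deploy.zookeeper.url=master:2181,slave1:2181,slave2:2181 \
-Dspark.deploy.zookeeper.dir=/spark"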


Now, let me take you through the official documentation:

http://spark.apache.org/docs/latest


http://spark.apache.org/docs/latest/spark-standalone.html

 

 

http://spark.apache.org/docs/latest/spark-standalone.html#starting-a-cluster-manually


Spark standalone deployment configuration: starting the cluster with scripts

Modify the following configuration files:

● slaves -- specifies the nodes on which Workers run.

conf/slaves:
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# A Spark Worker will be started on each of the machines listed below.
slave1
slave2

 

● spark-defaults.conf --- the default configuration used when submitting Spark jobs.

conf/spark-defaults.conf:
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# Default system properties included when running spark-submit.
# This is useful for setting default environmental settings.

# Example:
# spark.master                     spark://master:7077
# spark.eventLog.enabled           true
# spark.eventLog.dir               hdfs://namenode:8021/directory
# spark.serializer                 org.apache.spark.serializer.KryoSerializer
# spark.driver.memory              5g
# spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"

  You can pin these settings in this configuration file so that you don't have to specify them on the command line every time. Of course, you can also leave it unconfigured. (I usually don't configure it, i.e., I leave this file untouched.)

An example spark-defaults.conf (optional: the same settings can instead be passed to spark-submit, and setting them here fixes them for every job, so many people skip it; I usually leave this file untouched). A hedged spark-submit equivalent follows the example below.

spark.master                     spark://master:7077
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://master:9000/sparkHistoryLogs
spark.eventLog.compress          true
spark.history.fs.update.interval 5
spark.history.ui.port            7777
spark.history.fs.logDirectory    hdfs://master:9000/sparkHistoryLogs
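Since these settings can also be supplied per job, here is a hedged sketch of the spark-submit equivalent (the example class and jar path are my assumptions based on the Spark 1.6.1 prebuilt distribution, and the hdfs:// event-log directory assumes an HDFS is available; adjust to your setup):

bin/spark-submit \
  --master spark://master:7077 \
  --conf spark.eventLog.enabled=true \
  --conf spark.eventLog.dir=hdfs://master:9000/sparkHistoryLogs \
  --class org.apache.spark.examples.SparkPi \
  lib/spark-examples-1.6.1-hadoop2.6.0.jar 100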


● spark-env.sh --- Spark's environment variables.

conf/spark-env.sh:
#!/usr/bin/env bash

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.

# Options read when launching programs locally with
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public dns name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append

# Options read by executors and drivers running inside the cluster
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append
# - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
# - MESOS_NATIVE_JAVA_LIBRARY, to point to your libmesos.so if you use Mesos

# Options read in YARN client mode
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_EXECUTOR_INSTANCES, Number of executors to start (Default: 2)
# - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_YARN_APP_NAME, The name of your application (Default: Spark)
# - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: 'default')
# - SPARK_YARN_DIST_FILES, Comma separated list of files to be distributed with the job.
# - SPARK_YARN_DIST_ARCHIVES, Comma separated list of archives to be distributed with the job.

# Options for the daemons used in the standalone deploy mode
# - SPARK_MASTER_IP, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_DAEMON_MEMORY, to allocate to the master, worker and history server themselves (default: 1g).
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_SHUFFLE_OPTS, to set config properties only for the external shuffle service (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers

# Generic options for the daemons used in the standalone deploy mode
# - SPARK_CONF_DIR      Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - SPARK_LOG_DIR       Where log files are stored. (Default: ${SPARK_HOME}/logs)
# - SPARK_PID_DIR       Where the pid file is stored. (Default: /tmp)
# - SPARK_IDENT_STRING  A string representing this instance of spark. (Default: $USER)
# - SPARK_NICENESS      The scheduling priority for daemons. (Default: 0)



export JAVA_HOME=/usr/local/jdk/jdk1.8.0_60
export SCALA_HOME=/usr/local/scala/scala-2.10.5

export SPARK_MASTER_IP=192.168.80.10
export SPARK_WORKER_MEMORY=1g    # the official docs use 1g
# SPARK_MASTER_WEBUI_PORT=8888   # optional: change the web UI port; I don't demonstrate this here

Note: SPARK_MASTER_PORT defaults to 7077, and SPARK_MASTER_WEBUI_PORT defaults to 8080.

   As I said, this post is aimed at installing Spark's standalone mode, so Hadoop is not required and no Hadoop settings are configured here.

If you see others configuring things such as HADOOP_HOME or HADOOP_CONF_DIR, that is for installing Spark's YARN mode. (Take note!)
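The conf files (slaves, spark-env.sh, and spark-defaults.conf if you use it) are typically kept identical across the nodes; a hedged sketch of syncing them, assuming the install paths used earlier:

cd /usr/local/spark/spark-1.6.1-bin-hadoop2.6/conf
scp slaves spark-env.sh spark@slave1:/usr/local/spark/spark-1.6.1-bin-hadoop2.6/conf/
scp slaves spark-env.sh spark@slave2:/usr/local/spark/spark-1.6.1-bin-hadoop2.6/conf/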


● On the node that will act as master, start the cluster --- sbin/start-all.sh
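A hedged sketch of starting and verifying the cluster, assuming the environment variables set earlier:

# on master, as the spark user
$SPARK_HOME/sbin/start-all.sh

# verify: jps on master should show a Master process,
# and jps on slave1/slave2 should each show a Worker process
jps

# the master web UI is at http://192.168.80.10:8080 by default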


This article is reproduced from the 大数据躺过的坑 blog on 博客园. Original link: http://www.cnblogs.com/zlslch/p/6632185.html. Please contact the original author yourself if you wish to reprint.
