Installing and Configuring Sqoop 1.4.6 on Hadoop 2.6.0 (Single Node), with Data Access to Hadoop, HBase, and Hive


Environment prerequisites

  Java

  Hadoop (HDFS/YARN)


Installing Sqoop on Hadoop 2.6.0 (single node)

  Step 1: Upload the Sqoop installation package (not covered in detail here), then rename the extracted directory:

[hadoop@djt002 sqoop]$ pwd
/usr/local/sqoop
[hadoop@djt002 sqoop]$ ls
sqoop-1.4.6.bin__hadoop-2.0.4-alpha
[hadoop@djt002 sqoop]$ mv sqoop-1.4.6.bin__hadoop-2.0.4-alpha/ sqoop-1.4.6
[hadoop@djt002 sqoop]$ ls
sqoop-1.4.6
[hadoop@djt002 sqoop]$ cd sqoop-1.4.6/
[hadoop@djt002 sqoop-1.4.6]$ pwd
/usr/local/sqoop/sqoop-1.4.6
[hadoop@djt002 sqoop-1.4.6]$


[hadoop@djt002 sqoop-1.4.6]$ ls
bin CHANGELOG.txt conf ivy lib NOTICE.txt README.txt sqoop-patch-review.py src
build.xml COMPILING.txt docs ivy.xml LICENSE.txt pom-old.xml sqoop-1.4.6.jar sqoop-test-1.4.6.jar testdata
[hadoop@djt002 sqoop-1.4.6]$ cd conf/
[hadoop@djt002 conf]$ pwd
/usr/local/sqoop/sqoop-1.4.6/conf
[hadoop@djt002 conf]$ ls
oraoop-site-template.xml sqoop-env-template.cmd sqoop-env-template.sh sqoop-site-template.xml sqoop-site.xml
[hadoop@djt002 conf]$ cp sqoop-env-template.sh sqoop-env.sh
[hadoop@djt002 conf]$ vim sqoop-env.sh


   Step 2: Edit the configuration file sqoop-env.sh

 

[hadoop@djt002 conf]$ vim sqoop-env.sh

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# included in all the hadoop scripts with source command
# should not be executable directly
# also should not be passed any arguments, since we need original $*

# Set Hadoop-specific environment variables here.

#Set path to where bin/hadoop is available
#export HADOOP_COMMON_HOME=    (recommended: set all four of these)

#Set path to where hadoop-*-core.jar is available
#export HADOOP_MAPRED_HOME=    (recommended: set all four of these)

#set the path to where bin/hbase is available
#export HBASE_HOME=            (recommended: set all four of these)

#Set the path to where bin/hive is available
#export HIVE_HOME=             (recommended: set all four of these)

#Set the path for where zookeper config dir is
#export ZOOCFGDIR=             (since my Hadoop 2.6.0 deployment is single-node, there is no need to configure ZooKeeper)


    If your data access does not involve HBase or Hive, the corresponding HBASE_HOME and HIVE_HOME entries can be omitted; likewise, set the ZooKeeper entry only if the cluster runs a standalone ZooKeeper ensemble.

For everyone's convenience, I will configure all of them here, except ZooKeeper, which a single-node deployment does not need.

export HADOOP_COMMON_HOME=/usr/local/hadoop/hadoop-2.6.0
export HADOOP_MAPRED_HOME=/usr/local/hadoop/hadoop-2.6.0
export HBASE_HOME=/usr/local/hbase/hbase-1.2.3
export HIVE_HOME=/usr/local/hive/hive-1.0.0


  Step 3: Configure environment variables (append the following to /etc/profile):

#sqoop
export SQOOP_HOME=/usr/local/sqoop/sqoop-1.4.6
export PATH=$PATH:$SQOOP_HOME/bin
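The profile snippet above appends to PATH unconditionally every time it is sourced. As an optional sketch (plain POSIX shell, using the same paths as above), the entry can be added only when it is not already present:

```shell
# add Sqoop's bin directory to PATH only if it is not already there
SQOOP_HOME=/usr/local/sqoop/sqoop-1.4.6
case ":$PATH:" in
  *":$SQOOP_HOME/bin:"*) : ;;               # already on PATH, nothing to do
  *) PATH="$PATH:$SQOOP_HOME/bin" ;;
esac
export SQOOP_HOME PATH
```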


  Step 4: Reload the profile so the variables take effect:

source /etc/profile


  Step 5: Remember to grant ownership of the Sqoop installation directory to the hadoop user:

chown -R hadoop:hadoop sqoop-1.4.6


  第六步:将相关的驱动 jar 包拷贝到 sqoop/lib 目录下。

   这里,省略了,很多,包括。hadoo的相关核心jar包、hive的相关核心jar包和hbase的相关核心jar包(补补)
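As a concrete sketch of this step (the connector jar name and version below are assumptions; substitute the MySQL JDBC driver you actually downloaded):

```shell
# stage the MySQL JDBC driver path; the cp itself must run on the actual node
SQOOP_HOME=/usr/local/sqoop/sqoop-1.4.6
JAR=mysql-connector-java-5.1.38-bin.jar     # assumed version, adjust to yours
DEST="$SQOOP_HOME/lib/$JAR"
# cp "$JAR" "$DEST"
echo "$DEST"
```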


 Testing

   For example, let's connect to the database first. As a MySQL client I personally recommend Navicat for MySQL (download, install, and use it as usual).

   But first, start the previously installed MySQL database:

[hadoop@djt002 ~]$ su root
Password: 
[root@djt002 hadoop]# cd /usr/local/
[root@djt002 local]# pwd
/usr/local
[root@djt002 local]# service mysqld start
Starting mysqld:                                           [  OK  ]
[root@djt002 local]# 

 

  Then create a connection in the client.

 

 

[hadoop@djt002 sqoop-1.4.6]$ sqoop list-databases --connect jdbc:mysql://djt002/ --username hive --password hive
Warning: /usr/local/sqoop/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/sqoop/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /usr/local/sqoop/sqoop-1.4.6/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
17/03/17 20:30:25 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
17/03/17 20:30:25 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/03/17 20:30:27 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
information_schema
hive
mysql
test
[hadoop@djt002 sqoop-1.4.6]$ sqoop list-tables --connect jdbc:mysql://djt002/hive --username hive --password hive
Warning: /usr/local/sqoop/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/sqoop/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /usr/local/sqoop/sqoop-1.4.6/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
17/03/17 20:30:48 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
17/03/17 20:30:48 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/03/17 20:30:50 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
BUCKETING_COLS
CDS
COLUMNS_V2
DATABASE_PARAMS
DBS
FUNCS
FUNC_RU
GLOBAL_PRIVS
IDXS
INDEX_PARAMS
PARTITIONS
PARTITION_KEYS
PARTITION_KEY_VALS
PARTITION_PARAMS
PART_COL_PRIVS
PART_COL_STATS
PART_PRIVS
ROLES
SDS
SD_PARAMS
SEQUENCE_TABLE
SERDES
SERDE_PARAMS
SKEWED_COL_NAMES
SKEWED_COL_VALUE_LOC_MAP
SKEWED_STRING_LIST
SKEWED_STRING_LIST_VALUES
SKEWED_VALUES
SORT_COLS
TABLE_PARAMS
TAB_COL_STATS
TBLS
TBL_COL_PRIVS
TBL_PRIVS
VERSION
[hadoop@djt002 sqoop-1.4.6]$ pwd
/usr/local/sqoop/sqoop-1.4.6
[hadoop@djt002 sqoop-1.4.6]$
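With connectivity verified, the usual next step is an actual import. A minimal sketch that only assembles an import-to-HDFS command (the table name widgets and the target directory are hypothetical; run the assembled command on the configured node):

```shell
# assemble a Sqoop import command; nothing is executed against the cluster here
CONNECT="jdbc:mysql://djt002/test"
TABLE=widgets                                # hypothetical table name
CMD="sqoop import --connect $CONNECT --username hive -P --table $TABLE --target-dir /user/hadoop/$TABLE -m 1"
echo "$CMD"
# on the node: eval "$CMD"   (-P prompts for the password instead of exposing it)
```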

 

   This only verifies connectivity; we have not yet reached the desired end result.

So what do we do next?


Supplement: how are the Sqoop commands used?

  

[hadoop@djt002 sqoop-1.4.6]$ sqoop help
Warning: /usr/local/sqoop/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/sqoop/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /usr/local/sqoop/sqoop-1.4.6/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
17/03/17 20:03:21 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
usage: sqoop COMMAND [ARGS]    (i.e. sqoop <command> <args>)

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.
[hadoop@djt002 sqoop-1.4.6]$
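Several of these commands combine naturally. For instance, here is a hedged sketch of importing a MySQL table straight into Hive; the database test and table orders are assumptions, HIVE_HOME must be set in sqoop-env.sh as configured earlier, and the command is only assembled and echoed here:

```shell
# assemble an import-into-Hive command; echoed rather than executed
CONNECT="jdbc:mysql://djt002/test"
TABLE=orders                                 # hypothetical table name
CMD="sqoop import --connect $CONNECT --username hive -P --table $TABLE --hive-import --hive-table $TABLE -m 1"
echo "$CMD"
```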

 

  It is best to get into the habit of reading the official documentation:

 http://sqoop.apache.org/

 

http://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html


The following section is still a draft awaiting cleanup; feel free to skip it for now.


Part 2: Installing Sqoop2 on Hadoop 2.6.0

   For now, I will use an Ubuntu environment here.

     Step 1: Download the Sqoop2 package from:

http://mirrors.hust.edu.cn/apache/sqoop/1.99.6/   or   http://archive.apache.org/dist/sqoop/   (the latter is recommended)

   I usually upload it to /usr/local/sqoop on the Linux machine; set this path as you see fit.

Step 2: Extract the package: sudo tar -zxvf sqoop-1.99.6-bin-hadoop200.tar.gz

Step 3: Rename the extracted directory to sqoop-1.99.6:

           sudo mv sqoop-1.99.6-bin-hadoop200 sqoop-1.99.6

Step 4: Enter the directory to configure it: cd sqoop-1.99.6

  1. Configure environment variables (sudo gedit ~/.bashrc), adding the Sqoop install path and PATH:

        export SQOOP2_HOME=/usr/local/sqoop/sqoop-1.99.6

        export PATH=.:$SQOOP2_HOME/bin:$PATH

        export CATALINA_BASE=$SQOOP2_HOME/server

  2. Copy the MySQL connector jar into sqoop-1.99.6/server/lib.

  3. Edit the configuration file $SQOOP2_HOME/server/conf/sqoop.properties, and in $SQOOP2_HOME/server/conf/catalina.properties change the hard-coded paths to your own installation directory.

 

 

 Step 5: Start the Sqoop server: in the Sqoop installation directory, run bin/sqoop.sh server start

Step 6: Start the Sqoop client: in the Sqoop installation directory, run bin/sqoop.sh client



This article was originally published on the 大数据躺过的坑 blog at cnblogs; original link: http://www.cnblogs.com/zlslch/p/6116363.html. Please contact the original author before reprinting.
