Importing Data into HDFS with Sqoop


 

1. Note: when pasting commands from Windows into Linux, fix characters such as the -- option prefix, which can get mangled by the copy.

sqoop-list-databases --connect jdbc:mysql://122.206.79.212:3306/ --username root -P 

  

First, list the databases on the server. Oddly, some databases do not show up; only the databases that appear in this listing (i.e. that this account can query) can be imported.
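A missing database is most likely a privilege issue: sqoop list-databases only shows what the connecting MySQL account is allowed to see. This is an assumption about the cause, not stated in the original post. A quick way to check what Sqoop can see inside the dating database used in the import below:

sqoop list-tables \
  --connect jdbc:mysql://122.206.79.212:3306/dating \
  --username root -P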

 

2. Import into HDFS

sqoop import  --connect jdbc:mysql://122.206.79.212:3306/dating --username root --password 123456 --table t_rec_top --driver com.mysql.jdbc.Driver 

This specifies the database, port, username, password, and the table to import; the --driver option is not actually required here (specifying it makes Sqoop fall back to the generic JDBC manager, as the warning in the log shows). No HDFS destination is given, so Sqoop writes to a default location: a directory named after the table under the HDFS home directory of the user running the job.

From the log below you can see that the job runs map tasks only, with no reduce tasks:

Warning: /home/hxsyl/Spark_Relvant/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hxsyl/Spark_Relvant/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
17/03/15 11:05:12 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
17/03/15 11:05:12 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/03/15 11:05:12 WARN sqoop.ConnFactory: Parameter --driver is set to an explicit driver however appropriate connection manager is not being set (via --connection-manager). Sqoop is going to fall back to org.apache.sqoop.manager.GenericJdbcManager. Please specify explicitly which connection manager should be used next time.
17/03/15 11:05:12 INFO manager.SqlManager: Using default fetchSize of 1000
17/03/15 11:05:12 INFO tool.CodeGenTool: Beginning code generation
17/03/15 11:05:13 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM t_rec_top AS t WHERE 1=0
17/03/15 11:05:13 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM t_rec_top AS t WHERE 1=0
17/03/15 11:05:13 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hxsyl/Spark_Relvant/hadoop-2.6.4/share/hadoop/mapreduce
Note: /tmp/sqoop-hxsyl/compile/ddeeb02cdbd25cddc2662317b89c80f1/t_rec_top.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/03/15 11:05:18 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hxsyl/compile/ddeeb02cdbd25cddc2662317b89c80f1/t_rec_top.jar
17/03/15 11:05:18 INFO mapreduce.ImportJobBase: Beginning import of t_rec_top
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hxsyl/Spark_Relvant/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hxsyl/Spark_Relvant/hbase-1.2.4/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/03/15 11:05:19 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
17/03/15 11:05:19 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM t_rec_top AS t WHERE 1=0
17/03/15 11:05:21 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/03/15 11:05:21 INFO client.RMProxy: Connecting to ResourceManager at CentOSMaster/192.168.58.180:8032
17/03/15 11:05:28 INFO db.DBInputFormat: Using read commited transaction isolation
17/03/15 11:05:28 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(id), MAX(id) FROM t_rec_top
17/03/15 11:05:28 INFO mapreduce.JobSubmitter: number of splits:1
17/03/15 11:05:29 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1489547007191_0001
17/03/15 11:05:30 INFO impl.YarnClientImpl: Submitted application application_1489547007191_0001
17/03/15 11:05:31 INFO mapreduce.Job: The url to track the job: http://CentOSMaster:8088/proxy/application_1489547007191_0001/
17/03/15 11:05:31 INFO mapreduce.Job: Running job: job_1489547007191_0001
17/03/15 11:05:48 INFO mapreduce.Job: Job job_1489547007191_0001 running in uber mode : false
17/03/15 11:05:48 INFO mapreduce.Job:  map 0% reduce 0%
17/03/15 11:06:06 INFO mapreduce.Job:  map 100% reduce 0%
17/03/15 11:06:07 INFO mapreduce.Job: Job job_1489547007191_0001 completed successfully
17/03/15 11:06:07 INFO mapreduce.Job: Counters: 30
	File System Counters
		FILE: Number of bytes read=0
		FILE: Number of bytes written=127058
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=99
		HDFS: Number of bytes written=21
		HDFS: Number of read operations=4
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters 
		Launched map tasks=1
		Other local map tasks=1
		Total time spent by all maps in occupied slots (ms)=13150
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=13150
		Total vcore-milliseconds taken by all map tasks=13150
		Total megabyte-milliseconds taken by all map tasks=13465600
	Map-Reduce Framework
		Map input records=1
		Map output records=1
		Input split bytes=99
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=183
		CPU time spent (ms)=1200
		Physical memory (bytes) snapshot=107761664
		Virtual memory (bytes) snapshot=2069635072
		Total committed heap usage (bytes)=30474240
	File Input Format Counters 
		Bytes Read=0
	File Output Format Counters 
		Bytes Written=21
17/03/15 11:06:07 INFO mapreduce.ImportJobBase: Transferred 21 bytes in 46.7701 seconds (0.449 bytes/sec)
17/03/15 11:06:07 INFO mapreduce.ImportJobBase: Retrieved 1 records.

  

Sqoop creates a /user/<username> directory in HDFS, and the t_rec_top directory inside it holds our data, without a header row. The output files are named part-m-00000 and so on; the "m" shows that the job finished after the map phase.
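A quick way to check the result on HDFS (a sketch; the paths assume the job ran as user hxsyl, as in the log above, and that the default target directory was used):

hdfs dfs -ls /user/hxsyl/t_rec_top                 # should show _SUCCESS and part-m-00000: map-only output
hdfs dfs -cat /user/hxsyl/t_rec_top/part-m-00000   # comma-separated rows, no header line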

The wc00 directory is word-count output from an earlier job over a configuration file:

"AS	1
"License");	1
${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.	1
(the	1
-->	3
2.0	1
<!--	3
</configuration>	1
</description>	1
</property>	15
<?xml	1
<configuration>	1
<description>Amount	1
<description>List	1
<description>Number	1
<description>The	7
<description>Where	1
<description>Whether	1
<description>fair-scheduler	1
<description>the	1
<name>yarn.log-aggregation-enable</name>	1
<name>yarn.nodemanager.aux-services</name>	1
<name>yarn.nodemanager.local-dirs</name>	1
<name>yarn.nodemanager.remote-app-log-dir</name>	1
<name>yarn.nodemanager.resource.cpu-vcores</name>	1
<name>yarn.nodemanager.resource.memory-mb</name>	1
<name>yarn.resourcemanager.address</name>	1
<name>yarn.resourcemanager.admin.address</name>	1
<name>yarn.resourcemanager.hostname</name>	1
<name>yarn.resourcemanager.resource-tracker.address</name>	1
<name>yarn.resourcemanager.scheduler.address</name>	1
<name>yarn.resourcemanager.scheduler.class</name>	1
<name>yarn.resourcemanager.webapp.address</name>	1
<name>yarn.resourcemanager.webapp.https.address</name>	1
<name>yarn.scheduler.fair.allocation.file</name>	1
<property>	15
<value>${yarn.home.dir}/etc/hadoop/fairscheduler.xml</value>	1
<value>${yarn.resourcemanager.hostname}:8030</value>	1
<value>${yarn.resourcemanager.hostname}:8031</value>	1
<value>${yarn.resourcemanager.hostname}:8032</value>	1
<value>${yarn.resourcemanager.hostname}:8033</value>	1
<value>${yarn.resourcemanager.hostname}:8088</value>	1
<value>${yarn.resourcemanager.hostname}:8090</value>	1
<value>/home/hxsyl/Spark_Relvant/yarn/local</value>	1
<value>/tmp/logs</value>	1
<value>12</value>	1
<value>30720</value>	1
<value>CentOSMaster</value>	1
<value>mapreduce_shuffle</value>	1
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>	1
<value>true</value>	1
ANY	1
An	1
Apache	1
BASIS,	1
CONDITIONS	1
CPU	1
Configs	1
IS"	1
Individual	1
KIND,	1
LICENSE	1
License	3
License,	1
License.	2
Licensed	1
MB,	1
Manager	1
OF	1
OR	1
RM	3
RM.</description>	2
Resource	1
See	2
Site	1
Unless	1
Version	1
WARRANTIES	1
WITHOUT	1
YARN	1
You	1
a	1
a-zA-Z0-9_	1
accompanying	1
adddress	1
address	4
admin	1
aggregate	1
aggregation</description>	1
agreed	1
allocated	2
an	1
and	2
applicable	1
application's	1
application.</description>	2
applications	1
as	1
at	1
be	4
by	1
called	1
can	3
class	1
compliance	1
conf	1
configuration	1
contain	1
container_${contid},	1
containers'	1
containers.</description>	2
copy	1
cores	1
directories	1
directories,	1
directory	1
distributed	2
either	1
enable	1
except	1
express	1
file	2
file.	1
files	1
for	3
found	1
governing	1
hostname	1
http	1
http://www.apache.org/licenses/LICENSE-2.0	1
https	1
implied.	1
in	4
in.	1
in:	1
interface	1
interface.</description>	2
is	1
language	1
law	1
limitations	1
localized	2
location</description>	1
log	1
logs	1
manager	1
may	2
memory,	1
name	1
not	2
numbers</description>	1
obtain	1
of	11
on	1
only	1
or	2
permissions	1
physical	1
properties	1
required	1
resource	1
scheduler	1
scheduler.</description>	1
service	1
should	1
software	1
specific	2
start	1
store	1
subdirectories	1
that	2
the	15
this	1
this.	1
to	5
to.</description>	1
under	3
use	2
valid	1
version="1.0"?>	1
web	2
will	2
with	2
work	1
writing,	1
you	1

  

--target-dir /path writes the output to the given HDFS path; -m sets the number of mappers (numberMapper).
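For example, a sketch of the same import with an explicit target directory and a single mapper (the path is a placeholder, not from the original post):

sqoop import \
  --connect jdbc:mysql://122.206.79.212:3306/dating \
  --username root -P \
  --table t_rec_top \
  --target-dir /user/hxsyl/sqoop/t_rec_top \
  -m 1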

 

Opening the imported file on HDFS shows that fields are comma-separated by default. --fields-terminated-by '\t' changes the field delimiter that Sqoop writes into HDFS; to describe the delimiter of already-delimited source files (for parsing input), the separate --input-fields-terminated-by option is used instead.

--columns 'id,account,income'    imports only the listed columns.

 

With --where "id > 2 and id < 9", only the rows that satisfy the condition are imported.
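A sketch combining the options above; the column names follow the example in the text and are assumptions about the table's schema, and the target directory is a placeholder:

sqoop import \
  --connect jdbc:mysql://122.206.79.212:3306/dating \
  --username root -P \
  --table t_rec_top \
  --columns 'id,account,income' \
  --where 'id > 2 and id < 9' \
  --fields-terminated-by '\t' \
  --target-dir /user/hxsyl/sqoop/t_rec_top_subset \
  -m 1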

 

To import the result of a query (for example one that joins several tables), use --query "select * from t_detail where id > 5 and $CONDITIONS"; the $CONDITIONS token is mandatory.

If -m is greater than 1, you must also tell Sqoop how to divide the rows among the mappers with --split-by t_detail.id. Sqoop replaces $CONDITIONS with a range condition on the split column, which is how it counts and partitions the records so each mapper reads its own slice (see the sketch below).
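A sketch of a free-form query import (when --query is used, Sqoop requires an explicit --target-dir, and with more than one mapper it also requires --split-by; the target directory is a placeholder):

sqoop import \
  --connect jdbc:mysql://122.206.79.212:3306/dating \
  --username root -P \
  --query 'select * from t_detail where id > 5 and $CONDITIONS' \
  --split-by t_detail.id \
  --target-dir /user/hxsyl/sqoop/t_detail_q \
  -m 2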

 

Single quotes are recommended around the query; with double quotes you have to escape special characters. Options starting with -- are the long (full-name) form, and - is the short form.

 

 

The biggest difference between single and double quotes is that double quotes still allow variable substitution, while inside single quotes everything is treated as plain characters with no special meaning. Consider the following example: suppose you define a variable name=VBird, and you now want a variable myname whose content is "VBird its me". How do you define it?

[root@linux ~]# name=VBird 
[root@linux ~]# echo $name 
VBird 
[root@linux ~]# myname="$name its me" 
[root@linux ~]# echo $myname 
VBird its me 
[root@linux ~]# myname='$name its me' 
[root@linux ~]# echo $myname 
$name its me 

See what happened? With single quotes, $name loses its meaning as a variable and is shown as plain literal text. Be especially careful about this!

 
