Detailed steps for exporting SQL Server data to Azure HBase / Hive

The Hadoop on Azure Sqoop Import Sample Tutorial

Overview

This tutorial shows how to use Sqoop to import data from a SQL database on Windows Azure to a Hadoop on Azure HDFS cluster.

While Hadoop is a natural choice for processing unstructured and semi-structured data, such as logs and files, there may also be a need to process structured data stored in relational databases.

Sqoop is a tool designed to transfer data between Hadoop and relational databases. You can use it to import data from a relational database management system (RDBMS) such as SQL Server, MySQL, or Oracle into the Hadoop Distributed File System (HDFS), transform the data in Hadoop with MapReduce or Hive, and then export the data back into an RDBMS. In this tutorial, you use SQL Database as your relational database.
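
The import direction is demonstrated step by step in this tutorial. For orientation only, the reverse (export) direction follows the same command pattern; the sketch below is illustrative, is not run in this tutorial, and uses placeholder table and directory names:

sqoop export --connect "jdbc:sqlserver://[serverName].database.windows.net;username=[userName]@[serverName];password=[password];database=[database]" --table [targetTable] --export-dir /data/results -m 1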

Sqoop is an open source software product of Cloudera, Inc. Software development for Sqoop has recently moved from GitHub to the Apache Sqoop site.

In Hadoop on Azure, Sqoop is deployed from the Hadoop Command Shell on the head node of the HDFS cluster. You use the Remote Desktop feature available in the Hadoop on Azure portal to access the head node of the cluster for this deployment.

Goals

In this tutorial, you see three things:

  1. How to set up a SQL database on Windows Azure for use with the tutorial.

  2. How to use the Remote Desktop feature in Hadoop on Azure to access the head node of the HDFS cluster.

  3. How to import relational data from SQL Server to a Hadoop on Azure HDFS cluster by using Sqoop.

Setup and Configuration

You must have an account to access Hadoop on Azure and have created a cluster to work through this tutorial. To obtain an account and create a Hadoop cluster, follow the instructions outlined in the Getting started with Microsoft Hadoop on Azure section of the Introduction to Hadoop on Azure topic.

You will also need the outward-facing IP address of your current location when configuring the firewall on SQL Database. To obtain it, go to the site WhatIsMyIP and make a note of the address. Later in the procedure, you will also need the outward-facing IP address of the head node of the Hadoop cluster, which you can obtain in the same way.
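
If you prefer the command line, a common alternative (assuming outbound DNS queries are allowed from your machine) is to ask the OpenDNS resolver, which reports the public IP address it sees your request coming from:

nslookup myip.opendns.com resolver1.opendns.com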


Tutorial

This tutorial is composed of the following segments:

  1. How to set up a SQL database.

  2. How to use Sqoop from Hadoop on Azure to import data to the HDFS cluster.

How to set up a SQL database

Log in to your Windows Azure account. To create a database server, click the Database icon in the lower left-hand corner of the page.

On the Getting Started page, click the Create a new SQL Database Server option.

Select the type of subscription (such as Pay-As-You-Go) associated with your account in the Create Server window and press Next.

Select the appropriate Region in the Create Server window and click Next.

Specify the login and password of the server-level principal of your SQL Database server and then press Next.

Press Add to specify a firewall rule that allows your current location access to SQL Database so that you can upload the AdventureWorks database. The firewall grants access based on the originating IP address of each request. Specify a rule name and use the IP address you noted during the setup and configuration preliminaries of this tutorial. (You must also add the outward-facing IP address of the head node of your Hadoop cluster. If you already know it, add it now.) Then press the Finish button.
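
If you prefer T-SQL to the portal, server-level firewall rules can also be managed with the sp_set_firewall_rule stored procedure, executed against the master database while connected as the server-level principal. A minimal sketch, with an illustrative rule name and a placeholder IP address:

EXEC sp_set_firewall_rule N'ClientIPAddress', '203.0.113.5', '203.0.113.5'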

Download the AdventureWorks2012 database onto your local machine from the Recommended Downloads link on the Adventure Works for SQL Database site.

Unzip the file, open an Administrator Command Prompt, and navigate to the AdventureWorks directory inside the AdventureWorks2012ForSQLAzure folder.

Run CreateAdventureWorksForSQLAzure.cmd by typing the following:

CreateAdventureWorksForSQLAzure.cmd servername username password

For example, if the assigned SQL Database server is named b1gl33p, the administrator user name is "Fred", and the password is "Secret", you would type the following:

CreateAdventureWorksForSQLAzure.cmd b1gl33p.database.windows.net Fred@b1gl33p Secret
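
Note that SQL Database expects the login in user@server form, which is why the server's short name is appended to the administrator name (Fred@b1gl33p) rather than passing Fred alone.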

The script creates the database, installs the schema, and populates the database with sample data.

Return to the Windows Azure Platform portal page, click your subscription on the left-hand side (Pay-As-You-Go in this example) and select your database server (here named wq6xlbyoq0). AdventureWorks2012 should be listed in the Database Name column. Select it and press the Manage icon at the top of the page.

Enter the credentials for the SQL database when prompted and press Log on.

This opens the Web interface for the Adventure Works database on SQL Database. Press the New Query icon at the top to open the query editor.

Since Sqoop currently wraps the supplied table name in square brackets, the two-part name Sales.SalesOrderDetail would become the invalid identifier [Sales.SalesOrderDetail]. To support two-part naming for SQL Server tables, add a synonym whose one-part name matches what Sqoop generates. To do so, run the following query:

CREATE SYNONYM [Sales.SalesOrderDetail] FOR Sales.SalesOrderDetail
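
To confirm that the synonym was created, you can query the sys.synonyms catalog view:

SELECT name, base_object_name FROM sys.synonyms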

Run the following query and review its result.

select top 200 * from [Sales.SalesOrderDetail]

How to use Sqoop from Hadoop on Azure to import SQL Database query results to the HDFS cluster

From your Account page, scroll down to the Open Ports icon in the Your cluster section and click the icon to open the ODBC Server port on the head node in your cluster.

Return to your Account page, scroll down to the Your cluster section and this time click the Remote Desktop icon to connect to the head node of your cluster.

Select Open when prompted to open the .rdp file.

Select Connect in the Remote Desktop Connection window.

Enter your credentials for the Hadoop cluster (not your Hadoop on Azure account) into the Windows Security window and select OK.

Open Internet Explorer and go to the site WhatIsMyIP to obtain the outward-facing IP address of the head node of the cluster. Return to the SQL Database management page and add a firewall rule that allows your Hadoop cluster access to SQL Database. The firewall grants access based on the originating IP address of each request.

Double-click the Hadoop Command Shell icon in the upper left-hand corner of the desktop to open it. Navigate to the "c:\Apps\dist\sqoop\bin" directory and run the following command:

sqoop import --connect "jdbc:sqlserver://[serverName].database.windows.net;username=[userName]@[serverName];password=[password];database=AdventureWorks2012" --table Sales.SalesOrderDetail --target-dir /data/lineitemData -m 1
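
Here --table names the synonym created earlier, --target-dir sets the HDFS directory that receives the imported data, and -m 1 runs the import with a single map task, so the result is a single output file named part-m-00000.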

So, for example, for the following values:
* server name: wq6xlbyoq0
* username: HadoopOnAzureSqoopAdmin
* password: Pa$$w0rd

The sqoop command is:

sqoop import --connect "jdbc:sqlserver://wq6xlbyoq0.database.windows.net;username=HadoopOnAzureSqoopAdmin@wq6xlbyoq0;password=Pa$$w0rd;database=AdventureWorks2012" --table Sales.SalesOrderDetail --target-dir /data/lineitemData -m 1

Return to the Accounts page of the Hadoop on Azure portal and open the Interactive Console this time. Run the #lsr command from the JavaScript console to list the files and directories on your HDFS cluster. 
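
For example, assuming the console's #lsr command accepts an HDFS path in the same way as the hadoop fs -lsr shell command, the following should show the files produced by the import:

#lsr /data/lineitemData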

Run the #tail command to view selected results from the part-m-00000 file produced by the import.

#tail /data/lineitemData/part-m-00000


Summary

In this tutorial, you have seen how to use Sqoop to import data from a SQL database on Windows Azure to a Hadoop on Azure HDFS cluster.
