PivotalR: Between R and PostgreSQL-like Databases (e.g. Greenplum, or Hadoop accessed via HAWQ)

Introduction:
PivotalR is an R package that translates R expressions into SQL, which makes it useful for mining big data. The data itself is stored in a database such as PostgreSQL or Greenplum.
Users simply write ordinary R syntax and never need to touch the database directly: PivotalR translates the operations into SQL, runs them in the database, and returns the results to R.
Because the raw data is never transferred to the R side, this approach handles workloads that R alone cannot (R computes in memory, so data sets larger than memory are a problem).
PivotalR also wraps MADlib, which provides a large collection of machine-learning and regression functions.
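
For readers who just want the shape of that workflow, here is a minimal sketch. The host, database name, and the "abalone" table are placeholders for your own environment, and it assumes MADlib is installed in the target database:

library(PivotalR)
cid <- db.connect(host = "localhost", dbname = "test", port = 5432) # connect; returns a connection ID
x <- db.data.frame("abalone", conn.id = cid) # wrap an existing table; no data is pulled into R
dim(x)                                       # metadata queries are answered by the database
fit <- madlib.lm(rings ~ . - id, data = x)   # linear regression runs in-database through MADlib
fit                                          # only the small set of fitted coefficients returns to R
db.disconnect(cid)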


Package description:
PivotalR-package
An R front-end to PostgreSQL and Greenplum databases, and a wrapper
for MADlib, the open-source library for in-database parallel and
distributed machine learning

Description
PivotalR is a package that enables users of R, the most popular open-source statistical programming
language and environment, to interact with the Pivotal (Greenplum) Database as well as Pivotal
HD/HAWQ for Big Data analytics. It does so by providing an interface to the operations on tables/views
in the database. These operations are almost the same as those of data.frame. Thus the
users of R do not need to learn SQL when they operate on the objects in the database. The latest
code is available at https://github.com/madlib-internal/PivotalR. A training video and a
quick-start guide are available at http://zimmeee.github.io/gp-r/#pivotalr.

Details
Package: PivotalR
Type: Package
Version: 0.1.17
Date: 2014-09-15
License: GPL (>= 2)
Depends: methods, DBI, RPostgreSQL

This package enables R users to easily develop, refine and deploy R scripts that leverage the parallelism
and scalability of the database as well as in-database analytics libraries to operate on big
data sets that would otherwise not fit in R memory - all this without having to learn SQL because
the package provides an interface that they are familiar with.

The package also provides a wrapper for MADlib. MADlib is an open-source library for scalable
in-database analytics. It provides data-parallel implementations of mathematical, statistical and
machine-learning algorithms for structured and unstructured data. The number of machine learning
algorithms that MADlib covers is quickly increasing.

As an R front-end to PostgreSQL-like databases, this package minimizes the amount of data
transferred between the database and R. All the big data stays in the database. The user writes
familiar R syntax, and the package translates it into SQL queries and sends them to the
database for parallel execution. The computation result, which is small (if it were as big as the
original data, what would be the point of big data analytics?), is returned to R for the user.
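
For example, an aggregate written in ordinary R syntax is evaluated entirely inside the database, and only a single number travels back to R. This is a sketch assuming x is a db.data.frame wrapping a large table with length and height columns, as in the Examples below; the SQL in the comment is only indicative of what PivotalR generates:

y <- x$length * x$height # builds a db.Rquery object; nothing is computed yet
lk(mean(y))              # runs roughly SELECT avg(length * height) FROM ... and returns one value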

Conversely, this package also gives regular SQL users access to the powerful analytics
and graphics functionality of R. Although the database itself is ill-suited to plotting,
its results can be analyzed and presented beautifully with R.

This current version of PivotalR provides the core R infrastructure and data frame functions as well
as over 50 analytical functions in R that leverage in-database execution. These include

* Data Connectivity - db.connect, db.disconnect, db.Rquery
* Data Exploration - db.data.frame, subsets
* R language features - dim, names, min, max, nrow, ncol, summary, etc.
* Reorganization Functions - merge, by (group-by), samples
* Transformations - as.factor, null replacement
* Algorithms - linear regression and logistic regression wrappers for MADlib

Note
This package is different from PL/R, which is another way of using R with PostgreSQL-like
databases. PL/R enables the users to run R scripts from SQL. In the parallel Greenplum database,
one can use PL/R to implement parallel algorithms.

However, PL/R still requires non-trivial knowledge of SQL to use it effectively. It is mostly limited
to explicitly parallel jobs. And for the end user, it is still a SQL interface.

This package does not require any knowledge of SQL, and it works for both explicitly and implicitly
parallel jobs by employing the open-source MADlib library. It is much more scalable. And for the
end user, it is a pure R interface with the conventional R syntax.

Author(s)
Author: Predictive Analytics Team at Pivotal Inc. <user@madlib.net>, with contributions from
Data Scientist Team at Pivotal Inc.
Maintainer: Caleb Welton, Pivotal Inc. <cwelton@pivotal.io>

References
[1] MADlib website, http://madlib.net
[2] MADlib user docs, http://doc.madlib.net/master
[3] MADlib Wiki page, http://github.com/madlib/madlib/wiki
[4] MADlib contribution guide, https://github.com/madlib/madlib/wiki/Contribution-Guide
[5] MADlib on GitHub, https://github.com/madlib/madlib

See Also
madlib.lm Linear regression
madlib.glm Linear, logistic and multinomial logistic regressions
madlib.summary summary of a table in the database.

Examples
## Not run:
## get the help for the package
help("PivotalR-package")
## get help for a function
help(madlib.lm)
## create multiple connections to different databases
db.connect(port = 5433) # connection 1, use default values for the parameters
db.connect(dbname = "test", user = "qianh1", password = "", host =
"remote.machine.com", madlib = "madlib07", port = 5432) # connection 2
db.list() # list the info for all the connections
## list all tables/views that have "ornst" in the name
db.objects("ornst")
## list all tables/views
db.objects(conn.id = 1)
## create a table and the R object pointing to the table
## using the example data that comes with this package
delete("abalone", conn.id = cid)
x <- as.db.data.frame(abalone, "abalone")
## OR if the table already exists, you can create the wrapper directly
## x <- db.data.frame("abalone")
dim(x) # dimension of the data table
names(x) # column names of the data table
madlib.summary(x) # look at a summary for each column
lk(x, 20) # look at a sample of the data
## look at a sample sorted by id column
lookat(sort(x, decreasing = FALSE, x$id), 20)
lookat(sort(x, FALSE, NULL), 20) # look at a sample ordered randomly
## linear regression Examples --------
## fit a separate model to each group of data with the same sex
fit1 <- madlib.lm(rings ~ . - id | sex, data = x)
fit1 # view the result
lookat(mean((x$rings - predict(fit1, x))^2)) # mean square error
## plot the predicted values vs. the true values
ap <- x$rings # true values
ap$pred <- predict(fit1, x) # add a column which is the predicted values
## If the data set is very big, you do not want to load all the
## data points into R and plot. We can just plot a random sample.
random.sample <- lk(sort(ap, FALSE, "random"), 1000) # sort randomly
plot(random.sample) # plot a random sample
## fit a single model to all data treating sex as a categorical variable ---------
y <- x # make a copy, y is now a db.data.frame object
y$sex <- as.factor(y$sex) # y becomes a db.Rquery object now
fit2 <- madlib.lm(rings ~ . - id, data = y)
fit2 # view the result
lookat(mean((y$rings - predict(fit2, y))^2)) # mean square error
## logistic regression Examples --------
## fit a separate model to each group of data with the same sex
fit3 <- madlib.glm(rings < 10 ~ . - id | sex, data = x, family = "binomial")
fit3 # view the result
## the percentage of correct prediction
lookat(mean((x$rings < 10) == predict(fit3, x)))
## fit a single model to all data treating sex as a categorical variable ----------
y <- x # make a copy, y is now a db.data.frame object
y$sex <- as.factor(y$sex) # y becomes a db.Rquery object now
fit4 <- madlib.glm(rings < 10 ~ . - id, data = y, family = "binomial")
fit4 # view the result
## the percentage of correct prediction
lookat(mean((y$rings < 10) == predict(fit4, y)))
## Group by Examples --------
## mean value of each column except the "id" column
lk(by(x[,-1], x$sex, mean))
## standard deviation of each column except the "id" column
lookat(by(x[,-1], x$sex, sd))
## Merge Examples --------
## create two objects with different rows and columns
key(x) <- "id"
y <- x[1:300, 1:6]
z <- x[201:400, c(1,2,4,5)]
## get 100 rows
m <- merge(y, z, by = c("id", "sex"))
lookat(m, 20)
## operator Examples --------
y <- x$length + x$height + 2.3
z <- x$length * x$height / 3
lk(y < z, 20)
## ------------------------------------------------------------------------
## Deal with NULL values
delete("null_data")
x <- as.db.data.frame(null.data, "null_data")
## OR if the table already exists, you can create the wrapper directly
## x <- db.data.frame("null_data")
dim(x)
names(x)
## ERROR, because of NULL values
fit <- madlib.lm(sf_mrtg_pct_assets ~ ., data = x)
## remove NULL values
y <- x # make a copy
for (i in 1:10) y <- y[!is.na(y[i]),]
dim(y)
fit <- madlib.lm(sf_mrtg_pct_assets ~ ., data = y)
fit
## Or we can replace all NULL values
x[is.na(x)] <- 45
## End(Not run)

Installation and usage:
> install.packages("PivotalR")
> library(PivotalR)
Loading required package: Matrix
Attaching package: ‘PivotalR’
The following objects are masked from ‘package:stats’:
    sd, var
The following object is masked from ‘package:base’:
    cbind
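
These masking messages mean that PivotalR ships its own sd, var and cbind (presumably so that they also accept database-backed objects). The base versions remain available through their namespaces if you need them on ordinary in-memory data, for example:

> stats::sd(c(1, 2, 3))   # the original stats::sd, unaffected by the masking
> base::cbind(1:3, 4:6)   # the original base::cbind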
