
Spark cannot connect to Ignite using the JDBC thin driver

社区小助手 2018-12-06 15:40:31 · 627 views

I am using Java 8, Spark 2.1.1, Ignite 2.5, and BoneCP 0.8.0.

My Maven pom.xml looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>test</groupId>
<artifactId>ignite-tester</artifactId>
<version>1.0-SNAPSHOT</version>

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <maven.compiler.target>1.8</maven.compiler.target>
    <maven.compiler.source>1.8</maven.compiler.source>
    <java.version>1.8</java.version>
    <kafka.version>0.10.1.2.6.2.0-205</kafka.version>
    <spark.version>2.1.1.2.6.2.0-205</spark.version>
</properties>

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-assembly-plugin</artifactId>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                    <configuration>
                        <archive>
                            <manifest>
                                <addClasspath>true</addClasspath>
                                <mainClass>spark.IgniteTester</mainClass>
                            </manifest>
                        </archive>
                        <descriptorRefs>
                            <descriptorRef>jar-with-dependencies</descriptorRef>
                        </descriptorRefs>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

<dependencies>

    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-core</artifactId>
        <version>2.5.0</version>
    </dependency>

    <dependency>
        <groupId>com.jolbox</groupId>
        <artifactId>bonecp</artifactId>
        <version>0.8.0.RELEASE</version>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql-kafka-0-10_2.11</artifactId>
        <version>${spark.version}</version>
        <scope>compile</scope>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>${spark.version}</version>
        <scope>compile</scope>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>${spark.version}</version>
        <scope>compile</scope>
    </dependency>

</dependencies>

</project>


My project builds into a single 'fat' jar that contains all dependencies, but when I run the following code on the Spark cluster:

public static void main(String[] args) {
    try {
        // 'pool' and 'logger' are fields of the enclosing class
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver").newInstance();
        BoneCPConfig config = new BoneCPConfig();
        config.setJdbcUrl("jdbc:ignite:thin://myhost:10840;user=myusername;password=mypassword");
        pool = new BoneCP(config);
    } catch (Exception e) {
        logger.error("could not load Ignite driver", e);
        return;
    }
}
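When "No suitable driver found" appears only on the cluster and not locally, a quick diagnostic (a sketch of my own, not part of the question's code; the class name DriverCheck is invented) is to print every driver DriverManager has actually registered at that point:

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.util.Collections;

public class DriverCheck {
    public static void main(String[] args) {
        // DriverManager only hands out connections through drivers that are
        // registered with it. If org.apache.ignite.IgniteJdbcThinDriver does
        // not show up in this list inside the Spark job, the fat jar or the
        // spark-submit classpath is not delivering the thin driver to the JVM
        // that calls getConnection.
        for (Driver d : Collections.list(DriverManager.getDrivers())) {
            System.out.println(d.getClass().getName());
        }
    }
}
```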
it fails with the following exception:

ERROR IgniteTester: could not load Ignite driver
java.sql.SQLException: Unable to open a test connection to the given database. JDBC url = jdbc:ignite:thin://myhost:10840;user=myusername;password=mypassword, username = null. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
java.sql.SQLException: No suitable driver found for jdbc:ignite:thin://myhost:10840;user=myusername;password=mypassword

    at java.sql.DriverManager.getConnection(DriverManager.java:689)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
    at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
    at spark.IgniteTester.main(IgniteTester.java:56)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:751)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.jolbox.bonecp.PoolUtil.generateSQLException(PoolUtil.java:192)
    at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:422)
    at spark.IgniteTester.main(IgniteTester.java:56)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:751)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Caused by: java.sql.SQLException: No suitable driver found for jdbc:ignite:thin://myhost:10840;user=myusername;password=mypassword

    at java.sql.DriverManager.getConnection(DriverManager.java:689)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
    at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
    ... 10 more
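The root-cause line comes from DriverManager itself: when no registered driver accepts the URL, getConnection fails immediately with exactly this message. As an aside, this can be reproduced with the standard library alone (no Ignite needed, since the point is precisely the driver's absence; the class name NoDriverRepro is made up here):

```java
import java.sql.DriverManager;
import java.sql.SQLException;

public class NoDriverRepro {
    public static void main(String[] args) {
        try {
            // With no Ignite jar on the classpath, no registered driver
            // accepts this URL, so DriverManager fails fast.
            DriverManager.getConnection("jdbc:ignite:thin://myhost:10840");
        } catch (SQLException e) {
            // Prints: No suitable driver found for jdbc:ignite:thin://myhost:10840
            System.out.println(e.getMessage());
        }
    }
}
```

So the exception means the thin driver was never registered in the JVM that ran the pool setup, even though the class was on the application classpath.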

The submit script looks like this:

spark-submit \
--class spark.IgniteTester \
--master yarn \
--deploy-mode master \
--driver-memory 1g \
--executor-cores 1 \
--num-executors 1 \
--executor-memory 1664mb \
ignite-tester.jar
When using a "local" Spark instance, it connects to Ignite with the thin JDBC driver just fine.

All answers (1)
  • 社区小助手
    2019-07-17 23:18:35

    For whoever ends up here: what worked for me was dropping BoneCP as the connection pool and using a single database connection instead:

    Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
    Connection conn = DriverManager.getConnection("jdbc:ignite:thin://myhost:10840;user=myusername;password=mypassword");
    ResultSet rs = conn.prepareStatement("SELECT * FROM MY_TABLE").executeQuery();
    If you think about it, using a single connection makes sense as far as I understand it: Spark runs this code in a single-threaded model anyway, so a connection pool brings no real benefit here.
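    Expanded into a self-contained sketch (host, credentials, and MY_TABLE are the placeholders from the question; the class name SingleConnectionQuery is invented), with try-with-resources so the connection and statement are closed even when the query throws, which the snippet above skips:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SingleConnectionQuery {

    // Placeholder URL from the question; adjust host, port, and credentials.
    static final String URL =
        "jdbc:ignite:thin://myhost:10840;user=myusername;password=mypassword";

    public static void main(String[] args) throws Exception {
        // Since JDBC 4, Class.forName is optional when the driver jar ships a
        // META-INF/services/java.sql.Driver entry, but calling it makes a
        // classpath problem fail fast with a clear ClassNotFoundException.
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

        // try-with-resources closes the result set, statement, and connection
        // in reverse order even if the query throws.
        try (Connection conn = DriverManager.getConnection(URL);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM MY_TABLE")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```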
