
Fixing the "<console>:14: error: not found: value spark" errors (import spark.implicits._ / import spark.sql) when starting spark-shell


  Here I am testing on a single node built from spark-2.2.0-bin-hadoop2.6.tgz + hadoop-2.6.0.tar.gz.

  I will not repeat the single-node configuration of hadoop-2.6.0 here.

  Instead, I will focus on Spark on YARN, which is the deployment mode I use.

spark-defaults.conf

  Keep the defaults; no changes are needed here.

spark-env.sh

export JAVA_HOME=/home/spark/app/jdk1.8.0_60
export SCALA_HOME=/home/spark/app/scala-2.10.4
export HADOOP_HOME=/home/spark/app/hadoop-2.6.0
export HADOOP_CONF_DIR=/home/spark/app/hadoop-2.6.0/etc/hadoop
export SPARK_MASTER_IP=192.168.80.218
export SPARK_WORKER_MEMORY=1G
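
For Spark on YARN, the line that really matters here is HADOOP_CONF_DIR: spark-shell reads the HDFS and YARN client configuration from that directory. A quick sanity check that it points at a real Hadoop conf directory (a check of my own, assuming the paths above):

[spark@sparksinglenode ~]$ ls /home/spark/app/hadoop-2.6.0/etc/hadoop | grep -E 'core-site|yarn-site'
core-site.xml
yarn-site.xml
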
slaves

sparksinglenode
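
The slaves file only matters for Spark standalone mode, where it lists the worker hosts; Spark on YARN does not read it. Either way, the hostname must resolve locally. On a typical single-node setup that means an /etc/hosts entry like the following (my assumption; the original post does not show this file):

[spark@sparksinglenode ~]$ grep sparksinglenode /etc/hosts
192.168.80.218   sparksinglenode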

Problem details

  I had already started the Hadoop daemons.

  Then I ran:

[spark@sparksinglenode spark-2.2.0-bin-hadoop2.6]$ bin/spark-shell

The shell failed partway through initialization with the following stack trace (abridged):

  at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
  at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
  at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
  at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
  at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
  ... 70 more
Caused by: org.apache.hadoop.ipc.RemoteException: Cannot create directory /tmp/hive/spark/1b6e6e4f-7e08-4d49-8783-4e722bab607a. Name node is in safe mode.
The reported blocks 0 needs additional 5 blocks to reach the threshold 0.9990 of total blocks 5.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

The trace continues, after which the shell drops into the REPL and reports the two import errors:

  at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2713)
  at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
  at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
  at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
  at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:639)
  at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:574)
  at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
  ... 84 more
<console>:14: error: not found: value spark
       import spark.implicits._
              ^
<console>:14: error: not found: value spark
       import spark.sql
              ^
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_60)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 

So the shell itself starts, but the SparkSession was never created and the spark value was never bound, which is exactly why import spark.implicits._ and import spark.sql fail. The root cause is in the trace above: "Cannot create directory /tmp/hive/spark/... Name node is in safe mode." HDFS was still in safe mode, so Hive could not create its scratch directory under /tmp/hive, and Spark's Hive-backed catalog failed to initialize.

Solution

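First, confirm that safe mode really is the culprit (hdfs dfsadmin -safemode get is standard HDFS admin tooling; the output below is what you would expect while safe mode is still on):

[spark@sparksinglenode ~]$ hdfs dfsadmin -safemode get
Safe mode is ON

Then turn safe mode off manually:
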

[spark@sparksinglenode ~]$ jps
5733 SecondaryNameNode
6583 Jps
5464 NameNode
5933 ResourceManager
6031 NodeManager
5583 DataNode
[spark@sparksinglenode ~]$ hdfs dfsadmin -safemode leave
17/08/29 05:29:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Safe mode is OFF
[spark@sparksinglenode ~]$ 
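
Forcing safe mode off like this is fine on a single test node. A gentler alternative (my suggestion, not from the original post) is to block until HDFS has verified enough blocks and leaves safe mode on its own:

[spark@sparksinglenode ~]$ hdfs dfsadmin -safemode wait
Safe mode is OFF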

Run spark-shell again; this time it starts cleanly:

[spark@sparksinglenode spark-2.2.0-bin-hadoop2.6]$ bin/spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/08/29 05:30:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/08/29 05:31:06 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/home/spark/app/spark-2.2.0-bin-hadoop2.6/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/home/spark/app/spark/jars/datanucleus-api-jdo-3.2.6.jar."
17/08/29 05:31:07 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/home/spark/app/spark/jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/home/spark/app/spark-2.2.0-bin-hadoop2.6/jars/datanucleus-rdbms-3.2.9.jar."
17/08/29 05:31:07 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/home/spark/app/spark-2.2.0-bin-hadoop2.6/jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/home/spark/app/spark/jars/datanucleus-core-3.2.10.jar."
17/08/29 05:31:28 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://192.168.80.218:4040
Spark context available as 'sc' (master = local[*], app id = local-1503955860647).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_60)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 
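
As a quick sanity check (my own addition, not in the original post), the two imports that failed before now resolve, and a trivial catalog query runs; the output below is what a fresh metastore would be expected to print:

scala> import spark.implicits._
import spark.implicits._

scala> spark.sql("show databases").show()
+------------+
|databaseName|
+------------+
|     default|
+------------+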

Note that, launched this way, the shell runs with master = local[*], as the log above shows. To actually run the shell on YARN, launch it with:


[spark@sparksinglenode spark-2.2.0-bin-hadoop2.6]$ bin/spark-shell --master yarn-client

  Note that --master selects the cluster manager; yarn-client means running on YARN with the driver inside the local spark-shell process.
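
The yarn-client master URL still works in Spark 2.x but is deprecated; spark-shell will print a warning suggesting the equivalent modern form, which you can use directly:

[spark@sparksinglenode spark-2.2.0-bin-hadoop2.6]$ bin/spark-shell --master yarn --deploy-mode client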


This article is reposted from the 大数据躺过的坑 blog on cnblogs. Original link: http://www.cnblogs.com/zlslch/p/7445916.html. Please contact the original author directly for reprint permission.

