A detailed walkthrough of the examples (Java, Python, R, and Scala) bundled with the spark-2.2.0-bin-hadoop2.6 and spark-1.6.1-bin-hadoop2.6 distributions: SparkTC.scala under the Basic package (illustrated)

Introduction:

SparkTC.scala under the Basic package in spark-1.6.1-bin-hadoop2.6. This example computes the transitive closure of a randomly generated directed graph, i.e. the set of all vertex pairs (x, z) such that z is reachable from x along directed edges. The spark-1.6.1 listing (SparkContext API) is shown first, followed by the equivalent spark-2.2.0 listing (SparkSession API).

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

// scalastyle:off println
//package org.apache.spark.examples
package zhouls.bigdata

import scala.util.Random
import scala.collection.mutable
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._



/**
 * Transitive closure on a graph.
 */
object SparkTC {
  
  val numEdges = 200
  val numVertices = 100
  val rand = new Random(42)

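  // Randomly generate numEdges distinct directed edges over numVertices vertices,
  // skipping self-loops; the mutable Set of (from, to) pairs filters out duplicates.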
  def generateGraph: Seq[(Int, Int)] = {
    val edges: mutable.Set[(Int, Int)] = mutable.Set.empty
    while (edges.size < numEdges) {
      val from = rand.nextInt(numVertices)
      val to = rand.nextInt(numVertices)
      if (from != to) edges.+=((from, to))
    }
    edges.toSeq
  }

  
  /*
   * Main function
   */
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("SparkTC").setMaster("local")
    val spark = new SparkContext(sparkConf)
    val slices = if (args.length > 0) args(0).toInt else 2
    var tc = spark.parallelize(generateGraph, slices).cache()

    // Linear transitive closure: each round grows paths by one edge,
    // by joining the graph's edges with the already-discovered paths.
    // e.g. join the path (y, z) from the TC with the edge (x, y) from
    // the graph to obtain the path (x, z).

    // Because join() joins on keys, the edges are stored in reversed order.
    val edges = tc.map(x => (x._2, x._1)) // Reverse each edge (x, y) to (y, x) so the destination becomes the key, letting join() match an edge (x, y) with a path (y, z) and yield the new path (x, z).

    // This join is iterated until a fixed point is reached (keep joining and unioning, recounting until the edge count stops changing).
    var oldCount = 0L
    var nextCount = tc.count()
    do {
      oldCount = nextCount
      // Perform the join, obtaining an RDD of (y, (z, x)) pairs,
      // then project the result to obtain the new (x, z) paths.
      tc = tc.union(tc.join(edges).map(x => (x._2._2, x._2._1))).distinct().cache()
      nextCount = tc.count()
    } while (nextCount != oldCount)

    println("TC has " + tc.count() + " edges.")
    spark.stop()
  }
}
// scalastyle:on println
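
To make the join/union fixed-point loop above concrete, here is a minimal pure-Scala sketch of the same idea on a small in-memory edge set, with no Spark involved (the object and method names are illustrative only, not part of the distribution):

// Pure-Scala sketch of the fixed-point transitive closure: starting from the
// original edges, repeatedly add (x, z) whenever an edge (x, y) and an
// already-discovered path (y, z) exist, until the set stops growing.
object TransitiveClosureSketch {
  def transitiveClosure(edges: Set[(Int, Int)]): Set[(Int, Int)] = {
    var tc = edges
    var oldCount = 0L
    var nextCount = tc.size.toLong
    do {
      oldCount = nextCount
      // Equivalent of tc.join(edges).map(...): compose an edge (x, y) with a
      // known path (y, z) to obtain the longer path (x, z).
      val grown = for {
        (y, z)  <- tc      // an already-discovered path
        (x, y2) <- edges   // an original edge ending where that path starts
        if y2 == y
      } yield (x, z)
      tc = tc ++ grown     // union; a Set is distinct by construction
      nextCount = tc.size.toLong
    } while (nextCount != oldCount)
    tc
  }

  def main(args: Array[String]): Unit = {
    // For the chain 1 -> 2 -> 3, the closure adds the pair (1, 3).
    println(transitiveClosure(Set((1, 2), (2, 3))))
  }
}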

SparkTC.scala under the Basic package in spark-2.2.0-bin-hadoop2.6 (the same example, written against the Spark 2.x SparkSession API):

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

// scalastyle:off println
package org.apache.spark.examples

import scala.collection.mutable
import scala.util.Random
import org.apache.spark.sql.SparkSession

/**
 * Transitive closure on a graph.
 */
object SparkTC {
  
  val numEdges = 200
  val numVertices = 100
  val rand = new Random(42)

  /*
   * 1. Compute the transitive closure (the number of reachable paths).
   * 2. Randomly generate the graph, storing (from, to) edges in a mutable Set.
   */
  def generateGraph: Seq[(Int, Int)] = {
    val edges: mutable.Set[(Int, Int)] = mutable.Set.empty
    while (edges.size < numEdges) {
      val from = rand.nextInt(numVertices)
      val to = rand.nextInt(numVertices)
      if (from != to) edges.+=((from, to))
    }
    edges.toSeq
  }

  def main(args: Array[String]) {
    val spark = SparkSession
      .builder
      .master("local")
      .appName("SparkTC")
      .getOrCreate() 
      
    val slices = if (args.length > 0) args(0).toInt else 2
    var tc = spark.sparkContext.parallelize(generateGraph, slices).cache()

    // Linear transitive closure: each round grows paths by one edge,
    // by joining the graph's edges with the already-discovered paths.
    // e.g. join the path (y, z) from the TC with the edge (x, y) from
    // the graph to obtain the path (x, z).

    // Because join() joins on keys, the edges are stored in reversed order.
    val edges = tc.map(x => (x._2, x._1)) // Reverse each edge (x, y) to (y, x) so the destination becomes the key, letting join() match an edge (x, y) with a path (y, z) and yield the new path (x, z).

    
    // This join is iterated until a fixed point is reached (keep joining and unioning, recounting until the edge count stops changing).
    var oldCount = 0L
    var nextCount = tc.count()
    do {
      oldCount = nextCount
      // Perform the join, obtaining an RDD of (y, (z, x)) pairs,
      // then project the result to obtain the new (x, z) paths.
      tc = tc.union(tc.join(edges).map(x => (x._2._2, x._2._1))).distinct().cache()
      nextCount = tc.count()
    } while (nextCount != oldCount)

    println("TC has " + tc.count() + " edges.")
    spark.stop()
  }
}
// scalastyle:on println
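
As a quick way to sanity-check the Spark 2.x listing, here is a minimal sketch (the object name TinySparkTC is assumed for illustration) that runs the same loop on a fixed four-vertex chain instead of the random graph, so the expected result is easy to verify by hand:

import org.apache.spark.sql.SparkSession

// Same fixed-point loop as above, but on the deterministic chain 1 -> 2 -> 3 -> 4,
// whose transitive closure has exactly 6 edges.
object TinySparkTC {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.master("local[*]").appName("TinySparkTC").getOrCreate()
    var tc = spark.sparkContext.parallelize(Seq((1, 2), (2, 3), (3, 4))).cache()
    // Reverse the edges so that join() matches an edge (x, y) with a path (y, z).
    val edges = tc.map(x => (x._2, x._1))

    var oldCount = 0L
    var nextCount = tc.count()
    do {
      oldCount = nextCount
      tc = tc.union(tc.join(edges).map(x => (x._2._2, x._2._1))).distinct().cache()
      nextCount = tc.count()
    } while (nextCount != oldCount)

    // Expected output: (1,2), (1,3), (1,4), (2,3), (2,4), (3,4)
    tc.collect().sorted.foreach(println)
    spark.stop()
  }
}

Run from an IDE in local mode, or package it and submit with spark-submit; it should print the six closure edges of the chain.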


This article is reposted from the cnblogs blog 大数据躺过的坑. Original link: http://www.cnblogs.com/zlslch/p/7457244.html. Please contact the original author before republishing.