
pyspark - finding max and min in JSON streaming data using createDataFrame

I have a set of JSON messages streamed from Kafka, each describing a website user. Using pyspark, I need to count the number of users per country in each streaming window and return the countries with the maximum and minimum user counts.

Here is an example of a streamed JSON message:

{"id":1,"first_name":"Barthel","last_name":"Kittel","email":"bkittel0@printfriendly.com","gender":"Male","ip_address":"130.187.82.195","date":"06/05/2018","country":"France"}
Here is my code:

import json

from pyspark.sql.types import StructField, StructType, StringType
from pyspark.sql import Row
from pyspark import SparkContext
from pyspark.sql import SQLContext

fields = ['id', 'first_name', 'last_name', 'email', 'gender', 'ip_address', 'date', 'country']
schema = StructType([
    StructField(field, StringType(), True) for field in fields
])

def parse(s, fields):
    try:
        d = json.loads(s[0])
        return [tuple(d.get(field) for field in fields)]
    except:
        return []

array_of_users = parsed.SQLContext.createDataFrame(parsed.flatMap(lambda s: parse(s, fields)), schema)

rdd = sc.parallelize(array_of_users)

# group by country and then substitute the list of messages for each country
# by its length, resulting in an RDD of (country, length) tuples
country_count = rdd.groupBy(lambda user: user['country']).mapValues(len)

# identify the min and max using as comparison key the second element of the (country, length) tuple
country_min = country_count.min(key=lambda grp: grp[1])
country_max = country_count.max(key=lambda grp: grp[1])
When I run it, I get the following message:

AttributeError                            Traceback (most recent call last)
in <module>()
     16         return []
     17 
---> 18 array_of_users = parsed.SQLContext.createDataFrame(parsed.flatMap(lambda s: parse(s, fields)), schema)
     19 
     20 rdd = sc.parallelize(array_of_users)

AttributeError: 'TransformedDStream' object has no attribute 'SQLContext'
How can I fix this?

社区小助手 2019-01-02 15:24:56
1 Answer
  • 社区小助手 is the administrator of the Spark China community. I regularly post livestream recaps and other technical articles, and compile the Spark questions and answers raised in the DingTalk group.

If I understand correctly, you want to group the list of messages by country, count the number of messages in each group, and then pick the groups with the smallest and largest counts.

Off the top of my head, the code would look something like this:

# assuming array_of_users is your array of messages
rdd = sc.parallelize(array_of_users)

# group by country and then substitute the list of messages for each country
# by its length, resulting in an RDD of (country, length) tuples
country_count = rdd.groupBy(lambda user: user['country']).mapValues(len)

# identify the min and max using as comparison key the second element of the (country, length) tuple
country_min = country_count.min(key=lambda grp: grp[1])
country_max = country_count.max(key=lambda grp: grp[1])
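
As for the AttributeError itself: SQLContext is not an attribute of a DStream, so the DataFrame has to be built per micro-batch, typically inside foreachRDD, with the SQLContext obtained from the RDD's SparkContext. Below is a minimal sketch of that pattern, not a tested solution; it assumes parsed is the DStream of raw Kafka messages, reuses the fields, schema, and parse definitions from the question, and the name process_batch is only illustrative.

from pyspark.sql import SQLContext

def process_batch(time, rdd):
    # Skip empty micro-batches.
    if rdd.isEmpty():
        return
    # Get a SQLContext from the RDD's SparkContext instead of looking it up
    # on the DStream, which is what raised the AttributeError.
    sql_context = SQLContext.getOrCreate(rdd.context)
    users = sql_context.createDataFrame(rdd, schema)

    # Count users per country in this window, then pick the extremes.
    counts = users.groupBy('country').count()
    country_min = counts.orderBy(counts['count'].asc()).first()
    country_max = counts.orderBy(counts['count'].desc()).first()
    print(time, country_min, country_max)

parsed.flatMap(lambda s: parse(s, fields)).foreachRDD(process_batch)

If you prefer to stay on plain RDDs, mapping each user to (country, 1) and using reduceByKey inside the same foreachRDD avoids materializing the per-country lists that groupBy builds.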

    2019-07-17 23:24:26