JavaDoubleRDD rdd = sc.parallelizeDoubles(testData);
Now we’ll calculate the mean of our dataset:
LOGGER.info("Mean: " + rdd.mean());
There are similar methods for other statistics, such as max and standard deviation. Every time one of these methods is invoked, Spark performs the operation over the entire RDD. If more than one operation is performed, the data is traversed again for each one, which is very inefficient. To solve this, Spark provides the StatCounter class, which makes a single pass over the data and provides the results of all the basic statistics operations at the same time.
StatCounter statCounter = rdd.stats();
Now the results can be accessed as follows:
LOGGER.info("Count: " + statCounter.count());
LOGGER.info("Min: " + statCounter.min());
LOGGER.info("Max: " + statCounter.max());
LOGGER.info("Sum: " + statCounter.sum());
LOGGER.info("Mean: " + statCounter.mean());
LOGGER.info("Variance: " + statCounter.variance());
LOGGER.info("Stdev: " + statCounter.stdev());
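The reason rdd.stats() is cheaper than calling each method separately is that all of these statistics can be accumulated in a single traversal of the data. The sketch below illustrates that idea in plain Java, without Spark; the class and field names are my own, and the variance update uses Welford's online algorithm, which is not necessarily Spark's exact implementation.

```java
import java.util.Arrays;

// A minimal single-pass statistics sketch (an assumption for illustration,
// not Spark's actual StatCounter source): count, min, max, sum, mean and
// variance are all updated in one traversal of the data.
public class OnePassStats {
    long count = 0;
    double min = Double.POSITIVE_INFINITY;
    double max = Double.NEGATIVE_INFINITY;
    double sum = 0.0;
    double mean = 0.0;
    double m2 = 0.0; // running sum of squared deviations (Welford's algorithm)

    void merge(double x) {
        count++;
        min = Math.min(min, x);
        max = Math.max(max, x);
        sum += x;
        double delta = x - mean;
        mean += delta / count;
        m2 += delta * (x - mean);
    }

    // Population variance, matching the default StatCounter.variance() semantics.
    double variance() { return count > 0 ? m2 / count : Double.NaN; }

    double stdev() { return Math.sqrt(variance()); }

    public static void main(String[] args) {
        OnePassStats stats = new OnePassStats();
        for (double x : Arrays.asList(1.0, 2.0, 3.0, 4.0, 5.0)) {
            stats.merge(x);
        }
        System.out.println("Count: " + stats.count);         // 5
        System.out.println("Mean: " + stats.mean);           // 3.0
        System.out.println("Variance: " + stats.variance()); // 2.0
    }
}
```

Spark's real StatCounter goes one step further: partial results computed on separate partitions can be merged together, which is what lets rdd.stats() run in parallel across the cluster and still finish in one pass.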
Source: http://www.sparkexpert.com/tag/rdd/
This article was reposted from the 张昺华-sky cnblogs blog. Original link: http://www.cnblogs.com/bonelee/p/7154042.html. Please contact the original author before reprinting.