[Spark] Help with a few Spark questions

Q1: Someone handed you this dataset (~1GB), and you discovered that it is split across over 1,000 tiny files.
val df = spark.read.format("orc").load(clean_tracker_cstt_path)
○ Using Spark, please show how you can improve storage efficiency, and explain why this is important.
○ After improving storage efficiency, please explain the impact on loading and using the dataset in Spark (see the sketch below).
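
One possible approach, sketched below: compact the ~1,000 tiny files into a few large ORC files by rewriting the dataset with coalesce. Many small files hurt because each file carries NameNode/metadata and task-scheduling overhead, and Spark creates roughly one task per file. The output path compacted_path is a hypothetical placeholder, not from the original question.

// Read the fragmented dataset.
val df = spark.read.format("orc").load(clean_tracker_cstt_path)

// ~1GB packs comfortably into a handful of files (e.g. 8 x ~128MB).
// coalesce(8) merges partitions without a full shuffle; repartition(8)
// would also work, giving more even file sizes at the cost of a shuffle.
df.coalesce(8)
  .write
  .format("orc")
  .mode("overwrite")
  .save(compacted_path)  // hypothetical output path

// Loading the compacted dataset now schedules far fewer tasks and reads
// larger sequential blocks, so scans start and finish noticeably faster.
val compacted = spark.read.format("orc").load(compacted_path)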

Q2: Given the schema below, use the Spark 2.x DataFrame API to give the count of events per day for the last 7 days.

root
 |-- action_id: integer (nullable = true)
 |-- receive_time: timestamp (nullable = true)
 |-- uuid: string (nullable = true)
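
A minimal sketch with the Spark 2.x DataFrame API, assuming the source DataFrame is named events (an assumption; the question does not name it) and that "last 7 days" is measured against the current date:

import org.apache.spark.sql.functions._

// Keep only the last 7 days, truncate the timestamp to a date,
// then count events per day.
val dailyCounts = events
  .filter(col("receive_time") >= date_sub(current_date(), 7))
  .groupBy(to_date(col("receive_time")).as("event_date"))
  .agg(count("*").as("event_count"))
  .orderBy("event_date")

dailyCounts.show()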

Q3: You have calculated the Daily Event Count above using the Spark API. Now please find the Min, Max, Mean, and Standard Deviation of the Daily Count using Scala. Only built-in Scala functions may be used. Please format the answer with 2 decimal places, e.g. "The Average Daily Count from Last 7 Days is x.xx".
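
A sketch using only Scala built-ins, assuming the dailyCounts DataFrame from the Q2 sketch above; the seven counts are first collected to the driver (safe at this size), and the statistics are then computed with plain Scala collections and math:

// Pull the 7 daily counts out of the DataFrame (column 1 = event_count).
val counts: Seq[Long] = dailyCounts.collect().map(_.getLong(1)).toSeq

val n    = counts.size.toDouble
val mean = counts.sum / n
// Population standard deviation; divide by (n - 1) for the sample version.
val stdDev = math.sqrt(counts.map(c => math.pow(c.toDouble - mean, 2)).sum / n)

println(f"The Min Daily Count from Last 7 Days is ${counts.min}")
println(f"The Max Daily Count from Last 7 Days is ${counts.max}")
println(f"The Average Daily Count from Last 7 Days is $mean%.2f")
println(f"The Std Dev of Daily Count from Last 7 Days is $stdDev%.2f")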