Cumulative or Moving Aggregations

Let us understand how we can perform cumulative or moving aggregations using Spark SQL.

  • When it comes to Windowing or Analytic Functions, we can also specify the window frame using the ROWS BETWEEN clause.

  • We can leverage ROWS BETWEEN for cumulative aggregations or moving aggregations. The frame boundaries can be UNBOUNDED PRECEDING, n PRECEDING, CURRENT ROW, n FOLLOWING or UNBOUNDED FOLLOWING.

  • Here is an example of cumulative sum.

Let us start the Spark context for this Notebook so that we can execute the code provided.

import org.apache.spark.sql.SparkSession

val username = System.getProperty("user.name")
val spark = SparkSession.
    builder.
    config("spark.ui.port", "0").
    config("spark.sql.warehouse.dir", s"/user/${username}/warehouse").
    enableHiveSupport.
    appName(s"${username} | Spark SQL - Windowing Functions").
    master("yarn").
    getOrCreate
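
Optionally, you can validate the session before running the examples; for instance, listing the available databases confirms that Hive support is enabled. This is just a quick sanity check.

spark.sql("SHOW DATABASES").show(false)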

If you are going to use CLIs, you can launch Spark SQL using one of the following three approaches.

Using Spark SQL

spark2-sql \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Scala

spark2-shell \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using PySpark

pyspark2 \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
%%sql

USE itversity_hr
%%sql

SELECT e.employee_id, e.department_id, e.salary,
    sum(e.salary) OVER (
        PARTITION BY e.department_id
        ORDER BY e.salary DESC
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS sum_sal_expense
FROM employees e
ORDER BY e.department_id, e.salary DESC
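The same cumulative sum can also be run from the Scala shell through spark.sql. Here is a sketch, assuming the itversity_hr database has already been selected (for example, with spark.sql("USE itversity_hr")).

spark.sql("""
SELECT e.employee_id, e.department_id, e.salary,
    sum(e.salary) OVER (
        PARTITION BY e.department_id
        ORDER BY e.salary DESC
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS sum_sal_expense
FROM employees e
ORDER BY e.department_id, e.salary DESC
""").
    show(100, false)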
%%sql

USE itversity_retail
%%sql

SELECT t.*,
    round(sum(t.revenue) OVER (
        PARTITION BY date_format(order_date, 'yyyy-MM')
        ORDER BY order_date
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ), 2) AS cumulative_daily_revenue
FROM daily_revenue t
ORDER BY date_format(order_date, 'yyyy-MM'),
    order_date
spark.sql("""
SELECT t.*,
    round(sum(t.revenue) OVER (
        PARTITION BY date_format(order_date, 'yyyy-MM')
        ORDER BY order_date
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ), 2) AS cumulative_daily_revenue
FROM daily_revenue t
ORDER BY date_format(order_date, 'yyyy-MM'), 
    order_date
""").
    show(100, false)
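The same cumulative aggregation can also be expressed with the DataFrame API. Here is a minimal sketch, assuming the daily_revenue table is available in the current database; Window.unboundedPreceding and Window.currentRow correspond to UNBOUNDED PRECEDING and CURRENT ROW in the SQL frame.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, date_format, round, sum}

// Frame equivalent to ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW within each month
val cumulativeWindow = Window.
    partitionBy(date_format(col("order_date"), "yyyy-MM")).
    orderBy(col("order_date")).
    rowsBetween(Window.unboundedPreceding, Window.currentRow)

spark.read.table("daily_revenue").
    withColumn("cumulative_daily_revenue", round(sum(col("revenue")).over(cumulativeWindow), 2)).
    orderBy(date_format(col("order_date"), "yyyy-MM"), col("order_date")).
    show(100, false)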
  • Here is an example for moving sum.

%%sql

USE itversity_retail
%%sql

SELECT t.*,
    round(sum(t.revenue) OVER (
        ORDER BY order_date
        ROWS BETWEEN 3 PRECEDING AND CURRENT ROW
    ), 2) AS moving_3day_revenue
FROM daily_revenue t
ORDER BY order_date
spark.sql("""
    SELECT t.*,
        round(sum(t.revenue) OVER (
            ORDER BY order_date
            ROWS BETWEEN 3 PRECEDING AND CURRENT ROW
        ), 2) AS moving_3day_revenue
    FROM daily_revenue t
    ORDER BY order_date
""").
    show(30, false)
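The same frame works with other aggregate functions as well. For instance, replacing sum with avg yields a moving average; here is a sketch against the same daily_revenue table (note that the frame covers the current row plus the three preceding rows).

spark.sql("""
    SELECT t.*,
        round(avg(t.revenue) OVER (
            ORDER BY order_date
            ROWS BETWEEN 3 PRECEDING AND CURRENT ROW
        ), 2) AS moving_3day_avg_revenue
    FROM daily_revenue t
    ORDER BY order_date
""").
    show(30, false)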
%%sql

SELECT t.*,
    round(sum(t.revenue) OVER (
        PARTITION BY date_format(order_date, 'yyyy-MM')
        ORDER BY order_date
        ROWS BETWEEN 3 PRECEDING AND CURRENT ROW
    ), 2) AS moving_3day_revenue
FROM daily_revenue t
ORDER BY date_format(order_date, 'yyyy-MM'),
    order_date
spark.sql("""
SELECT t.*,
    round(sum(t.revenue) OVER (
        PARTITION BY date_format(order_date, 'yyyy-MM')
        ORDER BY order_date
        ROWS BETWEEN 3 PRECEDING AND CURRENT ROW
    ), 2) AS moving_3day_revenue
FROM daily_revenue t
ORDER BY date_format(order_date, 'yyyy-MM'), 
    order_date
""").
    show(100, false)
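The partitioned moving sum can also be expressed with the DataFrame API, where rowsBetween(-3, Window.currentRow) mirrors ROWS BETWEEN 3 PRECEDING AND CURRENT ROW. Here is a minimal sketch, again assuming the daily_revenue table is available in the current database.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, date_format, round, sum}

// Frame equivalent to ROWS BETWEEN 3 PRECEDING AND CURRENT ROW within each month
val moving3dayWindow = Window.
    partitionBy(date_format(col("order_date"), "yyyy-MM")).
    orderBy(col("order_date")).
    rowsBetween(-3, Window.currentRow)

spark.read.table("daily_revenue").
    withColumn("moving_3day_revenue", round(sum(col("revenue")).over(moving3dayWindow), 2)).
    orderBy(date_format(col("order_date"), "yyyy-MM"), col("order_date")).
    show(100, false)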