Getting first and last values

Let us see how we can get the first and last value based on given criteria.

Let us start the Spark context for this Notebook so that we can execute the code provided.

import org.apache.spark.sql.SparkSession

val username = System.getProperty("user.name")
val spark = SparkSession.
    builder.
    config("spark.ui.port", "0").
    config("spark.sql.warehouse.dir", s"/user/${username}/warehouse").
    enableHiveSupport.
    appName(s"${username} | Spark SQL - Windowing Functions").
    master("yarn").
    getOrCreate

If you are going to use CLIs, you can run Spark SQL using one of the following three approaches.

Using Spark SQL

spark2-sql \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Scala

spark2-shell \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Pyspark

pyspark2 \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Here is an example of using first_value.

%%sql

USE itversity_retail

%%sql

SELECT t.*,
  first_value(order_item_product_id) OVER (
    PARTITION BY order_date ORDER BY revenue DESC
  ) first_product_id,
  first_value(revenue) OVER (
    PARTITION BY order_date ORDER BY revenue DESC
  ) first_revenue
FROM daily_product_revenue t
ORDER BY order_date, revenue DESC
LIMIT 100

spark.sql("""
    SELECT t.*,
      first_value(order_item_product_id) OVER (
        PARTITION BY order_date ORDER BY revenue DESC
      ) first_product_id,
      first_value(revenue) OVER (
        PARTITION BY order_date ORDER BY revenue DESC
      ) first_revenue
    FROM daily_product_revenue t
    ORDER BY order_date, revenue DESC
""").
    show(100, false)
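
For comparison, here is a minimal sketch of the same first_value logic using the DataFrame API. It assumes the daily_product_revenue table is accessible in the current database (for example, after running USE itversity_retail).

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, first}

// Window per order_date, highest revenue first, so first picks the top product
val firstValueWindow = Window.
    partitionBy("order_date").
    orderBy(col("revenue").desc)

spark.table("daily_product_revenue").
    withColumn("first_product_id", first("order_item_product_id").over(firstValueWindow)).
    withColumn("first_revenue", first("revenue").over(firstValueWindow)).
    orderBy(col("order_date"), col("revenue").desc).
    show(100, false)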

Let us see an example with last_value. While using last_value, we need to specify ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING to get the correct result.

  • By default, the frame is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which ends at the current row.

  • The last value within UNBOUNDED PRECEDING AND CURRENT ROW is the current record itself, not the last record of the partition.

  • To get the right value, we have to change the windowing clause to ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING, as the queries below demonstrate.

%%sql

USE itversity_retail

%%sql

SELECT t.*,
    last_value(order_item_product_id) OVER (
        PARTITION BY order_date ORDER BY revenue
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) last_product_id,
    last_value(revenue) OVER (
        PARTITION BY order_date ORDER BY revenue
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) last_revenue
FROM daily_product_revenue AS t
ORDER BY order_date, revenue DESC
LIMIT 100
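
In the output of this query, last_product_id and last_revenue simply echo each row's own values, because the frame ends at the current row. The next query fixes this by extending the frame to the end of the partition.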
%%sql

SELECT t.*,
    last_value(order_item_product_id) OVER (
        PARTITION BY order_date ORDER BY revenue
        ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
    ) last_product_id,
    last_value(revenue) OVER (
        PARTITION BY order_date ORDER BY revenue
        ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
    ) last_revenue
FROM daily_product_revenue AS t
ORDER BY order_date, revenue DESC
LIMIT 100

spark.sql("""
    SELECT t.*,
      last_value(order_item_product_id) OVER (
        PARTITION BY order_date ORDER BY revenue
        ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
      ) last_product_id,
      last_value(revenue) OVER (
        PARTITION BY order_date ORDER BY revenue
        ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
      ) last_revenue
    FROM daily_product_revenue AS t
    ORDER BY order_date, revenue DESC
""").
    show(100, false)
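
Similarly, here is a minimal sketch of the corrected last_value logic using the DataFrame API, again assuming daily_product_revenue is accessible in the current database. Window.rowsBetween mirrors ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, last}

// Frame runs from the current row to the end of the partition,
// so last picks the highest-revenue product within each order_date
val lastValueWindow = Window.
    partitionBy("order_date").
    orderBy("revenue").
    rowsBetween(Window.currentRow, Window.unboundedFollowing)

spark.table("daily_product_revenue").
    withColumn("last_product_id", last("order_item_product_id").over(lastValueWindow)).
    withColumn("last_revenue", last("revenue").over(lastValueWindow)).
    orderBy(col("order_date"), col("revenue").desc).
    show(100, false)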