Order of execution of SQL

Let us review the order of execution of SQL. First, let us review the order in which a query is written.

Let us start the Spark context for this Notebook so that we can execute the code provided.

import org.apache.spark.sql.SparkSession

val username = System.getProperty("user.name")
val spark = SparkSession.
    builder.
    config("spark.ui.port", "0").
    config("spark.sql.warehouse.dir", s"/user/${username}/warehouse").
    enableHiveSupport.
    appName(s"${username} | Spark SQL - Windowing Functions").
    master("yarn").
    getOrCreate

If you are going to use CLIs, you can run Spark SQL using one of these three approaches.

Using Spark SQL

spark2-sql \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Scala

spark2-shell \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using PySpark

pyspark2 \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

A query is typically written in the following order:

  1. SELECT

  2. FROM

  3. JOIN or OUTER JOIN with ON

  4. WHERE

  5. GROUP BY and optionally HAVING

  6. ORDER BY

Let us come up with a query which computes daily revenue using COMPLETE or CLOSED orders, sorted by order_date.

%%sql

USE itversity_retail
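
The queries below assume the orders and order_items tables in the itversity_retail database. As a quick sanity check, you can list the tables in the current database:

%%sql

SHOW TABLES
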
%%sql

SELECT o.order_date,
  round(sum(oi.order_item_subtotal), 2) AS revenue
FROM orders o JOIN order_items oi
ON o.order_id = oi.order_item_order_id
WHERE o.order_status IN ('COMPLETE', 'CLOSED')
GROUP BY o.order_date
ORDER BY o.order_date
LIMIT 10

%%sql

SELECT o.order_date,
    round(sum(oi.order_item_subtotal), 2) AS revenue
FROM orders o JOIN order_items oi
ON o.order_id = oi.order_item_order_id
WHERE o.order_status IN ('COMPLETE', 'CLOSED')
GROUP BY o.order_date
HAVING round(sum(oi.order_item_subtotal), 2) >= 50000
ORDER BY order_date
LIMIT 10

However, the order of execution is typically as follows.

  1. FROM

  2. JOIN or OUTER JOIN with ON

  3. WHERE

  4. GROUP BY and optionally HAVING

  5. SELECT

  6. ORDER BY

As SELECT is executed after WHERE, GROUP BY, and HAVING but before the ORDER BY clause, in most traditional databases we cannot refer to aliases defined in the SELECT clause in any clause other than ORDER BY. However, in Spark we can use the aliases defined in SELECT in HAVING as well as ORDER BY.
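
To make the execution order visible in the query structure itself, and to stay portable to databases that do not accept SELECT aliases in HAVING, we can wrap the aggregation in a subquery and filter on the outside. Here is a sketch against the same tables; the subquery alias daily_revenue is just illustrative:

%%sql

SELECT order_date, revenue
FROM (
    SELECT o.order_date,
        round(sum(oi.order_item_subtotal), 2) AS revenue
    FROM orders o JOIN order_items oi
    ON o.order_id = oi.order_item_order_id
    WHERE o.order_status IN ('COMPLETE', 'CLOSED')
    GROUP BY o.order_date
) daily_revenue
WHERE revenue >= 50000
ORDER BY order_date
LIMIT 10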

Error

This will fail: revenue is an alias defined in SELECT, and WHERE is evaluated before SELECT, so the alias is not yet available in WHERE.

%%sql

SELECT o.order_date,
    round(sum(oi.order_item_subtotal), 2) AS revenue
FROM orders o JOIN order_items oi
ON o.order_id = oi.order_item_order_id
WHERE o.order_status IN ('COMPLETE', 'CLOSED')
    AND revenue >= 50000
GROUP BY o.order_date
ORDER BY order_date
LIMIT 10
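
Note that inlining the aggregate expression into WHERE does not help either: WHERE is evaluated before GROUP BY, so it cannot contain aggregate functions at all. A filter on an aggregated value belongs in HAVING, or in an outer query around a subquery as sketched above.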

Note

This will work, as revenue, an alias defined in SELECT, can be used in HAVING as well as ORDER BY in Spark.

%%sql

SELECT o.order_date,
    round(sum(oi.order_item_subtotal), 2) AS revenue
FROM orders o JOIN order_items oi
ON o.order_id = oi.order_item_order_id
WHERE o.order_status IN ('COMPLETE', 'CLOSED')
GROUP BY o.order_date
HAVING revenue >= 50000
ORDER BY order_date,
    revenue DESC
LIMIT 10