Overview of Sub Queries
Let us recap Sub Queries.
Let us start the Spark context for this Notebook so that we can execute the code provided.
import org.apache.spark.sql.SparkSession
val username = System.getProperty("user.name")
val spark = SparkSession.
    builder.
    config("spark.ui.port", "0").
    config("spark.sql.warehouse.dir", s"/user/${username}/warehouse").
    enableHiveSupport.
    appName(s"${username} | Spark SQL - Sub Queries").
    master("yarn").
    getOrCreate
If you are going to use CLIs, you can launch Spark SQL using one of the following three approaches.
Using Spark SQL
spark2-sql \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
Using Scala
spark2-shell \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
Using Pyspark
pyspark2 \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
We typically have Sub Queries in the FROM clause.
We need not provide an alias for Sub Queries in the FROM clause in Spark SQL. In earlier versions of Spark, you might have had to provide an alias for the Sub Query.
We quite often use Sub Queries over queries that use Analytics/Windowing Functions; see the example at the end of this section.
%%sql
SELECT * FROM (SELECT current_date)
%%sql
SELECT * FROM (SELECT current_date) AS q
Let us see a few more examples with respect to Sub Queries.
%%sql
USE itversity_retail
%%sql
SELECT * FROM (
    SELECT order_date, count(1) AS order_count
    FROM orders
    GROUP BY order_date
) q
LIMIT 10
Note
Here is an example of how we can filter based on derived columns using a Sub Query. However, this can also be achieved with a direct query using HAVING.
%%sql
SELECT * FROM (
    SELECT order_date, count(1) AS order_count
    FROM orders
    GROUP BY order_date
) q
WHERE q.order_count > 10
%%sql
SELECT order_date, count(1) AS order_count
FROM orders
GROUP BY order_date
HAVING count(1) > 10
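As mentioned earlier, one case where a Sub Query is genuinely required rather than optional is filtering on the results of Analytics/Windowing Functions, since window function results cannot be referenced directly in WHERE or HAVING. Here is a minimal sketch against the same orders table: it ranks the dates by their order counts using rank() and keeps only the top ranked dates. The cut-off of 5 is just for illustration.
%%sql
SELECT * FROM (
    SELECT order_date, count(1) AS order_count,
        rank() OVER (ORDER BY count(1) DESC) AS order_count_rank
    FROM orders
    GROUP BY order_date
) q
WHERE q.order_count_rank <= 5
ORDER BY q.order_count_rank
Here the rank is computed over the aggregated counts in the inner query, and the outer query filters on the derived order_count_rank column, which would not be possible with WHERE or HAVING in a single query.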