Overview of Windowing FunctionsΒΆ
Let us get an overview of Analytics or Windowing Functions in Spark SQL.
Let us start the Spark context for this notebook so that we can execute the code provided. You can sign up for our 10-node state-of-the-art cluster/labs to learn Spark SQL using our unique integrated LMS.
import org.apache.spark.sql.SparkSession

// Pick up the current OS user so the warehouse directory and app name are user-specific
val username = System.getProperty("user.name")

// Create (or reuse) a SparkSession with Hive support, running on YARN
val spark = SparkSession.
    builder.
    config("spark.ui.port", "0").
    config("spark.sql.warehouse.dir", s"/user/${username}/warehouse").
    enableHiveSupport.
    appName(s"${username} | Spark SQL - Windowing Functions").
    master("yarn").
    getOrCreate
If you are going to use CLIs, you can launch Spark SQL using one of the following three approaches.
Using Spark SQL
spark2-sql \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
Using Scala
spark2-shell \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
Using PySpark
pyspark2 \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
We will go through the following categories of functions, all of which are used with the OVER clause:

* Aggregate Functions (sum, min, max, avg)
* Window Functions (lead, lag, first_value, last_value)
* Rank Functions (rank, dense_rank, row_number, etc.)

For aggregate functions we typically use PARTITION BY. For global ranking and windowing functions we can use ORDER BY sorting_column, and for ranking and windowing within a partition or group we can use PARTITION BY partition_column ORDER BY sorting_column, as illustrated by the sketch below and the examples that follow.
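As a quick illustration of the difference between an aggregate over a partition and a global ranking, here is a minimal sketch, assuming the itversity_hr.employees table that is used in the cells below. Unlike GROUP BY, the windowed aggregate keeps every employee row while attaching the department-level average salary to each of them.

%%sql
SELECT employee_id, department_id, salary,
    avg(salary) OVER (PARTITION BY department_id) AS department_avg_salary,
    rank() OVER (ORDER BY salary DESC) AS global_salary_rank
FROM itversity_hr.employees
ORDER BY department_id, salary DESC
LIMIT 10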
%%sql
USE itversity_hr
%%sql
SELECT employee_id, department_id, salary FROM employees LIMIT 10
%%sql
SELECT employee_id, department_id, salary,
count(1) OVER (PARTITION BY department_id) AS employee_count,
rank() OVER (ORDER BY salary DESC) AS rnk,
lead(employee_id) OVER (PARTITION BY department_id ORDER BY salary DESC) AS lead_emp_id,
lead(salary) OVER (PARTITION BY department_id ORDER BY salary DESC) AS lead_emp_sal
FROM employees
ORDER BY employee_id
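The query above ranks salaries globally; to rank within each department instead, the same functions can be combined with PARTITION BY. Here is a minimal sketch (not part of the original walkthrough) showing how rank, dense_rank, and row_number behave on the same partitioned, ordered window, assuming the same itversity_hr.employees table.

%%sql
SELECT employee_id, department_id, salary,
    rank() OVER (PARTITION BY department_id ORDER BY salary DESC) AS rnk,
    dense_rank() OVER (PARTITION BY department_id ORDER BY salary DESC) AS drnk,
    row_number() OVER (PARTITION BY department_id ORDER BY salary DESC) AS rn
FROM itversity_hr.employees
ORDER BY department_id, salary DESC
LIMIT 10

When salaries are tied within a department, rank leaves gaps after the tie, dense_rank does not, and row_number always assigns a unique sequence number.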