LOAD vs. INSERT
Let us compare and contrast the LOAD and INSERT commands. These are the two main approaches for getting data into Spark Metastore tables.
Let us start the Spark context for this Notebook so that we can execute the code provided. You can sign up for our 10-node state-of-the-art cluster/labs to learn Spark SQL using our unique integrated LMS.
import org.apache.spark.sql.SparkSession
val username = System.getProperty("user.name")
val spark = SparkSession.
builder.
config("spark.ui.port", "0").
config("spark.sql.warehouse.dir", s"/user/${username}/warehouse").
enableHiveSupport.
appName(s"${username} | Spark SQL - Managing Tables - DML and Partitioning").
master("yarn").
getOrCreate
If you are going to use CLIs, you can launch Spark SQL using one of the following three approaches.
Using Spark SQL
spark2-sql \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
Using Scala
spark2-shell \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
Using Pyspark
pyspark2 \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
LOAD copies the files as-is into the table's location, dividing them into blocks.
LOAD is the fastest way of getting data into Spark Metastore tables. However, there are only minimal validations at the file level.
There are no transformations or validations at the data level.
If any transformation is required while getting data into a Spark Metastore table, then we need to use the INSERT command.
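As a rough sketch of the LOAD variants (table and path names here are hypothetical, and the statements assume a lab environment like the one set up above): LOCAL reads from the local file system of the node running the session, while omitting LOCAL moves files that are already in HDFS; OVERWRITE replaces the table's existing files instead of appending.

```scala
// Sketch only: 'orders' and the input paths are hypothetical examples.

// Copy files from the LOCAL file system into the table's HDFS location,
// appending to whatever files are already there.
spark.sql("""
  LOAD DATA LOCAL INPATH '/data/retail_db/orders'
  INTO TABLE orders
""")

// Without LOCAL, the source path is an HDFS location and the files are
// moved (not copied) into the table's location.
// With OVERWRITE, the table's existing files are replaced first.
spark.sql("""
  LOAD DATA INPATH '/user/itversity/retail_db/orders'
  OVERWRITE INTO TABLE orders
""")
```

Note that LOAD does not check whether the files match the table's declared file format; that responsibility stays with you.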
Here are some of the common usage scenarios for INSERT:
Changing delimiters in the case of text file format
Changing the file format
Loading data into partitioned or bucketed tables (if bucketing is supported)
Applying any other transformations at the data level (widely used)
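The scenarios above typically follow a stage-then-insert pattern: load raw delimited text into a staging table, then INSERT into the target table so Spark rewrites the data in the target's format. A minimal sketch, assuming the `order_items` parquet table created later in this notebook and a hypothetical `order_items_stage` staging table:

```scala
// Sketch only: order_items_stage is a hypothetical staging table.
// Define a text-format staging table matching the source files.
spark.sql("""
  CREATE TABLE IF NOT EXISTS order_items_stage (
    order_item_id INT,
    order_item_order_id INT,
    order_item_product_id INT,
    order_item_quantity INT,
    order_item_subtotal FLOAT,
    order_item_product_price FLOAT
  ) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
""")

// LOAD the raw comma-delimited files as-is into the staging table.
spark.sql("""
  LOAD DATA LOCAL INPATH '/data/retail_db/order_items'
  INTO TABLE order_items_stage
""")

// INSERT rewrites the data, converting the delimited text into the
// target table's parquet format; any SELECT-level transformations
// (casts, filters, derived columns) can be applied here as well.
spark.sql("""
  INSERT INTO TABLE order_items
  SELECT * FROM order_items_stage
""")
```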
%%sql
USE itversity_retail
%%sql
DROP TABLE IF EXISTS order_items
%%sql
CREATE TABLE order_items (
order_item_id INT,
order_item_order_id INT,
order_item_product_id INT,
order_item_quantity INT,
order_item_subtotal FLOAT,
order_item_product_price FLOAT
) STORED AS parquet
%%sql
LOAD DATA LOCAL INPATH '/data/retail_db/order_items'
INTO TABLE order_items
val username = System.getProperty("user.name")
import sys.process._
s"hdfs dfs -ls /user/${username}/warehouse/${username}_retail.db/order_items" !
%%sql
SELECT * FROM order_items LIMIT 10
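Beyond previewing a few rows, one quick sanity check after a LOAD is to compare the table's record count against the source data. A small sketch using the same Spark session:

```scala
// Sketch: validate the load by checking the record count.
// count(1) comes back as a Long in the first row of the result.
val recordCount = spark.sql("SELECT count(1) AS cnt FROM order_items").
  first.
  getLong(0)

println(s"order_items record count: $recordCount")
```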