SQL Pivot: Converting Rows to Columns
source link: https://www.tuicool.com/articles/hit/FBJR3uQ
Pivot was first introduced in Apache Spark 1.6 as a new DataFrame feature that allows users to rotate a table-valued expression by turning the unique values from one column into individual columns.
The upcoming Apache Spark 2.4 release extends this powerful functionality of pivoting data to our SQL users as well. In this blog, using temperature recordings in Seattle, we’ll show how we can use this common SQL Pivot feature to achieve complex data transformations.
Examining Summer Temperatures with Pivot
This summer in Seattle, temperatures rose to uncomfortable levels, peaking in the high 80s and low 90s for nine days in July.

Date        Temp (°F)
07-22-2018  86
07-23-2018  90
07-24-2018  91
07-25-2018  92
07-26-2018  92
07-27-2018  88
07-28-2018  85
07-29-2018  94
07-30-2018  89
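For readers who want to run the queries below, the examples assume a table named high_temps with a date column and a temp column. A minimal sketch that creates it as a temporary view from just the nine sample rows above (the article’s full results assume daily highs for 2015 through 2018, so with only these rows most cells would come out NULL):

-- Throwaway temporary view with the nine sample rows above; swap in the full
-- 2015-2018 dataset to reproduce the article's results.
CREATE OR REPLACE TEMPORARY VIEW high_temps AS
SELECT * FROM VALUES
  (DATE '2018-07-22', 86), (DATE '2018-07-23', 90), (DATE '2018-07-24', 91),
  (DATE '2018-07-25', 92), (DATE '2018-07-26', 92), (DATE '2018-07-27', 88),
  (DATE '2018-07-28', 85), (DATE '2018-07-29', 94), (DATE '2018-07-30', 89)
AS t(date, temp)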
Suppose we want to explore whether there has been a historical trend of rising mercury levels. One intuitive way to examine and present these numbers is to have months as the columns and then each year’s monthly average highs in a single row. That way it will be easy to compare the temperatures both horizontally, between adjacent months, and vertically, between different years.
Now that we have support for PIVOT syntax in Spark SQL, we can achieve this with the following SQL query.
SELECT * FROM (
  SELECT year(date) year, month(date) month, temp
  FROM high_temps
  WHERE date BETWEEN DATE '2015-01-01' AND DATE '2018-08-31'
)
PIVOT (
  CAST(avg(temp) AS DECIMAL(4, 1))
  FOR month in (
    1 JAN, 2 FEB, 3 MAR, 4 APR, 5 MAY, 6 JUN,
    7 JUL, 8 AUG, 9 SEP, 10 OCT, 11 NOV, 12 DEC
  )
)
ORDER BY year DESC
The above query will produce a result like:

YEAR  JAN   FEB   MAR   APR   MAY   JUN   JUL   AUG   SEP   OCT   NOV   DEC
2018  49.7  45.8  54.0  58.6  70.8  71.9  82.8  79.1  NULL  NULL  NULL  NULL
2017  43.7  46.6  51.6  57.3  67.0  72.1  78.3  81.5  73.8  61.1  51.3  45.6
2016  49.1  53.6  56.4  65.9  68.8  73.1  76.0  79.5  69.6  60.6  56.0  41.9
2015  50.3  54.5  57.9  59.9  68.0  78.9  82.6  79.0  68.5  63.6  49.4  47.1
Well, looks like there are good years and bad years. The year 2016 seems a rather energy-friendly year.
Pivoting in SQL
Let’s take a closer look at this query to understand how it works. First, we need to specify the FROM clause, which is the input of the pivot, in other words, the table or subquery based on which the pivoting will be performed. In our case, we are concerned about the years, the months, and the high temperatures, so those are the fields that appear in the sub-query.
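For instance, the pivot input of the query above can be inspected on its own; this is just the article’s subquery run directly (the LIMIT is only added here to keep the sample output small):

-- The pivot input: one row per day with its year, month, and high temperature.
SELECT year(date) year, month(date) month, temp
FROM high_temps
WHERE date BETWEEN DATE '2015-01-01' AND DATE '2018-08-31'
LIMIT 5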
Second, let’s consider another important part of the query, the PIVOT clause. The first argument of the PIVOT clause is an aggregate function and the column to be aggregated. We then specify the pivot column in the FOR sub-clause as the second argument, followed by the IN operator containing the pivot column values as the last argument.
The pivot column is the point around which the table will be rotated, and the pivot column values will be transposed into columns in the output table. The IN clause also allows you to specify an alias for each pivot value, making it easy to generate more meaningful column names.
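As a compact illustration of these pieces, here is a stripped-down sketch of the query above: avg(temp) is the aggregate expression, month is the pivot column named in the FOR sub-clause, and the values 7 and 8 listed after IN become output columns, renamed JUL and AUG by their aliases.

SELECT * FROM (
  SELECT year(date) year, month(date) month, temp FROM high_temps
)
PIVOT (
  avg(temp)                    -- aggregate expression
  FOR month IN (7 JUL, 8 AUG)  -- pivot column, pivot values, and their aliases
)
ORDER BY year DESC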
An important idea about pivot is that it performs a grouped aggregation based on a list of implicit group-by columns together with the pivot column. The implicit group-by columns are columns from the FROM clause that do not appear in any aggregate function or as the pivot column.
In the above query, with the pivot column being the column month and the implicit group-by column being the column year, the expression avg(temp) will be aggregated on each distinct value pair of (year, month), where month equals one of the specified pivot column values. As a result, each of these aggregated values will be mapped into its corresponding cell of row year and column month.
It is worth noting that because of this implicit group-by, we need to make sure that any column that we do not wish to be part of the pivot output is left out of the FROM clause; otherwise the query would produce undesired results.
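One way to see the implicit grouping is to compare the pivot with the plain grouped aggregation it is built on. The following sketch (avg_temp is just an illustrative alias) computes the same monthly averages but keeps them as one row per (year, month) pair instead of rotating the months into columns:

-- Same aggregation, no rotation: one row per (year, month) instead of one
-- column per month.
SELECT year(date) year, month(date) month,
       CAST(avg(temp) AS DECIMAL(4, 1)) avg_temp
FROM high_temps
WHERE date BETWEEN DATE '2015-01-01' AND DATE '2018-08-31'
GROUP BY year(date), month(date)
ORDER BY year DESC, month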
Specifying Multiple Aggregate Expressions
The above example shows only one aggregate expression being used in the PIVOT clause, while in fact, users can specify multiple aggregate expressions if needed. Again, with the weather data above, we can list the maximum high temperatures along with the average high temperatures between June and September.
SELECT * FROM (
  SELECT year(date) year, month(date) month, temp
  FROM high_temps
  WHERE date BETWEEN DATE '2015-01-01' AND DATE '2018-08-31'
)
PIVOT (
  CAST(avg(temp) AS DECIMAL(4, 1)) avg, max(temp) max
  FOR month in (6 JUN, 7 JUL, 8 AUG, 9 SEP)
)
ORDER BY year DESC
In case of multiple aggregate expressions, the output columns will be the Cartesian product of the pivot column values and the aggregate expressions, with names of the form <pivot value>_<aggregate expression alias> (for the query above: JUN_avg, JUN_max, JUL_avg, JUL_max, and so on).
Grouping Columns vs. Pivot Columns
Now suppose we want to include low temperatures in our exploration of temperature trends from this table of daily low temperatures:

Date        Temp (°F)
…           …
08-01-2018  59
08-02-2018  58
08-03-2018  59
08-04-2018  58
08-05-2018  59
08-06-2018  59
…           …
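The examples that follow assume a low_temps table with the same date and temp columns as high_temps. A minimal sketch using only the sample rows above:

-- Throwaway temporary view with the six sample daily lows above; the article's
-- results assume daily lows for 2015-2018 as well.
CREATE OR REPLACE TEMPORARY VIEW low_temps AS
SELECT * FROM VALUES
  (DATE '2018-08-01', 59), (DATE '2018-08-02', 58), (DATE '2018-08-03', 59),
  (DATE '2018-08-04', 58), (DATE '2018-08-05', 59), (DATE '2018-08-06', 59)
AS t(date, temp)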
To combine this table with the previous table of daily high temperatures, we could join these two tables on the “Date” column. However, since we are going to use pivot, which performs grouping on the dates, we can simply concatenate the two tables using UNION ALL. And as you’ll see later, this approach also provides us with more flexibility:
SELECT date, temp, 'H' as flag
FROM high_temps
UNION ALL
SELECT date, temp, 'L' as flag
FROM low_temps
Now let’s try our pivot query with the new combined table:
SELECT * FROM (
  SELECT year(date) year, month(date) month, temp, flag `H/L`
  FROM (
    SELECT date, temp, 'H' as flag FROM high_temps
    UNION ALL
    SELECT date, temp, 'L' as flag FROM low_temps
  )
  WHERE date BETWEEN DATE '2015-01-01' AND DATE '2018-08-31'
)
PIVOT (
  CAST(avg(temp) AS DECIMAL(4, 1))
  FOR month in (6 JUN, 7 JUL, 8 AUG, 9 SEP)
)
ORDER BY year DESC, `H/L` ASC
As a result, we get the average high and average low for each month of the past 4 years in one table. Note that we need to include the column `H/L` in the pivot query, otherwise the expression avg(temp) would be based on a mix of high and low temperatures.
You might have noticed that now we have two rows for each year, one for the high temperatures and the other for the low temperatures. That’s because we have included one more column, `H/L`, in the pivot input, which in turn becomes another implicit grouping column in addition to the original column year.
Alternatively, instead of being a grouping column, the flag column can also serve as a pivot column. So now we have two pivot columns, month and flag:
SELECT * FROM (
  SELECT year(date) year, month(date) month, temp, flag
  FROM (
    SELECT date, temp, 'H' as flag FROM high_temps
    UNION ALL
    SELECT date, temp, 'L' as flag FROM low_temps
  )
  WHERE date BETWEEN DATE '2015-01-01' AND DATE '2018-08-31'
)
PIVOT (
  CAST(avg(temp) AS DECIMAL(4, 1))
  FOR (month, flag) in (
    (6, 'H') JUN_hi, (6, 'L') JUN_lo,
    (7, 'H') JUL_hi, (7, 'L') JUL_lo,
    (8, 'H') AUG_hi, (8, 'L') AUG_lo,
    (9, 'H') SEP_hi, (9, 'L') SEP_lo
  )
)
ORDER BY year DESC
This query presents us with a different layout of the same data, with one row for each year, but two columns for each month.

year  JUN_hi  JUN_lo  JUL_hi  JUL_lo  AUG_hi  AUG_lo  SEP_hi  SEP_lo
2018  71.9    53.4    82.8    58.5    79.1    58.5    NULL    NULL
2017  72.1    53.7    78.3    56.3    81.5    59.0    73.8    55.6
2016  73.1    53.9    76.0    57.6    79.5    57.9    69.6    52.9
2015  78.9    56.4    82.6    59.9    79.0    58.5    68.5    52.5
To run the query examples used in this blog, please check the pivot SQL examples in this accompanying blog.
Thanks to the Apache Spark community contributors for their contributions!