Transform Column Values To Columns In PySpark DataFrame
I would like to transform the values of a column into multiple columns of a DataFrame in PySpark on Databricks.
Solution 1:
What you're looking for is a combination of pivot() and an aggregation function such as collect_list() or collect_set(). Have a look at the available aggregation functions here: https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=agg#module-pyspark.sql.functions.
Here's a code example:
from pyspark.sql import SparkSession
import pyspark.sql.functions as f

spark = SparkSession.builder.getOrCreate()

# Build the example DataFrame
df = spark.sparkContext.parallelize([
    ["dapd", "shop", "retail"],
    ["dapd", "shop", "on-line"],
    ["dapd", "payment", "credit"],
    ["wrfr", "shop", "supermarket"],
    ["wrfr", "shop", "brand store"],
    ["wrfr", "payment", "cash"]]
).toDF(["id", "value1", "value2"])
df.show()
+----+-------+-----------+
| id| value1| value2|
+----+-------+-----------+
|dapd| shop| retail|
|dapd| shop| on-line|
|dapd|payment| credit|
|wrfr| shop|supermarket|
|wrfr| shop|brand store|
|wrfr|payment| cash|
+----+-------+-----------+
df.groupBy('id').pivot('value1').agg(f.collect_list("value2")).show(truncate=False)
+----+--------+--------------------------+
|id |payment |shop |
+----+--------+--------------------------+
|dapd|[credit]|[retail, on-line] |
|wrfr|[cash] |[supermarket, brand store]|
+----+--------+--------------------------+
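If duplicate values within a group should be dropped, collect_set() can be swapped in for collect_list(); a minimal sketch, reusing the df defined above:
# collect_set() deduplicates values within each group, unlike collect_list()
df.groupBy('id').pivot('value1').agg(f.collect_set("value2")).show(truncate=False)
Note that both functions return arrays whose element order is not guaranteed.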
Solution 2:
Alternatively, you can do something like this, flattening the collected arrays into delimited strings:
import pyspark.sql.functions as func

newdf = df.groupby('id').pivot('value1').agg(func.collect_list(func.col('value2')))
# Join the two collected shop values with '|' and unwrap the single payment value
newdf = newdf.withColumn('shop', func.concat_ws('|', func.col('shop')[0], func.col('shop')[1]))
newdf = newdf.withColumn('payment', func.col('payment')[0])
newdf.show(20, False)
+----+-------+-----------------------+
|id |payment|shop |
+----+-------+-----------------------+
|dapd|credit |retail|on-line |
|wrfr|cash |brand store|supermarket|
+----+-------+-----------------------+
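Indexing shop[0] and shop[1] hard-codes two entries per group. concat_ws() also accepts an array column directly, which flattens the whole array regardless of its length; a sketch of that variant, again reusing df from Solution 1:
# concat_ws() applied to the array column joins all elements, however many there are
newdf = df.groupby('id').pivot('value1').agg(func.collect_list(func.col('value2')))
newdf = newdf.withColumn('shop', func.concat_ws('|', func.col('shop')))
newdf = newdf.withColumn('payment', func.concat_ws('|', func.col('payment')))
newdf.show(20, False)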