Solution:
It depends on the type of your "list":
- If it is of type ArrayType():

df = hc.createDataFrame(sc.parallelize([['a', [1, 2, 3]], ['b', [2, 3, 4]]]), ["key", "value"])
df.printSchema()
df.show()

root
 |-- key: string (nullable = true)
 |-- value: array (nullable = true)
 |    |-- element: long (containsNull = true)

+---+-------+
|key|  value|
+---+-------+
|  a|[1,2,3]|
|  b|[2,3,4]|
+---+-------+

You can access the values as you would in Python, using []:

df.select("key", df.value[0], df.value[1], df.value[2]).show()

+---+--------+--------+--------+
|key|value[0]|value[1]|value[2]|
+---+--------+--------+--------+
|  a|       1|       2|       3|
|  b|       2|       3|       4|
+---+--------+--------+--------+
- If it is of type StructType(): (perhaps you built your DataFrame by reading a JSON)

import pyspark.sql.functions as psf

df2 = df.select("key", psf.struct(
    df.value[0].alias("value1"),
    df.value[1].alias("value2"),
    df.value[2].alias("value3")
).alias("value"))
df2.printSchema()
df2.show()

root
 |-- key: string (nullable = true)
 |-- value: struct (nullable = false)
 |    |-- value1: long (nullable = true)
 |    |-- value2: long (nullable = true)
 |    |-- value3: long (nullable = true)

+---+-------+
|key|  value|
+---+-------+
|  a|[1,2,3]|
|  b|[2,3,4]|
+---+-------+

You can 'split' the column directly using *:

df2.select('key', 'value.*').show()

+---+------+------+------+
|key|value1|value2|value3|
+---+------+------+------+
|  a|     1|     2|     3|
|  b|     2|     3|     4|
+---+------+------+------+
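For context, here is a minimal sketch of how reading nested JSON produces such a StructType column (assuming sc and spark are available as in the snippets above; the field names are illustrative):

import json

# Nested JSON objects are inferred as StructType when Spark reads them.
jsonRDD = sc.parallelize([
    json.dumps({"key": "a", "value": {"value1": 1, "value2": 2, "value3": 3}}),
    json.dumps({"key": "b", "value": {"value1": 2, "value2": 3, "value3": 4}}),
])
spark.read.json(jsonRDD).printSchema()  # 'value' comes back as a struct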
I would like to add the case of sized lists (arrays) to pault's answer.
If our column contains medium-sized (or large) arrays, it is still possible to split them into columns.
from pyspark.sql.types import StructType, StructField, StringType, ArrayType, IntegerType
from pyspark.sql.functions import expr

# Define a schema to create a DataFrame with an array-typed column.
mySchema = StructType([StructField("V1", StringType(), True),
                       StructField("V2", ArrayType(IntegerType(), True))])

df = spark.createDataFrame([['A', [1, 2, 3, 4, 5, 6, 7]],
                            ['B', [8, 7, 6, 5, 4, 3, 2]]], schema=mySchema)

# Split the list into columns using 'expr()' in a list comprehension.
arr_size = 7
df = df.select(['V1', 'V2'] + [expr('V2[' + str(x) + ']') for x in range(arr_size)])

# It is possible to define new column names.
new_colnames = ['V1', 'V2'] + ['val_' + str(i) for i in range(arr_size)]
df = df.toDF(*new_colnames)
The result is:

df.show(truncate=False)
+---+---------------------+-----+-----+-----+-----+-----+-----+-----+
|V1 |V2 |val_0|val_1|val_2|val_3|val_4|val_5|val_6|
+---+---------------------+-----+-----+-----+-----+-----+-----+-----+
|A |[1, 2, 3, 4, 5, 6, 7]|1 |2 |3 |4 |5 |6 |7 |
|B |[8, 7, 6, 5, 4, 3, 2]|8 |7 |6 |5 |4 |3 |2 |
+---+---------------------+-----+-----+-----+-----+-----+-----+-----+
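If the array length is not known in advance, one option (a sketch, not part of the original answer) is to derive arr_size from the data before splitting, using size() with an aggregation:

import pyspark.sql.functions as F

# Compute the length of the longest array in V2 and use it as arr_size.
# Run this on the original df, before the select/toDF steps above.
arr_size = df.select(F.max(F.size("V2"))).first()[0]

With the default (non-ANSI) SQL mode, indices past the end of a shorter array come back as null, so splitting with the maximum size is safe.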