
Drop columns based on a condition in PySpark

Can you drop columns based on a condition in PySpark?

The condition on which I want to drop a column:

df_train.groupby().sum() == 0

Here is a quick example in pandas:


import numpy as np
import pandas as pd

# create dataframe
df = pd.DataFrame(np.array([[0, 2, 1], [0, 2, 8], [0, 6, 2]]), columns=['a', 'b', 'c'])

# keep only the columns whose sum is non-zero
df.loc[:, df.sum(axis=0) != 0]

If there are multiple ways, which one would be preferred?

> Solution:

If I understood correctly, you want to drop every column whose sum equals 0.

You can first calculate the sum of each column, then collect the names of the columns whose sum is 0 and pass that list to the df.drop() method:

from pyspark.sql import functions as F


df = spark.createDataFrame([(0, 1, 2), (-1, 3, -6), (1, 4, 0)], ["col1", "col2", "col3"])

sums = df.select(*[F.sum(c).alias(c) for c in df.columns]).first()

cols_to_drop = [c for c in sums.asDict() if sums[c] == 0]

df = df.drop(*cols_to_drop)

df.show()
#+----+----+
#|col2|col3|
#+----+----+
#|   1|   2|
#|   3|  -6|
#|   4|   0|
#+----+----+