Replace specific characters from a column in a PySpark dataframe

I have the PySpark dataframe below.

column_a
name, varchar(10) country, age
name, age, decimal(15) percentage
name, varchar(12) country, age
name, age, decimal(10) percentage

I have to remove the varchar(...) and decimal(...) markers from the dataframe above, irrespective of the length inside the parentheses. The expected output is below.

column_a
name, country, age
name, age, percentage
name, country, age
name, age, percentage

How can I achieve this in PySpark?


Solution:

You can remove the patterns matching decimal(...) and varchar(...) using regexp_replace.


from pyspark.sql import SparkSession, functions as F

# Create or reuse a SparkSession (needed to build the dataframe).
spark = SparkSession.builder.getOrCreate()

data = [("name, varchar(10) country, age",),
        ("name, age, decimal(15) percentage",),
        ("name, varchar(12) country, age",),
        ("name, age, decimal(10) percentage",)]

df = spark.createDataFrame(data, ["column_a"])

# Remove "varchar(<digits>) " or "decimal(<digits>) " wherever they occur.
df.withColumn("column_a",
              F.regexp_replace("column_a", r"varchar\(\d*\)\s|decimal\(\d*\)\s", ""))\
  .show(truncate=False)
"""
+---------------------+
|column_a             |
+---------------------+
|name, country, age   |
|name, age, percentage|
|name, country, age   |
|name, age, percentage|
+---------------------+
"""