Pyspark Compare column strings, grouping if alphabetic character sets are same, but avoid similar words?

I’m working on a project where I have a PySpark dataframe of two columns (word, count) that are string and bigint respectively. The dataset is dirty in that some words have a non-letter character attached to them (e.g. ‘date’, ‘[date’, ‘date]’, and ‘_date’ are all separate items but should all be just ‘date’).

print(dirty_df.schema)
output---> StructType([StructField('count', LongType(), True), StructField('word', StringType(), True)])
dirty_df.show()
+------+------+
| count|  word|
+------+------+
|32375 |  date|
|359   | _date|
|306   | [date|
|213   | date]|
|209   |  snap|
|204   | _snap|
|107   | [snap|
|12    | snap]|
+------+------+

I need to reduce the dataframe so that date, _date, [date, and date] all become ‘date’, with their counts summed to match. The problem is that I need to avoid merging genuinely similar words like ‘dates’, ‘dating’, ‘dated’, ‘todate’, etc.

Goal


+------+------+
| count|  word|
+------+------+
|33253 |  date|
|532   |  snap|
+------+------+

Any thoughts on how I could approach this?

>Solution:

Use the regexp_replace function to remove all non-letter characters (the pattern [^a-zA-Z] matches every character other than the letters a–z and A–Z).

Example:

from pyspark.sql.functions import col, regexp_replace, sum  # note: shadows the built-in sum

df = spark.createDataFrame([(32375,'date'),(359,'_date'),(306,'[date'),(213,'date]'),(209,'snap'),(204,'_snap'),(107,'[snap'),(12,'snap]')],['count','word'])

df.withColumn("word", regexp_replace(col("word"), "[^a-zA-Z]", "")) \
  .groupBy("word") \
  .agg(sum(col("count")).alias("count")) \
  .show(10, False)
#+----+-----+
#|word|count|
#+----+-----+
#|date|33253|
#|snap|532  |
#+----+-----+
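To see why this does not merge genuinely different words, here is the same character-stripping idea sketched in plain Python with `re` (outside Spark, on a small hypothetical sample that includes 'dates'): stripping non-letters only collapses punctuation variants, so 'dates' keeps its own group.

```python
import re
from collections import defaultdict

# Hypothetical sample counts, including a genuinely different word 'dates'.
counts = {'date': 32375, '_date': 359, '[date': 306, 'date]': 213, 'dates': 57}

merged = defaultdict(int)
for word, n in counts.items():
    # Strip every non-letter character, mirroring regexp_replace(col, "[^a-zA-Z]", "")
    merged[re.sub(r'[^a-zA-Z]', '', word)] += n

print(dict(merged))  # {'date': 33253, 'dates': 57}
```

Since the substitution only deletes characters outside a–zA–Z, a word like 'dates' is left exactly as it was and stays a separate group.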

Other way:

If you want to remove only specific characters, use the translate function. Unlike regexp_replace, translate works character by character rather than on a regex pattern, so you just list the characters to strip:

df.withColumn("word", expr('translate(word, "_[]", "")')) \
  .groupBy("word") \
  .agg(sum(col("count")).alias("count")) \
  .show(10, False)

#+----+-----+
#|word|count|
#+----+-----+
#|date|33253|
#|snap|532  |
#+----+-----+