Pyspark write a DataFrame to csv files in S3 with a custom name

I am writing files to an S3 bucket with code such as the following:

df.write.format("csv").option("header", "true").mode("append").save("s3://filepath")

This outputs several files to the S3 bucket as desired, but each part has a long, auto-generated file name such as:

part-00019-tid-5505901395380134908-d8fa632e-bae4-4c7b-9f29-c34e9a344680-236-1-c000.csv

Is there a way to write these with a custom file name, preferably in the PySpark write function?
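
Spark's DataFrame writer does not expose an option for naming the output files directly, so a common workaround is to write to a temporary prefix and then copy the part file to the desired key. Below is a minimal sketch of that approach using boto3; the bucket name, prefixes, and the final file name are hypothetical placeholders, not values from the question.

    # Sketch: write a single CSV part to a temporary S3 prefix, then copy it to a
    # custom key with boto3 and clean up. All names below are placeholders.
    import boto3

    bucket = "my-bucket"
    tmp_prefix = "output/tmp/"               # where Spark writes its part files
    final_key = "output/my_custom_name.csv"  # the name you actually want

    # coalesce(1) forces a single part file so there is only one object to rename;
    # this sacrifices write parallelism, so it is only practical for modest data sizes.
    (df.coalesce(1)
       .write.format("csv")
       .option("header", "true")
       .mode("overwrite")
       .save(f"s3://{bucket}/{tmp_prefix}"))

    s3 = boto3.client("s3")

    # Find the single part-*.csv object Spark produced under the temporary prefix.
    objects = s3.list_objects_v2(Bucket=bucket, Prefix=tmp_prefix)["Contents"]
    part_key = next(o["Key"] for o in objects if o["Key"].endswith(".csv"))

    # Copy it to the custom name, then delete the temporary objects
    # (including Spark's _SUCCESS marker).
    s3.copy_object(Bucket=bucket,
                   CopySource={"Bucket": bucket, "Key": part_key},
                   Key=final_key)
    for o in objects:
        s3.delete_object(Bucket=bucket, Key=o["Key"])

If the data is large enough that multiple part files are genuinely wanted, the same copy-and-rename step can be applied per part file instead of coalescing to one, at the cost of inventing your own naming scheme for each part.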