How to use new Spark Context
I am currently running a Jupyter notebook on GCP Dataproc and hoping to increase the memory available via my config. I first stopped my Spark context:

```python
import pyspark
sc = spark.sparkContext
sc.stop()
```

I waited before running the next code block so that sc.stop() could finish, then created a new context with my configuration:

```python
conf = pyspark.SparkConf().setAll([('spark.driver.maxResultSize', '8g')])
sc = pyspark.SparkContext(conf=conf)
```

However, when I run data = …
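For reference, a minimal sketch of one way to rebuild the context with the new setting, assuming the notebook already has a SparkSession named spark (the usual Dataproc default). Going through SparkSession.builder rather than constructing a SparkContext directly is an assumption for illustration, not necessarily what the original question used:

```python
from pyspark.sql import SparkSession

# Stop the session (and its underlying SparkContext) that the
# Dataproc notebook kernel created on startup.
spark.stop()

# Rebuild the session with the larger driver result-size limit.
# The app name is hypothetical; only spark.driver.maxResultSize
# comes from the question above.
spark = (
    SparkSession.builder
    .appName("resized-driver")
    .config("spark.driver.maxResultSize", "8g")
    .getOrCreate()
)

# The new SparkContext is then available from the session,
# and the setting can be checked on its configuration.
sc = spark.sparkContext
print(sc.getConf().get("spark.driver.maxResultSize"))
```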