Are functions in Swift called in parallel by default or does the first one need to finish execution before the next is called?

I’d like to understand whether a function that may take time to execute will delay the execution of the function after it. Let me share the code:

First, we have a function that calculates an average value based on the user’s habits. This should be almost instant if the user has only a few habits, but if the user has many habits with many entries each, it might take a little longer.

var averagePercentage = 0.0

func calculateAveragePercentage() {
    let percentages = habits.map { $0.percentageFormed }
    if percentages.reduce(0, +) == 0 { averagePercentage = 0.0 }
    else { averagePercentage = percentages.reduce(0, +) / Double(percentages.count) }
}

The percentageFormed for each habit is calculated like this:

var percentageFormed: Double {
    let validEntries = entries.filter({ $0.completed })
    if validEntries.count == 0 { return 0.0 }
    else { return Double(validEntries.count) / Double(daysToForm) }
}
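For reference, here is a self-contained sketch of how these pieces fit together. The Entry and Habit models and the sample data are assumptions for illustration only:

```swift
import Foundation

// Hypothetical minimal model, assumed for illustration.
struct Entry {
    let completed: Bool
}

struct Habit {
    let entries: [Entry]
    let daysToForm: Int

    var percentageFormed: Double {
        let validEntries = entries.filter { $0.completed }
        if validEntries.isEmpty { return 0.0 }
        return Double(validEntries.count) / Double(daysToForm)
    }
}

// Sample data: both habits are 50% formed.
let habits = [
    Habit(entries: [Entry(completed: true), Entry(completed: true)], daysToForm: 4),
    Habit(entries: [Entry(completed: true)], daysToForm: 2),
]

var averagePercentage = 0.0

func calculateAveragePercentage() {
    let percentages = habits.map { $0.percentageFormed }
    if percentages.isEmpty {
        averagePercentage = 0.0
    } else {
        averagePercentage = percentages.reduce(0, +) / Double(percentages.count)
    }
}

calculateAveragePercentage()
print(averagePercentage) // 0.5
```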

What I am trying to understand, and I hope someone can clarify, is the following: if in the viewDidLoad of a controller I call calculateAveragePercentage() and then call a method that relies on this value, will the functions be executed in parallel? In that case there would be a chance that setCircle() is called before calculateAveragePercentage() has finished. There is no completion handler for the operations in calculateAveragePercentage(), so I am not sure whether this can break in some situations, or whether setCircle() will wait for calculateAveragePercentage() to finish no matter how long it takes.

override func viewDidLoad() {
    super.viewDidLoad()
    calculateAveragePercentage()
    setCircle()
}

>Solution :

It would be cool if it could work this way, but no: Swift statements are executed one at a time, in order, just as you see when stepping through the debugger. That excludes, of course, code that explicitly uses concurrency primitives like threads, dispatch queues, or tasks.
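A minimal sketch of the guarantee, using a hypothetical order log. The plain calls always run to completion before the next statement begins; only explicit concurrency (commented out below) changes that:

```swift
import Foundation

var order: [String] = []

func slowWork() {
    // Simulate a synchronous computation that takes noticeable time.
    _ = (1...1_000_000).reduce(0, +)
    order.append("slowWork finished")
}

func dependentWork() {
    order.append("dependentWork started")
}

// Plain calls: slowWork() must return before dependentWork() begins.
slowWork()
dependentWork()
print(order) // ["slowWork finished", "dependentWork started"]

// Only opting into concurrency breaks this ordering, e.g.:
// DispatchQueue.global().async { slowWork() }  // ordering no longer guaranteed
```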

There is instruction-level parallelism happening at the CPU level, but that is mostly impossible to observe at the software level.

There are languages that auto-parallelize expressions, but they’re mostly an academic pursuit for now. It turns out to be quite difficult because:

  1. It’s pretty tricky to determine which parts are worth the overhead of parallelizing and which aren’t. There are lots of non-local (and hardware-specific) effects that make this hard to reason about (e.g. parallelizing one computation speeds it up, but it thrashes the cache and slows down another).
  2. Impure functions (which are hugely prevalent in most mainstream languages today) are hard to reason about, and they limit what optimizations can be made without accidentally introducing observable differences in behaviour.
