Swift. Concurrency

Maxim Krylov
16 min read · Aug 16, 2020

Here is a quick introduction, along with the full list of articles

When talking about concurrency in Swift, it is usually queues that are meant

A queue is a sequence of tasks, each of which starts execution in the order it was added. The main execution thread is also a queue, called main

There are two types of queues: serial and concurrent

In a serial queue, tasks run one by one. The next task won’t be kicked off until the previous one is finished

In a concurrent queue, tasks are also started one by one. However, each task is executed on a separate thread, and the queue doesn’t wait for the previous task to finish before running the next one. The next task starts immediately if there are enough resources for one more thread

A task can be added to any queue in two ways: sync and async

When adding a task via the method sync, the current thread gets blocked until the task is finished — no matter whether the queue is serial or concurrent

When using the method async, the current thread, conversely, proceeds execution immediately after adding the task

The current thread itself can be main, or one more serial or concurrent queue
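To make these combinations concrete, here is a minimal sketch (the queue labels are arbitrary):

```swift
import Dispatch

let serial = DispatchQueue(label: "com.example.serial") // serial by default
let concurrent = DispatchQueue(label: "com.example.concurrent", attributes: .concurrent)

// sync: the caller is blocked until the task is finished
serial.sync { print("sync task done") }
print("always printed after the sync task")

// async: the caller proceeds immediately; the task runs on another thread
concurrent.async { print("async task (order relative to the caller is not guaranteed)") }
print("may be printed before the async task")

sleep(1) // keep the process alive long enough for the async task to run
```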

Concurrency Issues

  • Race condition — a result depends on the order of concurrent task execution. The order may be different on each run
  • Priority inversion — the priority of a particular task is unexpectedly increased or decreased
  • Deadlock — threads are blocked because their tasks are waiting for resources that are locked by one another

A race condition occurs when concurrent tasks change shared resources. And depending on the order of those changes, the resources have different values

Priority inversion appears when concurrent tasks are waiting for shared resources that are locked by other tasks. As a result, even if their priorities are higher than others’, they have to wait for low-priority tasks to release the locked resources
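A rough sketch of priority inversion, using a DispatchSemaphore as the shared resource (the names are illustrative):

```swift
import Dispatch

let sharedResource = DispatchSemaphore(value: 1)

DispatchQueue.global(qos: .background).async {
    sharedResource.wait()       // the low-priority task grabs the resource first
    // ... long-running work ...
    sharedResource.signal()
}

DispatchQueue.global(qos: .userInteractive).async {
    sharedResource.wait()       // the high-priority task now has to wait until
    // ... urgent work ...      // the background task releases the resource
    sharedResource.signal()
}
```

Note that a semaphore carries no notion of an owner, so the system cannot boost the priority of the background task that holds the resource here.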

Deadlock is the most dangerous concurrency issue. It happens when there is nested concurrent access to shared resources. Let’s say there is a nested concurrent task. It is waiting for resources that are locked by the enclosing task. And the enclosing task itself is waiting for the nested task to complete. So, both threads are entirely blocked by each other

Calling the method sync on the queue the current thread is already running on (for example, DispatchQueue.main.sync from the main thread) causes deadlock. Never do that!
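A minimal sketch of such a deadlock on a private serial queue (the nested task never runs; don’t use this in real code):

```swift
import Dispatch

let queue = DispatchQueue(label: "com.example.serial") // serial

queue.async {
    // The outer task occupies the only "slot" of the serial queue...
    queue.sync {
        // ...while this nested sync task waits for the queue to become free.
        // Each waits for the other: deadlock. DispatchQueue.main.sync
        // called from the main thread fails the same way.
        print("never printed")
    }
    print("never printed either")
}
```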

Grand Central Dispatch (GCD)

DispatchQueue is the main class of GCD

let queue = DispatchQueue(label: "queue")
queue.async { print("queue") }
print("main")
// main
// queue

DispatchQueue has a reference to the main queue and five global concurrent queues, which differ in their Quality of Service (priority) or QoS:

  • .userInteractive — the highest QoS. It is used for tasks that are triggered by a user and have to be completed first of all. The tasks shouldn’t take much time. E.g., a user is running a finger on the screen. There is some logic handling the user action that takes time to respond. That logic can be added as an async task to the queue with the highest priority to avoid producing UI lags. The main thread itself will proceed to listen for other user actions
  • .userInitiated — lower than the previous one, but still relatively high. The tasks should respond to user actions as soon as possible, but the user may wait for a while (several seconds). E.g., the user has pressed a button and is waiting for the result
  • .utility — a middle QoS. The tasks aren’t triggered by a user, and they can take more than several seconds. E.g., a user opens a screen, and it should download and show some image there. Until the image is shown, the user sees an activity indicator
  • .background — the lowest QoS. The tasks aren’t related to UI and visualization at all. They can take even hours. E.g., synchronization with iCloud
  • .default — tries to take QoS from other sources. Otherwise, it sets the priority between .userInitiated and .utility

It is crucial to keep in mind that the global queues are system ones: they may contain not only your own tasks

DispatchQueue.global(qos: .userInteractive).async { ... }
DispatchQueue.global(qos: .userInitiated).async { ... }
DispatchQueue.global(qos: .utility).async { ... }
DispatchQueue.global(qos: .background).async { ... }
DispatchQueue.global().async { ... }

The main queue is the main thread. Any UI update can be made only there. The main queue is the only serial global queue. It shouldn’t contain tasks that take a lot of resources or time

DispatchQueue.main.async { ... }
// btw, again don't do DispatchQueue.main.sync { ... } ever!
// Experiment 1: main queue === main thread
func currentQueueName() -> String? {
    let name = __dispatch_queue_get_label(nil)
    return String(cString: name, encoding: .utf8)
}

print(currentQueueName()!)
DispatchQueue.main.async {
    print(currentQueueName()!)
}
// com.apple.main-thread
// com.apple.main-thread
// Experiment 2: global dispatch queue - sync vs async
func task(_ symbol: String) {
    let priority = qos_class_self().rawValue
    for i in 0...3 {
        print("\(symbol), i: \(i), priority: \(priority)")
    }
}
DispatchQueue.global().sync { task("😁") }
task("😈")
/*
😁, i: 0, priority: 33
😁, i: 1, priority: 33
😁, i: 2, priority: 33
😁, i: 3, priority: 33
😈, i: 0, priority: 33
😈, i: 1, priority: 33
😈, i: 2, priority: 33
😈, i: 3, priority: 33
*/
// vs
DispatchQueue.global().async { task("😁") }
task("😈")
/*
😈, i: 0, priority: 33
😁, i: 0, priority: 25
😈, i: 1, priority: 33
😁, i: 1, priority: 25
😈, i: 2, priority: 33
😈, i: 3, priority: 33
😁, i: 2, priority: 25
😁, i: 3, priority: 25
*/
// Experiment 3: global dispatch queue - priorities
DispatchQueue.main.async { task("😁") }
DispatchQueue.global(qos: .userInteractive).async { task("😈") }
DispatchQueue.global(qos: .userInitiated).async { task("👻") }
DispatchQueue.global(qos: .utility).async { task("🤖") }
DispatchQueue.global(qos: .background).async { task("😾") }
/*
👻, i: 0, priority: 25
😈, i: 0, priority: 33
🤖, i: 0, priority: 17
😾, i: 0, priority: 9
👻, i: 1, priority: 25
😈, i: 1, priority: 33
😈, i: 2, priority: 33
😈, i: 3, priority: 33
👻, i: 2, priority: 25
👻, i: 3, priority: 25
🤖, i: 1, priority: 17
🤖, i: 2, priority: 17
🤖, i: 3, priority: 17
😁, i: 0, priority: 33
😁, i: 1, priority: 33
😁, i: 2, priority: 33
😁, i: 3, priority: 33
😾, i: 1, priority: 9
😾, i: 2, priority: 9
😾, i: 3, priority: 9
*/

Along with the global queues, DispatchQueue also enables creating private ones. When creating a private queue, a label must be provided, and it must be unique. Apple suggests naming the label in accordance with reverse domain notation, e.g., “com.myApp.myQueue”

By default, any private dispatch queue is serial

// Experiment 4: private dispatch queue by default is serial
let myQueue = DispatchQueue(label: "com.myApp.myQueue")
myQueue.async { task("😁") }
myQueue.async { task("😈") }
/*
😁, i: 0, priority: 25
😁, i: 1, priority: 25
😁, i: 2, priority: 25
😁, i: 3, priority: 25
😈, i: 0, priority: 25
😈, i: 1, priority: 25
😈, i: 2, priority: 25
😈, i: 3, priority: 25
*/
// Experiment 5: serial dispatch queue - async
let myQueue = DispatchQueue(label: "com.myApp.myQueue") // serial
myQueue.async { task("😁") }
task("😈")
/*
😈, i: 0, priority: 33
😁, i: 0, priority: 25
😁, i: 1, priority: 25
😈, i: 1, priority: 33
😁, i: 2, priority: 25
😁, i: 3, priority: 25
😈, i: 2, priority: 33
😈, i: 3, priority: 33
*/

The second parameter of a private dispatch queue is qos

// Experiment 6: private dispatch queue - priorities
let myQueue = DispatchQueue(label: "com.myApp.myQueue", qos: .userInteractive)
let yourQueue = DispatchQueue(label: "com.myApp.yourQueue", qos: .background)
myQueue.async { task("😁") }
yourQueue.async { task("😈") }
/*
😈, i: 0, priority: 9
😁, i: 0, priority: 33
😁, i: 1, priority: 33
😁, i: 2, priority: 33
😁, i: 3, priority: 33
😈, i: 1, priority: 9
😈, i: 2, priority: 9
😈, i: 3, priority: 9
*/

The experiment above shows that there are at least two ways of getting concurrent behavior: either a single concurrent queue, or several queues (even serial ones), with tasks added to them via the method async

To get a private concurrent queue, the third parameter, attributes, should contain the .concurrent flag

// Experiment 7: concurrent private dispatch queue
let myQueue = DispatchQueue(label: "com.myApp.myQueue", attributes: [.concurrent])
myQueue.async { task("😁") }
myQueue.async { task("😈") }
/*
😈, i: 0, priority: 25
😁, i: 0, priority: 25
😁, i: 1, priority: 25
😈, i: 1, priority: 25
😁, i: 2, priority: 25
😈, i: 2, priority: 25
😁, i: 3, priority: 25
😈, i: 3, priority: 25
*/

By default, a task that is added to a queue starts automatically when the conditions are satisfied. For manual running, the .initiallyInactive flag should be specified

let myQueue = DispatchQueue(label: "com.myApp.myQueue", attributes: [.concurrent, .initiallyInactive])
myQueue.async { task("😁") }
myQueue.async { task("😈") }
myQueue.activate()

The next class of GCD is DispatchWorkItem. It is a kind of wrapper around dispatch queue tasks

DispatchWorkItem can be used to change the priority of a particular task relative to the whole dispatch queue. This is achieved by providing the .enforceQoS flag

// Experiment 8: DispatchWorkItem and .enforceQoS
let myQueue = DispatchQueue(label: "com.myApp.myQueue", qos: .background, attributes: [.concurrent])
let workItem = DispatchWorkItem(qos: .userInteractive, flags: [.enforceQoS]) {
    task("😁")
}
myQueue.async { task("😈") }
myQueue.async(execute: workItem)
/*
😈, i: 0, priority: 9
😁, i: 0, priority: 33
😁, i: 1, priority: 33
😁, i: 2, priority: 33
😁, i: 3, priority: 33
😈, i: 1, priority: 9
😈, i: 2, priority: 9
😈, i: 3, priority: 9
*/

The method cancel() of DispatchWorkItem marks a task in a queue as isCancelled. The method cannot cancel an async task that is already being executed (see Operation and OperationQueue). However, it can prevent a task from starting. E.g., DispatchWorkItem and cancel() help to implement debounce logic

class DebounceActionService {
    private let debounceQueue = DispatchQueue(
        label: UUID().uuidString,
        attributes: [.concurrent]
    )
    private var debounceWorkItem: DispatchWorkItem?

    public func asyncAfter(
        delay: DispatchTimeInterval,
        _ callback: @escaping () -> Void = {}
    ) {
        debounceWorkItem?.cancel()
        debounceWorkItem = DispatchWorkItem(block: callback)
        debounceQueue.asyncAfter(
            deadline: .now() + delay,
            execute: debounceWorkItem!
        )
    }
}

To have completion logic triggered after a work item is finished, use the method notify

// Experiment 9: DispatchWorkItem and notify
let myQueue = DispatchQueue(label: "com.myApp.myQueue", qos: .background, attributes: [.concurrent])
let workItem = DispatchWorkItem(qos: .userInteractive, flags: [.enforceQoS]) { task("😁") }
myQueue.async(execute: workItem)
workItem.notify(queue: DispatchQueue.main) {
    print("work item has been finished")
}
task("😈")
/*
😁, i: 0, priority: 33
😈, i: 0, priority: 33
😁, i: 1, priority: 33
😈, i: 1, priority: 33
😁, i: 2, priority: 33
😈, i: 2, priority: 33
😁, i: 3, priority: 33
😈, i: 3, priority: 33
work item has been finished
*/

Note: notify adds a task to a specific queue via async

The class DispatchGroup is helpful when there are several async tasks, and all of them must be completed before running some particular logic

// Experiment 10: DispatchGroup and notify
let myQueue = DispatchQueue(label: "com.myApp.myQueue", qos: .background, attributes: [.concurrent])
let dispatchGroup = DispatchGroup()
myQueue.async(group: dispatchGroup) { task("😁") }
myQueue.async(group: dispatchGroup) { task("😈") }
dispatchGroup.notify(queue: DispatchQueue.main) { print("done") }
/*
😁, i: 0, priority: 9
😈, i: 0, priority: 9
😈, i: 1, priority: 9
😁, i: 1, priority: 9
😈, i: 2, priority: 9
😈, i: 3, priority: 9
😁, i: 2, priority: 9
😁, i: 3, priority: 9
done
*/

If a task has an async operation inside that has to be finished before doing something, the DispatchGroup methods enter() and leave() can be used

// Experiment 11: DispatchGroup and notify - enter and leave
let myQueue = DispatchQueue(label: "com.myApp.myQueue", qos: .background, attributes: [.concurrent])
let dispatchGroup = DispatchGroup()
myQueue.async(group: dispatchGroup) {
    dispatchGroup.enter()
    DispatchQueue.global().asyncAfter(deadline: .now() + 1.0) {
        task("😁")
        dispatchGroup.leave()
    }
}
myQueue.async(group: dispatchGroup) {
    dispatchGroup.enter()
    DispatchQueue.global().asyncAfter(deadline: .now() + 1.0) {
        task("😈")
        dispatchGroup.leave()
    }
}
dispatchGroup.notify(queue: DispatchQueue.main) {
    print("done")
}
/*
😈, i: 0, priority: 9
😁, i: 0, priority: 9
😈, i: 1, priority: 9
😁, i: 1, priority: 9
😈, i: 2, priority: 9
😁, i: 2, priority: 9
😈, i: 3, priority: 9
😁, i: 3, priority: 9
done
*/

A barrier task is a task added to a concurrent queue (it makes no sense for a serial one). It waits for all tasks submitted before it to finish, then executes itself (it is guaranteed that no other task in the queue runs during the barrier task execution), and only after that do the remaining tasks in the queue proceed

To enable the barrier logic, GCD sets a particular task in a dispatch queue as a barrier one via the flag .barrier

// Experiment 12: .barrier
let myQueue = DispatchQueue(label: "com.myApp.myQueue", qos: .background, attributes: [.concurrent])
myQueue.async { task("😁") }
myQueue.async { task("😈") }
myQueue.async(flags: .barrier) { task("✋") }
myQueue.async { task("👻") }
/*
😈, i: 0, priority: 9
😁, i: 0, priority: 9
😁, i: 1, priority: 9
😁, i: 2, priority: 9
😈, i: 1, priority: 9
😁, i: 3, priority: 9
😈, i: 2, priority: 9
😈, i: 3, priority: 9
✋, i: 0, priority: 9
✋, i: 1, priority: 9
✋, i: 2, priority: 9
✋, i: 3, priority: 9

👻, i: 0, priority: 9
👻, i: 1, priority: 9
👻, i: 2, priority: 9
👻, i: 3, priority: 9
*/

Thread-Safe Variable Pattern

// Problem
var result = 1
let queue = DispatchQueue(label: "com.queue", attributes: [.concurrent])
queue.async {
    for _ in 0..<3 {
        // let's say multiplying by 2 is quite an expensive operation
        result = result * 2
    }
}
queue.async {
    for _ in 0..<3 {
        result = result * 2
    }
}
sleep(1)
print(result)
// We want to compute result concurrently,
// because the multiplication is too expensive to do serially

// But here is a race condition. The result can be 8, 16, 32...
// However, the correct result is 64
// Solution
class ThreadSafeResult {
    // serial queue
    private let queue = DispatchQueue(label: UUID().uuidString)
    private var resultValue = 0

    public init(_ value: Int) {
        set(value: value)
    }
    public func set(value: Int) {
        queue.async {
            self.resultValue = value
        }
    }
    public func add(value: Int) {
        queue.async {
            self.resultValue = self.resultValue * value
        }
    }
    public var value: Int {
        var resultValue = 0
        queue.sync { resultValue = self.resultValue }
        return resultValue
    }
}
var result = ThreadSafeResult(1)
let queue = DispatchQueue(label: "com.queue", attributes: [.concurrent])
queue.async {
    for _ in 0..<3 {
        result.add(value: 2) // still concurrent computing
    }
}
queue.async {
    for _ in 0..<3 {
        result.add(value: 2) // still concurrent computing
    }
}
sleep(1)
print(result.value)
// 64, no race condition

Any thread-safe variable should satisfy the following requirements:

  • Each write operation is async. There cannot be two or more write operations at the same time
  • Each read operation is sync. By the time a read executes, all write operations started before it must be completed

The pattern puts every write operation into a single serial queue via the method async. Because the queue is serial, there can be no concurrent changes happening on different threads (the first requirement is met). And because any read operation uses the method sync on the same serial queue, it is guaranteed that all operations enqueued before the read will be completed (the second requirement is met)

The pattern can also use a concurrent queue instead of a serial one. In that case, all async write operations must be added as barriers
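A sketch of that variant, under an illustrative name ThreadSafeValue (not from the article): reads may run concurrently with each other, while every write is a barrier and runs exclusively:

```swift
import Dispatch

final class ThreadSafeValue {
    private let queue = DispatchQueue(label: "com.example.threadSafeValue",
                                      attributes: .concurrent)
    private var storedValue: Int

    init(_ value: Int) {
        storedValue = value
    }

    var value: Int {
        // concurrent read: many reads may run at the same time,
        // but none can overlap with a barrier write
        queue.sync { storedValue }
    }

    func multiply(by factor: Int) {
        // barrier write: waits for all running reads, then runs alone
        queue.async(flags: .barrier) {
            self.storedValue *= factor
        }
    }
}
```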

Operation and OperationQueue

Operation — is an OOP wrapper around a task. It has a state machine that enables control of an operation before and during its execution. It also allows making a particular operation dependent on another one

OperationQueue — is an OOP wrapper around a DispatchQueue. Any task there is an Operation. Tasks can be added via the method addOperation, either as a simple closure or as an instance of Operation

addOperation is an async function

let operationQueue = OperationQueue()
operationQueue.addOperation { print("hello world") }

By default, an operation is kicked off automatically after getting into a queue. For manual running, an operation has the method start()

start() itself is a sync function: the current thread won’t proceed with execution until the operation is finished

let someOperation = Operation()
someOperation.start()

Each operation can be executed just once. The second attempt to run a completed operation (via the method start() or addOperation) will produce a runtime error

let operationQueue = OperationQueue()
let anotherOperationQueue = OperationQueue()
let operation = Operation()
operationQueue.addOperation(operation)
anotherOperationQueue.addOperation(operation) // runtime error
// or
let operation = Operation()
operation.start()
operation.start() // runtime error

A particular operation can also be created via BlockOperation

The BlockOperation class is a concrete subclass of Operation that manages the concurrent execution of one or more blocks. You can use this object to execute several blocks at once without having to create separate operation objects for each. When executing more than one block, the operation itself is considered finished only when all blocks have finished executing.

let blockOperation = BlockOperation {
sleep(1)
print("hello")
}
blockOperation.addExecutionBlock {
print("world")
}
let operationQueue = OperationQueue()
operationQueue.addOperation(blockOperation)
// world
// hello

To create a custom operation, define a class that inherits from Operation and overrides the method main()

class CustomOperation: Operation {
    public var operationName = ""

    override func main() {
        print("hello from \(operationName)")
    }
}
let operation = CustomOperation()
operation.operationName = "custom operation"
operation.start()
// hello from custom operation

Any operation can be provisioned with a completionBlock — a closure that will always be executed after the operation gets finished

operation.completionBlock = {
    print("hello from completion block")
}

Also, it’s possible to set qualityOfService for each particular operation via the corresponding property. Otherwise, the operation takes QoS from its queue

The state machine represents an operation life cycle

When getting into a queue, an operation has the state pending. Then, after a while, it becomes ready. Once the queue starts executing the operation, the state is executing. After the operation is completed, the state is finished. At any time, the operation can be cancelled via the method cancel(); then the state is cancelled

The corresponding properties isReady, isExecuting, isFinished, and isCancelled allow checking the operation state
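A quick check of those properties on a simple operation:

```swift
import Foundation

let operation = BlockOperation { /* some work */ }
print(operation.isReady)     // true: no unfinished dependencies
print(operation.isExecuting) // false: not started yet
operation.start()            // sync: returns when the operation is done
print(operation.isFinished)  // true
```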

Async Operation Pattern

If a custom operation doesn’t contain any async logic inside, it is enough to override the method main() and rely on a queue and the operation state machine

The problem with async logic is that when the logic starts, the current thread immediately proceeds with execution. As a result, an operation that has async logic inside will be finished before the logic gets completed

The pattern enables manual control of the states. So, it allows setting the operation state to finished only when the async logic has completed

class AsyncOperation: Operation {

    public enum State: String {
        case isReady, isExecuting, isFinished
    }
    public var state: State = .isReady {
        willSet {
            willChangeValue(forKey: newValue.rawValue)
            willChangeValue(forKey: state.rawValue)
        }
        didSet {
            didChangeValue(forKey: oldValue.rawValue)
            didChangeValue(forKey: state.rawValue)
        }
    }
    override var isReady: Bool {
        // super.isReady is used to preserve dependencies logic
        return super.isReady && state == .isReady
    }
    override var isExecuting: Bool {
        return state == .isExecuting
    }
    override var isFinished: Bool {
        return state == .isFinished
    }
    override var isAsynchronous: Bool {
        return true
    }
    override func start() {
        if isCancelled {
            state = .isFinished
            return
        }
        // set the state before calling main(), so that main()
        // can set .isFinished without being overwritten
        state = .isExecuting
        main()
    }
    override func cancel() {
        super.cancel() // marks the operation isCancelled
        state = .isFinished
    }
}

Through the methods willChangeValue and didChangeValue, which drive KVO, an operation can recalculate the values of its state properties. But because the properties isReady, isExecuting, and isFinished are read-only, the pattern introduces the property state, used inside them, that stores the state of the operation and can be manually set to any state during execution

// Custom async operation example
import UIKit

class DownloadImageOperation: AsyncOperation {

    public var url: URL
    public var image: UIImage?

    init(url: URL) {
        self.url = url
        super.init()
    }
    override func main() {
        if isCancelled {
            return
        }
        downloadImage()
    }
    private func downloadImage() {
        let task = createDownloadImageTask()
        task.resume()
    }
    private func createDownloadImageTask() -> URLSessionDataTask {
        return URLSession.shared.dataTask(with: url) { (data, _, _) in
            if self.isCancelled {
                return
            }
            guard let data = data else {
                self.state = .isFinished
                return
            }
            self.image = UIImage(data: data)
            self.state = .isFinished
        }
    }
}

// ...
let operation = DownloadImageOperation(url: url)
operation.qualityOfService = .utility
operation.completionBlock = {
    if operation.isCancelled {
        return
    }
    callback(operation.image, url)
}

OperationQueue has a static reference to the queue associated with the main thread — main. It also has the static property current, which refers to the queue that launched the current operation

let queue = OperationQueue()
queue.addOperation {
print("current thread is \(OperationQueue.current!.name!)")
print("main thread is \(OperationQueue.main.name!)")
}
// current thread is NSOperationQueue 0x7fb4ee6080a0
// main thread is NSOperationQueue Main Queue

By default, any OperationQueue is concurrent. It takes as many threads for operations as are available. To make a queue serial, the property maxConcurrentOperationCount should be set to 1

queue.maxConcurrentOperationCount = 1 // makes the queue serial

It is possible to cancel all operations inside a queue via the method cancelAllOperations(). The method invokes cancel() of each particular operation

queue.cancelAllOperations()

OperationQueue provides the method waitUntilAllOperationsAreFinished() to wait for all operations inside before running some logic. However, compared to DispatchGroup and notify, the method waitUntilAllOperationsAreFinished() blocks the current thread

queue.waitUntilAllOperationsAreFinished() // blocks current thread

The queue property qualityOfService sets the QoS for all operations inside the queue. By default, the property value equals .background

queue.qualityOfService = .utility // .background by default

OperationQueue also allows adding operations without running them automatically. To achieve that, the property isSuspended should be set to true. Setting it doesn’t affect operations that are already executing. But new operations won’t be started until the property is set back to false

queue.isSuspended = true

Each Operation can have dependencies. A particular operation won’t be kicked off while it has dependencies that are not completed

To add or remove a dependency, invoke the corresponding method addDependency or removeDependency

let operationA = BlockOperation { print("A") }
let operationB = BlockOperation { print("B") }
let operationC = BlockOperation { print("C") }
operationA.addDependency(operationB)
operationA.addDependency(operationC)
let queue = OperationQueue()
queue.addOperation(operationA)
queue.addOperation(operationB)
queue.addOperation(operationC)
// B
// C
// A

The property dependencies stores all operation dependencies. It can be useful for transferring data between operations

// somewhere inside a custom operation
override func main() {
    for operation in dependencies.filter({ $0 is SomeOperation }) {
        print((operation as! SomeOperation).data)
    }
}

But having a complicated graph of dependencies can cause deadlock
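The simplest case is a cycle of two operations, neither of which can ever become ready (a sketch; don’t wait on such a queue):

```swift
import Foundation

let operationA = BlockOperation { print("A") }
let operationB = BlockOperation { print("B") }

// a dependency cycle: A waits for B, and B waits for A
operationA.addDependency(operationB)
operationB.addDependency(operationA)

let queue = OperationQueue()
queue.addOperation(operationA)
queue.addOperation(operationB)
// neither operation will ever start;
// waitUntilAllOperationsAreFinished() here would block forever
```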

OperationQueue also supports the barrier mechanism. The method addBarrierBlock enables adding a barrier block of code to a particular queue

queue.addBarrierBlock { print("barrier") }

GCD vs OperationQueue

OperationQueue is very useful for having async operations and dependencies. It has an easy way to cancel operations. However, waiting for operations blocks threads. And OperationQueue itself is slower than GCD (milliseconds vs. nanoseconds)

GCD is appropriate for simple tasks that should be computed on other threads. Even for a simple async task, GCD can use the DispatchGroup methods enter() and leave() to wait for the async logic before doing something. And DispatchGroup itself allows waiting for tasks without any thread blocking. But for complicated tasks and dependencies, it may still be better to use OperationQueue
