Increasing Performance by Reducing Dynamic Dispatch in Swift

Ruslan Dzhafarov
5 min read · Mar 5, 2023


Dynamic dispatch is a fundamental concept in object-oriented programming that allows for code reuse and abstraction. However, it can also lead to decreased performance due to the overhead involved in runtime method lookup. In this article, we will explore how reducing dynamic dispatch can increase performance in your code.

Dynamic dispatch is the process of determining which method to invoke at runtime based on the type of the object. When a method is called on an object, the runtime must search for the correct implementation of that method based on the actual type of the object. This process can be time-consuming, especially in large codebases with complex class hierarchies.
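To make this concrete, here is a minimal Swift sketch of a dynamically dispatched call (Shape and Circle are illustrative names, not from the article):

```swift
class Shape {
    func area() -> Double { 0.0 }
}

class Circle: Shape {
    let radius: Double
    init(radius: Double) { self.radius = radius }
    override func area() -> Double { .pi * radius * radius }
}

// The declared type is Shape, so the compiler cannot know which
// implementation to call. The runtime looks up area() based on the
// actual type of the object (Circle) when the call happens.
let shape: Shape = Circle(radius: 2.0)
let area = shape.area() // resolved at runtime to Circle.area()
```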

One way to reduce the overhead of dynamic dispatch is to use static dispatch. Static dispatch, also known as compile-time dispatch, involves the compiler determining which method to invoke based on the declared type of the object. Since this process is performed at compile-time, there is no runtime overhead involved. However, this approach is less flexible than dynamic dispatch, as it cannot handle scenarios where the actual type of the object is not known until runtime.
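In Swift, value types give you static dispatch for free, because methods on a struct cannot be overridden. A minimal sketch (Vector is a hypothetical name):

```swift
struct Vector {
    var x: Double
    var y: Double

    // Struct methods cannot be overridden, so the compiler resolves
    // this call at compile time and can often inline it entirely.
    func length() -> Double {
        (x * x + y * y).squareRoot()
    }
}

let v = Vector(x: 3.0, y: 4.0)
let len = v.length() // direct call, no runtime lookup → 5.0
```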

Another technique to reduce the overhead of dynamic dispatch is inline caching. Inline caching is a method of caching the result of a dynamic dispatch for a specific object and reusing it for subsequent calls to the same method on the same object. This approach can significantly reduce the overhead of dynamic dispatch, as the lookup only needs to be performed once.

Inline caching works by recording the type of the object on which a method is called the first time it’s invoked. This information is stored in a cache, which is checked on subsequent invocations of the same method on the same object. If the type of the object matches the cached value, the cached method implementation is used. Otherwise, a dynamic dispatch is performed, and the result is stored in the cache for future invocations.
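Inline caching is performed by the language runtime, not by application code, but the hit/miss bookkeeping can be modeled at the library level. All names below (Animal, Dog, callSound) are illustrative:

```swift
class Animal {
    func sound() -> String { "..." }
}

class Dog: Animal {
    override func sound() -> String { "woof" }
}

// A monomorphic inline cache: remember the last-seen type and the
// implementation that was found for it.
var cachedType: Animal.Type?
var cachedImpl: ((Animal) -> String)?

func callSound(_ animal: Animal) -> String {
    if type(of: animal) == cachedType, let impl = cachedImpl {
        return impl(animal) // cache hit: skip the lookup
    }
    // Cache miss: perform the dynamic lookup and remember the result.
    // (A real inline cache stores a direct function pointer; the closure
    // here is only a stand-in to keep the sketch simple.)
    cachedType = type(of: animal)
    cachedImpl = { $0.sound() }
    return animal.sound()
}
```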

Type specialization is another technique to reduce dynamic dispatch overhead. Type specialization involves generating specialized versions of a method for specific types. By doing so, the compiler can eliminate the need for dynamic dispatch altogether for those types, resulting in faster code.

Type specialization is often used in conjunction with inline caching. When a method is called on an object, the runtime first checks the cache to see if a specialized implementation exists for that type. If it does, the specialized implementation is used, eliminating the need for dynamic dispatch.
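In Swift specifically, type specialization is performed by the compiler's optimizer on generic code: when the concrete type is visible at the call site, a dedicated version of the function can be emitted for it. A sketch (sumOf is an illustrative name):

```swift
// A generic function constrained to Numeric. When the optimizer can
// see the concrete element type, it can emit specialized versions of
// sumOf for Int and for Double, removing protocol-based dispatch on +.
func sumOf<T: Numeric>(_ values: [T]) -> T {
    var total = T.zero
    for value in values {
        total += value
    }
    return total
}

let intSum = sumOf([1, 2, 3])     // can be specialized for Int → 6
let doubleSum = sumOf([1.5, 2.5]) // can be specialized for Double → 4.0
```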

Virtual tables are another technique that can help reduce the overhead of dynamic dispatch. A virtual table is a lookup table that is used to store the method implementations for a class. When an object is created from a class that uses virtual tables, a pointer to the virtual table is added to the object’s memory layout. When a method is called on the object, the runtime uses the pointer to access the appropriate method implementation from the virtual table.

Virtual tables reduce the cost of dynamic dispatch by precomputing the method lookup table at compile time. A method call then becomes a constant-time indexed load from the table rather than a search through the class hierarchy, resulting in faster code.
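Swift builds virtual tables for classes automatically, but the mechanism can be modeled by hand as an array of function values indexed by method slot. All names below are illustrative:

```swift
typealias Method = (Int) -> Int

// Two "class" descriptors, each a vtable with one slot
// (slot 0 = "transform"). The derived table overrides slot 0.
let baseVTable: [Method] = [{ $0 + 1 }]
let derivedVTable: [Method] = [{ $0 * 2 }]

// Each object carries a reference to its class's vtable,
// mirroring the pointer in a real object's memory layout.
struct Object {
    let vtable: [Method]
}

func callTransform(_ object: Object, _ x: Int) -> Int {
    // One pointer load plus one indexed call: constant-time dispatch,
    // no search through a class hierarchy.
    object.vtable[0](x)
}

let base = Object(vtable: baseVTable)
let derived = Object(vtable: derivedVTable)
// callTransform picks the implementation from each object's own table.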

In conclusion, dynamic dispatch is an essential tool for code reuse and abstraction, but it can sometimes come at a cost to performance. By using techniques such as static dispatch, inline caching, type specialization, and virtual tables, you can reduce the overhead of dynamic dispatch and improve the performance of your code. However, each of these techniques has its trade-offs, and it’s important to consider the specific needs of your code when deciding which approach to use.

To increase the performance of Swift code, developers can employ several techniques to reduce dynamic dispatch overhead. The rest of this article covers three of them: the final keyword, the private keyword, and Whole Module Optimization.

Using the final keyword

The first technique is to use the final keyword on declarations that never need to be overridden. By applying this restriction to a class, method, or property, you let the compiler safely eliminate the dynamic dispatch indirection. In the example below, the compiler can access the point and velocity stored properties directly and call the updatePoint() method directly. The update() method, however, is still called through dynamic dispatch, since subclasses may override it.

class ParticleModel {
    final var point = (x: 0.0, y: 0.0)
    final var velocity = 100.0

    final func updatePoint(newPoint: (Double, Double), newVelocity: Double) {
        point = newPoint
        velocity = newVelocity
    }

    func update(newP: (Double, Double), newV: Double) {
        updatePoint(newPoint: newP, newVelocity: newV)
    }
}

It is also possible to mark an entire class as final, which forbids subclassing and implies that all functions and properties of the class are final as well.

final class ParticleModel {
    var point = (x: 0.0, y: 0.0)
    var velocity = 100.0
    // ...
}

Using the private keyword

The second technique is to use the private keyword to restrict the visibility of a declaration to the current file. This lets the compiler find every potentially overriding declaration and infer final automatically. Assuming no subclasses of ParticleModel are declared in the same file, the compiler can replace all dynamically dispatched calls to the private declarations with direct calls: the point and velocity properties are accessed directly, and updatePoint() is called directly. The update() method, however, is still called indirectly, since it is not private.

class ParticleModel {
    private var point = (x: 0.0, y: 0.0)
    private var velocity = 100.0

    private func updatePoint(newPoint: (Double, Double), newVelocity: Double) {
        point = newPoint
        velocity = newVelocity
    }

    func update(newP: (Double, Double), newV: Double) {
        updatePoint(newPoint: newP, newVelocity: newV)
    }
}

Using Whole Module Optimization

The third technique is to use Whole Module Optimization to infer final on internal declarations. By default, declarations with internal access are visible only within the module where they are declared. With Whole Module Optimization enabled, the entire module is compiled together, so the compiler can infer final on any internal declaration that has no visible overrides anywhere in the module. In the ParticleModel example above, the compiler can infer final on the point and velocity properties and on the updatePoint() method. This inference is impossible for declarations that can be overridden from outside the module: if update() were declared open (or, in Swift 2 and earlier, public), the compiler could not infer that it is final, because overrides may exist in code it cannot see.
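As a sketch of what Whole Module Optimization can and cannot devirtualize (Renderer and its methods are hypothetical names; -whole-module-optimization is the swiftc flag, surfaced in Xcode as the "Whole Module" compilation mode):

```swift
// Build with: swiftc -O -whole-module-optimization Renderer.swift
open class Renderer {
    // internal: visible only inside the module. If Whole Module
    // Optimization sees no override anywhere in the module, it can
    // infer final and call this method directly.
    func prepare() { /* ... */ }

    // open: clients of the module may override this, so the call
    // must remain dynamically dispatched no matter how the module
    // is compiled.
    open func draw() { /* ... */ }
}
```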

In conclusion, by using these three techniques — final, private, and Whole Module Optimization — developers can significantly reduce the dynamic dispatch overhead in their Swift code. This results in faster and more efficient code that can lead to better application performance.


Written by Ruslan Dzhafarov

Senior iOS Developer since 2013. Sharing expert insights, best practices, and practical solutions for common development challenges