Holder for a result that will be provided later.

final class EventLoopFuture<Value>


Functions that promise to do work asynchronously can return an EventLoopFuture<Value>. The recipient of such an object can then observe it to be notified when the operation completes.

The provider of an EventLoopFuture<Value> can create and return a placeholder object before the actual result is available. For example:

func getNetworkData(args) -> EventLoopFuture<NetworkResponse> {
    let promise = eventLoop.makePromise(of: NetworkResponse.self)
    queue.async {
        . . . do some work . . .
        promise.succeed(response)
        . . . if it fails, instead . . .
        promise.fail(error)
    }
    return promise.futureResult
}

Note that this function returns immediately; the promise object will be given a value later on. This behaviour is common to Future/Promise implementations in many programming languages. If you are unfamiliar with this kind of object, a general introduction to the Future/Promise pattern may be helpful.

If you receive an EventLoopFuture<Value> from another function, you have a number of options. The most common operation is to use flatMap() or map() to add a function that will be called with the eventual result. Both methods return a new EventLoopFuture<Value> immediately that will receive the return value from your function, but they behave differently. If you have a function that can return synchronously, map() will transform the result of type Value into a new result of type NewValue and return an EventLoopFuture<NewValue>.

let networkData = getNetworkData(args)

// When network data is received, convert it.
let processedResult: EventLoopFuture<Processed> = networkData.map { (n: NetworkResponse) -> Processed in
    ... parse network data ....
    return parsed
}

If, however, you need to do more asynchronous processing, you can call flatMap(). The return value of the function passed to flatMap() must be a new EventLoopFuture<NewValue> object: the return value of flatMap() is a new EventLoopFuture<NewValue> that will contain the eventual result of both the original operation and the subsequent one.

// When converted network data is available, begin the database operation.
let databaseResult: EventLoopFuture<DBResult> = processedResult.flatMap { (p: Processed) -> EventLoopFuture<DBResult> in
    return someDatabaseOperation(p)
}

In essence, future chains created via flatMap() provide a form of data-driven asynchronous programming that allows you to dynamically declare data dependencies for your various operations.

EventLoopFuture chains created via flatMap() are sufficient for most purposes. All of the registered functions will eventually run in order. If one of those functions throws an error, that error will bypass the remaining functions. You can use flatMapError() to handle and optionally recover from errors in the middle of a chain.
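As a sketch of mid-chain error recovery, the following continues the database example above. The error type and the `fallbackDatabaseOperation()` helper are hypothetical, used only to illustrate the shape of a flatMapError() callback:

```swift
// Recover from a failure mid-chain by substituting a fallback operation.
// `TemporarilyUnavailableError` and `fallbackDatabaseOperation()` are
// hypothetical names for illustration.
let recovered: EventLoopFuture<DBResult> = databaseResult.flatMapError { (error: Error) -> EventLoopFuture<DBResult> in
    // Recover only from the errors you understand.
    guard error is TemporarilyUnavailableError else {
        // Re-fail the chain for errors we cannot handle.
        return eventLoop.makeFailedFuture(error)
    }
    return fallbackDatabaseOperation()
}
```

Note that the callback must return an EventLoopFuture with the same Value type as the original future, so recovery and re-failure both slot back into the existing chain.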

At the end of an EventLoopFuture chain, you can use whenSuccess() or whenFailure() to add an observer callback that will be invoked with the result or error at that point. (Note: If you ever find yourself invoking promise.succeed() from inside a whenSuccess() callback, you probably should use flatMap() or cascade(to:) instead.)
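Continuing the example above, terminating a chain with observers might look like this (the print statements stand in for whatever your application does with the final result):

```swift
// At the end of the chain, observe the final result or error.
// These callbacks do not produce a new future; they only consume the result.
databaseResult.whenSuccess { (result: DBResult) in
    print("database operation succeeded: \(result)")
}
databaseResult.whenFailure { (error: Error) in
    print("database operation failed: \(error)")
}
```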

EventLoopFuture objects are typically obtained by:

  • Using .flatMap() on an existing future to create a new future for the next step in a series of operations.

  • Initializing an EventLoopFuture that already has a value or an error.
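For the second case, an EventLoop can hand out already-resolved futures directly, with no EventLoopPromise involved. A minimal sketch (the `MyError` type is a hypothetical placeholder):

```swift
// Futures that already carry a result; no promise is needed.
let succeeded: EventLoopFuture<Int> = eventLoop.makeSucceededFuture(42)
let failed: EventLoopFuture<Int> = eventLoop.makeFailedFuture(MyError.somethingWentWrong)
```

This is useful when an asynchronous API can sometimes answer synchronously (for example, from a cache) but must still return an EventLoopFuture.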

Threading and Futures

One of the major performance advantages of NIO over something like Node.js or Python’s asyncio is that NIO will by default run multiple event loops at once, on different threads. As most network protocols do not require blocking operation, at least in their low level implementations, this provides enormous speedups on machines with many cores such as most modern servers.

However, it can present a challenge at higher levels of abstraction when coordination between those threads becomes necessary. This is usually the case whenever the events on one connection (that is, one Channel) depend on events on another one. As these Channels may be scheduled on different event loops (and so different threads) care needs to be taken to ensure that communication between the two loops is done in a thread-safe manner that avoids concurrent mutation of shared state from multiple loops at once.

The main primitives NIO provides for this use are the EventLoopPromise and EventLoopFuture. As their names suggest, these two objects are aware of event loops, and so can help manage the safety and correctness of your programs. However, understanding the exact semantics of these objects is critical to ensuring the safety of your code.


The most important principle of the EventLoopPromise and EventLoopFuture is this: all callbacks registered on an EventLoopFuture will execute on the thread corresponding to the event loop that created the Future, regardless of what thread succeeds or fails the corresponding EventLoopPromise.

This means that if your code created the EventLoopPromise, you can be extremely confident of what thread the callback will execute on: after all, you held the event loop in hand when you created the EventLoopPromise. However, if your code is handed an EventLoopFuture or EventLoopPromise, and you want to register callbacks on those objects, you cannot be confident that those callbacks will execute on the same EventLoop that your code does.

This presents a problem: how do you ensure thread-safety when registering callbacks on an arbitrary EventLoopFuture? The short answer is that when you are holding an EventLoopFuture, you can always obtain a new EventLoopFuture whose callbacks will execute on your event loop. You do this by calling EventLoopFuture.hop(to:). This function returns a new EventLoopFuture whose callbacks are guaranteed to fire on the provided event loop. As an added bonus, hop(to:) will check whether the provided EventLoopFuture was already scheduled to dispatch on the event loop in question, and avoid doing any work if that was the case.

This means that for any EventLoopFuture that your code did not create itself (via EventLoopPromise.futureResult), use of hop(to:) is strongly encouraged to help guarantee thread-safety. It should only be elided when thread-safety is provably not needed.
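The pattern might be sketched as follows, assuming a hypothetical `remoteFuture` produced on another Channel's event loop and a hypothetical `handle(_:)` function that touches state owned by `myChannel`'s loop:

```swift
// `remoteFuture` was created on some other channel's event loop.
// Hop it to our own loop so callbacks run where our state lives.
let localFuture = remoteFuture.hop(to: myChannel.eventLoop)
localFuture.whenSuccess { value in
    // Safe: this closure runs on myChannel.eventLoop, so it may
    // touch state owned by that loop without extra locking.
    handle(value)
}
```

If `remoteFuture` already belonged to `myChannel.eventLoop`, hop(to:) returns it unchanged, so the call is cheap enough to use defensively.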

The “thread affinity” of EventLoopFutures is critical to writing safe, performant concurrent code without boilerplate. It allows you to avoid needing to write or use locks in your own code, instead using the natural synchronization of the EventLoop to manage your thread-safety. In general, if any of your ChannelHandlers or EventLoopFuture callbacks need to invoke a lock (either directly or in the form of DispatchQueue) this should be considered a code smell worth investigating: the EventLoop-based synchronization guarantees of EventLoopFuture should be sufficient to guarantee thread-safety.


Instance Properties

  • let eventLoop: EventLoop

    The EventLoop which is tied to the EventLoopFuture and is used to notify all registered callbacks.
