Simon Brand (a.k.a. TartanLlama) recently published an article called “Functional exceptionless error-handling with optional and expected” where he excellently explains how sum types (e.g. std::optional and std::variant) can be used in lieu of exceptions in order to implement error handling.

The article focuses on the usage of std::optional and std::expected, and on two monadic operations, map and and_then, which reduce boilerplate and increase the readability of the code. While Simon briefly mentions the benefits of ADTs (algebraic data types) over exceptions, I think that the topic deserves more attention, as some readers missed the point.

Here’s an example comment from the article’s /r/cpp thread:

In my previous article (“abstraction design and implementation: repeat”) I showed how to implement a simple repeat(n, f) abstraction that invokes a FunctionObject f n times in a row.

A large part of the article focused on adding a noexcept specifier to repeat that correctly propagated noexcept-correctness depending on the actions performed on the passed arguments. Adding a noexcept specifier resulted in a less elegant, longer, and less accessible implementation.

Is it worth the hassle? Jason Rice posted a comment on the previous article asking:

You know that you’re obsessed with library design and abstractions when a simple for loop like this one…

for(int i = 0; i < 10; ++i)
{
    foo();
}

greatly bothers you.

What’s wrong with it?

A very rare occurrence on my blog, but this post is not (directly) about C++. Having spoken at several conferences and meetups, I've always found creating slides to be an unsatisfactory task for a plethora of reasons. I want my slides to:

1. Be quick and simple to create. I don’t want to spend time manually aligning text and shapes, and I don’t want to manually go through 10 slides if I decide to update a code snippet.

2. Seamlessly support code snippets. I want nice and readable inline code keywords and code blocks with syntax highlighting.

In part 1 and part 2 we have explored some techniques that allow us to build simple future-like computation chains without type-erasure or allocations. While our examples demonstrated the idea of nesting computations by moving *this into a parent node (resulting in “huge types”), they did not implement any operation that could be executed in parallel.

Our goal is to have a new when_all node type by the end of this article, which takes an arbitrary number of Callable objects, invokes them in parallel, aggregates the results, and invokes an eventual continuation afterwards. We’ll do this in a non-blocking manner: the thread that completes the last Callable will continue executing the rest of the computation chain without blocking or context-switching.

In part 1 we implemented a barebones future-like class that supported .then continuations without needing allocations or type-erasure. The idea behind it was to encode the entire computation chain into a single object with a huge type:

// pseudocode

auto f = initiate(A).then(B).then(C).then(D);
// ...would become something like:
/*
D<C<B<A>>>
*/

We previously stored the “parent” node by moving *this as part of a generalized lambda capture, and stored the Callable itself via EBO (empty base optimization). As we will explicitly need access to the “parent” node’s type to support non-blocking schedulers and implement when_all in the future, it’s time to significantly improve our design.

One of the best features of futures (or promises, depending on your language background) is the ability to compose them through asynchronous continuations. Example:

// pseudocode
auto f = when_all([]{ return http_get("cat.com/nicecat.png"); },
                  []{ return http_get("dog.com/doggo.png"); })
         .then([](auto p0, auto p1)
         {
             send_email("mail@grandma.com", combine(p0, p1));
         });

f.execute(some_scheduler);

In the above pseudocode snippet, the http_get requests can happen in parallel (depending on how the scheduler works - you can imagine it’s a thread pool). As soon as both requests are done, the final lambda is automatically invoked with both payloads.

I find this really cool because it allows developers to define directed acyclic graphs of asynchronous/parallel computations with a very clear and readable syntax. This is why the currently crippled std::future is evolving into something more powerful that supports .then and when_all: these facilities are part of N4538: “Extensions for concurrency” - Anthony Williams introduced them excellently in his ACCU 2016 “Concurrency, Parallelism and Coroutines” talk.

This is the second and final part of my C++Now 2017 trip report. You can find the first part here: “c++now 2017 trip report - part 1/2”.