Welcome to my blog.

A very rare occurrence on my blog: this post is not (directly) about C++. Having spoken at several conferences and meetups, I've always found creating slides unsatisfactory for a plethora of reasons. I want my slides to:

1. Be quick and simple to create. I don't want to spend time manually aligning text and shapes, and I don't want to manually go through 10 slides just because I decided to update a code snippet.

2. Seamlessly support code snippets. I want nice and readable inline code keywords and code blocks with syntax highlighting.

In part 1 and part 2 we have explored some techniques that allow us to build simple future-like computation chains without type-erasure or allocations. While our examples demonstrated the idea of nesting computations by moving *this into a parent node (resulting in "huge types"), they did not implement any operation that could be executed in parallel.

Our goal is to have a new when_all node type by the end of this article, which takes an arbitrary number of Callable objects, invokes them in parallel, aggregates the results, and invokes an eventual continuation afterwards. We'll do this in a non-blocking manner: the thread that completes the last Callable will continue executing the rest of the computation chain without blocking or context-switching.
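Before diving into the design, here is a hedged sketch (not the article's final implementation) of the "last thread continues" idea behind a non-blocking when_all: each worker decrements a shared atomic counter when it finishes, and whichever thread brings the counter to zero runs the continuation itself, so no thread ever blocks waiting for the others. All names here are illustrative.

```cpp
#include <atomic>
#include <thread>

// Illustrative sketch of non-blocking completion: the thread that
// finishes last executes the continuation itself.
int run_when_all_sketch()
{
    std::atomic<int> remaining{2};
    int a = 0, b = 0, sum = 0;

    auto complete = [&]
    {
        // `fetch_sub` returns the previous value: the thread that takes
        // the counter from 1 to 0 finished last, so it continues the
        // rest of the chain instead of blocking or context-switching.
        if(remaining.fetch_sub(1) == 1)
        {
            sum = a + b; // the ".then" continuation
        }
    };

    std::thread t0{[&]{ a = 21; complete(); }};
    std::thread t1{[&]{ b = 21; complete(); }};
    t0.join();
    t1.join();
    return sum;
}
```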

In part 1 we implemented a barebones future-like class that supported .then continuations without needing allocations or type-erasure. The idea behind it was to encode the entire computation chain into a single object with a huge type:

// pseudocode

auto f = initiate(A).then(B).then(C).then(D);
// ...would become something like:
/*
D<C<B<A>>>
*/
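To make the "huge type" idea concrete, here is a minimal sketch of how chaining nests types: every `.then` wraps the current node inside a new one, so the final object's type encodes the whole chain. The names below are illustrative, not the article's real code.

```cpp
#include <type_traits>
#include <utility>

// Each `then` produces a new node that owns its parent by value, so the
// resulting type nests: node<node<root, B>, C>, and so on.
template <typename Parent, typename F>
struct node
{
    Parent _parent;
    F _f;

    // Execute the parent chain first, then apply this node's callable.
    auto execute() { return _f(_parent.execute()); }

    template <typename G>
    auto then(G&& g) &&
    {
        return node<node, std::decay_t<G>>{std::move(*this),
                                           std::forward<G>(g)};
    }
};

// The chain's starting point.
struct root
{
    int execute() { return 0; }

    template <typename G>
    auto then(G&& g) &&
    {
        return node<root, std::decay_t<G>>{std::move(*this),
                                           std::forward<G>(g)};
    }
};
```

Note that `then` is rvalue-qualified (`&&`): the parent is consumed by moving it into the freshly-created child node.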

We previously stored the "parent" node by moving *this as part of a generalized lambda capture, and stored the Callable itself via EBO (empty base optimization). As we will explicitly need access to the "parent" node's type to support non-blocking schedulers and implement when_all in the future, it's time to significantly improve our design.
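As a quick refresher on the EBO part, here is a hedged sketch comparing the two storage strategies: inheriting from the callable versus holding it as a data member. Captureless lambdas are empty types, so the base-class version adds no storage to the node (illustrative names only).

```cpp
#include <utility>

// Callable stored as a base class: EBO lets an empty callable occupy
// zero additional bytes in the node.
template <typename Parent, typename F>
struct node_ebo : F
{
    Parent _parent;

    node_ebo(Parent p, F f) : F(std::move(f)), _parent(std::move(p)) {}

    // Retrieve the callable through the base subobject and invoke it.
    auto call() { return static_cast<F&>(*this)(); }
};

// Callable stored as a member: even an empty callable takes at least one
// byte, usually growing the node due to padding.
template <typename Parent, typename F>
struct node_member
{
    Parent _parent;
    F _f;
};
```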

One of the best features of futures (or promises, depending on your language background) is the ability to compose them through asynchronous continuations. Example:

// pseudocode
auto f = when_all([]{ return http_get("cat.com/nicecat.png"); },
                  []{ return http_get("dog.com/doggo.png"); })
             .then([](auto p0, auto p1)
                 {
                     send_email("mail@grandma.com", combine(p0, p1));
                 });

f.execute(some_scheduler);

In the above pseudocode snippet, the http_get requests can happen in parallel (depending on how the scheduler works - you can imagine it's a thread pool). As soon as both requests are done, the final lambda is automatically invoked with both payloads.

I find this really cool because it allows developers to define directed acyclic graphs of asynchronous/parallel computations with a very clear and readable syntax. This is why the currently crippled std::future is evolving into something more powerful that supports .then and when_all: these facilities are part of N4538: "Extensions for concurrency" - Anthony Williams introduced them excellently in his ACCU 2016 "Concurrency, Parallelism and Coroutines" talk.

This is the second and final part of my C++Now 2017 trip report. You can find the first part here: "c++now 2017 trip report - part 1/2".

### thursday, may 18

I'm back home in London after C++Now 2017. Besides experimenting on personal projects and playing Hollow Knight, I think that putting together the notes I've scribbled down during the conference into a coherent article is a good use of my time. I hope you'll find some of my thoughts interesting!

### background

In case you've never heard of it before, C++Now is a gathering of C++ experts in a beautiful (and expensive!) location in Aspen, CO. In contrast to other "more mainstream" conferences like CppCon and Meeting C++, most of the content is intended for advanced and expert code wizards.

One thing I loved about this year is the theme of the keynotes: other languages. The three talks were about Rust, Haskell and D. I find it very bold to have presentations on different languages at a C++ conference, especially when they're keynotes! This shows a level of open-mindedness, courage, and desire to make C++ and its users richer by taking inspiration from others - I feel glad to be part of this community.

This post continues (ends?) the story of my attempts to create a nice "pattern matching" syntax for variant visitation.

Back in October 2016, in part 1 of the series, I discussed a simple way of implementing a visit_in_place function that would allow variants to be visited providing a set of lambda expressions that would be overloaded on the spot.

std::variant<success, failure> payload{/* ... */};

visit_in_place(payload, [](const success& x){ /* ... */ },
                        [](const failure& x){ /* ... */ });