
Bartosz Milewski's Programming Cafe

Source: https://bartoszmilewski.com/
Category Theory, Haskell, Concurrency, C++

September 6, 2020

Abstract: The recent breakthroughs in deciphering the language and the literature left behind by the now extinct Twinklean civilization provide valuable insights into their history, science, and philosophy.

The oldest documents discovered on the third planet of the star Lambda Combinatoris (also known as the Twinkle star) talk about the prehistory of the Twinklean thought. The ancient Book of Application postulated that the Essence of Being is decomposition, expressed symbolically as

   A = B C

meaning that A can be decomposed into B and C. The breakthrough came with the realization that, if C itself can be decomposed

   C = F G

then A could be further decomposed into

   A = B (F G)

Similarly, if B can be decomposed

   B = D E

then A could be written as

   A = (D E) C

In the latter case (but not the former), it became customary to drop the parentheses and simply write it as

   A = D E C

Following these discoveries, the Twinklean civilization went through a period called The Great Decomposition that lasted almost three thousand years, during which essentially anything that could be decomposed was successfully decomposed.

At the end of The Great Decomposition, a new school of thought emerged, claiming that, if things can be decomposed into parts, they can be also recomposed from these parts.

Initially there was strong resistance to this idea. The argument was put forward that decomposition followed by recomposition doesn’t change anything. This was settled by the introduction of a special object called The Eye, denoted by I, defined by the unique property of leaving things alone

   I A = A

After the introduction of I, a long period of general stagnation accompanied by lack of change followed.

We also don’t have many records from the next period, as it was marked by attempts at forgetting things and promoting ignorance. It started by the introduction of K, which ignores one of its inputs

   K A B = A

Notice that this definition is a shorthand for the parenthesized version

   (K A) B = A

The argument for introducing K was that ignorance is an important part of understanding. By rejecting B we are saying that A is important. We are abstracting away the inessential part B.

For instance—the argument went—if we decompose C

   C = A B

and D happens to have a similar decomposition

   D = A E

then K will abstract the A part from both C and D. From the perspective of K, there is no difference between C and D.

The only positive outcome of the Era of Ignorance was the development of abstract mathematics. Twinklean thinkers argued that, if you disregard the particularities of the fruit in question, there is no difference between having three apples and three oranges. Number three was thus born, followed by many others (four and seven, to name just a few).

The final Industrial phase of the Twinklean civilization that ultimately led to their demise was marked by the introduction of S. The Twinklean industry was based on the principle of mass production; and mass production starts with duplication and reuse. Suppose you have a reusable part C. S allows you to duplicate C and combine it with both A and B.

   S A B C = (A C) (B C)

If you think of A and B as abstractions—that is, the results of ignoring some parts of the whole—S lets you substitute C in place of those forgotten parts.

Or, conversely, it tells you that the object

   E = S A B C

can be decomposed into two parts that have something in common. This common part is C.

Unfortunately, during the Industrial period, a lot of Twinkleans lost their identity. They discovered that

   I = S K K

Indeed

   I A = S K K A = K A (K A) = A

But ultimately, what precipitated their end was the existential crisis. They lost their will to live because they couldn’t figure out Y.

Postscript

After submitting this paper to the journal of Compositionality, we have been informed by the reviewer that a similar theory of SKI combinators was independently developed on Earth by a Russian logician, Moses Schönfinkel. According to this reviewer, the answer to the meaning of life is the Y combinator, which introduces recursion and can be expressed as

   Y = S(K(SII))(S(S(KS)K)(K(SII)))

We were unable to verify this assertion, as it led us into a rabbit hole.

August 11, 2020

The series of posts about so-called benign data races stirred a lot of controversy and led to numerous discussions at Corensic, the startup I was working at. Two bastions formed: one claiming that no data race was benign, and the other claiming that data races were essential for performance. Then it turned out that we couldn’t even agree on the definition of a data race. In particular, the C++11 definition seemed to deviate from the established notions.

What Is a Data Race Anyway?

First of all, let’s make sure we know what we’re talking about. In current usage a data race is synonymous with a low-level data race, as opposed to a high-level race that involves either multiple memory locations, or multiple accesses per thread. Everybody agrees on the meaning of data conflict, which is multiple threads accessing the same memory location, at least one of them through a write. But a data conflict is not necessarily a data race. In order for it to become a race, one more condition must be true: the access has to be “simultaneous.”

Unfortunately, simultaneity is not a well defined term in concurrent systems. Leslie Lamport was the first to observe that a distributed system follows the rules of Special Relativity, with no independent notion of simultaneity, rather than those of Galilean Mechanics, with its absolute time. So, really, what defines a data race is up to your notion of simultaneity.

Maybe it’s easier to define what isn’t, rather than what is, simultaneous? Indeed, if we can tell which event happened before another event, we can be sure that they weren’t simultaneous. Hence the use of the famous “happened before” relationship in defining data races. In Special Relativity this kind of relationship is established by the exchange of messages, which can travel no faster than the speed of light. The act of sending a message always happens before the act of receiving the same message. In concurrent programming this kind of connection is made using synchronizing actions. Hence an alternative definition of a data race: A memory conflict without intervening synchronization.

The simplest examples of synchronizing actions are the taking and the releasing of a lock. Imagine two threads executing this code:

  mutex.lock();
  x = x + 1;
  mutex.unlock();

In any actual execution, accesses to the shared variable x from the two threads will be separated by a synchronization. The happens-before (HB) arrow will always go from one thread releasing the lock to the other thread acquiring it. For instance in:

  #  Thread 1          Thread 2
  1  mutex.lock();
  2  x = x + 1;
  3  mutex.unlock();
  4                    mutex.lock();
  5                    x = x + 1;
  6                    mutex.unlock();

the HB arrow goes from 3 to 4, clearly separating the conflicting accesses in 2 and 5.

Notice the careful choice of words: “actual execution.” The following execution that contains a race can never happen, provided the mutex indeed guarantees mutual exclusion:

  #  Thread 1          Thread 2
  1  mutex.lock();
  2                    mutex.lock();
  3  x = x + 1;        x = x + 1;
  4  mutex.unlock();
  5                    mutex.unlock();

It turns out that the selection of possible executions plays an important role in the definition of a data race. In every memory model I know of, only sequentially consistent executions are tried in testing for data races. Notice that non-sequentially-consistent executions may actually happen, but they do not enter the data-race test.

In fact, most languages try to provide the so called DRF (Data Race Free) guarantee, which states that all executions of data-race-free programs are sequentially consistent. Don’t be alarmed by the apparent circularity of the argument: you start with sequentially consistent executions to prove data-race freedom and, if you don’t find any data races, you conclude that all executions are sequentially consistent. But if you do find a data race this way, then you know that non-sequentially-consistent executions are also possible.


DRF guarantee. If there are no data races for sequentially consistent executions, there are no non-sequentially consistent executions. But if there are data races for sequentially consistent executions, the non-sequentially consistent executions are possible.

As you can see, in order to define a data race you have to precisely define what you mean by “simultaneous,” or by “synchronization,” and you have to specify to which executions your definition may be applied.

The Java Memory Model

In Java, besides traditional mutexes that are accessed through “synchronized” methods, there is another synchronization device called a volatile variable. Any access to a volatile variable is considered a synchronization action. You can draw happens-before arrows not only between consecutive unlocks and locks of the same object, but also between consecutive accesses to a volatile variable. With this extension in mind, Java offers the traditional DRF guarantee. The semantics of data-race-free programs is well defined in terms of sequential consistency, thus making every Java programmer happy.

But Java didn’t stop there, it also attempted to provide at least some modicum of semantics for programs with data races. The idea is noble–as long as programmers are human, they will write buggy programs. It’s easy to proclaim that any program with data races exhibits undefined behavior, but if this undefined behavior results in serious security loopholes, people get really nervous. So what the Java memory model guarantees on top of DRF is that the undefined behavior resulting from data races cannot lead to out-of-thin-air values appearing in your program (for instance, security credentials for an intruder).

It is now widely recognized that this attempt to define the semantics of data races has failed, and the Java memory model is broken (I’m citing Hans Boehm here).

The C++ Memory Model

Why is it so important to have a good definition of a data race? Is it because of the DRF guarantee? That seems to be the motivation behind the Java memory model. The absence of data races defines a subset of programs that are sequentially consistent and therefore have well-defined semantics. But these two properties, being sequentially consistent and having well-defined semantics, are not necessarily the same. After all, Java tried (albeit unsuccessfully) to define semantics for non-sequentially-consistent programs.

So C++ chose a slightly different approach. The C++ memory model is based on partitioning all programs into three categories:

  1. Sequentially consistent,
  2. Non-sequentially consistent, but with defined semantics, and
  3. Incorrect programs with undefined semantics

The first category is very similar to race-free Java programs. The place of Java volatile is taken by C++11 default atomic. The word “default” is crucial here, as we’ll see in a moment. Just like in Java, the DRF guarantee holds for those programs.

It’s the second category that’s causing all the controversy. It was introduced not so much for security as for performance reasons. Sequential consistency is expensive on most multiprocessors. This is why many C++ programmers currently resort to “benign” data races, even at the risk of undefined behavior. Hans Boehm’s paper, How to miscompile programs with “benign” data races, delivered a death blow to such approaches. He showed, example by example, how legitimate compiler optimizations may wreak havoc on programs with “benign” data races.

Fortunately, C++11 lets you relax sequential consistency in a controlled way, which combines high performance with the safety of well-defined (if complex) semantics. So the second category of C++ programs uses atomic variables with relaxed memory ordering semantics. Here’s some typical syntax taken from my previous blog post:

std::atomic<int> owner(0);
...
owner.load(std::memory_order_relaxed);

And here’s the controversial part: According to the C++ memory model, relaxed memory operations, like the above load, don’t contribute to data races, even though they are not considered synchronization actions. Remember one of the versions of the definition of a data race: Conflicting actions without intervening synchronization? That definition doesn’t work any more.

The C++ Standard decided that only conflicts for which there is no defined semantics are called data races.

Notice that some forms of relaxed atomics may introduce synchronization. For instance, a write access with memory_order_release “happens before” another access with memory_order_acquire, if the latter follows the former in a particular execution (but not if they are reversed!).
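To make this concrete, here is a minimal sketch (my own illustration, not code from the original post; the names data, ready, producer, and consumer are made up) of how a release store paired with an acquire load creates the happens-before edge that makes the access to a plain variable race-free:

#include <atomic>

int data = 0;                        // plain, non-atomic variable
std::atomic<bool> ready(false);

void producer() {
    data = 42;                                         // plain write
    ready.store(true, std::memory_order_release);      // release store
}

void consumer() {
    while (!ready.load(std::memory_order_acquire))     // acquire load
        ;                                              // spin until the flag flips
    // The release store happens before the acquire load that observed true,
    // so reading data here is not a data race.
    int x = data;
    (void)x;
}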

Conclusion

What does it all mean for the C++11 programmer? It means that there no longer is an excuse for data races. If you need benign data races for performance, rewrite your code using weak atomics. Weak atomics give you the same kind of performance as benign data races but they have well defined semantics. Traditional “benign” races are likely to be broken by optimizing compilers or on tricky architectures. But if you use weak atomics, the compiler will apply whatever means necessary to enforce the correct semantics, and your program will always execute correctly. It will even naturally align atomic variables to avoid torn reads and writes.
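As an illustration (my sketch, not code from the original series; the counter is hypothetical), a statistics counter that used to rely on a “benign” racy increment can be rewritten with a relaxed atomic at essentially no cost:

#include <atomic>

std::atomic<long> hits(0);   // statistics counter shared between threads

void recordHit() {
    // No synchronization is implied, but this is not a data race:
    // the increment has well-defined semantics and is never torn.
    hits.fetch_add(1, std::memory_order_relaxed);
}

long readHits() {
    return hits.load(std::memory_order_relaxed);   // may be slightly stale
}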

What’s more, since C++11 has well-defined memory semantics, compiler writers are no longer forced to be conservative with their optimizations. If the programmer doesn’t specifically mark shared variables as atomic, the compiler is free to optimize code as if it were single-threaded. So all those clever tricks with benign data races are no longer guaranteed to work, even on relatively simple architectures like the x86. For instance, the compiler is free to use your lossy counter or a binary flag for its own temporary storage, as long as it restores it later. If other threads access those variables through racy code, they might see arbitrary values as part of the “undefined behavior.” You have been warned!

August 5, 2020

This post is based on the talk I gave in Moscow, Russia, in February 2015 to an audience of C++ programmers.

Let’s agree on some preliminaries.

C++ is a low level programming language. It’s very close to the machine. C++ is engineering at its grittiest.

Category theory is the most abstract branch of mathematics. It’s very very high in the layers of abstraction. Category theory is mathematics at its highest.

So why have I decided to speak about category theory to C++ programmers? There are many reasons.

The main reason is that category theory captures the essence of programming. We can program at many levels, and if I ask somebody “What is programming?” most C++ programmers will probably say that it’s about telling the computer what to do. How to move bytes from memory to the processor, how to manipulate them, and so on.

But there is another view of programming and it’s related to the human side of programming. We are humans writing programs. We decide what to tell the computer to do.

We are solving problems. We are finding solutions to problems and translating them in the language that is understandable to the computer.

But what is problem solving? How do we, humans, approach problem solving? It was only a recent development in our evolution that we have acquired these fantastic brains of ours. For hundreds of millions of years not much was happening under the hood, and suddenly we got this brain, and we used this brain to help us chase animals, shoot arrows, find mates, organize hunting parties, and so on. It’s been going on for a few hundred thousand years. And suddenly the same brain is supposed to solve problems in software engineering.

So how do we approach problem solving? There is one general approach that we humans have developed for problem solving. We had to develop it because of the limitations of our brain, not because of the limitations of computers or our tools. Our brains have this relatively small cache memory, so when we’re dealing with a huge problem, we have to split it into smaller parts. We have to decompose bigger problems into smaller problems. And this is very human. This is what we do. We decompose, and then we attack each problem separately, find the solution; and once we have solutions to all the smaller problems, we recompose them.

So the essence of programming is composition.

If we want to be good programmers, we have to understand composition. And who knows more about composing than musicians? They are the original composers!

So let me show you an example. This is a piece by Johann Sebastian Bach. I’ll show you two versions of this composition. One is low level, and one is high level.

The low level is just sampled sound. These are bytes that approximate the waveform of the sound.

[Image: the sampled waveform of the piece]

And this is the same piece in typical music notation.

[Image: the score of Bach's Gavotte en Rondeau]

Which one is easier to manipulate? Which one is easier to reason about? Obviously, the high level one!

Notice that, in the high level language, we use a lot of different abstractions that can be processed separately. We split the problem into smaller parts. We know that there are things called notes, and they can be reproduced, in this particular case, using violins. There are also some letters like E, A, B7: these are chords. They describe harmony. There is melody, there is harmony, there is the bass line.

Musicians, when they compose music, use higher level abstractions. These higher level abstractions are easier to manipulate, reason about, and modify when necessary.

And this is probably what Bach was hearing in his head.

And he chose to represent it using the high level language of musical notation.

Now, if you’re a rap musician, you work with samples, and you learn how to manipulate the low level description of music. It’s a very different process. It’s much closer to low-level C++ programming. We often do copy and paste, just like rap musicians. There’s nothing wrong with that, but sometimes we would like to be more like Bach.

So how do we approach this problem as programmers and not as musicians? We cannot use musical notation to lift ourselves to higher levels of abstraction. We have to use mathematics. And there is one particular branch of mathematics, category theory, that is exactly about composition. If programming is about composition, then this is what we should be looking at.

Category theory, in general, is not easy to learn, but the basic concepts of category theory are embarrassingly simple. So I will talk about some of those embarrassingly simple concepts from category theory, and then explain how to use them in programming in some weird ways that would probably not have occurred to you when you’re programming.

Categories

So what is this concept of a category? Two things: objects and arrows between objects.

In category theory you don’t ask what these objects are. You call them objects, you give them names like A, B, C, D, etc., but you don’t ask what they are or what’s inside them. And then you have arrows that connect objects. Every arrow starts at some object and ends at some object. You can have many arrows going between two objects, or none whatsoever. Again, you don’t ask what these arrows are. You just give them names like f, g, h, etc.

And that’s it—that’s how you visualize a category: a bunch of objects and a bunch of arrows between them.

There are some operations on arrows and some laws that they have to obey, and they are also very simple.

Since composition is the essence of category theory (and of programming), we have to define composition in a category.

Composition

Whenever you have an arrow f going from object A to object B, here represented by two little piggies, and another arrow g going from object B to object C, there is an arrow called their composition, g ∘ f, that goes directly from object A to object C. We pronounce this “g after f.”

Composition is part of the definition of a category. Again, since we don’t know what these arrows are, we don’t ask what composition is. We just know that for any two composable arrows — such that the end of one coincides with the start of the other — there exists another arrow that’s their composition.

And this is exactly what we do when we solve problems. We find an arrow from A to B — that’s our subproblem. We find an arrow from B to C, that’s another subproblem. And then we compose them to get an arrow from A to C, and that’s a solution to our bigger problem. We can repeat this process, building larger and larger solutions by solving smaller problems and composing the solutions.

Notice that when we have three arrows to compose, there are two ways of doing that, depending on which pair we compose first. We don’t want composition to have history. We want to be able to say: This arrow is a composition of these three arrows: h after g after f, without having to use parentheses for grouping. That’s called associativity:

 (h ∘ g) ∘ f = h ∘ (g ∘ f)

Composition in a category must be associative.

And finally, every object has to have an identity arrow. It’s an arrow that goes from the object back to itself. You can have many arrows that loop back to the same object. But there is always one such loop for every object that is the identity with respect to composition.

Identity

It has the property that if you compose it with any other arrow that’s composable with it — meaning it either starts or ends at this object — you get that arrow back. It acts like multiplication by one. It’s an identity — it doesn’t change anything.

Monoid

I can immediately give you an example of a very simple category that I’m sure you know very well and have used all your adult life. It’s called a monoid. It’s another embarrassingly simple concept. It’s a category that has only one object. It may have lots of arrows, but all these arrows have to start at this object and end at this object, so they are all composable. You can compose any two arrows in this category to get another arrow. And there is one arrow that’s the identity. When composed with any other arrow it will give you back the same arrow.


There are some very simple examples of monoids. We have natural numbers with addition and zero. An arrow corresponds to adding a number. For instance, you will have an arrow that corresponds to adding 5. You compose it with an arrow that corresponds to adding 3, and you get an arrow that corresponds to adding 8. Identity arrow corresponds to adding zero.

Multiplication forms a monoid too. The identity arrow corresponds to multiplying by 1. The composition rule for these arrows is just a multiplication table.

Strings form another interesting monoid. An arrow corresponds to appending a particular string. The unit arrow appends the empty string. What’s interesting about this monoid is that it has no additional structure. In particular, it doesn’t have an inverse for any of its arrows. There are no “negative” strings. There is no anti-“world” string that, when appended to “Hello world”, would result in the string “Hello ”.

In each of these monoids, you can think of the one object as being a set: a set of all numbers, or a set of all strings. But that’s just an aid to imagination. All information about the monoid is in the composition rules — the multiplication table for arrows.

In programming we encounter monoids all over the place. We just normally don’t call them that. But every time you have something like logging, gathering data, or auditing, you are using a monoid structure. You’re basically adding some information to a log, appending, or concatenating, so that’s a monoidal operation. And there is an identity log entry that you may use when you have nothing interesting to add.

Types and Functions

So monoid is one example, but there is something closer to our hearts as programmers, and that’s the category of types and functions. And the funny thing is that this category of types and functions is actually almost enough to do programming, and in functional languages that’s what people do. In C++ there is a little bit more noise, so it’s harder to abstract this part of programming, but we do have types — it’s a strongly typed language, modulo implicit conversions. And we do have functions. So let’s see why this is a category and how it’s used in programming.

This category is actually called Set — a category of sets — because, to the lowest approximation, types are just sets of values. The type bool is a set of two values, true and false. The type int is a set of integers from something like negative two billion to two billion (on a 32-bit machine). All types are sets: whether it’s numbers, enums, structs, or objects of a class. There could be an infinite set of possible values, but it’s okay — a set may be infinite. And functions are just mappings between these sets. I’m talking about the simplest functions, ones that take just one value of some type and return another value of another type. So these are arrows from one type to another.

Can we compose these functions? Of course we can. We do it all the time. We call one function, it returns some value, and with this value we call another function. That’s function composition. In fact this is the basis of procedural decomposition, the first serious approach to formalizing problem solving in programming.

Here’s a piece of C++ code that composes two functions f and g.

C g_after_f(A x) {
    B y = f(x);
    return g(y);
}

In modern C++ you can make this code generic — a higher order function that accepts two functions and returns a third function that’s the composition of the two.
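Here is one possible sketch of such a generic composition (my code, not a slide from the talk):

// Takes two callables f and g and returns a new callable: "g after f".
auto const compose = [](auto f, auto g) {
    return [f, g](auto x) {
        return g(f(x));
    };
};

// Usage: given f : A -> B and g : B -> C,
//   auto g_after_f = compose(f, g);   // g_after_f : A -> C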

Can you compose any two functions? Yes — if they are composable. The output type of one must match the input type of another. That’s the essence of strong typing in C++ (modulo implicit conversions).

Is there an identity function? Well, in C++ we don’t have an identity function in the library, which is too bad. That’s because there’s a complex issue of how you pass things: is it by value, by reference, by const reference, by move, and so on. But in functional languages there is just one function called identity. It takes an argument and returns it back. But even in C++, if you limit yourself to functions that take arguments by value and return values, then it’s very easy to define a generic identity function.
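For functions that take and return values, a minimal sketch of such a generic identity could be (again my code, not the talk's):

template<class T>
T identity(T x) {
    return x;   // gives back exactly what it was given
}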

Notice that the functions I’m talking about are actually a special kind of function, called pure functions. They can’t have any side effects. Mathematically, a function is just a mapping from one set to another set, so it can’t have side effects. Also, a pure function must return the same value when called with the same arguments. This is called referential transparency.

A pure function doesn’t have any memory or state. It doesn’t have static variables, and doesn’t use globals. A pure function is an ideal we strive towards in programming, especially when writing reusable components and libraries. We don’t like having global variables, and we don’t like state hidden in static variables.

Moreover, if a function is pure, you can memoize it. If a function takes a long time to evaluate, maybe you’ll want to cache the value, so it can be retrieved quickly next time you call it with the same arguments.
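Here is a rough sketch of what memoizing a pure function might look like (illustrative only; a real implementation would also have to consider thread safety and more general argument types):

#include <functional>
#include <map>
#include <memory>

template<class A, class B>
std::function<B(A)> memoize(std::function<B(A)> f) {
    auto cache = std::make_shared<std::map<A, B>>();
    return [f, cache](A x) {
        auto it = cache->find(x);
        if (it != cache->end())
            return it->second;        // reuse the previously computed result
        B y = f(x);
        cache->emplace(x, y);
        return y;
    };
}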

Another property of pure functions is that all dependencies in your code only come through composition. If the result of one function is used as an argument to another then obviously you can’t run them in parallel or reverse the order of execution. You have to call them in that particular order. You have to sequence their execution. The dependencies between functions are fully explicit. This is not true for functions that have side effects. They may look like independent functions, but they have to be executed in sequence, or their side effects will be different.

We know that compiler optimizers will try to rearrange our code, but it’s very hard to do it in C++ because of hidden dependencies. If you have two functions that are not composed, they just calculate different things, and you try to call them in a different order, you might get a completely different result. It’s because of the order of side effects, which are invisible to the compiler. You would have to go deep into the implementation of the functions; you would have to analyse everything they are doing, and the functions they are calling, and so on, in order to find out what these side effects are; and only then you could decide: Oh, I can swap these two functions.

In functional programming, where you only deal with pure functions, you can swap any two functions that are not explicitly composed, and composition is immediately visible.

At this point I would expect half of the audience to leave and say: “You can’t program with pure functions. Programming is all about side effects.” And it’s true. So in order to keep you here I will have to explain how to deal with side effects. But it’s important that you start with something that is easy to understand, something you can reason about, like pure functions, and then build side effects on top of these things, so you can build up abstractions on top of other abstractions.

You start with pure functions and then you talk about side effects, not the other way around.

Auditing

Instead of explaining the general theory of side effects in category theory, I’ll give you an example from programming. Let’s solve this simple problem that, in all likelihood, most C++ programmers would solve using side effects. It’s about auditing.

You start with a sequence of functions that you want to compose. For instance, you have a function getKey. You give it a password and it returns a key. And you have another function, withdraw. You give it a key and it gives you back money. You want to compose these two functions, so you start with a password and you get money. Excellent!

But now you have a new requirement: you want to have an audit trail. Every time one of these functions is called, you want to log something in the audit trail, so that you’ll know what things have happened and in what order. That’s a side effect, right?

How do we solve this problem? Well, how about creating a global variable to store the audit trail? That’s the simplest solution that comes to mind. And it’s exactly the same method that’s used for standard output in C++, with the global object std::cout. The functions that access a global variable are obviously not pure functions; we are talking about side effects.

string audit;

int logIn(string passwd){
  audit += passwd;
  return 42;
}

double withdraw(int key){
   audit += "withdrawing ";
   return 100.0;
}

So we have a string, audit, it’s a global variable, and in each of these functions we access this global variable and append something to it. For simplicity, I’m just returning some fake numbers, not to complicate things.

This is not a good solution, for many reasons. It doesn’t scale very well. It’s difficult to maintain. If you want to change the name of the variable, you’d have to go through all this code and modify it. And if, at some point, you decide you want to log more information, not just a string but maybe a timestamp as well, then you have to go through all this code again and modify everything. And I’m not even mentioning concurrency. So this is not the best solution.

But there is another solution that’s really pure. It’s based on the idea that whatever you’re accessing in a function, you should pass explicitly to it, and then return it, with modifications, from the function. That’s pure. So here’s the next solution.

pair<int, string> 
logIn(string passwd, string audit){
  return make_pair(42, audit + passwd);
}

pair<double, string> 
withdraw(int key, string audit){
  return make_pair(100.0
                 , audit + "withdrawing ");
}

You modify all the functions so that they take an additional argument, the audit string. And the return type is also changed. When we had an int before, it’s now a pair of int and string. When we had a double before, it’s now a pair of double and string. These functions now call make_pair before they return, and they put in whatever they were returning before, plus they do this concatenation of a new message at the end of the old audit string. This is a better solution because it uses pure functions. They only depend on their arguments. They don’t have any state, they don’t access any global variables. Every time you call them with the same arguments, they produce the same result.

The problem though is that they don’t memoize that well. Look at the function logIn: you normally get the same key for the same password. But if you want to memoize it when it takes two arguments, you suddenly have to memoize it for all possible histories. Even if you call it with the same password, but the audit string is different, you can’t just access the cache, you have to cache a new pair of values. Your cache explodes with all possible histories.

An even bigger problem is security. Each of these functions has access to the complete log, including the passwords.

Also, each of these functions has to care about things that maybe it shouldn’t be bothered with. It knows about how to concatenate strings. It knows the details of the implementation of the log: that the log is a string. It must know how to accumulate the log.

Now I want to show you a solution that maybe is not that obvious, maybe a little outside of what we would normally think of.

pair<int, string> 
logIn(string passwd){
  return make_pair(42, passwd);
}

pair<double, string> 
withdraw(int key){
  return make_pair(100.0
                  ,"withdrawing ");
}

We use modified functions, but they don’t take the audit string any more. They just return a pair of whatever they were returning before, plus a string. But each of them only creates a message about what it considers important. It doesn’t have access to any log and it doesn’t know how to work with an audit trail. It’s just doing its local thing. It’s only responsible for its local data. It’s not responsible for concatenation.

It still creates a pair and it has a modified return type.

We have one problem though: we don’t know how to compose these functions. We can’t pass a pair of key and string from logIn to withdraw, because withdraw expects an int. Of course we could extract the int and drop the string, but that would defeat the goal of auditing the code.

Let’s go back a little bit and see how we can abstract this thing. We have functions that used to return some types, and now they return pairs of the original type and a string. This should in principle work with any original type, not just an int or a double. In functional programming we call this “lifting.” Here we lift some type A to a new type, which is a pair of A and a string. Or we can say that we are “embellishing” the return type of a function by pairing it with a string.

I’ll create an alias for this new parameterised type and call it Writer.

template<class A>
using Writer = pair<A, string>;

My functions now return Writers: logIn returns a writer of int, and withdraw returns a writer of double. They return “embellished” types.

Writer<int> logIn(string passwd){
    return make_pair(42, passwd);
}

Writer<double> withdraw(int key){
    return make_pair(100.0, "withdrawing ");
} 

So how do we compose these embellished functions?

In this case we want to compose logIn with withdraw to create a new function called transact. This new function transact will take a password, log the user in, withdraw money, and return the money plus the audit trail. But it will return the audit trail only from those two functions.

Writer<double> transact(string passwd){
  auto p1 = logIn(passwd);
  auto p2 = withdraw(p1.first);
  return make_pair(p2.first
          , p1.second + p2.second);
}

How is it done? It’s very simple. I call the first function, logIn, with the password. It returns a pair of key and string. Then I call the second function, passing it the first component of the pair — the key. I get a new pair with the money and a string. And then I perform the composition. I take the money, which is the first component of the second pair, and I pair it with the concatenation of the two strings that were the second components of the pairs returned by logIn and withdraw.

So the accumulation of the log is done “in between” the calls (think of composition as happening between calls). I have these two functions, and I’m composing them in this funny way that involves the concatenation of strings. The accumulation of the log does not happen inside these two functions, as it happened before. It happens outside. And I can pull out this code and abstract the composition. It doesn’t really matter what functions I’m calling. I can do it for any two functions that return embellished results. I can write generic code that does it and I can call it “compose”.

template<class A, class B, class C>
function<Writer<C>(A)> compose(function<Writer<B>(A)> f
                              ,function<Writer<C>(B)> g)
{
    return [f, g](A x) {
        auto p1 = f(x);
        auto p2 = g(p1.first);
        return make_pair(p2.first
                  , p1.second + p2.second);
    };
}

What does compose do? It takes two functions. The first function takes an A and returns a Writer of B. The second function takes a B and returns a Writer of C. When I compose them, I get a function that takes an A and returns a Writer of C.

This higher order function just does the composition. It has no idea that there are functions like logIn or withdraw, or any other functions that I may come up with later. It takes two embellished functions and glues them together.

We’re lucky that in modern C++ we can work with higher order functions that take functions as arguments and return other functions.

This is how I would implement the transact function using compose.

Writer<double> transact(string passwd){
  return compose<string, int, double>
           (logIn, withdraw)(passwd);
}

The transact function is nothing but the composition of logIn and withdraw. It doesn’t contain any other logic. I’m using this special composition because I want to create an audit trail. And the audit trail is accumulated “between” the calls — it’s in the glue that glues these two functions together.

This particular implementation of compose requires explicit type annotations, which is kind of ugly. We would like the types to be inferred. And you can do it in C++14 using generalised lambdas with return type deduction. This code was contributed by Eric Niebler.

auto const compose = [](auto f, auto g) {
    return [f, g](auto x) {
        auto p1 = f(x);
        auto p2 = g(p1.first);
        return make_pair(p2.first
                    , p1.second + p2.second);
    };
};
Writer<double> transact(string passwd){
  return compose(logIn, withdraw)(passwd);
}

Back to Categories

Now that we’ve done this example, let’s go back to where we started. In category theory we have functions and we have composition of functions. Here we also have functions and composition, but it’s a funny composition. We have functions that take simple types, but they return embellished types. The types don’t match.

Let me remind you what we had before. We had a category of types and pure functions with the obvious composition.

  • Objects: types,
  • Arrows: pure functions,
  • Composition: pass the result of one function as the argument to another.

What we have created just now is a different category. Slightly different. It’s a category of embellished functions. Objects are still types: Types A, B, C, like integers, doubles, strings, etc. But an arrow from A to B is not a function from type A to type B. It’s a function from type A to the embellishment of the type B. The embellished type depends on the type B — in our case it was a pair type that combined B and a string — the Writer of B.

Now we have to say how to compose these arrows. It’s not as trivial as it was before. We have one arrow that takes A into a pair of B and string, and we have another arrow that takes B into a pair of C and string, and the composition should take an A and return a pair of C and string. And I have just defined this composition. I wrote code that does this:

auto const compose = [](auto f, auto g) {
    return [f, g](auto x) {
        auto p1 = f(x);
        auto p2 = g(p1.first);
        return make_pair(p2.first
                    , p1.second + p2.second);
    };
};

So do we have a category here? A category that’s different from the original category? Yes, we do! It has composition and it has identity.

What’s its identity? It has to be an arrow from the object to itself, from A to A. But an arrow from A to A is a function from A to a pair of A and string — to a Writer of A. Can we implement something like this? Yes, easily. We will return a pair that contains the original value and the empty string. The empty string will not contribute to our audit trail.

template<class A>
Writer<A> identity(A x) {
    return make_pair(x, "");
}

Is this composition associative? Yes, it is, because the underlying composition is associative, and the concatenation of strings is associative.

We have a new category. We have incorporated side effects by modifying the original category. We are still only using pure functions and yet we are able to accumulate an audit trail as a side effect. And we moved the side effects to the definition of composition.

It’s a funny new way of looking at programming. We usually see the functions, and the data being passed between functions, and here suddenly we see a new dimension to programming that is orthogonal to this, and we can manipulate it. We change the way we compose functions. We have this new power to change composition. We have a new way of solving problems by moving to these embellished functions and defining a new way of composing them. We can define new combinators to compose functions, and we’ll let the combinators do some work that we don’t want these functions to do. We can factor these things out and make them orthogonal.

Does this approach generalize?

One easy generalisation is to observe that the Writer structure works for any monoid. It doesn’t have to be just strings. Look at how composition and identity are defined in our new category. The only properties of the log we are using are concatenation and unit. Concatenation must be associative for the composition to be associative. And we need a unit of concatenation so that we can define identity in our category. We don’t need anything else. This construction will work with any monoid.

And that’s great because you have one more dimension in which you can modify your code without touching the rest. You can change the format of the log, and all you need to modify in your code is compose and identity. You don’t have to go through all your functions and modify the code. They will still work because all the concatenation of logs is done inside compose.
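For example, here's a sketch of the same construction with a different monoid (my adaptation, not code from the talk): the log is now a vector of entries, and only the Writer alias, compose, and identity need to know about it:

#include <string>
#include <utility>
#include <vector>

using Log = std::vector<std::string>;            // the new log monoid

template<class A>
using Writer = std::pair<A, Log>;

auto const compose = [](auto f, auto g) {
    return [f, g](auto x) {
        auto p1 = f(x);
        auto p2 = g(p1.first);
        Log log = p1.second;                     // monoidal "concatenation"
        log.insert(log.end(), p2.second.begin(), p2.second.end());
        return std::make_pair(p2.first, log);
    };
};

template<class A>
Writer<A> identity(A x) {
    return std::make_pair(x, Log{});             // the unit of the monoid
}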

Kleisli Categories

This was just a little taste of what is possible with category theory. The thing I called embellishment is called a functor in category theory. You can implement categorical functors in C++. There are all kinds of embellishments/functors that you can use here. And now I can tell you the secret: this funny composition of functions with the funny identity is really a monad in disguise. A monad is just a funny way of composing embellished functions so that they form a category. A category based on a monad is called a Kleisli category.

Are there any other interesting monads that I can use this construction with? Yes, lots! I’ll give you one example. Functions that return futures. That’s our new embellishment. Give me any type A and I will embellish it by making it into a future. This embellishment also produces a Kleisli category. The composition of functions that return futures is done through the combinator “then”. You call one function returning a future and compose it with another function returning a future by passing it to “then.” You can compose these functions into chains without ever having to block for a thread to finish. And there is an identity, which is a function that returns a trivial future that’s always ready. It’s called make_ready_future. It’s an arrow that takes A and returns a future of A.
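As a rough sketch (not from the talk), Kleisli composition for future-returning functions might look like the code below. It assumes a future type that provides a then member and a make_ready_future helper, as in the Concurrency TS (std::experimental) or Boost.Thread; depending on the library, the nested future returned by the continuation may need an explicit unwrap.

// Sketch only: 'then' is assumed to behave as in the Concurrency TS,
// where a future returned from the continuation is implicitly unwrapped.
template<class F, class G>
auto composeK(F f, G g) {                 // Kleisli composition for futures
    return [f, g](auto x) {
        return f(x).then([g](auto fut) {
            return g(fut.get());          // feed the result of f into g
        });
    };
}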

Now you understand what’s really happening. We are creating this new category based on future being a monad. We have new words to describe what we are doing. We are reusing an idea from category theory to solve a completely different problem.

Resumable Functions

There is one little inconvenience with this approach. It requires writing a lot of so-called “boilerplate” code: repetitive code that obscures the simple logic. Here it’s the glue code, the “compose” and the “then.” What you’d like to do is to write your code directly in terms of embellished functions, with the composition left implicit. People noticed this and came up with solutions. In the case of futures, the practical solution is called resumable functions.

Resumable functions are designed to hide the composition of functions that return futures. Here’s an example.

int cnt = 0;
do
{
   cnt = await streamR.read(512, buf);
   if ( cnt == 0 ) break;
   cnt = await streamW.write(cnt, buf);
} while (cnt > 0);

This code copies a file using a buffer, but it does it asynchronously. We call a function read that’s asynchronous. It doesn’t immediately fill the buffer, it returns a future instead. Then we call the function write that’s also asynchronous. We do it in a loop.

This code looks almost like sequential code, except that it has these await keywords. These are the points of insertion of our composition. These are the places where the code is chopped into pieces and composed using then.

I won’t go into details of the implementation. The point is that the composition of these embellished functions is almost entirely hidden. It doesn’t look like composition in a Kleisli category, but it really is.

This solution is usually described at a very low level, in terms of coroutines implemented as state machines with static variables and gotos. And what is being lost in all this engineering talk is how general this idea is — the idea of overloading composition to build a category of embellished functions.

Just to drive this home, here’s an example of different code that does completely different stuff. It calculates Fibonacci numbers on demand. It’s a generator of Fibonacci numbers.

generator<int> fib() 
{
    int a = 0; 
    int b = 1; 
    for (;;) { 
        __yield_value a; 
        auto next = a + b; 
        a = b; 
        b = next; 
    } 
} 

Instead of await it has __yield_value. But it’s the same idea of resumable functions, only with a different monad. This monad is called a list monad. And this kind of code in combination with Eric Niebler’s proposed range library could lead to very powerful programming idioms.

Conclusion

Why do we have to separate the two notions: that of resumable functions and that of generators, if they are based on the same abstraction? Why do we have to reinvent the wheel?

There’s this great opportunity for C++, and I’m afraid it will be missed like so many other opportunities for great generalisations that were missed in the past. It’s the opportunity to introduce one general solution based on monads, rather than keep creating ad-hoc solutions, one problem at a time. The same very general pattern can be used to control all kinds of side effects. It can be used for auditing, exceptions, ranges, futures, I/O, continuations, and all kinds of user-defined monads.

This amazing power could be ours if we start thinking in more abstract terms, if we reach into category theory.

August 3, 2020

The main idea of functional programming is to treat functions like any other data types. In particular, we want to be able to pass functions as arguments to other functions, return them as values, and store them in data structures. But what kind of data type is a function? It’s a type that, when paired with another piece of data called the argument, can be passed to a function called apply to produce the result.

apply :: (a -> d, a) -> d

In practice, function application is implicit in the syntax of the language. But, as we will see, even if your language doesn’t support higher-order functions, all you need is to roll your own apply.

But where do these function objects, arguments to apply, come from; and how does the built-in apply know what to do with them?

When you’re implementing a function, you are, in a sense, telling apply what to do with it–what code to execute. You’re implementing individual chunks of apply. These chunks are usually scattered all over your program, sometimes anonymously in the form of lambdas.

We’ll talk about program transformations that introduce more functions, replace anonymous functions with named ones, or turn some functions into data types, without changing program semantics. The main advantage of such transformations is that they may improve performance, sometimes drastically so; or support distributed computing.

Function Objects

As usual, we look to category theory to provide theoretical foundation for defining function objects. It turns out that we are able to do functional programming because the category of types and functions is cartesian closed. The first part, cartesian, means that we can define product types. In Haskell, we have the pair type (a, b) built into the language. Categorically, we would write it as a \times b. Product is functorial in both arguments so, in particular, we can define a functor

    L_a c = c \times a

It’s really a family of functors that is parameterized by a.

The right adjoint to this functor

    R_a d = a \to d

defines the function type a \to d (a.k.a., the exponential object d^a). The existence of this adjunction is what makes a category closed. You may recognize these two functors as, respectively, the writer and the reader functor. When the parameter a is restricted to monoids, the writer functor becomes a monad (the reader is already a monad).

An adjunction is defined as a (natural) isomorphism of hom-sets:

    D(L c, d) \cong C(c, R d)

or, in our case of two endofunctors, for some fixed a,

    C(c \times a, d) \cong C(c, a \to d)

In Haskell, this is just the definition of currying:

curry   :: ((c, a) -> d)   -> (c -> (a -> d))
uncurry :: (c -> (a -> d)) -> ((c, a) -> d)

You may recognize the counit of this adjunction

    \epsilon_d : L_a (R_a d) \to \mbox{Id}\; d

as our apply function

    \epsilon_d : ((a \to d) \times a) \to d

Adjoint Functor Theorem

In my previous blog post I discussed Freyd’s Adjoint Functor Theorem from the categorical perspective. Here, I’m going to try to give it a programming interpretation. Also, the original theorem was formulated in terms of finding the left adjoint to a given functor. Here, we are interested in finding the right adjoint to the product functor. This is not a problem, since every construction in category theory can be dualized by reversing the arrows. So instead of considering the comma category c/R, we’ll work with the comma category L/d. Its objects are pairs (c, f), in which f is a morphism

    f \colon L c \to d.


This is the general picture but, in our case, we are dealing with a single category, and L is an endofunctor. We can implement the objects of our comma category in Haskell

data Comma a d c = Comma c ((c, a) -> d)

The type a is just a parameter, it parameterizes the (left) functor L_a

    L_a c = c \times a

and d is the target object of the comma category.

We are trying to construct a function object representing functions a->d, so what role does c play in all of this? To understand that, you have to take into account that a function object can be used to describe closures: functions that capture values from their environment. The type c represents those captured values. We’ll see this more explicitly later, when we talk about defunctionalizing closures.

Our comma category is a category of all closures that go from a to d while capturing all possible environments. The function object we are constructing is essentially a sum of all these closures, except that some of them are counted multiple times, so we need to perform some identifications. That’s what morphisms are for.

The morphisms of the comma category are morphisms h \colon c \to c' in \mathcal C that make the following triangles in \mathcal D commute.

Unfortunately, commuting diagrams cannot be expressed in Haskell. The closest we can get is to say that a morphism from

c1 :: Comma a d c

to

c2 :: Comma a d c'

is a function h :: c -> c' such that, if

c1 = Comma c f
f :: (c, a) -> d
c2 = Comma c' g
g :: (c', a) -> d

then

f = g . bimap h id

Here, bimap h id is the lifting of h to the functor L_a. More explicitly

f (c, x) = g (h c, x)

As we are interpreting c as the environment in which the closure is defined, the question is: does f use all of the information encoded in c or just a part of it? If it’s just a part, then we can factor it out. For instance, consider a lambda that captures an integer, but it’s only interested in whether the integer is even or odd. We can replace this lambda with one that captures a Boolean, and use the function even to transform the environment.

The next step in the construction is to define the projection functor from the comma category L/d back to \mathcal C that forgets the f part and just keeps the object c

    \pi_d \colon (c, f) \mapsto c

We use this functor to define a diagram in \mathcal C. Now, instead of taking its limit, as we did in the previous installment, we’ll take the colimit of this diagram. We’ll use this colimit to define the action of the right adjoint functor R on d.

    R d = \underset{L/d}{\mbox{colim}} \; \pi_d

In our case, the forgetful functor discards the function part of Comma a d c, keeping only the environment c. This means that, as long as d is not Void, we are dealing with a gigantic diagram that encompasses all objects in our category of types. The colimit of this diagram is a gigantic coproduct of everything, modulo identifications introduced by morphisms of the comma category. But these identifications are crucial in pruning out redundant closures. Every lambda that uses only part of the information from the captured environment can be identified with a simpler lambda that uses a simplified environment.

For illustration, consider a somewhat extreme case of constructing the function object 1 \to d, or d^1 (d to the power of the terminal object). This object should be isomorphic to d. Let’s see how this works: The terminal object 1 is the unit of the product, so

    L_1 c = c \times 1 \cong c

so the comma category L_1 / d is just the slice category C/d of arrows to d. It so happens that this category has the terminal object (d, id_d). The colimit of a diagram that has a terminal object is that terminal object. So, indeed, in this case, our construction produces a function object that is isomorphic to d.

    1 \to d \cong d

Intuitively, given a lambda that captures a value of type c from the environment and returns a d, we can trivially factor it out, using this lambda to transform the environment for c to d and then apply the identity on d. The latter corresponds to the comma category object (d, id_d), and the forgetful functor maps it to d.

It’s instructive to run a few more examples to get the hang of it. For instance, the function object Bool->d can be constructed by considering closures of the type

f :: (c, Bool) -> d

Any such closure can be factorized by the following transformation of the environment

h :: c -> (d, d)
h c = (f (c, True), f (c, False))

followed by

g :: ((d, d), Bool) -> d
g ((d1, d2), b) = if b then d1 else d2

Indeed:

f (c, b) = g (h c, b)

In other words
    2 \to d \cong d \times d
where 2 corresponds to the Bool type.

Counit

We are particularly interested in the counit of the adjunction. Its component at d is a morphism

    \epsilon_d : L R d \to d

It also happens to be an object in the comma category, namely

    (R d, \epsilon_d \colon L R d \to d).

In fact, it is the terminal object in that category. You can see that because for any other object (c, f \colon L c \to d) there is a morphism h \colon c \to R d that makes the following triangle commute:

[commuting triangle diagram]

This morphism h is a leg in the colimiting cocone that defines R d. We know for sure that c is in the base of that cocone, because it’s the projection \pi_d of (c, f \colon L c \to d).


To get some insight into the construction of the function object, imagine that you can enumerate the set of all possible environments c_i. The comma category L_a/d would then consist of pairs (c_i, f_i \colon (c_i, a) \to d). The coproduct of all those environments is a good candidate for the function object a \to d. Indeed, let’s try to define a counit for it:

    (\coprod c_i, a) \to d \cong \coprod (c_i, a) \to d \cong \prod ((c_i, a) \to d)

I used the distributive law:

    (\coprod c_i, a) \cong \coprod (c_i, a)

and the fact that the mapping out of a sum is the product of mappings. The right hand side can be constructed from the morphisms of the comma category.
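For just two environments, this assembly can be sketched in Haskell, with Either standing in for the coproduct (applyEither is my name, not from the post):

-- Build the "apply" for the coproduct of two environments
-- out of the two closures f1 and f2 taken from the comma category.
applyEither :: ((c1, a) -> d) -> ((c2, a) -> d) -> (Either c1 c2, a) -> d
applyEither f1 _  (Left  c1, x) = f1 (c1, x)
applyEither _  f2 (Right c2, x) = f2 (c2, x)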

So the object \coprod c_i satisfies at least one requirement of the function object: there is an implementation of apply for it. It is highly redundant, though. This is why, instead of the coproduct, we used the colimit in our construction of the function object. Also, we ignored the size issues.

Size Issues

As we discussed before, this construction doesn’t work in general because of size issues: the comma category is not necessarily small, and the colimit might not exist.

To address this problem, we have previously defined small solution sets. In the case of the right adjoint, a solution set is a family of objects that is weakly terminal in L/d. These are pairs (c_i, f_i \colon L c_i \to d) that, among themselves, can factor any g \colon L c \to d

    g = f_i \circ L h

It means that we can always find an index i and a morphism h \colon c \to c_i to satisfy that equation. Every g might require a different f_i and h to factor through but, for any g, we are guaranteed to always find a pair.


Once we have a complete solution set, the right adjoint R d is constructed by first forming a coproduct of all the c_i and then using a coequalizer to construct one terminal object.

What is really interesting is that, in some cases, we can just use the coproduct of the solution set, \coprod_i c_i, to approximate the adjoint (thus skipping the coequalizer part).


The idea is that, in a particular program, we don’t need to represent all possible function types, just a (small) subset of those. We are also not particularly worried about uniqueness: it’s no problem if the same function ends up with multiple syntactic representations.

Let’s reformulate Freyd’s construction of the function object in programming terms. The solution set is the set of types c_i and functions
f_i \colon (c_i, a) \to d
such that, for any function
g \colon (c, a) \to d
that is of interest in our program (for instance, used as an argument to another function) there exists an i and a function
h \colon c \to c_i
such that g can be rewritten as
g (c, a) = f_i (h c, a)
In other words, every function of interest can be replaced by one of the solution-set functions. The environment for this standard function can be always extracted from the environment of the more general function.

CPS Transformation

A particular application of higher order functions shows up in the context of continuation passing transformation. Let’s look at a simple example. We are going to implement a function that traverses a binary tree containing strings, and concatenates them all into one string. Here’s the tree

data Tree = Leaf String 
          | Node Tree String Tree

Recursive traversal is pretty straightforward

show1 :: Tree -> String
show1 (Leaf s) = s
show1 (Node l s r) =
  show1 l ++  s ++ show1 r

We can test it on a small tree:

tree :: Tree
tree = Node (Node (Leaf "1 ") "2 " (Leaf "3 "))
            "4 " 
            (Leaf "5 ")
test = show1 tree

There is just one problem: recursion consumes the runtime stack, which is usually a limited resource. Your program may run out of stack space resulting in the “stack overflow” runtime error. This is why the compiler will turn recursion into iteration, whenever possible. And it is always possible if the function is tail recursive, that is, the recursive call is the last call in the function. No operation on the result of the recursive call is permitted in a tail recursive function.

This is clearly not happening in our implementation of show1: After the recursive call is made to traverse the left subtree, we still have to make another call to traverse the right tree, and the two results must be concatenated with the contents of the node.

Notice that this is not just a functional programming problem. In an imperative language, where iteration is the rule, tree traversal is still implemented using recursion. That’s because the data structure itself is recursive. It used to be a common interview question to implement non-recursive tree traversal, but the solution is always to explicitly implement your own stack (we’ll see how it’s done at the end of this post).

There is a standard procedure to make functions tail recursive using continuation passing style (CPS). The idea is simple: if there is stuff to do with the result of a function call, let the function we’re calling do it instead. This “stuff to do” is called a continuation. The function we are calling takes the continuation as an argument and, when it finishes its job, it calls it with the result. A continuation is a function, so CPS-transformed functions have to be higher-order: they must accept functions as arguments. Often, the continuations are defined on the spot using lambdas.

Here’s the CPS transformed tree traversal. Instead of returning a string, it accepts a continuation k, a function that takes a string and produces the final result of type a.

show2 :: Tree -> (String -> a) -> a
show2 (Leaf s) k = k s
show2 (Node lft s rgt) k =
  show2 lft (\ls -> 
    show2 rgt (\rs -> 
      k (ls ++ s ++ rs)))

If the tree is just a leaf, show2 calls the continuation with the string that’s stored in the leaf.

If the tree is a node, show2 calls itself recursively to convert the left child lft. This is a tail call, nothing more is done with its result. Instead, the rest of the work is packaged into a lambda and passed as a continuation to show2. This is the lambda

\ls -> 
    show2 rgt (\rs -> 
      k (ls ++ s ++ rs))

This lambda will be called with the result of traversing the left child. It will then call show2 with the right child and another lambda

\rs -> 
      k (ls ++ s ++ rs)

Again, this is a tail call. This lambda expects the string that is the result of traversing the right child. It concatenates the left string, the string from the current node, and the right string, and calls the original continuation k with it.

Finally, to convert the whole tree t, we call show2 with a trivial continuation that accepts the final result and immediately returns it.

show t = show2 t (\x -> x)

There is nothing special about lambdas as continuations. It’s possible to replace them with named functions. The difference is that a lambda can implicitly capture values from its environment. A named function must capture them explicitly. The three lambdas we used in our CPS-transformed traversal can be replaced with three named functions, each taking an additional argument representing the values captured from the environment:

done s = s
next (s, rgt, k) ls = show3 rgt (conc (ls, s, k))
conc (ls, s, k) rs = k (ls ++ s ++ rs)

The first function, done, is an identity function; it forces the generic type a to be narrowed down to String.

Here’s the modified traversal using named functions and explicit captures.

show3 :: Tree -> (String -> a) -> a
show3 (Leaf s) k = k s
show3 (Node lft s rgt) k =
  show3 lft (next (s, rgt, k))

show t = show3 t done

We can now start making the connection with the earlier discussion of the adjoint theorem. The three functions we have just defined, done, next, and conc, form the family

    f_i \colon (c_i, a) \to b.

They are functions of two arguments (or, equivalently, of a pair of arguments). The first argument represents the object c_i, part of the solution set; it corresponds to the environment captured by the closure. The three c_i are, respectively

()
(String, Tree, String -> String)
(String, String, String -> String)

(Notice the empty environment of done, here represented as the unit type ().)

The second argument of all three functions is of the type String, and the return type is also String so, according to Freyd’s theorem, we are in the process of defining the function object a \to b, where a is String and b is String.

Defunctionalization

Here’s the interesting part: instead of defining the general function type String->String, we can approximate it with the coproduct of the elements of the solution set. Here, the three components of the sum type correspond to the environments captured by our three functions.

data Kont = Done 
          | Next String Tree   Kont 
          | Conc String String Kont

The counit of the adjunction is approximated by a function from this sum type paired with a String, returning a String

apply :: Kont -> String -> String
apply Done s = s
apply (Next s rgt k) ls = show4 rgt (Conc ls s k)
apply (Conc ls s k) rs  = apply k (ls ++ s ++ rs)

Rather than passing one of the three functions to our higher-order CPS traversal, we can pass this sum type

show4 :: Tree -> Kont -> String
show4 (Leaf s) k = apply k s
show4 (Node lft s rgt) k = 
  show4 lft (Next s rgt k)

This is how we execute it

show t = show4 t Done

We have gotten rid of all higher-order functions by replacing their function arguments with a data type equipped with the apply function. There are several situations when this is advantageous. In procedural languages, defunctionalization may be used to replace recursion with loops. In fact, the Kont data structure can be seen as a user-defined stack, especially if it’s rewritten as a list.

type Kont = [(String, Either Tree String)]

Here, Done was replaced with an empty list and Next and Conc together correspond to pushing a value on the stack.
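Here is a sketch (my own code, using the Tree type and the list-based Kont above) of the fully iterative traversal promised earlier, with the list acting as an explicit stack; both local functions are tail recursive:

showIter :: Tree -> String
showIter t = go t []
  where
    -- Walk down the left spine, pushing the pending right subtrees.
    go (Leaf s)         k = pop k s
    go (Node lft s rgt) k = go lft ((s, Left rgt) : k)

    -- Pop the stack: either descend into a saved right subtree,
    -- or combine a finished right result with the saved left result.
    pop []                  acc = acc
    pop ((s, Left rgt) : k) ls  = go rgt ((s, Right ls) : k)
    pop ((s, Right ls) : k) rs  = pop k (ls ++ s ++ rs)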

In Haskell, the compiler performs tail recursion optimization, but defunctionalization may still be useful in implementing distributed systems, or web servers. Any time we need to pass a function between a client and a server, we can replace it by a data type that can be easily serialized.


July 22, 2020

One of the tropes of detective movies is the almost miraculous ability to reconstruct an image from a blurry photograph. You just scan the picture, say “enhance!”, and voila, the face of the suspect or the registration number of their car appear on your computer screen.

[Blade Runner still: "Computer, enhance!"]

With constant improvements in deep learning, we might eventually get there. In category theory, though, we do this all the time. We recover lost information. The procedure is based on the basic tenet of category theory: an object is defined by its interactions with the rest of the world. This is the basis of all universal constructions, the Yoneda lemma, Grothendieck fibration, Kan extensions, and practically everything else.

An iconic example is the construction of the left adjoint to a given functor, and that’s what we are going to study here. But first let me explain why I decided to pick this subject, and how it’s related to programming. I wanted to write a blog post about CPS (continuation passing style) and defunctionalization, and I stumbled upon an article in nLab that related defunctionalization to Freyd’s Adjoint Functor Theorem; in particular to the Solution Set Condition. Such an unexpected connection piqued my interest and I decided to dig deeper into it.

Adjunctions

Consider a functor R from some category \mathcal D to another category \mathcal C.

R \colon D \to C

A functor, in general, loses some data, so it’s normally impossible to invert it. It produces a “blurry” image of \mathcal D inside \mathcal C. Its left adjoint is a functor from \mathcal C to \mathcal D

L \colon C \to D

that attempts to reconstruct lost information, to the best of its ability. Often the functor R is forgetful, which means that it purposefully forgets some information. Its left adjoint is then called free, because it freely ad-libs the forgotten information.

Of course it’s not always possible, but under certain conditions such a left adjoint exists. These conditions are spelled out in Freyd’s General Adjoint Functor Theorem.

To understand them, we have to talk a little about size issues.

Size issues

A lot of interesting categories are large. It means that there are so many objects in the category that they don’t even form a set. The category of all sets, for instance, is large (there is no set of all sets). It’s also possible that morphisms between two objects don’t form a set.

A category in which objects form a set is called small, and a category in which hom-sets are sets is called locally small.

A lot of complexities in Freyd’s theorem are related to size issues, so it’s important to precisely spell out all the assumptions.

We assume that the source of the functor R, the category \mathcal D, is locally small. It must also be small-complete, that is, every small diagram in \mathcal D must have a limit. (A small diagram is a functor from a small category.) We also want the functor R to be continuous, that is, to preserve all small limits.

If it weren’t for size issues, this would be enough to guarantee the existence of the left adjoint, and we’ll first sketch the proof for this simplified case. In the general case, there is one more condition, the Solution Set Condition, which we’ll discuss later.

Left adjoint and the comma category

Here’s the problem we are trying to solve. We have a functor R that maps objects and morphisms from \mathcal D to \mathcal C. We want to define another functor L that goes in the opposite direction. We’re not looking for the inverse, so we’re not expecting the composition of this functor with R to be identity, but we want it to be related to identity by two natural transformations called unit and counit. Their components are, respectively:

\eta_c : c \to R L c

\epsilon_d : L R d \to d

and, as long as they satisfy some additional triangle identities, they will establish the adjunction L \dashv R.

We are going to define L point-wise, so let’s pick an object c in \mathcal C and try to propagate it back to \mathcal D. To do that, we have to gather as much information about c as possible. We will propagate all this information back to \mathcal D and find an object in \mathcal D that “looks the same.” Think of this as creating a hologram of c and shipping it back to \mathcal D.

All information about c is encoded in morphisms so, in order to generate our hologram, we’ll gather all morphisms that originate in c. These morphisms form a category called the coslice category c/C.

The objects in c/C are pairs (x, f \colon c \to x). In other words, these are all the arrows that emanate from c, indexed by their target objects x. But what really defines the structure of this category are morphisms between these arrows. A morphism in c/C from (x, f) to (y, g) is a morphism h \colon x \to y that makes the following triangle commute:

[commuting triangle diagram]

We now have complete information about c encoded in the coslice category, but we have no way to propagate it back to \mathcal D. This is because, in general, the image of \mathcal D doesn’t cover the whole of \mathcal C. Even more importantly, not all morphisms in \mathcal C have corresponding morphisms in \mathcal D. We have to scale down our expectations, and define a partial hologram that does not capture all the information about c; only the part that can be back-propagated to \mathcal D using the functor R. Such a partial hologram is called a comma category c/R.

The objects of c/R are pairs (d, f \colon c \to R d), where d is an object in \mathcal D. In other words, these are all the arrows emanating from c whose target is in the image of R. Again, the important structure is encoded in the morphisms of c/R. These are the arrows in \mathcal D, h \colon d \to d' that make the following diagram commute in \mathcal C

Notice an interesting fact: we can interpret these triangles as commutation conditions in a cone whose apex is c and whose base is formed by objects and morphisms in the image of R. But not all objects or morphisms in the image of R are included. Only those morphisms that make the appropriate triangle commute are included, and these are exactly the morphisms that satisfy the cone condition. So the comma category builds a cone in \mathcal C.


Constructing the limit

We can now take all this information about c that’s been encoded in c/R and move it back to \mathcal D. We define a projection functor \pi_c \colon c/R \to D that maps (d, f) to d, thus forgetting the morphism f. What’s important, though, is that this functor keeps the information encoded in the morphisms of c/R, because these are morphisms in \mathcal D.


The image of \pi_c doesn’t necessarily cover the whole of \mathcal D, because not every R d has arrows coming from c. Similarly, only some morphisms, the ones that make the appropriate triangle in \mathcal C commute, are picked by \pi_c. But those objects and morphisms that are in the image of \pi_c form a diagram in \mathcal D. This diagram is our partial hologram, and we can use it to pick an object in \mathcal D that looks almost exactly like c. That object is the limit of this diagram. We pick the limit of this diagram as the definition of L c: the left adjoint of R acting on c.

Here’s the tricky part: we assumed that \mathcal D was small-complete, so every small diagram has a limit; but the diagram defined by \pi_c is not necessarily small. Let’s ignore this problem for a moment, and continue sketching the proof. We want to show that the mapping that assigns the limit of \pi_c to every c is left adjoint to R.

Let’s see if we can define the unit of the adjunction:

\eta_c : c \to R L c

Since we have defined L c as the limit of the diagram \pi_c and R preserves limits (small limits, really; but we are ignoring size problems for the moment) then R L c must be the limit of the diagram R \pi_c in \mathcal C. But, as we noted before, the diagram R \pi_c is exactly the base of the cone with the apex c that we used to define the comma category c/R. Since R L c is the limit of this diagram, there must be a unique morphism from any other cone to it. In particular there must be a morphism from c to it, because c is an apex of the cone defined by the comma category. And that’s the morphism we’ll choose as our \eta_c.


Incidentally, we can interpret \eta_c itself as an object of the comma category c/R, namely the one defined by the pair (Lc, \eta_c \colon c \to R L c). In fact, this is the initial object in that category. If you pick any other object, say, (d, g \colon c \to R d), you can always find a morphism h \colon L c \to d, which is just a leg, a projection, in the limiting cone that defines L c. It is automatically a morphism in c/R because the following triangle commutes:

[commuting triangle diagram]

This is the triangle that defines \eta_c as a morphism of cones, from the top cone with the apex c, to the bottom (limiting) cone with the apex R L c. We’ll use this interpretation later, when discussing the full version of Freyd’s theorem.

We can also define the counit of the adjunction. Its component at d is a morphism

\epsilon_d : L R d \to d

First, we repeat our construction starting with c = R d. We define the comma category R d / R and use \pi_{R d} to create the diagram whose limit is L R d. We pick \epsilon_d to be a projection in the limiting cone. We are guaranteed that d is in the base of the cone, because it’s the image of (d, id \colon R d \to R d) under \pi_{R d}.

To complete this proof, one should show that the unit and counit are natural transformations and that they satisfy triangle identities.

End of a comma category

An interesting insight into this construction can be gained using the end calculus. In my previous post, I talked about (weighted) colimits as coends, but the same argument can be dualized to limits and ends. For instance, this is our comma category as a category of elements in the coend notation:

c/R \cong \mathcal{D} \int^d \mathcal{C} (c, R d)

The limit of the projection functor \pi_c over the comma category can be written in the end notation as

\lim_{c/R} \pi_c \cong \int_{(d, f)\colon c/R} \pi_c (d, f) \cong \int_{(d, f)\colon c/R} d

This, in turn, can be rewritten as a weighted limit, with every d weighted by the set \mathcal{C}(c, R d):

\mbox{lim}^{\mathcal{C}(c, R -)} \mbox{Id} \cong \int_{d \colon \mathcal{D}} \mathcal{C}(c, R d) \pitchfork d

The pitchfork here is the power (cotensor) defined by the equation

\mathcal{D}\big(d', s \pitchfork d\big) \cong Set\big(s, \mathcal{D}(d', d)\big)

You may think of s \pitchfork d as the product of s copies of the object d, where s is a set. The name power conveys the idea of iterated multiplication. Or, since power is a special case of exponentiation, you may think of s \pitchfork d as a function object imitating mappings from s to d.
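In Haskell, with a type s standing in for the set, the power is just a function type; here is a minimal sketch of the defining isomorphism (the names are mine):

type Power s d = s -> d

-- \mathcal{D}(d', s \pitchfork d) \cong Set(s, \mathcal{D}(d', d)), in Hask, is just flip:
toPower :: (s -> d' -> d) -> d' -> Power s d
toPower = flip

fromPower :: (d' -> Power s d) -> s -> d' -> d
fromPower = flip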

To continue, if the left adjoint L exists, the weighted limit in question can be replaced by

\int_{d \colon \mathcal{D}} \mathcal{D}(L c, d) \pitchfork d

which, using standard calculus of ends (see Appendix), can be shown to be isomorphic to L c. We end up with:

\lim_{c/R} \pi_c \cong L c

Solution set condition

So what about those pesky size issues? It’s one thing to demand the existence of all small limits, and a completely different thing to demand the existence of large limits (such a requirement may narrow down the available categories to preorders). Since the comma category may be too large, maybe we can cut it down to size by carefully picking a (small) set of objects out of all the objects of \mathcal D. We may take some indexing set I and construct a family d_i of objects of \mathcal D indexed by elements of I. It doesn’t have to be one family for all—we may pick a different family for every object c for which we are performing our construction.

Instead of using the whole comma category c/R, we’ll limit ourselves to a set of arrows f_i \colon c \to R d_i. But in a comma category we also have morphisms between arrows. In fact they are the essential carriers of the structure of the comma category. Let’s have another look at these morphisms.

This commuting condition can be re-interpreted as a factorization of g through f. It so happens that every morphism g can be trivially factorized through some f by picking d = d' and h = id_d. But if we restrict the factors f to be members of the family f_i then not every g \colon c \to R d (for arbitrary d) can be automatically factorized. We have to demand it. That gives us the following:

Solution Set Condition: For every object c there exists a small set I with an I-indexed family of objects d_i in \mathcal D and a family of morphisms f_i \colon c \to R d_i, such that every morphism g \colon c \to R d can be factored through one of f_i. That is, there exists a morphism h \colon d_i \to d such that

g = R h \circ f_i


There is a shorthand for this statement: All comma categories c/R admit weakly initial families of objects. We’ll come back to it later.

Freyd’s theorem

We can now formulate:

Freyd’s Adjoint Functor Theorem: If \mathcal D is a locally small and small-complete category, and the functor R \colon D \to C is continuous (small-limit preserving), and it satisfies the solution set condition, then R has a left adjoint.

We’ve seen before that the key to defining the point-wise left adjoint was to find the initial object in the comma category c/R. The problem is that this comma category may be large. So the trick is to split the proof into two parts: first defining a weakly initial object, and then constructing the actual initial object using equalizers. A weakly initial object has morphisms to every object in the category but, unlike its strong version, these morphisms don’t have to be unique.

An even weaker notion is that of a weakly initial set of objects. These are objects that among themselves have arrows to every object in the category, but it’s possible that no individual object has all the arrows. The solution set in Freyd’s theorem is such a weakly initial set in the comma category c/R. Since we assumed that \mathcal D is small-complete, we can take a product of these objects and show that it’s weakly initial. The proof then proceeds with the construction of the initial object.

The details of the proof can be found in any category theory text or in nLab.

Next we’ll see the application of these results to the problem of defunctionalization of computer programs.

Appendix

To show that

\int_d \mathcal{D}(L c, d) \pitchfork d \cong L c

it’s enough to show that the hom-functors from an arbitrary object d' are isomorphic

\begin{aligned}  & \mathcal{D}\big(d', \int_d \mathcal{D}(L c, d) \pitchfork d\big) \\  \cong & \int_d \mathcal{D}\big(d', \mathcal{D}(L c, d) \pitchfork d\big) \\  \cong & \int_d Set\big( \mathcal{D}(L c, d), \mathcal{D}(d', d) \big) \\  \cong & \; \mathcal{D}(d', L c)  \end{aligned}

I used the continuity of the hom-functor, the definition of the power (cotensor) and the ninja Yoneda lemma.

July 20, 2020

It’s funny how similar ideas pop up in different branches of mathematics. Calculus, for instance, is built around metric spaces (or, more specifically, Banach spaces) and measures. A limit of a sequence is defined by points getting closer and closer together. An integral is an area under a curve. In category theory, though, we don’t talk about distances or areas (except for Lawvere’s take on metric spaces), and yet we have the abstract notion of a limit, and we use integral notation for ends. The similarities are uncanny.

This blog post was inspired by my trying to understand the idea behind Freyd’s adjoint functor theorem. The adjoint can be expressed as a colimit over a comma category, which is a special case of a Grothendieck fibration. To understand it, though, I had to get a better handle on weighted colimits which, as I learned, are even more general than Kan extensions.

Category of elements as coend

A Grothendieck fibration is like splitting a category in two orthogonal directions, the base and the fiber. The fiber may vary from object to object (as in dependent types, which are indeed modeled as fibrations).

The simplest example of a Grothendieck fibration is the category of elements, in which fibers are simply sets. Of course, a set is also a category—a discrete category with no morphisms between elements, except for compulsory identity morphisms. A category of elements is built on top of a category \mathcal{C} using a functor

   F \colon \mathcal{C} \to Set

Such a functor is traditionally called a copresheaf (this construction works also on presheaves, \mathcal{C}^{op} \to Set). Objects in the category of elements are pairs (c, x) where c is an object in \mathcal{C}, and x \in F c is an element of a set.

A morphism from (c, x) to (c', x') is a morphism f \colon c \to c' in \mathcal{C}, such that (F f) x = x'.

There is an obvious projection functor that forgets the second component of the pair

   \Pi \colon (c, x) \mapsto c

(In fact, a general Grothendieck fibration starts with a projection functor.)

You will often see the category of elements written using integral notation. An integral, after all, is a gigantic sum of tiny slices. Similarly, objects of the category of elements form a gigantic sum (disjoint union) of sets F c. This is why you’ll see it written as an integral

   \int^{c \colon \mathcal{C}} F c

However, this notation conflicts with the one for conical colimits so, following Fosco Loregian, I’ll write the category of elements as

   \mathcal{C}\int^{c} F c

An interesting specialization of a category of elements is a comma category. It’s the category L/d of arrows originating in the image of the functor L \colon \mathcal{C} \to \mathcal{D} and terminating at a fixed object d in \mathcal{D}. The objects of L/d are pairs (c, f) where c is an object in \mathcal{C} and f \colon L c \to d is a morphism in \mathcal{D}. These morphisms are elements of the hom-set \mathcal{D}(L c , d), so the comma category is just a category of elements for the functor \mathcal{D}(L-, d) \colon \mathcal{C}^{op} \to Set

   L/d \cong \mathcal{C}\int^{c} \mathcal{D}(L c, d)

You’ll mostly see integral notation in the context of ends and coends. A coend of a profunctor is like a trace of a matrix: it’s a sum (a coproduct) of diagonal elements. But (co-)end notation may also be used for (co-)limits. Using the trace analogy, if you fill rows of a matrix with copies of the same vector, the trace will be the sum of the components of the vector. Similarly, you can construct a profunctor from a functor by repeating the same functor for every value of the first argument c':

   P(c', c) = F c

The coend over this profunctor is the colimit of the functor, a colimit being a generalization of the sum. By slight abuse of notation we write it as

   \mbox{colim}\, F = \int^{c \colon \mathcal{C}} F c

This kind of colimit is called conical, as opposed to what we are going to discuss next.

Weighted colimit as coend

A colimit is a universal cocone under a diagram. A diagram is a bunch of objects and morphisms in \mathcal{C} selected by a functor D \colon \mathcal{J} \to \mathcal{C} from some indexing category \mathcal{J}. The legs of a cocone are morphisms that connect the vertices of the diagram to the apex c of the cocone.


For any given indexing object j \colon \mathcal{J}, we select an element of the hom-set \mathcal{C}(D j, c), as a wire of the cocone. This is a selection of an element of a set (the hom-set) and, as such, can be described by a function from the singleton set *. In other words, a wire is a member of Set(*, \mathcal{C}(D j, c)). In fact, we can describe the whole cocone as a natural transformation between two functors, one of them being the constant functor 1 \colon j \mapsto *. The set of cocones is then the set of natural transformations:

   [\mathcal{J}^{op}, Set](1, \mathcal{C}(D -, c))

Here, [J^{op}, Set] is the category of presheaves, that is functors from \mathcal{J}^{op} to Set, with natural transformations as morphisms. As a bonus, we get the cocone triangle commuting conditions from naturality.

Using singleton sets to pick morphisms doesn’t generalize very well to enriched categories. For conical colimits, we are building cocones from zero-thickness wires. What we need instead is what Max Kelly calls cylinders obtained by replacing the constant functor 1\colon \mathcal{J}^{op} \to Set with a more general functor W \colon \mathcal{J}^{op} \to Set. The result is a weighted colimit (or an indexed colimit, as Kelly calls it), \mbox{colim}^W D. The set of weighted cocones is defined by natural transformations

   [\mathcal{J}^{op}, Set](W, \mathcal{C}(D -, c))

and the weighted colimit is the universal one of these. This definition generalizes nicely to the enriched setting (which I won’t be discussing here).

Universality can be expressed as a natural isomorphism

   [\mathcal{J}^{op}, Set](W, \mathcal{C}(D -, c))  \cong  \mathcal{C}(\mbox{colim}^W D, c)

We interpret this formula as a one-to-one correspondence: for every weighted cocone with the apex c there is a unique morphism from the colimit to c. Naturality conditions guarantee that the appropriate triangles commute.

A weighted colimit can be expressed as a coend (see Appendix 1)

   \mbox{colim}^W D \cong \int^{j \colon \mathcal{J}} W j \cdot D j

The dot here stands for the tensor product of a set by an object. It’s defined by the formula

   \mathcal{C}(s \cdot c, c') \cong Set(s, \mathcal{C}(c, c'))

If you think of s \cdot c as the sum of s copies of the object c, then the above asserts that the set of functions from a sum (coproduct) is equivalent to a product of functions, one per element of the set s,

   (\coprod_s c) \to c' \cong \prod_s (c \to c')
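In Hask, with a type s standing in for the set, the tensor is just a pair, and the defining isomorphism is curry/uncurry (a minimal sketch, names mine):

type Tensor s c = (s, c)

-- \mathcal{C}(s \cdot c, c') \cong Set(s, \mathcal{C}(c, c')) in Hask:
toTensor :: (s -> c -> c') -> Tensor s c -> c'
toTensor = uncurry

fromTensor :: (Tensor s c -> c') -> s -> c -> c'
fromTensor = curry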

Right adjoint as a colimit

A fibration is like a two-dimensional category. Or, if you’re familiar with bundles, it’s like a fiber bundle, which is locally isomorphic to a cartesian product of two spaces, the base and the fiber. In particular, the category of elements \mathcal{C} \int W is, roughly speaking, like a bundle whose base is the category \mathcal{C}, and the fiber is a (c-dependent) set W c.

We also have a projection functor on the category of elements \mathcal{C} \int W that ignores the W c component

   \Pi \colon (c, x) \mapsto c

The coend of this functor is the (conical) colimit

   \int^{(c, x) \colon \mathcal{C}\int W} \Pi (c, x) \cong \underset{\mathcal{C} \int W}{\mbox{colim}} \; \Pi

But this functor is constant along the fiber, so we can “integrate over it.” Since the fibers depend on c, different objects end up weighted differently. The result is a coend over the base category, with objects c weighted by sets W c

   \int^{(c, x) \colon \mathcal{C}\int W} \Pi (c, x) \cong \int^{(c, x) \colon \mathcal{C}\int W} c  \cong   \int^{c \colon \mathcal{C}} W c \cdot c

Using a more traditional notation, this is the formula that relates a (conical) colimit over the category of elements and a weighted colimit of the identity functor

   \underset{\mathcal{C} \int W}{\mbox{colim}} \;  \Pi  \cong \mbox{colim}^W Id

There is a category of elements that will be of special interest to us when discussing adjunctions: the comma category for the functor L \colon \mathcal{C} \to \mathcal{D}, in which the weight functor is the hom-functor \mathcal{D}(L-, d)

   L/d \cong \mathcal{C}\int^{c} \mathcal{D}(L c, d)

If we plug it into the last formula, we get

   \underset{L/d}{\mbox{colim}} \;  \Pi  \cong \underset{C \int \mathcal{D}(L-, d)}{\mbox{colim}} \;  \Pi  \cong \int^{c \colon \mathcal{C}} \mathcal{D}(L c, d) \cdot c

If the functor L has a right adjoint

   \mathcal{D}(L c, d) \cong \mathcal{C}(c, R d)

we can rewrite this as

   \underset{L/d}{\mbox{colim}} \;  \Pi  \cong \int^{c \colon \mathcal{C}} \mathcal{C}(c, R d) \cdot c

and using the ninja Yoneda lemma (see Appendix 2) we get a formula for the right adjoint in terms of a colimit over a comma category

   \underset{L/d}{\mbox{colim}} \; \Pi  \cong R d

Incidentally, this is the left Kan extension of the identity functor along L. (In fact, it can be used to define the right adjoint, as long as it is itself preserved by L.)

We’ll come back to this formula when discussing Freyd’s adjoint functor theorem.

Appendix 1

I’m going to prove the following identity using some of the standard tricks of coend calculus

   \mbox{colim}^W D \cong \int^{j \colon \mathcal{J}} W j \cdot D j

To show that two objects are isomorphic, it’s enough to show that their hom-sets to any object c' are isomorphic (this follows from the Yoneda lemma)

   \begin{aligned}  \mathcal{C}(\mbox{colim}^W D, c') & \cong [\mathcal{J}^{op}, Set]\big(W-, \mathcal{C}(D -, c')\big) \\   &\cong \int_j Set \big(W j, \mathcal{C}(D j, c')\big) \\   &\cong \int_j \mathcal{C}(W j \cdot D j, c') \\   &\cong \mathcal{C}(\int^j W j \cdot D j, c')  \end{aligned}

I first used the universal property of the colimit, then rewrote the set of natural transformations as an end, used the definition of the tensor product of a set and an object, and replaced an end of a hom-set by a hom-set of a coend (continuity of the hom-set functor).

Appendix 2

The proof of

   \int^{c \colon \mathcal{C}} \mathcal{C}(c, R d) \cdot c \cong R d

follows the same pattern

   \begin{aligned}  &\mathcal{C}\Big( \big(\int^{c} \mathcal{C}(c, R d) \cdot c\big) , c'\Big)\\  \cong &\int_c \mathcal{C}\big( \mathcal{C}(c, R d) \cdot c , c'\big) \\  \cong &\int_c Set\big( \mathcal{C}(c, R d) , \mathcal{C}(c, c')\big) \\  \cong & \; \mathcal{C}(R d, c')   \end{aligned}

I used the fact that a hom-set from a coend is isomorphic to an end of a hom-set (continuity of hom-sets). Then I applied the definition of a tensor. Finally, I used the Yoneda lemma for contravariant functors, in which the set of natural transformations is written as an end.

   [ \mathcal{C}^{op}, Set]\big(\mathcal{C}(-, x), H \big) \cong \int_c Set \big( \mathcal{C}(c, x), H c \big) \cong H x

June 15, 2020

I have recently watched a talk by Gabriel Gonzalez about folds, which caught my attention because of my interest in both recursion schemes and optics. A Fold is an interesting abstraction. It encapsulates the idea of focusing on the monoidal contents of some data structure. Let me explain.

Suppose you have a data structure that contains, among other things, a bunch of values from some monoid. You might want to summarize the data by traversing the structure and accumulating the monoidal values in an accumulator. You may, for instance, concatenate strings, or add integers. Because we are dealing with a monoid, which is associative, we could even parallelize the accumulation.

In practice, however, data structures are rarely filled with monoidal values or, if they are, it’s not clear which monoid to use (e.g., in the case of numbers, additive or multiplicative?). Usually monoidal values have to be extracted from the container. We need a way to convert the contents of the container to monoidal values, perform the accumulation, and then convert the result to some output type. This could be done, for instance, by first applying fmap, and then traversing the result to accumulate monoidal values. For performance reasons, we might prefer the two actions to be done in a single pass.

Here’s a data structure that combines two functions, one converting a to some monoidal value m and the other converting the final result to b. The traversal itself should not depend on what monoid is being used so, in Haskell, we use an existential type.

data Fold a b = forall m. Monoid m => Fold (a -> m) (m -> b)

The data constructor of Fold is polymorphic in m, so it can be instantiated for any monoid, but the client of Fold will have no idea what that monoid was. (In actual implementation, the client is secretly passed a table of functions: one to retrieve the unit of the monoid, and another to perform the mappend.)

The simplest container to traverse is a list and, indeed, we can use a Fold to fold a list. Here’s the less efficient, but easy to understand implementation

fold :: Fold a b -> [a] -> b
fold (Fold s g) = g . mconcat . fmap s

See Gabriel’s blog post for a more efficient implementation.
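As a quick usage sketch (my own example, assuming import Data.Monoid (Sum(..))), here is a Fold that sums a list of Ints through the Sum monoid:

sumFold :: Fold Int Int
sumFold = Fold Sum getSum

-- fold sumFold [1, 2, 3] == 6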

A Fold is a functor

instance Functor (Fold a) where
  fmap f (Fold scatter gather) = Fold scatter (f . gather)

In fact it’s a Monoidal functor (in category theory, it’s called a lax monoidal functor)

class Monoidal f where
  init :: f ()
  combine :: f a -> f b -> f (a, b)

You can visualize a monoidal functor as a container with two additional properties: you can initialize it with a unit, and you can coalesce a pair of containers into a container of pairs.

instance Monoidal (Fold a) where
  -- Fold a ()
  init = Fold bang id
  -- Fold a b -> Fold a c -> Fold a (b, c)
  combine (Fold s g) (Fold s' g') = Fold (tuple s s') (bimap g g')

where we used the following helper functions

bang :: a -> ()
bang _ = ()

tuple :: (c -> a) -> (c -> b) -> (c -> (a, b))
tuple f g = \c -> (f c, g c)

This property can be used to easily aggregate Folds.

In Haskell, a monoidal functor is equivalent to the more common applicative functor.
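As a sketch of that equivalence (my own code; it assumes Prelude's init is hidden so that the class method above is unambiguous), pure and <*> can be recovered from init and combine:

instance Applicative (Fold a) where
  pure x    = fmap (const x) init
  ff <*> fx = fmap (\(f, x) -> f x) (combine ff fx)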

A list is the simplest example of a recursive data structure. The immediate question is, can we use Fold with other recursive data structures? The generalization of folding for recursively-defined data structures is called a catamorphism. What we need is a monoidal catamorphism.

Algebras and catamorphisms

Here’s a very short recap of simple recursion schemes (for more, see my blog). An algebra for a functor f with the carrier a is defined as

type Algebra f a = f a -> a

Think of the functor f as defining a node in a recursive data structure (often, this functor is defined as a sum type, so we have more than one type of node). An algebra extracts the contents of this node and summarizes it. The type a is called the carrier of the algebra.

A fixed point of a functor is the carrier of its initial algebra

newtype Fix f = Fix { unFix :: f (Fix f) }

Think of it as a node that contains other nodes, which contain nodes, and so on, recursively.

A catamorphism generalizes a fold

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

It’s a recursively defined function. It’s first applied using fmap to all the children of the node. Then the node is evaluated using the algebra.

Monoidal algebras

We would like to use a Fold to fold an arbitrary recursive data structure. We are interested in data structures that store values of type a which can be converted to monoidal values. Such structures are generated by functors of two arguments (bifunctors).

class Bifunctor f where
  bimap :: (a -> a') -> (b -> b') -> f a b -> f a' b'

In our case, the first argument will be the payload and the second, the placeholder for recursion and the carrier for the algebra.

We start by defining a monoidal algebra for such a functor by assuming that it has a monoidal payload, and that the child nodes have already been evaluated to a monoidal value

type MAlgebra f = forall m. Monoid m => f m m -> m

A monoidal algebra is polymorphic in the monoid m reflecting the requirement that the evaluation should only be allowed to use monoidal unit and monoidal multiplication.

A bifunctor is automatically a functor in its second argument

instance Bifunctor f => Functor (f a) where
  fmap g = bimap id g

We can apply the fixed point to this functor to define a recursive data structure Fix (f a).

We can then use Fold to convert the payload of this data structure to monoidal values, and then apply a catamorphism to fold it

cat :: Bifunctor f => MAlgebra f -> Fold a b -> Fix (f a) -> b
cat malg (Fold s g) = g . cata alg
  where
    alg = malg . bimap s id

Here’s this process in more detail. This is the monoidal catamorphism that we are defining:

[diagram of the monoidal catamorphism]

We first apply cat, recursively, to all the children. This replaces the children with monoidal values. We also convert the payload of the node to the same monoid using the first component of Fold. We can then use the monoidal algebra to combine the payload with the results of folding the children.

Finally, we convert the result to the target type.

We have factorized the original problem in three orthogonal directions: the monoidal algebra, the Fold, and the traversal of the particular recursive data structure.

Example

Here’s a simple example. We define a bifunctor that generates a binary tree with arbitrary payload a stored at the leaves

data TreeF a r = Leaf a | Node r r

It is indeed a bifunctor

instance Bifunctor TreeF where
  bimap f g (Leaf a) = Leaf (f a)
  bimap f g (Node r r') = Node (g r) (g r')

The recursive tree is generated as its fixed point

type Tree a = Fix (TreeF a)

Here’s an example of a tree

We define two smart constructors to simplify the construction of trees

leaf :: a -> Tree a
leaf a = Fix (Leaf a)

node :: Tree a -> Tree a -> Tree a
node t t' = Fix (Node t t')

We can define a monoidal algebra for this functor. Notice that it only uses monoidal operations (we don’t even need the monoidal unit here, since values are stored in the leaves). It will therefore work for any monoid

myAlg :: MAlgebra TreeF
myAlg (Leaf m) = m
myAlg (Node m m') = m <> m'

Separately, we define a Fold whose internal monoid is Sum Int. It converts Double values to this monoid using floor, and converts the result to a String using show

myFold :: Fold Double String
myFold = Fold floor' show'
  where
    floor' :: Double -> Sum Int
    floor' = Sum . floor
    show' :: Sum Int -> String
    show' = show . getSum

This Fold has no knowledge of the data structure we’ll be traversing. It’s only interested in its payload.

Here’s a small tree containing three Doubles

myTree :: Tree Double
myTree = node (node (leaf 2.3) (leaf 10.3)) (leaf 1.1)


We can monoidally fold this tree and display the resulting String
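A minimal sketch of such a call (my own code, using the definitions above):

main :: IO ()
main = putStrLn (cat myAlg myFold myTree)
-- prints "13" (floor 2.3 + floor 10.3 + floor 1.1 = 13)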

Notice that we can use the same monoidal catamorphism with any monoidal algebra and any Fold.

The following pragmas were used in this program

{-# language ExistentialQuantification #-}
{-# language RankNTypes #-}
{-# language FlexibleInstances #-}
{-# language IncoherentInstances #-}

Relation to Optics

A Fold can be seen as a form of optic. It takes a source type, extracts a monoidal value from it, and maps a monoidal value to the target type; all the while keeping the monoid existential. Existential types are represented in category theory as coends—here we are dealing with a coend over the category of monoids \mathbf{Mon}(\mathbf{C}) in some monoidal category \mathbf C. There is an obvious forgetful functor U that forgets the monoidal structure and produces an object of \mathbf C. Here’s the categorical formula that corresponds to Fold

\int^{m \in Mon(C)} C(s, U m)\times C(U m, t)

This coend is taken over a profunctor in the category of monoids

P n m = C(s, U m) \times C(U n, t)

The coend is defined as a disjoint union of sets P m m in which we identify some of the elements. Given a monoid homomorphism f \colon m \to n, and a pair of morphisms

u \colon s \to U m

v \colon U n \to t

we identify the pairs

((U f) \circ u, v) \sim (u, v \circ (U f))

This is exactly what we need to make our monoidal catamorphism work. This condition ensures that the following two scenarios are equivalent:

  • Use the function u to extract monoidal values, transform these values to another monoid using f, do the folding in the second monoid, and translate the result using v
  • Use the function u to extract monoidal values, do the folding in the first monoid, use f to transform the result to the second monoid, and translate the result using v

Since the monoidal catamorphism only uses monoidal operations and f is a monoid homomorphism, this condition is automatically satisfied.

June 11, 2020

Previously we discussed ninth chords, which are the first in a series of extension chords. Extensions are the notes that go beyond the first octave. Since we build chords by stacking thirds on top of each other, the next logical step, after the ninth chord, is the eleventh and the thirteenth chords. And that’s it: there is no fifteenth chord, because the fifteenth would be the same as the root (albeit two octaves higher).

This strange musical arithmetic is best understood if we translate all intervals into their semitone equivalents in equal temperament. Since we started by constructing the E major chord, let’s work with the E major scale, which consists of the following notes:

|E |  |F#|  |G#|A  |  |B |  |C#|  |D#|E |

Let’s chart the chord tones taking E as the root.

[chart of the chord tones with E as the root]

We see the clash of several naming conventions. Letter names have their origin in the major diatonic scale, as implemented by the white keys on the piano starting from C.

|C |  |D |  |E |F |  |G |  |A |  |B |C |

They go in alphabetical order, wrapping around after G. On the guitar we don’t have white and black keys, so this convention seems rather arbitrary.

The names of intervals (here, marked by digits, with occasional accidental symbols) are also based on the diatonic scale. They essentially count the number of letters from the root (including the root). So the distance from E to B is 5, because you count E, F, G, A, B — five letters. For a mathematician this convention makes little sense, but it is what it is.

After 12 semitones, we wrap around, as far as note names are concerned. With intervals the situation is a bit more nuanced. The ninth can be, conceptually, identified with the second; the eleventh with the fourth; and the thirteenth with the sixth. But how we name the intervals depends on their harmonic function. For instance, the same note, C#, is called the sixth in the E6 chord, and the thirteenth in E13. The difference is that E13 also contains the (dominant) seventh and the ninth.

[E6 chord diagram]
[E13 chord diagram]

A full thirteenth chord contains seven notes (root, third, fifth, seventh, ninth, eleventh, and thirteenth), so it cannot possibly be voiced on a six-string guitar. We usually drop the eleventh (as you can see above). The ninth and the fifth can be omitted as well. The root is very important, since it defines the chord, but when you’re playing in a band, it can be taken over by the bass instrument. The third is important because it distinguishes between major and minor modes (but then again, you have power chords that skip the third). The seventh is somewhat important in defining the dominant role of the chord.

Notice that a thirteenth chord can be seen as two separate chords on top of each other. E13 can be decomposed into E7 with F#m on top (try to spot these two shapes in this grip). Seen this way, the major/minor clash is another argument to either drop the eleventh (which serves as the minor third of F#m) or sharp it.

Alternatively, one could decompose E13 into E with DΔ7 on top. The latter shape is also easily recognized in this grip.

I decided against listing eleventh chords because they are awkward to voice on the guitar and because they are rarely used. Thirteenth chords are more frequent, especially in jazz. You’ve seen E13, here’s G13:

[G13 chord diagram]

It skips the 11th and the 5th; and the 9th at the top is optional.

The Role of Harmonics

It might be worth explaining why omitting the fifth in G13 doesn’t change the character of the chord. The reason is that, when you play the root note, you are also producing harmonics. One of the strongest harmonics is the fifth, more precisely, the fifth over the octave. So, even if you don’t voice it, you can hear it. In fact, a lot of the quality of a given chord voicing depends on the way the harmonics interact with each other, especially in the bass. When you strum the E chord on the guitar, you get a strong root sound E, and the B on the next thickest string amplifies its harmonic fifth. Compare this with the G shape, which also starts with the root, but the next string voices the third, B, which sounds okay, but not great, so some people mute it.

Inverted chords, even though they contain the same notes (up to octave equivalence) may sound dissonant, depending on the context (in particular, voice leading in the bass). This is why we don’t usually play the lowest string in C and A shapes, or the two lowest strings in the D shape.

In the C shape, the third in the bass clashes with the root and is usually muted. That’s because the strongest harmonic of E is B, which makes C/E sound like CΔ7.

[C chord diagram]

On the other hand, when you play the CΔ7 chord, the E in the bass sounds great, for exactly the same reason.
[CΔ7 chord diagram]

You can also play C with the fifth in the bass, as C/G, and it sounds good, probably because the harmonic D of G gives it the ninth flavor. This harmonic is an octave and a fifth above G, so it corresponds to the D that would be voiced on the third fret of the B string.
[C/G chord diagram]

The same reasoning doesn’t quite work for the A shape. Firstly, because all four lower strings in A/E voice the very strong power chord (two of them open strings) drowning out the following third. Also, the fifth above E is the B that’s just two semitones below the third C# voiced on the B string. (Theoretically, C/G has a third doubled on the thinnest string but that doesn’t seem to clash as badly with the D harmonic of G. Again, the ear beats theory!)

[A/E chord diagram]

Next: Altered chords.

June 1, 2020

We have already discussed several kinds of seventh chords. But if you can extend the chord by adding a third above it, why not top it with yet another third? This way we arrive at the ninth chord. But a ninth is one whole step above the octave. So far we’ve been identifying notes that cross the octave with their counterparts that are 12 semitones lower. A mathematician would say that we are doing arithmetic modulo 12. But this is just a useful fiction. A lot of things in music theory can be motivated using modular arithmetic, but ultimately, we have to admit that if something doesn’t sound right, it’s not right.

A ninth is 14 semitones above the root (if you don’t flat or sharp it), so it should be identified with the second, which is 2 semitones up from the root. That puts it smack in the middle between the root and the third: a pretty jarring dissonance. We’ve seen a second used in a chord before, but it was playing the role of a suspended third. In a ninth chord, you keep the third, and move the second to the next octave, where it becomes a ninth and cannot do as much damage. Instead it provides color and tension, making things more interesting.

To construct E9, we start with E7. It has the root duplicated on the thinnest string, so it’s easy to raise it by two semitones to produce the ninth.

[E9 chord diagram]

There are many variations of the ninth chord. There is a minor version, with the third lowered; the seventh can be raised to a major seventh; and the ninth itself can be flatted or sharped. We won’t cover all these.

Following the same pattern, C9 can be constructed from C7 by raising the root by two semitones.

[C9 chord diagram]

We get a highly movable shape, especially if we put the fifth on the thinnest string. In particular, it can be moved one fret towards the nut to produce B9–a slight modification of the B7 grip we’ve seen before.

If you look carefully at this shape, you might recognize parts of Gm in it (the three thinnest strings). This is no coincidence. The fifth, the seventh, and the ninth of any ninth chord form a minor triad.

Here is the E9 grip obtained by transposing C9 down the fretboard. It’s used a lot in funk:

[E9 chord diagram]

The same chord with a sharped ninth is called the Hendrix chord, after Jimi Hendrix who popularized it:

[E9 with sharped ninth (Hendrix chord) diagram]

The E9 shape is not only movable, but it’s also easy to mutate. This is the minor version:

[Em9 chord diagram]

and this is the major seventh version:
[E major-seventh ninth chord diagram]

Such chords are quite common in Bossa Nova.

A9 is obtained by raising the root of A7 by two semitones:

[A9 chord diagram]

Can you spot the Dm shape raised by two frets?

Similarly, G9 is constructed from G7, and it conceals a Dm as part of it.

[G9 chord diagram]

Next: Extension chords.

May 29, 2020

Previously we talked about dominant seventh chords, which are constructed by adding a minor seventh to a chord. Adding a major seventh instead is a very “jazzy” thing. With it, you can jazz up any chord, not just the dominant.

A major seventh is one semitone below the octave, so it forms a highly dissonant minor second (a single semitone) against it. This adds a lot of tension but, unlike the dominant seventh, the major seventh doesn’t have an obvious resolution, so it provides an element of excitement and unpredictability.

Major-seventh chords are usually voiced in such a way as to put distance between the seventh and the root. But you can try this slightly unusual grip, in which there is a semitone interval between the two highest strings (although the third of the triad is missing, so it’s a variation of a power chord).

[E5maj7 chord diagram]

The notation for major-seventh chords varies–in jazz, the major seventh is often marked with a triangle, as in \Delta 7. It’s also common to see Maj in front of 7.

You may think of major-seventh chords as constructed either by lowering the root by a semitone, or raising the seventh of the corresponding dominant seventh chord.

Here’s the E major-seventh grip, together with its less common minor version:

[Emaj7 chord diagram]
[Em(maj7) chord diagram]

When transposing these chords down the fretboard, we often skip the fifth in the bass as well as the root on the highest string. We either mute these strings or finger-pick the remaining four strings. Here’s the G major-seventh chord constructed this way:

[Gmaj7 chord diagram]

You might be wondering at the resemblance of this grip to A minor. This is no coincidence–the major-seventh chord contains a minor triad. Check this out: there is a minor third between 3 and 5, and a major third between 5 and \Delta 7. In fact, every four-note chord contains two triads (the dominant seventh chord contained a diminished triad built inside a tritone, and the minor major-seventh chord contains an augmented triad).

Here are, similarly constructed, major-seventh versions of A chords. They are also easy to transpose down the fretboard. (Can you spot a flatted Dm shape in the first one?)

[Amaj7 chord diagram]
[Am(maj7) chord diagram]

And these are the D chords:

[Dmaj7 chord diagram]
[Dm(maj7) chord diagram]

C major-seventh is an odd one (that’s because there is an open string between the minor seventh and the root), but it’s very easy to grip:

[Cmaj7 chord diagram]

If you squint hard enough, you can see the elements of E minor in it.

Here’s the open-string version of G major-seventh:

[Gmaj7 open-string chord diagram]

Squint again, and you can see the elements of B minor.

Next time: Adding the ninth.

