sheaf, presheaf - Category Theory

Vincent L

Is there a simple explanation of what a presheaf and a sheaf are, with some examples?

Vincent L

I see the term used a lot but the wikipedia page on presheaf is a bit obscure for me atm

Fintan Halpenny

One example of a sheaf in 7 Sketches is the idea of a database instance. Your category describes the table(s): objects are columns and arrows are relations between those columns. The sheaf is the functor from that category to Set. So it describes how those columns get filled with actual data, and relations between data. You can then do things like describe migrations using adjoint functors.
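
As a very toy Haskell sketch of that picture (all the names here are invented for illustration): the schema gives us objects and an arrow, and an instance fills them with actual rows and an actual function:

-- Toy "schema": two objects (Employee, Dept) and one arrow between them.
-- The instance functor assigns a set of rows to each object...
data Employee = Alice | Bob           deriving (Show, Eq)
data Dept     = Engineering | Sales   deriving (Show, Eq)

-- ...and an actual function to the schema arrow Employee -> Dept.
worksIn :: Employee -> Dept
worksIn Alice = Engineering
worksIn Bob   = Sales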

Reed Mullanix

Sheaves are a bit hard to motivate from a programmer perspective, but I'll do my very best!

I find it's best to start with spaces, as this is where the ideas first originated from. How comfortable are you with the idea of a topological space?

Vincent L

I'm OK with the idea. At least the definitions of open, closed, interior, closure, and continuous function

Vincent L

I know compactness and connectedness but I'm less comfortable with them. Also I've heard of prefilters but never really used them

Reed Mullanix

Great! We really only need open sets/continuous functions for this :)

Reed Mullanix

So let's consider some topological space $X$, and a pair of open sets $U, V$ such that $U \subseteq V$. One question we might want to ask is "what is the set of functions that are continuous on some open set?". Let's denote this set as $F(U)$ for $U$, and $F(V)$ for $V$.

Interestingly enough, as $U \subseteq V$, $F(V) \subseteq F(U)$. That is to say, as the open set gets smaller, there are _more_ functions that are continuous on it! To see this, notice that if a function $f$ is continuous on $V$, then it _has_ to be continuous on $U$.
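
In symbols: the inclusion $U \subseteq V$ induces a restriction map

$$\mathrm{res}_{V,U} : F(V) \to F(U), \qquad f \mapsto f|_U,$$

and the "$F(V) \subseteq F(U)$" above is shorthand for restricting along this map.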

Reed Mullanix

Now, for the massive leap in generalization: the open sets of a space form a partially ordered set, and therefore also a category. When we start seeing things this way, we may also notice that the "set of continuous functions on an open set" map becomes a functor, but a bit of an odd one. It flips the arrows around, i.e. it is contravariant.

When we use this more generalized definition, we start to see contravariant functors into the category of sets absolutely everywhere. They come up so often that we have given them a name: presheaves.
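
To see the shape in (hand-wavy) Haskell, here's a rough sketch; the record and its field names are mine, just to illustrate:

-- A presheaf on a poset of opens, as a record of data. 'sections u'
-- plays the role of F(U); 'restrict v u' is the action on the
-- inclusion u <= v, i.e. a map F(V) -> F(U). Functoriality says
-- restrict u u = id, and restrict w u = restrict v u . restrict w v
-- whenever u <= v <= w (note the flipped composition order).
data Presheaf open s = Presheaf
  { sections :: open -> [s]               -- F(U), modelled as a list
  , restrict :: open -> open -> (s -> s)  -- given u <= v: F(V) -> F(U)
  }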

Georgi Lyubenov // googleson78

if I maybe butt in - what's the intuition behind the "(pre)sheaf" name?

Torsten Schmits

I think I saw that one on twitter and nobody knew

Reed Mullanix

Now, to see what a sheaf is, let's move back to our topological space. Notice that if we have a bunch of open sets $U_i$ that cover $V$ (i.e. $V \subseteq \bigcup U_i$), and two continuous functions $f, g \in F(V)$ are equal on each of the $U_i$, then they are equal on $V$
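
Written out, this first condition (often called locality, or separation) says:

$$\text{if } f|_{U_i} = g|_{U_i} \text{ for every } i, \text{ then } f = g.$$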

Reed Mullanix

And IIRC sheaves came first, so a presheaf is like a sheaf w/o the extra conditions

Reed Mullanix

Anyways, intuitively, the idea here is that if two things are the same on a bunch of small regions, then they agree on a larger one

Reed Mullanix

The second condition is a bit more involved, but the idea is pretty straightforward when presented without any symbols. The basic idea is that we can take a bunch of continuous functions defined on an open cover $U_i$ of $V$, and turn them into a continuous function on $V$ if they agree on the overlaps of each of the $U_i$.
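
In symbols, this gluing condition says roughly: given sections $s_i \in F(U_i)$ that agree on overlaps,

$$s_i|_{U_i \cap U_j} = s_j|_{U_i \cap U_j} \text{ for all } i, j,$$

there is a unique $s \in F(V)$ with $s|_{U_i} = s_i$ for each $i$.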

The wikipedia page has the full definition in all of the gory details, but this idea is what we are trying to encode

Reed Mullanix

Now, let's zoom out a bit, and replace our "set of continuous functions" thing with some functor $F : \mathcal{O}^{op} \to Set$ ($\mathcal{O}$ is the category of open sets here, and the $op$ just tells us the functor is contravariant). The two conditions listed still make sense, and we call such a functor a sheaf. So we might have the sheaf of continuous functions, the sheaf of differentiable functions, as well as some more exotic sheaves that I won't go into

Reed Mullanix

But all of this is just machinery that is used to capture this idea of "locally defined things that we can take apart and glue together"

Georgi Lyubenov // googleson78

this sounds an awful lot like what you do to prove the compactness theorem (for FOL) using ultrafilters... or maybe I just don't have enough topology/CT background to tell the difference

Reed Mullanix

Well it turns out that logic and topology are DEEPLY connected

Reed Mullanix

And sheaves/presheaves turn up all the time in logical contexts too. If you want to learn more, the canonical text is "Sheaves in Geometry and Logic" by Mac Lane and Moerdijk

Reed Mullanix

There are also some really cool ways to think about set-theoretic forcing from this perspective as well :smile: (the book I mentioned before covers this)

Reed Mullanix

Hopefully this all makes some semblance of sense though. I've been planning to write a blog post introducing sheaves for a while now, so this is more or less an unstructured brain dump

Fintan Halpenny

> Interestingly enough, as $U \subseteq V$, $F(V) \subseteq F(U)$. That is to say, as the open set gets smaller, there are _more_ functions that are continuous on it! To see this, notice that if a function $f$ is continuous on $V$, then it _has_ to be continuous on $U$.

Is there any intuition behind this? I'm still grokking sheaves and mostly followed until this point

Torsten Schmits

continuity is a property of an interval. Like for $y = 1/x$: you have continuity on any interval as long as it doesn't contain $0$

Reed Mullanix

So I think in this one I have $V \subseteq U$. Any cont. function on $U$ has to be cont. on $V$, as it is smaller (i.e. any discontinuities in $V$ would automatically make it discontinuous on $U$)! However, a function may have discontinuities only outside of $V$, which makes it cont. on $V$ but not on $U$.

Reed Mullanix

The more high-level intuition is that for smaller things, there are fewer places for things to go wrong. Another way of thinking about it is that "constraints liberate": the more constrained a thing is, the more we can say about it!

Fintan Halpenny

I think the intuition is there but my understanding isn't :big_smile: I'll stare at this a little longer

Reed Mullanix

No worries! If you have any questions feel free to ping me :smile:

Vincent L

Thanks for the explanations!! It really helps

Vincent L

Are there "famous" sheaves outside of continuous/differential/...? I mean differential functions shares a lot with continuous functions. Is there an application on Haskell for instance ?

Vincent L

I suspect that in homotopy type theory functions are sheaves with respect to equality: if two functions are equal on all subtypes of a type (well, maybe there's the question of it being a finite cover or not) then they are equal. And for the second condition, again, subtypes are always distinct, but it kind of makes sense

Reed Mullanix

Presheaves tend to show up more often, due to technical reasons, in type theory/logic. (TL;DR: the collection of sheaves doesn't itself form a sheaf.) We can start to introduce more fancy gadgetry, but this is bumping up against things I don't fully understand :smile:

However, presheaves are _abundant_. For a routine example, consider predicates on a type: a -> Bool. This is a contravariant functor, and if we wave our hands and pretend that Hask is Set, it is a presheaf. Furthermore, the Yoneda embedding gives us a very nice way of, well, embedding a category into the category of presheaves. This has some really useful practical benefits in Haskell (map fusion, giving functor instances to things that aren't functors because of Ord constraints, etc.), and other flavors of it let us get better asymptotics for list appends (Cayley's Theorem + DList), and let us reassociate our monadic binds (Codensity).
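
Concretely, the predicate example looks something like this (the newtype is just for illustration; base's Data.Functor.Contravariant ships an equivalent Predicate type):

import Data.Functor.Contravariant

-- Predicates on a: the "a -> Bool" presheaf (squinting and pretending
-- Hask is Set).
newtype Pred a = Pred { runPred :: a -> Bool }

-- Contravariance: a function b -> a pulls a predicate on a back to a
-- predicate on b, flipping the arrow's direction. contramap is exactly
-- "precompose", which is the presheaf action on arrows.
instance Contravariant Pred where
  contramap f (Pred p) = Pred (p . f)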

Vincent L

Cayley's theorem and DList?

Reed Mullanix

Yeah. I wrote a tiny blurb about the general idea behind Cayley's theorem here: https://totbwf.github.io/posts/monoids.html

However, the full story with lists requires some expanding. Imagine we have some function like this:

go :: someStuff -> [a] -> newStuff
go stuff xs =
   let newStuff = ...
   in go newStuff (xs ++ someOtherList)

Now, recall how ++ works. It walks to the end of the first list, and replaces the [] constructor with the second list.
Now, imagine we have to do that a bunch of times. This means we have to re-walk the list over and over and over again, and oops! It's quadratic.
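
(For reference, ++ is defined roughly like this in the Prelude:)

(++) :: [a] -> [a] -> [a]
[]     ++ ys = ys
(x:xs) ++ ys = x : (xs ++ ys)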

To boil the problem down further, the issue is this:

([1,2,3] ++ [4,5,6]) ++ [7,8,9]

We start by walking down to the end of [1,2,3] till we hit the nil constructor, then we attach [4,5,6]. We then have to append [7,8,9], which requires walking down the list AGAIN! This is bad.

If we could somehow reassociate things, the problem would be solved though. In our distilled example, it would look like

[1,2,3] ++ ([4,5,6] ++ [7,8,9])

In this one, we start by walking down [4,5,6] and appending on [7,8,9]. We then walk down [1,2,3] and append on [4,5,6,7,8,9] to get our final list. This one is not quadratic! Yay!

However, going back to our more "real world" example, it's not totally obvious how we would accomplish this. This is where Cayley's Theorem comes in. If we embed the monoid [a] into the monoid [a] -> [a], then our appends magically reassociate to do the right thing! It isn't totally obvious how this works, so let's walk through a teeny example, using the magic of equational reasoning:

(([1,2,3] ++) . ([4,5,6] ++)) . ([7,8,9] ++)            -- by the definition of .
(\rest -> [1,2,3] ++ ([4,5,6] ++ rest)) . ([7,8,9] ++)  -- by the definition of .
(\rest -> [1,2,3] ++ ([4,5,6] ++ ([7,8,9] ++ rest)))

See how the left-associative append with . turns into a right-associative ++ under the lambda! To get the final list out, all we have to do is apply this function to []:

(\rest -> [1,2,3] ++ ([4,5,6] ++ ([7,8,9] ++ rest))) []
[1,2,3] ++ ([4,5,6] ++ ([7,8,9] ++ []))
[1,2,3] ++ ([4,5,6] ++ [7,8,9])
[1,2,3] ++ [4,5,6,7,8,9]
[1,2,3,4,5,6,7,8,9]

The type [a] -> [a] is often called a DList or "difference list", but it comes from this whole "we can embed any monoid on a into the monoid on a -> a" idea, which is Cayley's Theorem :smile:
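
As a minimal sketch, this is more or less what the dlist package's core looks like (simplified; the real DList has more API):

newtype DList a = DList ([a] -> [a])

fromList :: [a] -> DList a
fromList xs = DList (xs ++)

toList :: DList a -> [a]
toList (DList f) = f []

-- mappend is function composition, so appends reassociate "for free"
instance Semigroup (DList a) where
  DList f <> DList g = DList (f . g)

instance Monoid (DList a) where
  mempty = DList id

Note that fromList [1,2,3] <> fromList [4,5,6] just composes functions; no list gets walked until toList.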

Now, by a series of contortions that I won't totally get into, this is a realization of the Yoneda Lemma, which itself is a result about embedding a category $\mathcal{C}$ into the category of presheaves $\mathcal{C}^{op} \to Set$.

Vincent L

I wrote something about Cayley's theorem in the past http://pinkieduck.net/?p=145 but it's more about the maths part; I didn't see how it could be useful in the context of Haskell

Vincent L

as far as I understand, embedding a monoid a into a -> a can be useful if one wants to "reverse" associativity?

Vincent L

Obviously if an operator is associative it gives the same result whether it's computed with left or right associativity, but in terms of performance it may change.

Reed Mullanix

Yeah, these sorts of Yoneda-flavored embeddings reassociate mappends/fmaps/binds to the right, which can be a lot faster when your operation is O(n)!
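
For the monadic version, the shape is roughly this (patterned after Codensity from the kan-extensions package; a sketch, not the full story):

{-# LANGUAGE RankNTypes #-}

-- Codensity is to (>>=) what DList is to (++): binds against it
-- associate to the right.
newtype Codensity m a = Codensity { runCodensity :: forall b. (a -> m b) -> m b }

-- embed a monadic action (the analogue of fromList)
rep :: Monad m => m a -> Codensity m a
rep ma = Codensity (ma >>=)

-- run it (the analogue of toList)
lower :: Monad m => Codensity m a -> m a
lower (Codensity k) = k return

instance Functor (Codensity m) where
  fmap f (Codensity k) = Codensity (\c -> k (c . f))

instance Applicative (Codensity m) where
  pure a = Codensity ($ a)
  Codensity kf <*> Codensity ka = Codensity (\c -> kf (\f -> ka (c . f)))

instance Monad (Codensity m) where
  Codensity k >>= f = Codensity (\c -> k (\a -> runCodensity (f a) c))

(rep and lower are my names; kan-extensions spells them lift and lowerCodensity.)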