# sheaf, presheaf - Category Theory

Welcome to the Functional Programming Zulip Chat Archive.

Is there a simple explanation of what a presheaf and a sheaf are, with some examples? I see the terms used a lot, but the Wikipedia page on presheaves is a bit obscure for me at the moment.

One example of a sheaf in 7 Sketches is the idea of a database instance. Your category describes the table(s): objects are columns and arrows are relations between those columns. The sheaf is a functor from that category to Set, so it describes how those columns get filled with actual data, and the relations between data. You can then do things like describe migrations using adjoint functors.

Sheaves are a bit hard to motivate from a programmer's perspective, but I'll do my very best!

I find it's best to start with spaces, as this is where the ideas first originated. How comfortable are you with the idea of a topological space?

I'm OK with the idea. At least the definitions of open, closed, interior, closure, and continuous functions. I know compactness and connectedness but I'm less comfortable with them. Also I've heard of prefilters but never really used them.

So let's consider some topological space $X$, and a pair of open sets $U, V$ such that $U \subseteq V$. One question we might want to ask is: "what is the set of functions that are continuous on some open set?" Let's denote this set $F(U)$ for $U$, and $F(V)$ for $V$.

Interestingly enough, as $U \subseteq V$, we have $F(V) \subseteq F(U)$. That is to say, as the open set gets smaller, there are _more_ functions that are continuous on it! To see this, notice that if a function $f$ is continuous on $V$, then it _has_ to be continuous on $U$.

Now, for the massive leap in generalization: the open sets of a space form a partially ordered set under inclusion, and therefore also a category. When we start seeing things this way, we may also notice that the "set of continuous functions on an open set" map becomes a functor, but a bit of an odd one: it flips the arrows around, i.e., it is contravariant.
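To spell this out in symbols: whenever $U \subseteq V$, the presheaf provides a restriction map

$$\mathrm{res}_{V,U} : F(V) \to F(U), \qquad f \mapsto f|_U,$$

and functoriality says $\mathrm{res}_{V,V} = \mathrm{id}_{F(V)}$ and $\mathrm{res}_{W,U} \circ \mathrm{res}_{V,W} = \mathrm{res}_{V,U}$ whenever $U \subseteq W \subseteq V$, i.e., restricting in stages agrees with restricting in one go.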

When we use this more generalized definition, we start to see contravariant functors into the category of sets absolutely everywhere. They come up so often that we have given them a name: presheaves.

Now, to see what a sheaf is, let's move back to our topological space. Notice that if we have a bunch of open sets $U_i$ that cover $V$ (i.e., $V \subseteq \bigcup_i U_i$), and two continuous functions $f, g \in F(V)$ are equal on each of the $U_i$, then they are equal on $V$. Intuitively, the idea here is that if two things agree on a bunch of small regions, then they agree on the larger one.

The second condition is a bit more involved, but the idea is pretty straightforward when presented without any symbols. The basic idea is that we can take a bunch of continuous functions defined on an open cover $U_i$ of $V$, and glue them into a continuous function on $V$, provided they agree on the overlaps of the $U_i$.
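Writing $s|_U$ for restriction, the two conditions for an open cover $\{U_i\}$ of $V$ can be spelled out as:

$$\text{(locality)}\quad s, t \in F(V),\ s|_{U_i} = t|_{U_i} \text{ for all } i \implies s = t,$$

$$\text{(gluing)}\quad s_i \in F(U_i),\ s_i|_{U_i \cap U_j} = s_j|_{U_i \cap U_j} \text{ for all } i, j \implies \exists\, s \in F(V) \text{ with } s|_{U_i} = s_i.$$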

The Wikipedia page has the full definition in all of the gory details, but this idea is what we are trying to encode.

Now, let's zoom out a bit, and replace our "set of continuous functions" thing with some functor $F : \mathcal{O}^{op} \to Set$ (here $\mathcal{O}$ is the category of open sets, and the $op$ just tells us the functor is contravariant). The two conditions listed still make sense, and we call such a functor a sheaf. So we might have the sheaf of continuous functions, the sheaf of differentiable functions, as well as some more exotic sheaves that I won't go into. But all of this is just machinery used to capture the idea of "locally defined things that we can take apart and glue together".

This sounds awfully like what you do to prove the compactness theorem (for FOL) using ultrafilters... or maybe I just don't have enough topology/CT background to tell the difference.

Sheaves/presheaves turn up all the time in logical contexts too. If you want to learn more, the canonical text is "Sheaves in Geometry and Logic" by Mac Lane and Moerdijk. There are also some really cool ways to think about set-theoretic forcing from this perspective as well :smile: (the book I mentioned before covers this). Hopefully this all makes some semblance of sense, though. I've been planning to write a blog post introducing sheaves for a while now, so this is more or less an unstructured brain dump.

> Interestingly enough, as $U \subseteq V$, $F(V) \subseteq F(U)$. That is to say, as the open set gets smaller, there are _more_ functions that are continuous on it! To see this, notice that if a function $f$ is continuous on $V$, then it _has_ to be continuous on $U$.

Is there any intuition behind this? I'm still grokking sheaves and mostly followed until this point.

Continuity is a property of an interval: for $y = |x|$, say, you have continuity on any interval as long as it doesn't contain $0$.

So I think in this one we have $V \subseteq U$. Any continuous function on $U$ has to be continuous on $V$, as $V$ is smaller (i.e., any discontinuity in $V$ would automatically make the function discontinuous on $U$)! However, a function may have discontinuities only outside of $V$, which makes it continuous on $V$ but not on $U$. The higher-level intuition is that for smaller things, there are fewer places for things to go wrong. Another way of thinking about it is that "constraints liberate": the more constrained a thing is, the more we can say about it!

Are there "famous" sheaves outside of continuous/differentiable/...? I mean, differentiable functions share a lot with continuous functions. Is there an application in Haskell, for instance? I suspect that in homotopy type theory functions are sheaves with respect to equality: if two functions are equal on all subtypes of a type (well, maybe there's the question of whether the cover is finite or not) then they are equal, and for the second condition, again, subtypes are always distinct, but it kind of makes sense.

Presheaves tend to show up more often, due to technical reasons, in type theory/logic. (TL;DR: the collection of sheaves doesn't itself form a sheaf.) We can start to introduce more fancy gadgetry, but this is bumping up against things I don't fully understand :smile:

However, presheaves are _abundant_. For a routine example, consider predicates on a type: `a -> Bool`. This is a contravariant functor, and if we wave our hands and pretend that Hask is Set, it is a presheaf. Furthermore, the Yoneda embedding gives us a very nice way of, well, embedding a category into the category of presheaves. This has some really useful practical benefits in Haskell (map fusion, giving functor instances to things that aren't functors because of `Ord` constraints, etc.), and other flavors of it let us get better asymptotics for list appends (Cayley's theorem + `DList`), and let us reassociate our monadic binds (`Codensity`). Yeah, I wrote a tiny blurb about the general idea behind Cayley's theorem here: https://totbwf.github.io/posts/monoids.html
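As a concrete sketch of the predicate example (`Pred` and its instance are just illustrative names here, not from any particular library):

```haskell
import Data.Functor.Contravariant (Contravariant (..))

-- Predicates form a contravariant functor: from a predicate
-- on b and a map a -> b, we get a predicate on a by
-- precomposition ("pulling the predicate back along f").
newtype Pred a = Pred { runPred :: a -> Bool }

instance Contravariant Pred where
  contramap f (Pred p) = Pred (p . f)
```

For example, `contramap length (Pred even)` pulls the predicate `even` on `Int` back to a predicate on lists.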

However, the full story with list requires some expanding. Imagine we have some function like this:

```haskell
go :: SomeStuff -> [a] -> NewStuff
go stuff xs =
  let newStuff = ...
  in  go newStuff (xs ++ someOtherList)
```


Now, recall how `++` works: it walks to the end of the first list and replaces the `[]` constructor with the second list.
Now, imagine we have to do that a bunch of times. This means we have to re-walk the list over and over and over again, and... oops! It's quadratic.

To boil the problem down further, the issue is this:

```haskell
([1,2,3] ++ [4,5,6]) ++ [7,8,9]
```


We start by walking down to the end of `[1,2,3]` till we hit the nil constructor, then we attach `[4,5,6]`. We then have to append `[7,8,9]`, which requires walking down the list AGAIN! This is bad.

If we could somehow reassociate things, though, the problem would be solved. In our distilled example, it would look like this:

```haskell
[1,2,3] ++ ([4,5,6] ++ [7,8,9])
```


In this one, we start by walking down `[4,5,6]` and appending `[7,8,9]`. We then walk down `[1,2,3]` and append `[4,5,6,7,8,9]` to get our final list. This one is not quadratic! Yay!

However, going back to our more "real world" example, it's not totally obvious how we would accomplish this. This is where Cayley's theorem comes in: if we embed the monoid `[a]` into the monoid `[a] -> [a]`, then our appends magically reassociate to do the right thing! It isn't totally obvious how this works, so let's walk through a teeny example, using the magic of equational reasoning:

```haskell
(([1,2,3] ++) . ([4,5,6] ++)) . ([7,8,9] ++)            -- by the definition of (.)
(\rest -> [1,2,3] ++ ([4,5,6] ++ rest)) . ([7,8,9] ++)  -- by the definition of (.)
(\rest -> [1,2,3] ++ ([4,5,6] ++ ([7,8,9] ++ rest)))
```


See how the left-associated composition with `.` turns into a right-associated `++` under the lambda! To get the final list out, all we have to do is apply this function to `[]`:

```haskell
(\rest -> [1,2,3] ++ ([4,5,6] ++ ([7,8,9] ++ rest))) []
[1,2,3] ++ ([4,5,6] ++ ([7,8,9] ++ []))
[1,2,3] ++ ([4,5,6] ++ [7,8,9])
[1,2,3] ++ [4,5,6,7,8,9]
[1,2,3,4,5,6,7,8,9]
```


The type `[a] -> [a]` is often called `DList`, or "difference list", but it comes from this whole "we can embed any monoid on `a` into the monoid of functions `a -> a`" idea, which is Cayley's theorem :smile:
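As a minimal sketch of that embedding (using `Endo` from `Data.Monoid`; `rep` and `unrep` are just illustrative names):

```haskell
import Data.Monoid (Endo (..))

-- Cayley embedding: send x to the function (x <>), i.e.
-- "prepend x". Composing these functions corresponds to <>
-- in the original monoid, but the actual <>s only happen
-- when we finally apply the function, right-associated.
rep :: Monoid m => m -> Endo m
rep x = Endo (x <>)

-- Recover the monoid element by applying to mempty.
unrep :: Monoid m => Endo m -> m
unrep (Endo f) = f mempty
```

So `unrep (rep [1,2,3] <> rep [4,5,6] <> rep [7,8,9])` builds the list with the `++`s associated to the right, exactly as in the equational-reasoning walkthrough above.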

Now, by a series of contortions that I won't totally get into, this is a realization of the Yoneda lemma, which itself is a result about embedding a category $\mathcal{C}$ into the category of presheaves $\mathcal{C}^{op} \to Set$.

I wrote something about Cayley's theorem in the past (http://pinkieduck.net/?p=145), but it's rather about the maths part; I didn't see how it could be useful in the context of Haskell. As far as I understand, embedding a monoid `a` into `a -> a` can be useful if one wants to "reverse" associativity? Obviously, if an operator is associative it gives the same result whether it's computed via left or right associativity, but in terms of performance it may change.

Yeah, these sorts of Yoneda-flavored embeddings reassociate mappends/fmaps/binds to the right, which can be a lot faster when your operation is O(n)!
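To make the map-fusion claim concrete, here is a sketch of the usual Haskell rendering of the Yoneda embedding (the same type lives in the `kan-extensions` package as `Data.Functor.Yoneda`):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Yoneda f a is isomorphic to f a (for a Functor f), but its
-- fmap only composes plain functions, so a chain of fmaps
-- fuses into a single traversal when we finally lower it.
newtype Yoneda f a = Yoneda { runYoneda :: forall b. (a -> b) -> f b }

liftYoneda :: Functor f => f a -> Yoneda f a
liftYoneda fa = Yoneda (\k -> fmap k fa)

lowerYoneda :: Yoneda f a -> f a
lowerYoneda (Yoneda m) = m id

instance Functor (Yoneda f) where
  -- note: no fmap over f here, just function composition
  fmap g (Yoneda m) = Yoneda (\k -> m (k . g))
```

So `lowerYoneda (fmap f (fmap g (liftYoneda xs)))` walks `xs` once, applying `f . g`, instead of walking it twice.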