Fixed size containers - Haskell

Welcome to the Functional Programming Zulip Chat Archive. You can join the chat here.

Asad Saeeduddin

Can anyone think of a type for which we can lawfully implement the following, but for which the adjective "fixed size" seems unnatural?

zip :: Applicative f => (f a, f b) -> f (a, b)
zip = uncurry $ liftA2 (,)

husk :: Applicative f => () -> f ()
husk = pure

unzip :: Functor f => f (a, b) -> (f a, f b)
unzip fab = (fst <$> fab, snd <$> fab)

unhusk :: Functor f => f () -> ()
unhusk = const ()

class Applicative f => FixedSize f
-- Laws:
-- unzip . zip = id
-- zip . unzip = id
-- husk . unhusk = id
-- unhusk . husk = id -- this one is trivial, only possible function of this type is @const ()@

Some lawful examples for which the adjective does seem to be suitable are Vec (n :: Nat), (,) x, and infinite Streams
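These laws are easy to check concretely for a small container. Here is a sketch using a hypothetical two-element vector V2 (standing in for Vec 2; all names are illustrative):

```haskell
-- A hypothetical fixed-size container with exactly two elements.
data V2 a = V2 a a deriving (Eq, Show)

zipV2 :: (V2 a, V2 b) -> V2 (a, b)
zipV2 (V2 a1 a2, V2 b1 b2) = V2 (a1, b1) (a2, b2)

unzipV2 :: V2 (a, b) -> (V2 a, V2 b)
unzipV2 (V2 (a1, b1) (a2, b2)) = (V2 a1 a2, V2 b1 b2)

huskV2 :: () -> V2 ()
huskV2 () = V2 () ()

unhuskV2 :: V2 () -> ()
unhuskV2 = const ()
```

zip . unzip = id holds because no position is dropped or duplicated, and husk . unhusk = id holds because V2 () has exactly one inhabitant.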

Simon Hudon

This intrigues me. I wonder if we can prove that the size of the objects f a is the same. Actually, I wonder how to define the size of those objects. For functors, we can define a support set (informally the set of all objects that can be found in x :: f a) but stating that the support set of f a and f b have the same size won't work because of the possibility of duplicates.

Asad Saeeduddin

@Simon Hudon I was thinking about something like demanding the partial derivative of the functor with respect to its parameter to be constant. Of course what exactly partial derivative means in this context is a little bit up in the air. Do we talk about the partial derivative of the "size functions"? Do we use the zipper notion of partial derivatives?

Asad Saeeduddin

I asked a question here in case you come up with a solution: https://stackoverflow.com/questions/60596097/are-all-fixed-size-containers-strong-monoidal-functors-and-vice-versa

Asad Saeeduddin

btw I don't quite understand your support set idea. isn't the set of all elements that can be found in f a just f a (to the limited extent that types are sets)?

Simon Hudon

I like your Strange example. I think it shows that some fixed size containers are not StrongApplicative.

Simon Hudon

Indeed :) in my defense, most of the words were there

Simon Hudon

Here is formally what I mean by support, with dependent types:

support (x : f a) = ⋂ s : set a, ∃ y : f { y : a // y ∈ s }, val <$> y = x

Basically, for x : f a, it is the smallest set s such that you can construct a replica of x that contains all the same "stuff" as x, where every object of type a it contains is paired with a proof that that object is a member of s.

Simon Hudon

Note: { y : a // y ∈ s } is a dependent type "subtype", or, as I call it above, a pair of an a with the proof that that a is a member of s
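For a non-dependent intuition: when f is the list functor, the support of x :: [a] is just the set of its elements. A sketch (supportList is a made-up name):

```haskell
import qualified Data.Set as Set

-- For lists, the smallest s such that the list can be rebuilt from
-- elements of s is the set of its elements; duplicates collapse,
-- which is why comparing support sizes alone is too weak a statement.
supportList :: Ord a => [a] -> Set.Set a
supportList = Set.fromList
```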

Asad Saeeduddin

Unfortunately I'm not super familiar with this notation, so I have to ask some baby level questions. e.g. does set a mean the set of sets of elements of type a?

Simon Hudon

Almost: set a is the type of the sets of type a. : is Haskell's :: (I'm using Lean notation for my dependent types)

Simon Hudon

(also, don't worry about baby level questions)

Asad Saeeduddin

Is ⋂ an intersection of a family of sets, or is it a forall?

Simon Hudon

It is an intersection of a family of sets. When I said "the smallest set such that", that's what I was referring to

Asad Saeeduddin

does this include the empty set?

Simon Hudon

Only if it satisfies the condition. You'd have to build a value of type f { x : a // x ∈ ∅ } which you can only do if f can contain no objects.

Asad Saeeduddin

Now that I'm thinking about it maybe I'm also wrong about the writer applicative

Asad Saeeduddin

being StrongApplicative that is

Asad Saeeduddin

husk = (mempty,), so is it really true that husk . (const ()) = id?

Asad Saeeduddin

e.g. ("hello", ()) /= husk $ unhusk $ ("hello", ())

Simon Hudon

No, that doesn't hold. Consider: your monoid is [a] and you apply const () to ([1,2,3],())
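The failure is easy to check concretely; a sketch of the writer applicative over the monoid [Int] (huskW/unhuskW are illustrative names):

```haskell
-- Writer applicative (,) [Int]: husk pads with mempty = [].
huskW :: () -> ([Int], ())
huskW () = ([], ())

unhuskW :: ([Int], ()) -> ()
unhuskW = const ()

-- husk . unhusk forgets the log, so it is not the identity:
-- huskW (unhuskW ([1,2,3], ())) = ([], ()) /= ([1,2,3], ())
```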

bradrn

You gave the example on Stack Overflow of (,) x. Does this mean that isn’t StrongApplicative either? If so, that would be interesting, because I suggested that as an example of a Representable functor which isn’t StrongApplicative.

Simon Hudon

I think we could say that, for each f that satisfies StrongApplicative, there must exist a type t such that f a and t -> a are in bijection with each other. Basically, no extra data and only objects mapped to some abstract position

bradrn

@Simon Hudon That’s the definition of Representable! So if that’s true, StrongApplicative is equivalent to Representable.

Asad Saeeduddin

@bradrn Yes, my example is wrong

Asad Saeeduddin

@bradrn Do you think there's a way to implement Representable in terms of StrongApplicative and vice versa?

bradrn

@Asad Saeeduddin I’m trying to figure that out now.

bradrn

I’ve already run into a problem: I have no idea how to define the associated type Rep f given only the knowledge that f is Applicative and OpApplicative. Any ideas?

Asad Saeeduddin

I can't think of a way to do that either

Asad Saeeduddin

you could maybe take the representation to be a natural number and walk over each position

bradrn

I was thinking along the same lines, but that doesn’t really lend itself to figuring out a concrete solution.

Simon Hudon

You'd need an instance of Traversable or Foldable, and you wouldn't get an answer for infinite streams
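The counter-threading idea can be sketched with mapAccumL, which needs Traversable (positions is a made-up helper):

```haskell
import Data.Traversable (mapAccumL)

-- Number the positions of a Traversable container left to right.
-- This never finishes for an infinite stream, which is the caveat above.
positions :: Traversable f => f a -> f Int
positions = snd . mapAccumL (\i _ -> (i + 1, i)) 0
```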

Asad Saeeduddin

aren't the natural numbers infinite?

bradrn

@Simon Hudon Why would Traversable or Foldable be needed?

Simon Hudon

Otherwise, you don't have enough to pass a counter from one element to the next

Asad Saeeduddin

@bradrn I think the easier starting point might be the other direction. Maybe it is true that every Representable is StrongApplicative but not necessarily vice versa (or at least constructively vice versa)

Simon Hudon

I think that direction is easy to prove

bradrn

@Asad Saeeduddin I’ll try doing that then.

Asad Saeeduddin

Wouldn't it be fairly straightforward to convert to a function and then exploit its StrongApplicative instance?

Asad Saeeduddin

Assuming I haven't screwed that one up as well and functions really are StrongApplicative

bradrn

@Asad Saeeduddin Yes, I think that would work. Of course you’d still need to prove the laws…

bradrn

Implementations:

instance Representable f => Applicative f where
    zip (fa, fb) = tabulate $ zip (index fa, index fb)
    husk () = tabulate $ const ()

instance Representable f => OpApplicative f where
    unzip fab = let (fa, fb) = unzip (index fab) in (tabulate fa, tabulate fb)
    unhusk = const ()
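This construction can be tried out on a toy representable container, Pair a ≅ Bool -> a. The standalone definitions below are illustrative sketches, not the actual tabulate/index from the adjunctions library:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A toy representable container: Pair a is isomorphic to Bool -> a.
data Pair a = Pair a a deriving (Eq, Show, Functor)

index :: Pair a -> Bool -> a
index (Pair x y) b = if b then y else x

tabulate :: (Bool -> a) -> Pair a
tabulate f = Pair (f False) (f True)

-- zip and husk routed through the function instance, as in the chat:
zipP :: (Pair a, Pair b) -> Pair (a, b)
zipP (fa, fb) = tabulate (\r -> (index fa r, index fb r))

huskP :: () -> Pair ()
huskP () = tabulate (const ())

unzipP :: Pair (a, b) -> (Pair a, Pair b)
unzipP fab = (fst <$> fab, snd <$> fab)

unhuskP :: Pair () -> ()
unhuskP = const ()
```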
bradrn

Now I’ll see if this satisfies the laws

bradrn

unhusk . husk = id is trivial

bradrn

Wait, I think my implementation of husk is wrong. It should be husk () = tabulate $ const $ return (), I think.

Asad Saeeduddin

isn't that what you have?

bradrn

Only because I just edited it. Before it was tabulate $ const ().

Asad Saeeduddin

is return the one for the function monad?

bradrn

Oh wait, I was right in the first place. tabulate :: (Rep f -> a) -> f a, so tabulate (const ()) :: f (). I’ll edit it back.

Asad Saeeduddin

more obvious it's right if we just write husk = tabulate . husk

bradrn

That makes sense as well. So does husk = tabulate (return ()), which works a bit better with the laws for Representable.

Asad Saeeduddin

does that actually typecheck?

bradrn

I think so… for functions, return = husk = const.

Asad Saeeduddin

well the codomain of tabulate is f a, but the type of husk is () -> f ()

Asad Saeeduddin

so you need an extra parameter in there at least

Asad Saeeduddin

if you did tabulate . return or husk _ = tabulate (return ()) i think it makes sense

bradrn

Oh yes, I forgot that. But you can see that I included it in my original Applicative instance above.

bradrn

But after thinking about it, I like husk = tabulate . return much better.

bradrn

Or even husk = tabulate . husk, which you suggested earlier.

Asad Saeeduddin

If we can show that tabulate and index form not just a bijection but an applicative preserving isomorphism, we will have automatically that your instance is lawful

bradrn

I’m not quite sure I understand this…

Asad Saeeduddin

the idea is that a function can be more than just a mere function, it can preserve some structure that exists on both its domain and codomain types

Asad Saeeduddin

for example you could have a monoid homomorphism f :: M -> N for two monoids M and N, that preserves the monoid structure in the following way:

f x <> f y = f $ x <> y
f mempty = mempty
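A standard concrete example is length, viewed as a homomorphism from the list monoid to the addition monoid (len is our name for the wrapped version):

```haskell
import Data.Monoid (Sum (..))

-- length as a monoid homomorphism from ([a], ++, []) to (Sum Int, +, 0):
len :: [a] -> Sum Int
len = Sum . length

-- len (x <> y) == len x <> len y   (length distributes over ++)
-- len mempty   == mempty           (length [] == 0)
```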
bradrn

I understand now. But how would we prove that about tabulate and index?

Asad Saeeduddin

for tabulate we'd require:

zip (tabulate x, tabulate y) = tabulate $ zip (x, y)
tabulate . husk = husk
Asad Saeeduddin

for index :: f a -> (Rep f -> a), we'd require the same thing:

zip (index x, index y) = index $ zip (x, y)
index . husk = husk
bradrn

That makes sense, but I’m not sure how it would help… Recall that we constructed Applicative and OpApplicative by converting f a to r -> a with Representable, then using the (->) r instances of Applicative/OpApplicative to perform the ‘core’ computations, then use Representable to convert r -> a back to f a.

Asad Saeeduddin

I guess that doesn't really help us get anywhere, it just gives us an alternate proof burden. But assuming we can prove that we would have that the instance is lawful

Asad Saeeduddin

@bradrn Right. What I mean is because of the way you've constructed the Applicative instance out of the Representable isomorphism + the Applicative instance of (->) r, having a slightly stronger "applicative preserving" property about the Representable isomorphism's components gives us legality of the instance

bradrn

Do you have a proof that “applicative preservation” implies legality? I’d like to see how that works.

Asad Saeeduddin

Yes. Your instance is effectively this:

instance Representable f => Applicative f where
  zip = tabulate . zip . bimap index index
  husk = tabulate . husk

Let's assume that the zip and husk for functions that we're using on the RHS are lawful

Asad Saeeduddin

Now the thing about monoidal functors/applicatives is that they're actually just monoid objects in the functor category under Day convolution. And like in any monoidal category, you can have "monoid morphisms" that preserve the monoid structure

Asad Saeeduddin

here is the definition and relevant diagrams copy pasted from wikipedia

bradrn

I’m not sure I know enough category theory to understand this…

bradrn

But I do think I understand the idea of a monoid morphism. A function f :: (Monoid m, Monoid n) => m -> n is a monoid morphism iff f mempty == mempty and f (x <> y) == (f x) <> (f y), right?

Asad Saeeduddin

but that "such that" is actually bidirectional

bradrn

True. Probably ‘if and only if’ would have been better.

Asad Saeeduddin

in that if:

  • for some values mempty_n :: n and (<>_n) :: n -> n -> n
  • some monoid m
  • some monoid morphism f :: m -> n
  • it is the case that f mempty = mempty_n and f (x <> y) = f x <>_n f y

then n, mempty_n, <>_n is a monoid

Asad Saeeduddin

that's what I mean about the applicative thing. you already know (->) (Rep f) is an applicative. if you additionally show that the isomorphism tabulate :: (->) (Rep f) ~> f, index :: f ~> (->) (Rep f) preserves that applicative, you'll have that f is an applicative

bradrn

Yes, now I understand! Thank you for explaining!

bradrn

But how could we prove that tabulate and index are applicative-preserving? (And I think they need to be op-applicative preserving as well.)

Asad Saeeduddin

@TheMatten Do we have lambdabot here?

Asad Saeeduddin

@bradrn Sadly I'm not sure. What I'm hoping is that this mostly comes out of the free theorems of the types

bradrn

My implementation again for reference:

instance Representable f => Applicative f where
    zip (fa, fb) = tabulate $ zip (index fa, index fb)
    husk () = tabulate $ const ()

instance Representable f => OpApplicative f where
    unzip fab = let (fa, fb) = unzip (index fab) in (tabulate fa, tabulate fb)
    unhusk = const ()
bradrn

I’m thinking that it could potentially be easier to prove them directly from the implementation.

Asad Saeeduddin

Btw you don't need to write that OpApplicative by hand. (->) (Rep f)'s OpApplicative is no different from the trivial one for any other Functor

bradrn

Oh… somehow I missed that all Functors are OpApplicative!

Asad Saeeduddin

That said, yes if you can figure out a way to prove it by hand that could work. I don't see one off the top of my head :)

bradrn

To copy your implementation from SO:

instance Functor f => OpApplicative f
  where
  unzip fab = (fst <$> fab, snd <$> fab)
  unhusk = const ()
bradrn

husk . unhusk reduces to \x -> tabulate $ const () == \x -> tabulate $ husk (), so I don’t think there’s any way around showing that tabulate preserves applicatives.

Asad Saeeduddin

hmm. I wonder if that's just an irreducible law that you have to demand in order for your Representable to give you an Applicative, as opposed to something you can prove

bradrn

A law of Representable is tabulate . return == return. But return == husk for functions, so tabulate $ husk () == return (). Not quite sure how that helps though.

(In fact, that Representable law is quite weird: it essentially requires f to be an Applicative, but Representable only demands f to be a Functor!)

Asad Saeeduddin

Wait a sec, is the second return in tabulate . return = return of type a -> f a?

Asad Saeeduddin

well, that proves one of the two things necessary to show that tabulate preserves applicatives, right?

Asad Saeeduddin

here's what we need for tabulate again:

zip (tabulate x, tabulate y) = tabulate $ zip (x, y)
tabulate . husk = husk
bradrn

Excellent point! I can’t believe I didn’t notice that…

Asad Saeeduddin

so the second one is satisfied. where did you find this theorem about Representable?

bradrn

But the first point could be trickier. Perhaps it may be necessary to assume it as a law.

bradrn

@Asad Saeeduddin I found it listed in the representable laws here:

Asad Saeeduddin

maybe we'll find the other law somewhere in there too

bradrn
tabulate . index  ≡ id
index . tabulate  ≡ id
tabulate . return ≡ return
Asad Saeeduddin

interesting. as far as I know the concept of a representable functor doesn't really involve the third law, but it seems they just demanded a bonus law for Representable

Asad Saeeduddin

which just happened to be the one we need

bradrn

Yes, I found that to be very strange as well. Especially since not all fs have return!

Asad Saeeduddin

i think this does point to the extra laws being irreducible. you just have to demand them somewhere (whether in Representable, or somewhere in the constraints for your Representable f => Applicative f declaration)

bradrn

I think I agree as well. Do we agree then that if f is Representable and its Representable instance preserves applicatives, then that implies that f is StrongApplicative?

Asad Saeeduddin

I think it indicates at least that it is Applicative. We can add a "strong applicative preserving" requirement for the components of the isomorphism, and then we'll get that f is StrongApplicative

bradrn

Alright then. That makes sense.

bradrn

I can’t help but feel though that we haven’t made much progress on your original question: is StrongApplicative equivalent to being of ‘fixed shape’? Or, in other words, are there any StrongApplicatives which aren’t of fixed shape?

Asad Saeeduddin

I guess determining some Representable functors are StrongApplicative is still focused on the other half of the question (which in the global case we've already answered in the negative)

Asad Saeeduddin

we can maybe moderate that negated conjecture and say that those fixed shape functors that are Representable, and whose representation preserves applicatives, are StrongApplicative

Asad Saeeduddin

and then that much (I think) is true

bradrn

I agree. But I think the much more interesting question is: are all StrongApplicative functors Representable? (even if not constructively)

Asad Saeeduddin

you're right in that we still didn't make any progress on discovering any variable shape containers that are StrongApplicative (or showing that all StrongApplicative containers are of fixed shape)

Asad Saeeduddin

I guess implementing Representable from StrongApplicative would conclusively settle that question

bradrn

I don’t see any way of implementing Representable until we can figure out how to get Rep f given f is StrongApplicative.

Asad Saeeduddin

are all StrongApplicative functors Representable? (even if not constructively)

I can't begin to imagine how to answer this in the positive (especially lacking the constructivity qualifier). If it is in fact false, finding the counterexample might be easier

Asad Saeeduddin

So far what's left of the StrongApplicative list certainly seems to admit Representable instances for each item

bradrn

Honestly, I doubt there’s any easy answer to this.

bradrn

I have to go now, but I’ll keep on thinking about it — I’ll post here if I have any ideas.

bradrn

@Asad Saeeduddin I think I’ve found a proof that a StrongApplicative must be of fixed shape! I’ll write it up now.

bradrn

Proof: For a StrongApplicative, the law husk . unhusk == id must hold; that is, for any x :: f (), husk (unhusk x) == x. But in Haskell, unhusk == const (), so this law is equivalent to saying that for any x :: f (), husk () == x. By the transitivity of equality, this then means that there can only exist one distinct value of type f (): if x, y :: f (), then x == husk () and y == husk (), so x == y. But this is only possible if f has a fixed shape (if f has multiple possible shapes, then for each shape there is a corresponding value of type f ()). Thus husk . unhusk == id implies that there exists only one distinct value of type f (), and hence that f has a fixed shape.
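The proof can be seen in miniature with lists, a variable-shape functor: [()] has one value per length, so whatever husk returns, it cannot undo unhusk on all of them. A sketch:

```haskell
-- Lists are variable-shape, so [()] is not a singleton.
huskL :: () -> [()]
huskL () = [()]   -- pure for lists

unhuskL :: [()] -> ()
unhuskL = const ()

-- huskL (unhuskL [(), ()]) = [()] /= [(), ()]
```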

Asad Saeeduddin

That's very interesting. I don't quite follow the "transitivity of equality" step though. What does that mean?

bradrn

All it means is that if x == husk () and y == husk (), then x == y.

Asad Saeeduddin

hmm. so essentially you are saying if f () is a singleton, then f is of fixed shape

Lysxia

sounds like a decent enough definition

bradrn

@Asad Saeeduddin Yes, that is what I’m saying. I believe this definition captures the idea of ‘fixed shape’: () is a singleton, so if we substitute a with () in f a, then f () can only be a non-singleton if there is variation in f.

Asad Saeeduddin

it's interesting, because this ties in to representability a bit. the Yoneda lemma implies that for any functor f, f () is isomorphic to forall x. (() -> x) -> f x, which I guess decomposes to:

fwd :: (forall x. x -> f x) -> f ()
bwd :: f () -> (forall x. x -> f x)

If f () is a singleton, then bwd (unique :: f ()) gives us something that looks a lot like pure :: forall x. x -> f x
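The two halves of that decomposition can be written out directly (RankNTypes is needed for bwd's result type):

```haskell
{-# LANGUAGE RankNTypes #-}

fwd :: (forall x. x -> f x) -> f ()
fwd p = p ()

bwd :: Functor f => f () -> (forall x. x -> f x)
bwd fu x = x <$ fu
```

If f () is a singleton, bwd applied to its unique inhabitant is the only candidate for pure.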

Asad Saeeduddin

anyway I find that proof convincing @bradrn. If you add it as an answer I can accept it

bradrn

Interesting! That connection hadn’t occurred to me.

bradrn

I’ll add my proof as an answer then.

bradrn

@Asad Saeeduddin I’ve added it.

Asad Saeeduddin

I guess this connection isn't really surprising, I'm just restating that husk and unhusk form a bijection in a really roundabout way (one of the types involved being the singleton ())

bradrn

I have to go now @Asad Saeeduddin (it’s getting really late where I am), but thanks for discussing this with me — I’ve found it fascinating! I’m glad I’ve been able to help.

Asad Saeeduddin

Yup, likewise. Thanks for the help!

bradrn

You’re welcome!

Asad Saeeduddin

by composing some isomorphisms it turns out that f () being a singleton implies forall x. x -> f x being a singleton. so StrongApplicative functors are those for which you can only define pure in one way