Can anyone think of a type for which we can lawfully implement the following, but for which the adjective "fixed size" seems unnatural?

zip :: Applicative f => (f a, f b) -> f (a, b)
zip = uncurry $ liftA2 (,)

husk :: Applicative f => () -> f ()
husk = pure

unzip :: Functor f => f (a, b) -> (f a, f b)
unzip fab = (fst <$> fab, snd <$> fab)

unhusk :: Functor f => f () -> ()
unhusk = const ()

class Applicative f => FixedSize f
-- Laws:
-- unzip . zip = id
-- zip . unzip = id
-- husk . unhusk = id
-- unhusk . husk = id  -- trivial: the only possible function of this type is @const ()@

Some lawful examples for which the adjective does seem to be suitable are `Vec (n :: Nat)`, `(,) x`, and infinite `Stream`s
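As a concrete illustration of the infinite-stream case, here is a sketch of all four operations for a hand-rolled `Stream` type (my own example, not from the thread; names carry an `S` suffix to avoid clashing with the Prelude):

```haskell
data Stream a = Cons a (Stream a)

instance Functor Stream where
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)

-- zip two streams pointwise: infinite shape in, the same infinite shape out
zipS :: (Stream a, Stream b) -> Stream (a, b)
zipS (Cons x xs, Cons y ys) = Cons (x, y) (zipS (xs, ys))

-- husk: the unique all-unit stream
huskS :: () -> Stream ()
huskS () = units where units = Cons () units

unzipS :: Stream (a, b) -> (Stream a, Stream b)
unzipS ps = (fmap fst ps, fmap snd ps)

unhuskS :: Stream () -> ()
unhuskS = const ()
```

Since a stream always has exactly one element at every natural-number position, the zip/unzip and husk/unhusk round trips hold, matching the "fixed size" intuition.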

This intrigues me. I wonder if we can prove that the size of the objects `f a` is the same. Actually, I wonder how to define the size of those objects. For functors, we can define a support set (informally, the set of all objects that can be found in `x :: f a`), but stating that the support sets of `f a` and `f b` have the same size won't work because of the possibility of duplicates.

@Simon Hudon I was thinking about something like demanding the partial derivative of the functor with respect to its parameter to be constant. Of course what exactly partial derivative means in this context is a little bit up in the air. Do we talk about the partial derivative of the "size functions"? Do we use the zipper notion of partial derivatives?

The Applicative typeclass represents lax monoidal functors that preserve the cartesian monoidal structure on the category of typed functions.
In other words, given the canonical isomorphisms witne...

btw I don't quite understand your support set idea. isn't the set of all elements that can be found in f a just f a (to the limited extent that types are sets)?

Here is formally what I mean by support with dependent types:

support (x : f a) = ⋂ s : set a, ∃ y : f { y : a // y ∈ s }, val <$> y = x

Basically, for `x : f a`, it is the smallest set `s` such that you can construct a replica of `x` that contains all the same "stuff" as `x`, where every object of type `a` it contains is paired with a proof that that object is a member of `s`.
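For ordinary `Foldable` containers the support can be computed directly; this small sketch (mine, not from the chat) also shows why comparing support-set sizes is too weak: duplicates collapse.

```haskell
import qualified Data.Set as Set
import Data.Foldable (toList)

-- The support of a Foldable container: the set of elements it contains.
-- Duplicates collapse into a single element, so two containers holding
-- different numbers of copies can still have equal-sized supports.
support :: (Foldable f, Ord a) => f a -> Set.Set a
support = Set.fromList . toList
```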

Unfortunately I'm not super familiar with this notation, so I have to ask some baby level questions. e.g. does set a mean the set of sets of elements of type a?

You gave the example on Stack Overflow of `(,) x`. Does this mean it isn’t `StrongApplicative` either? If so, that would be interesting, because I suggested that as an example of a `Representable` functor which isn’t `StrongApplicative`.

I think we could say that, for each f that satisfies StrongApplicative, there must exist a type t such that f a and t -> a are in bijection with each other. Basically, no extra data and only objects mapped to some abstract position
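For the infinite-stream example, the witnessing type `t` is the natural numbers. A sketch of the two halves of that bijection (hypothetical `indexS`/`tabulateS` names, mirroring `index`/`tabulate` from `Data.Functor.Rep`):

```haskell
import Numeric.Natural (Natural)

data Stream a = Cons a (Stream a)

-- One direction of the bijection: a stream is a function from positions
indexS :: Stream a -> (Natural -> a)
indexS (Cons x _)  0 = x
indexS (Cons _ xs) n = indexS xs (n - 1)

-- The other direction: the stream of values at positions 0, 1, 2, ...
tabulateS :: (Natural -> a) -> Stream a
tabulateS f = go 0 where go n = Cons (f n) (go (n + 1))
```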

I’ve already run into a problem: I have no idea how to define the associated type Rep f given only the knowledge that f is Applicative and OpApplicative. Any ideas?

@bradrn I think the easier starting point might be the other direction. Maybe it is true that every Representable is StrongApplicative but not necessarily vice versa (or at least constructively vice versa)

instance Representable f => Applicative f where
  zip (fa, fb) = tabulate $ zip (index fa, index fb)
  husk () = tabulate $ const ()

instance Representable f => OpApplicative f where
  unzip fab = let (fa, fb) = unzip (index fab) in (tabulate fa, tabulate fb)
  unhusk = const ()

If we can show that tabulate and index form not just a bijection but an applicative preserving isomorphism, we will have automatically that your instance is lawful

That makes sense, but I’m not sure how it would help… Recall that we constructed `Applicative` and `OpApplicative` by converting `f a` to `r -> a` with `Representable`, then using the `(->) r` instances of `Applicative`/`OpApplicative` to perform the ‘core’ computations, then using `Representable` to convert `r -> a` back to `f a`.

I guess that doesn't really help us get anywhere, it just gives us an alternate proof burden. But assuming we can prove that, we would have that the instance is lawful

@bradrn Right. What I mean is because of the way you've constructed the Applicative instance out of the Representable isomorphism + the Applicative instance of (->) r, having a slightly stronger "applicative preserving" property about the Representable isomorphism's components gives us legality of the instance

Now the thing about monoidal functors/applicatives is that they're actually just monoid objects in the functor category under Day convolution. And like in any monoidal category, you can have "monoid morphisms" that preserve the monoid structure

But I do think I understand the idea of a monoid morphism. A function f :: (Monoid m, Monoid n) => m -> n is a monoid morphism iff f mempty == mempty and f (x <> y) == (f x) <> (f y), right?
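A concrete instance of that definition (my example, not from the thread): `length` packaged as `Sum` is a monoid morphism from the list monoid to the additive monoid on `Int`.

```haskell
import Data.Monoid (Sum (..))

-- listLen mempty == mempty, and
-- listLen (xs <> ys) == listLen xs <> listLen ys
listLen :: [a] -> Sum Int
listLen = Sum . length
```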

that's what I mean about the applicative thing. you already know (->) (Rep f) is an applicative. if you additionally show that the isomorphism tabulate :: (->) (Rep f) ~> f, index :: f ~> (->) (Rep f) preserves that applicative, you'll have that f is an applicative

instance Representable f => Applicative f where
  zip (fa, fb) = tabulate $ zip (index fa, index fb)
  husk () = tabulate $ const ()

instance Representable f => OpApplicative f where
  unzip fab = let (fa, fb) = unzip (index fab) in (tabulate fa, tabulate fb)
  unhusk = const ()
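The code spelling out the preservation requirements for `tabulate` and `index` did not survive in this transcript; following the monoid-morphism pattern discussed below, they presumably amount to (my reconstruction, writing `zipF`/`huskF` for the `(->) (Rep f)` versions):

```haskell
-- tabulate :: (Rep f -> a) -> f a must preserve the monoidal structure:
--   tabulate (huskF ())    == husk ()                       -- unit preserved
--   tabulate (zipF (f, g)) == zip (tabulate f, tabulate g)  -- zip preserved
-- and dually, index :: f a -> (Rep f -> a) must satisfy:
--   index (husk ())        == huskF ()
--   index (zip (fa, fb))   == zipF (index fa, index fb)
```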

husk . unhusk reduces to \x -> tabulate $ const () == \x -> tabulate $ husk (), so I don’t think there’s any way around showing that tabulate preserves applicatives.

hmm. I wonder if that's just an irreducible law that you have to demand in order for your Representable to give you an Applicative, as opposed to something you can prove

A law of Representable is tabulate . return == return. But return == husk for functions, so tabulate $ husk () == return (). Not quite sure how that helps though.

(In fact, that Representable law is quite weird: it essentially requires f to be an Applicative, but Representable only demands f to be a Functor!)

interesting. as far as I know the concept of a representable functor doesn't really involve the third law, but it seems they just demanded a bonus law for Representable, which just happened to be the one we need

i think this does point to the extra laws being irreducible. you just have to demand them somewhere (whether in Representable, or somewhere in the constraints for your Representable f => Applicative f declaration)

I think I agree as well. Do we agree then that if f is Representable and its Representable instance preserves applicatives, then that implies that f is StrongApplicative?

I think it indicates at least that it is Applicative. We can add a "strong applicative preserving" requirement for the components of the isomorphism, and then we'll get that f is StrongApplicative

I can’t help but feel though that we haven’t made much progress on your original question: is StrongApplicative equivalent to being of ‘fixed shape’? Or, in other words, are there any StrongApplicatives which aren’t of fixed shape?

I guess determining some Representable functors are StrongApplicative is still focused on the other half of the question (which in the global case we've already answered in the negative)

we can maybe moderate that negated conjecture and say that those fixed shape functors that are Representable, and whose representation preserves applicatives, are StrongApplicative

you're right in that we still didn't make any progress on discovering any variable shape containers that are StrongApplicative (or showing that all StrongApplicative containers are of fixed shape)

are all StrongApplicative functors Representable? (even if not constructively)

I can't begin to imagine how to answer this in the positive (especially lacking the constructivity qualifier). If it is in fact false finding the counterexample might be easier

Proof: For a StrongApplicative, the law husk . unhusk == id must hold; that is, for any x :: f (), husk (unhusk x) == x. But in Haskell, unhusk == const (), so this law is equivalent to saying that for any x :: f (), husk () == x. By the transitivity of equality, this then means that there can only exist one distinct value of type f (): if x, y :: f (), then x == husk () and y == husk (), so x == y. But this is only possible if f has a fixed shape (if f has multiple possible shapes, then for each shape there is a corresponding value of type f ()). Thus husk . unhusk == id implies that there exists only one distinct value of type f (), and hence that f has a fixed shape.
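As a concrete check of the proof's criterion (my example, not from the thread): `Maybe` has variable shape, and indeed `Maybe ()` has two distinct values, so the law fails.

```haskell
huskMaybe :: () -> Maybe ()
huskMaybe = pure

unhuskMaybe :: Maybe () -> ()
unhuskMaybe = const ()

-- husk . unhusk sends Nothing to Just (), so it is not the identity
brokenLaw :: Bool
brokenLaw = huskMaybe (unhuskMaybe Nothing) /= Nothing
```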

@Asad Saeeduddin Yes, that is what I’m saying. I believe this definition captures the idea of ‘fixed shape’: () is a singleton, so if we substitute a with () in f a, then f () can only be a non-singleton if there is variation in f.

it's interesting, because this ties in to representability a bit. the Yoneda lemma implies that for any functor f, f () is isomorphic to forall x. (() -> x) -> f x, which I guess decomposes to:

fwd :: (forall x. x -> f x) -> f ()
bwd :: f () -> (forall x. x -> f x)

If f () is a singleton, then bwd (unique :: f ()) gives us something that looks a lot like pure :: forall x. x -> f x
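Both halves can be written out with `Functor` alone (a sketch, using the `fwd`/`bwd` names above):

```haskell
{-# LANGUAGE RankNTypes #-}

fwd :: (forall x. x -> f x) -> f ()
fwd p = p ()

bwd :: Functor f => f () -> (forall x. x -> f x)
bwd fu x = x <$ fu   -- overwrite the () at every position with x
```

For example, `bwd (Just ())` behaves exactly like `pure` for `Maybe`.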

I guess this connection isn't really surprising, I'm just restating that husk and unhusk form a bijection in a really roundabout way (one of the types involved being the singleton ())

I have to go now @Asad Saeeduddin (it’s getting really late where I am), but thanks for discussing this with me — I’ve found it fascinating! I’m glad I’ve been able to help.

by composing some isomorphisms it turns out that f () being a singleton implies forall x. x -> f x being a singleton. so StrongApplicative functors are those for which you can only define pure in one way


I asked a question here in case you come up with a solution: https://stackoverflow.com/questions/60596097/are-all-fixed-size-containers-strong-monoidal-functors-and-vice-versa

I like your `Strange` example. I think it shows that some fixed size containers are not `StrongApplicative`.

do you mean are not?

Indeed :) in my defense, most of the words were there

Note: `{ y : a // y ∈ s }` is a dependent type "subtype", or, as I call it above, a pair of an `a` with the proof that that `a` is a member of `s`

Almost: `set a` is the type of the sets of type `a`. `:` is Haskell's `::` (I'm using Lean notation for my dependent types). (Also, don't worry about baby level questions.)

Is `⋂` an intersection of a family of sets, or is it forall?

It is an intersection of a family of sets. When I said "the smallest set such that", that's what I was referring to

does this include the empty set?

Only if it satisfies the condition. You'd have to build a value of type `f { x : a // x ∈ ∅ }`, which you can only do if `f` can contain no objects.

Now that I'm thinking about it, maybe I'm also wrong about the writer applicative being `StrongApplicative`. That is, `husk = (mempty,)`, so is it really true that `husk . (const ()) = id`? e.g. `("hello", ()) /= husk $ unhusk $ ("hello", ())`

No, that does work. Consider: your monoid is `[a]` and you apply `const ()` to `([1,2,3], ())`

indeed

@Simon Hudon That’s the definition of `Representable`! So if that’s true, `StrongApplicative` is equivalent to `Representable`.

@bradrn Yes, my example is wrong

@bradrn Do you think there's a way to implement `Representable` in terms of `StrongApplicative` and vice versa?

@Asad Saeeduddin I’m trying to figure that out now.

I can't think of a way to do that either

you could maybe take the representation to be a natural number and walk over each position

I was thinking along the same lines, but that doesn’t really lend itself to figuring out a concrete solution.

You'd need an instance of `Traversable` or `Foldable`, and you wouldn't get an answer for infinite streams

aren't the natural numbers infinite?

@Simon Hudon Why would `Traversable` or `Foldable` be needed?

Otherwise, you don't have enough to pass a counter from one element to the next

I think that direction is easy to prove

@Asad Saeeduddin I’ll try doing that then.

Wouldn't it be fairly straightforward to convert to a function and then exploit its strong applicative?

Assuming I haven't screwed that one up as well and functions really are `StrongApplicative`

@Asad Saeeduddin Yes, I think that would work. Of course you’d still need to prove the laws…

Implementations:

instance Representable f => Applicative f where
  zip (fa, fb) = tabulate $ zip (index fa, index fb)
  husk () = tabulate $ const ()

instance Representable f => OpApplicative f where
  unzip fab = let (fa, fb) = unzip (index fab) in (tabulate fa, tabulate fb)
  unhusk = const ()

very nice

Now I’ll see if this satisfies the laws

`unhusk . husk = id` is trivial

Wait, I think my implementation of `husk` is wrong. It should be `husk () = tabulate $ const $ return ()`, I think.

isn't that what you have?

Only because I just edited it. Before it was `tabulate $ const ()`.

is `return` the one for the function monad?

Oh wait, I was right in the first place. `tabulate :: (Rep f -> a) -> f a`, so `tabulate (const ()) :: f ()`. I’ll edit it back.

more obvious it's right if we just write `husk = tabulate . husk`

That makes sense as well. So does `husk = tabulate (return ())`, which works a bit better with the laws for `Representable`.

does that actually typecheck?

I think so… for functions, `return = husk = const`.

well the codomain of `tabulate` is `f a`, but the type of `husk` is `() -> f ()`, so you need an extra parameter in there at least

if you did `tabulate . return` or `husk _ = tabulate (return ())` i think it makes sense

Oh yes, I forgot that. But you can see that I included it in my original `Applicative` instance above.

But after thinking about it, I like `husk = tabulate . return` much better.

Or even `husk = tabulate . husk`, which you suggested earlier.

I’m not quite sure I understand this…

the idea is that a function can be more than just a mere function, it can preserve some structure that exists on both its domain and codomain types

for example you could have a monoid homomorphism `f :: M -> N` for two monoids `M` and `N`, that preserves the monoid structure in the following way:

I understand now. But how would we prove that about `tabulate` and `index`?

for `tabulate` we'd require:

for `index :: f a -> (Rep f -> a)`, we'd require the same thing:

Do you have a proof that “applicative preservation” implies legality? I’d like to see how that works.

Yes. Your instance is effectively this:

Let's assume that the `zip` and `husk` for functions that we're using on the RHS are legal

here is the definition and relevant diagrams copy pasted from wikipedia

[image.png]

I’m not sure I know enough category theory to understand this…

exactly

but that "such that" is actually bidirectional

True. Probably ‘if and only if’ would have been better.

in that if:

`mempty_n :: n` and `(<>_n) :: n -> n -> n`

`m` is a monoid

`f :: m -> n`

`f mempty = mempty_n` and `f (x <> y) = f x <>_n f y`

then `n, mempty_n, <>_n` is a monoid

Yes, now I understand! Thank you for explaining!

But how could we prove that `tabulate` and `index` are applicative-preserving? (And I think they need to be op-applicative preserving as well.)

@TheMatten Do we have lambdabot here?

@bradrn Sadly I'm not sure. What I'm hoping is that this mostly comes out of the free theorems of the types

My implementation again for reference:

I’m thinking that it could potentially be easier to prove them directly from the implementation.

Btw you don't need to write that `OpApplicative` by hand. `(->) (Rep f)`'s `OpApplicative` is no different from the trivial one for any other `Functor`

Oh… somehow I missed that all `Functor`s are `OpApplicative`!

That said, yes if you can figure out a way to prove it by hand that could work. I don't see one off the top of my head :)

To copy your implementation from SO:

Wait a sec, is the second `return` in `tabulate . return = return` of type `a -> f a`?

Yes

well, that proves 1/2 of the things necessary to show that `tabulate` preserves applicatives, right?

here's what we need for `tabulate` again:

Excellent point! I can’t believe I didn’t notice that…

so the second one is satisfied. where did you find this theorem about `Representable`?

But the first point could be trickier. Perhaps it may be necessary to assume it as a law.

@Asad Saeeduddin I found it listed in the representable laws here:

maybe we'll find the other law somewhere in there too


Yes, I found that to be very strange as well. Especially since not all `f`s have `return`!

Alright then. That makes sense.

and then that much (I think) is true

I guess implementing `Representable` from `StrongApplicative` would conclusively settle that question

I don’t see any way of implementing `Representable` until we can figure out how to get `Rep f` given `f` is `StrongApplicative`.

So far what's left of the `StrongApplicative` list certainly seems to admit `Representable` instances for each item

Honestly, I doubt there’s any easy answer to this.

I have to go now, but I’ll keep on thinking about it — I’ll post here if I have any ideas.

@Asad Saeeduddin I think I’ve found a proof that a `StrongApplicative` must be of fixed shape! I’ll write it up now.

That's very interesting. I don't quite follow the "transitivity of equality" step though. What does that mean?

All it means is that if `x == husk ()` and `y == husk ()`, then `x == y`.

ah, ok

hmm. so essentially you are saying if `f ()` is a singleton, then `f` is of fixed shape

sounds like a decent enough definition

anyway I find that proof convincing @bradrn. If you add it as an answer I can accept it

Interesting! That connection hadn’t occurred to me.

I’ll add my proof as an answer then.

@Asad Saeeduddin I’ve added it.

Yup, likewise. Thanks for the help!

You’re welcome!