Can anyone think of a type for which we can lawfully implement the following, but for which the adjective "fixed size" seems unnatural?
zip::Applicativef=>(fa,fb)->f(a,b)zip=uncurry$liftA2(,)husk::Applicativef=>()->f()husk=pureunzip::Functorf=>f(a,b)->(fa,fb)unzipfab=(fst<$>fab,snd<$>fab)unhusk::Functorf=>f()->()unhusk=const()classApplicativef=>FixedSizef-- Laws:-- unzip . zip = id-- zip . unzip = id-- husk . unhusk = id-- unhusk . husk = id -- this one is trivial, only possible function of this type is @const ()@
Some lawful examples for which the adjective does seem to be suitable are Vec (n :: Nat), (,) x, and infinite Streams
This intrigues me. I wonder if we can prove that the size of the objects f a is the same. Actually, I wonder how to define the size of those objects. For functors, we can define a support set (informally the set of all objects that can be found in x :: f a) but stating that the support set of f a and f b have the same size won't work because of the possibility of duplicates.
@Simon Hudon I was thinking about something like demanding the partial derivative of the functor with respect to its parameter to be constant. Of course what exactly partial derivative means in this context is a little bit up in the air. Do we talk about the partial derivative of the "size functions"? Do we use the zipper notion of partial derivatives?
The Applicative typeclass represents lax monoidal functors that preserve the cartesian monoidal structure on the category of typed functions.
In other words, given the canonical isomorphisms witne...
btw I don't quite understand your support set idea. isn't the set of all elements that can be found in f a just f a (to the limited extent that types are sets)?
Here is formally what I mean by support with dependent ypes:
support (x : f a) = ⋂ s : set a, ∃ y : f { y : a // y ∈ s }, val <$> y = x
Basically, for x : f a, it is the smallest set s such that you can construct a replica of x that contains all the same "stuff" as x and every object of type a it contains, is paired with a proof that that object is a member of s.
Unfortunately I'm not super familiar with this notation, so I have to ask some baby level questions. e.g. does set a mean the set of sets of elements of type a?
You gave the example on Stack Overflow of (,) x. Does this mean that isn’t StrongApplicative either? If so, that would be interesting, because I suggested that as an example of a Representable functor which isn’t StrongApplicative.
I think we could say that, for each f that satisfies StrongApplicative, there must exist a type t such that f a and t -> a are in bijection with each other. Basically, no extra data and only objects mapped to some abstract position
I’ve already run into a problem: I have no idea how to define the associated type Rep f given only the knowledge that f is Applicative and OpApplicative. Any ideas?
@bradrn I think the easier starting point might be the other direction. Maybe it is true that every Representable is StrongApplicative but not necessarily vice versa (or at least constructively vice versa)
instance Representable f => Applicative f where
zip (fa, fb) = tabulate $ zip (index fa, index fb)
husk () = tabulate $ const ()
instance Representable f => OpApplicative f where
unzip fab = let (fa, fb) = unzip (index fab) in (tabulate fa, tabulate fb)
unhusk = const ()
If we can show that tabulate and index form not just a bijection but an applicative preserving isomorphism, we will have automatically that your instance is lawful
That makes sense, but I’m not sure how it would help… Recall that we constructed Applicative and OpApplicative by converting f a to r -> a with Representable, then using the (->) r instances of Applicative/OpApplicative to perform the ‘core’ computations, then use Representable to convert r -> a back to f a.
I guess that doesn't really help us get anywhere, it just gives us an alternate proof burden. But assuming we can prove that we would have that the instance is lawful
@bradrn Right. What I mean is because of the way you've constructed the Applicative instance out of the Representable isomorphism + the Applicative instance of (->) r, having a slightly stronger "applicative preserving" property about the Representable isomorphism's components gives us legality of the instance
Now the thing about monoidal functors/applicatives is that they're actually just monoid objects in the functor category under Day convolution. And like in any monoidal category, you can have "monoid morphisms" that preserve the monoid structure
But I do think I understand the idea of a monoid morphism. A function f :: (Monoid m, Monoid n) => m -> n is a monoid morphism iff f mempty == mempty and f (x <> y) == (f x) <> (f y), right?
that's what I mean about the applicative thing. you already know (->) (Rep f) is an applicative. if you additionally show that the isomorphism tabulate :: (->) (Rep f) ~> f, index :: f ~> (->) (Rep f) preserves that applicative, you'll have that f is an applicative
instance Representable f => Applicative f where
zip (fa, fb) = tabulate $ zip (index fa, index fb)
husk () = tabulate $ const ()
instance Representable f => OpApplicative f where
unzip fab = let (fa, fb) = unzip (index fab) in (tabulate fa, tabulate fb)
unhusk = const ()
husk . unhusk reduces to \x -> tabulate $ const () == \x -> tabulate $ husk (), so I don’t think there’s any way around showing that tabulate preserves applicatives.
hmm. I wonder if that's just an irreducible law that you have to demand in order for your Representable to give you an Applicative, as opposed to something you can prove
A law of Representable is tabulate . return == return. But return == husk for functions, so tabulate $ husk () == return (). Not quite sure how that helps though.
(In fact, that Representable law is quite weird: it essentially requires f to be an Applicative, but Representable only demands f to be a Functor!)
interesting. as far as I know the concept of a representable functor doesn't really involve the third law, but it seems they just demanded a bonus law for Representable
i think this does point to the extra laws being irreducible. you just have to demand them somewhere (whether in Representable, or somewhere in the constraints for your Representable f => Applicative f declaration)
I think I agree as well. Do we agree then that if f is Representable and its Representable instance preserves applicatives, then that implies that f is StrongApplicative?
I think it indicates at least that it is Applicative. We can add an "strong applicative preserving" requirement for the components of the isomorphism, and then we'll get that f is StrongApplicative
I can’t help but feel though that we haven’t made much progress on your original question: is StrongApplicative equivalent to being of ‘fixed shape’? Or, in other words, are there any StrongApplicatives which aren’t of fixed shape?
I guess determining some Representable functors are StrongApplicative is still focused on the other half of the question (which in the global case we've already answered in the negative)
we can maybe moderate that negated conjecture and say that those fixed shape functors that are Representable, and whose representation preserves applicatives, are StrongApplicative
you're right in that we still didn't make any progress on discovering any variable shape containers that are StrongApplicative (or showing that all StrongApplicative containers are of fixed shape)
are all StrongApplicative functors Representable? (even if not constructively)
I can't begin to imagine how to answer this in the positive (especially lacking the constructivity qualifier). If it is in fact false finding the counterexample might be easier
Proof: For a StrongApplicative, the law husk . unhusk == id must hold; that is, for any x :: f (), husk (unhusk x) == x. But in Haskell, unhusk == const (), so this law is equivalent to saying that for any x :: f (), husk () == x. By the transitivity of equality, this then means that there can only exist one distinct value of type f (): if x, y :: f (), then x == husk () and y == husk (), so x == y. But this is only possible if f has a fixed shape (if f has multiple possible shapes, then for each shape there is a corresponding value of type f ()). Thus husk . unhusk == id implies that there exists only one distinct value of type f (), and hence that f has a fixed shape.
@Asad Saeeduddin Yes, that is what I’m saying. I believe this definition captures the idea of ‘fixed shape’: () is a singleton, so if we substitute a with () in f a, then f () can only be a non-singleton if there is variation in f.
it's interesting, because this ties in to representability a bit. the Yoneda lemma implies that for any functor f, f () is isomorphic to forall x. (() -> x) -> f x, which I guess decomposes to:
fwd :: (forall x. x -> f x) -> f ()
bwd :: f () -> (forall x. x -> f x)
If f () is a singleton, then bwd (unique :: f ()) gives us something that looks a lot like pure :: forall x. x -> f x
I guess this connection isn't really surprising, I'm just restating that husk and unhusk form a bijection in a really roundabout way (one of the types involved being the singleton ())
I have to go now @Asad Saeeduddin (it’s getting really late where I am), but thanks for discussing this with me — I’ve found it fascinating! I’m glad I’ve been able to help.
by composing some isomorphisms it turns out that f () being a singleton implies forall x. x -> f x being a singleton. so StrongApplicative functors are those for which you can only define pure in one way
Can anyone think of a type for which we can lawfully implement the following, but for which the adjective "fixed size" seems unnatural?
Some lawful examples for which the adjective does seem to be suitable are
Vec (n :: Nat)
,(,) x
, and infiniteStream
sThis intrigues me. I wonder if we can prove that the size of the objects
f a
is the same. Actually, I wonder how to define the size of those objects. For functors, we can define a support set (informally the set of all objects that can be found inx :: f a
) but stating that the support set off a
andf b
have the same size won't work because of the possibility of duplicates.@Simon Hudon I was thinking about something like demanding the partial derivative of the functor with respect to its parameter to be constant. Of course what exactly partial derivative means in this context is a little bit up in the air. Do we talk about the partial derivative of the "size functions"? Do we use the zipper notion of partial derivatives?
I asked a question here in case you come up with a solution: https://stackoverflow.com/questions/60596097/are-all-fixed-size-containers-strong-monoidal-functors-and-vice-versa
btw I don't quite understand your support set idea. isn't the set of all elements that can be found in
f a
justf a
(to the limited extent that types are sets)?I like your
Strange
example. I think it shows that some fixed size containers are notStrongApplicative
.do you mean are not?
Indeed :) in my defense, most of the words were there
Here is formally what I mean by support with dependent ypes:
Basically, for
x : f a
, it is the smallest sets
such that you can construct a replica ofx
that contains all the same "stuff" asx
and every object of typea
it contains, is paired with a proof that that object is a member ofs
.Note:
{ y : a // y ∈ s }
is a dependent type "subtype", or, as I call it above, a pair of ana
with the proof that thata
is a member ofs
Unfortunately I'm not super familiar with this notation, so I have to ask some baby level questions. e.g. does
set a
mean the set of sets of elements of typea
?Almost:
set a
is the type of the sets of typea
.:
is Haskell's::
(I'm using Lean notation for my dependent types)(also, don't worry about baby level questions)
Is
⋂
an intersection of a family of sets or is it forall?It is an intersection of a family of set. When I said "the smallest set such as", that's what I was referring to
does this include the empty set?
Only if it satisfies the condition. You'd have to build a value of type
f { x : a // x ∈ ∅ }
which you can only do iff
can contain no objects.Now that I'm thinking about it maybe I'm also wrong about the writer applicative
being
StrongApplicative
that ishusk = (mempty,)
, so is it really true thathusk . (const ()) = id
?e.g.
("hello", ()) /= husk $ unhusk $ ("hello", ())
No, that does work. Consider your monoid is
[a]
and you applyconst ()
to([1,2,3],())
indeed
You gave the example on Stack Overflow of
(,) x
. Does this mean that isn’tStrongApplicative
either? If so, that would be interesting, because I suggested that as an example of aRepresentable
functor which isn’tStrongApplicative
.I think we could say that, for each
f
that satisfiesStrongApplicative
, there must exist a typet
such thatf a
andt -> a
are in bijection with each other. Basically, no extra data and only objects mapped to some abstract position@Simon Hudon That’s the definition of
Representable
! So if that’s true,StrongApplicative
is equivalent toRepresentable
.@bradrn Yes, my example is wrong
@bradrn Do you think there's a way to implement
Representable
in terms ofStrongApplicative
and vice versa?@Asad Saeeduddin I’m trying to figure that out now.
I’ve already run into a problem: I have no idea how to define the associated type
Rep f
given only the knowledge thatf
isApplicative
andOpApplicative
. Any ideas?I can't think of a way to do that either
you could maybe take the representation to be a natural number and walk over each position
I was thinking along the same lines, but that doesn’t really lend itself to figuring out a concrete solution.
You'd need an instance of traversable or foldable and you would't get an answer for infinite streams
aren't the natural numbers infinite?
@Simon Hudon Why would
Traversable
orFoldable
needed?Otherwise, you don't have enough to pass a counter from one element to the next
@bradrn I think the easier starting point might be the other direction. Maybe it is true that every
Representable
isStrongApplicative
but not necessarily vice versa (or at least constructively vice versa)I think that direction is easy to prove
@Asad Saeeduddin I’ll try doing that then.
Wouldn't it be fairly straightforward to convert to a function and then exploit its strong applicative?
Assuming I haven't screwed that one up as well and functions really are
StrongApplicative
@Asad Saeeduddin Yes, I think that would work. Of course you’d still need to prove the laws…
Implementations:
very nice
Now I’ll see if this satisfies the laws
unhusk . husk = id
is trivialWait, I think my implementation of
husk
is wrong. It should behusk () = tabulate $ const $ return ()
, I think.isn't that what you have?
Only because I just edited it. Before it was
tabulate $ const ()
.is
return
the one for the function monad?Oh wait, I was right in the first place.
tabulate :: (Rep f -> a) -> f a
, sotabulate (const ()) :: f ()
. I’ll edit it back.more obvious it's right if we just write
husk = tabulate . husk
That makes sense as well. So does
husk = tabulate (return ())
, which works a bit better with the laws forRepresentable
.does that actually typecheck?
I think so… for functions,
return = husk = const
.well the codomain of
tabulate
isf a
, but the type ofhusk
is() -> f ()
so you need an extra parameter in there at least
if you did
tabulate . return
orhusk _ = tabulate (return ())
i think it makes senseOh yes, I forgot that. But you can see that I included it in my original
Applicative
instance above.But after thinking about it, I like
husk = tabulate . return
much better.Or even
husk = tabulate . husk
, which you suggested earlier.If we can show that
tabulate
andindex
form not just a bijection but an applicative preserving isomorphism, we will have automatically that your instance is lawfulI’m not quite sure I understand this…
the idea is that a function can be more than just a mere function, it can preserve some structure that exists on both its domain and codomain types
for example you could have a monoid homomorphism
f :: M -> N
for two monoidsM
andN
, that preserves the monoid structure in the following way:I understand now. But how would we prove that about
tabulate
andindex
?for
tabulate
we'd require:for
index :: f a -> (Rep f -> a)
, we'd require the same thing:That makes sense, but I’m not sure how it would help… Recall that we constructed
Applicative
andOpApplicative
by convertingf a
tor -> a
withRepresentable
, then using the(->) r
instances ofApplicative
/OpApplicative
to perform the ‘core’ computations, then useRepresentable
to convertr -> a
back tof a
.I guess that doesn't really help us get anywhere, it just gives us an alternate proof burden. But assuming we can prove that we would have that the instance is lawful
@bradrn Right. What I mean is because of the way you've constructed the
Applicative
instance out of theRepresentable
isomorphism + theApplicative
instance of(->) r
, having a slightly stronger "applicative preserving" property about theRepresentable
isomorphism's components gives us legality of the instanceDo you have a proof that “applicative preservation” implies legality? I’d like to see how that works.
Yes. Your instance is effectively this:
Let's assume that the the
zip
andhusk
for functions that we're using on the RHS is legalNow the thing about monoidal functors/applicatives is that they're actually just monoid objects in the functor category under Day convolution. And like in any monoidal category, you can have "monoid morphisms" that preserve the monoid structure
here is the definition and relevant diagrams copy pasted from wikipedia
image.png
I’m not sure I know enough category theory to understand this…
But I do think I understand the idea of a monoid morphism. A function
f :: (Monoid m, Monoid n) => m -> n
is a monoid morphism ifff mempty == mempty
andf (x <> y) == (f x) <> (f y)
, right?exactly
but that "such that" is actually bidirectional
True. Probably ‘if and only if’ would have been better.
in that if:
mempty_n :: n
and(<>_n) :: n -> n -> n
m
f :: m -> n
f mempty = mempty_n
andf (x <> y) = f x <>_n f y
then
n, mempty_n, <>_n
is a monoidthat's what I mean about the applicative thing. you already know
(->) (Rep f)
is an applicative. if you additionally show that the isomorphismtabulate :: (->) (Rep f) ~> f
,index :: f ~> (->) (Rep f)
preserves that applicative, you'll have thatf
is an applicativeYes, now I understand! Thank you for explaining!
But how could we prove that
tabulate
andindex
are applicative-preserving? (And I think they need to be op-applicative preserving as well.)@TheMatten Do we have lambdabot here?
@bradrn Sadly I'm not sure. What I'm hoping is that this mostly comes out of the free theorems of the types
My implementation again for reference:
I’m thinking that it could potentially be easier to prove them directly from the implementation.
Btw you don't need to write that
OpApplicative
by hand.(->) (Rep f)
'sOpApplicative
is no different from the trivial one for any otherFunctor
Oh… somehow I missed that all
Functor
s areOpApplicative
!That said, yes if you can figure out a way to prove it by hand that could work. I don't see one off the top of my head :)
To copy your implementation from SO:
husk . unhusk
reduces to\x -> tabulate $ const () == \x -> tabulate $ husk ()
, so I don’t think there’s any way around showing thattabulate
preserves applicatives.hmm. I wonder if that's just an irreducible law that you have to demand in order for your
Representable
to give you anApplicative
, as opposed to something you can proveA law of
Representable
istabulate . return == return
. Butreturn == husk
for functions, sotabulate $ husk () == return ()
. Not quite sure how that helps though.(In fact, that
Representable
law is quite weird: it essentially requiresf
to be anApplicative
, butRepresentable
only demandsf
to be aFunctor
!)Wait a sec, is the second
return
intabulate . return = return
of typea -> f a
?Yes
well, that proves 1/2 things necessary to show that
tabulate
preserves applicatives, right?here's what we need for
tabulate
again:Excellent point! I can’t believe I didn’t notice that…
so the second one is satisfied. where did you find this theorem about
Representable
?But the first point could be tricker. Perhaps it may be necessary to assume it as a law.
@Asad Saeeduddin I found it listed in the representable laws here:
maybe we'll find the other law somewhere in there too
interesting. as far as I know the concept of a representable functor doesn't really involve the third law, but it seems they just demanded a bonus law for
Representable
which just happened to be the one we need
Yes, I found that to be very strange as well. Especially since not all
f
s havereturn
!i think this does point to the extra laws being irreducible. you just have to demand them somewhere (whether in
Representable
, or somewhere in the constraints for yourRepresentable f => Applicative f
declaration)I think I agree as well. Do we agree then that if
f
isRepresentable
and itsRepresentable
instance preserves applicatives, then that implies thatf
isStrongApplicative
?I think it indicates at least that it is
Applicative
. We can add an "strong applicative preserving" requirement for the components of the isomorphism, and then we'll get thatf
isStrongApplicative
Alright then. That makes sense.
I can’t help but feel though that we haven’t made much progress on your original question: is
StrongApplicative
equivalent to being of ‘fixed shape’? Or, in other words, are there anyStrongApplicative
s which aren’t of fixed shape?I guess determining some
Representable
functors areStrongApplicative
is still focused on the other half of the question (which in the global case we've already answered in the negative)we can maybe moderate that negated conjecture and say that those fixed shape functors that are
Representable
, and whose representation preserves applicatives, areStrongApplicative
and then that much (I think) is true
I agree. But I think the much more interesting question is: are all
StrongApplicative
functorsRepresentable
? (even if not constructively)you're right in that we still didn't make any progress on discovering any variable shape containers that are
StrongApplicative
(or showing that allStrongApplicative
containers are of fixed shape)I guess implementing
Representable
fromStrongApplicative
would conclusively settle that questionI don’t see any way of implementing
Representable
until we can figure out how to getRep f
givenf
isStrongApplicative
.I can't begin to imagine how to answer this in the positive (especially lacking the constructivity qualifier). If it is in fact false finding the counterexample might be easier
So far what's left of the
StrongApplicative
list certainly seems to admitRepresentable
instances for each itemHonestly, I doubt there’s any easy answer to this.
I have to go now, but I’ll keep on thinking about it — I’ll post here if I have any ideas.
@Asad Saeeduddin I think I’ve found a proof that a
StrongApplicative
must be of fixed shape! I’ll write it up now.Proof: For a
StrongApplicative
, the lawhusk . unhusk == id
must hold; that is, for anyx :: f ()
,husk (unhusk x) == x
. But in Haskell,unhusk == const ()
, so this law is equivalent to saying that for anyx :: f ()
,husk () == x
. By the transitivity of equality, this then means that there can only exist one distinct value of typef ()
: ifx, y :: f ()
, thenx == husk ()
andy == husk ()
, sox == y
. But this is only possible iff
has a fixed shape (iff
has multiple possible shapes, then for each shape there is a corresponding value of typef ()
). Thushusk . unhusk == id
implies that there exists only one distinct value of typef ()
, and hence thatf
has a fixed shape.That's very interesting. I don't quite follow the "transitivity of equality" step though. What does that mean?
All it means is that if
x == husk ()
andy == husk ()
, thenx == y
.ah, ok
hmm. so essentially you are saying if
f ()
is a singleton, thenf
is of fixed shapesounds like a decent enough definition
@Asad Saeeduddin Yes, that is what I’m saying. I believe this definition captures the idea of ‘fixed shape’:
()
is a singleton, so if we substitutea
with()
inf a
, thenf ()
can only be a non-singleton if there is variation inf
.it's interesting, because this ties in to representability a bit. the Yoneda lemma implies that for any functor
f
,f ()
is isomorphic toforall x. (() -> x) -> f x
, which I guess decomposes to:If
f ()
is a singleton, thenbwd (unique :: f ())
gives us something that looks a lot likepure :: forall x. x -> f x
anyway I find that proof convincing @bradrn. If you add it as an answer I can accept it
Interesting! That connection hadn’t occurred to me.
I’ll add my proof as an answer then.
@Asad Saeeduddin I’ve added it.
I guess this connection isn't really surprising, I'm just restating that
husk
andunhusk
form a bijection in a really roundabout way (one of the types involved being the singleton()
)I have to go now @Asad Saeeduddin (it’s getting really late where I am), but thanks for discussing this with me — I’ve found it fascinating! I’m glad I’ve been able to help.
Yup, likewise. Thanks for the help!
You’re welcome!
by composing some isomorphisms it turns out that
f ()
being a singleton impliesforall x. x -> f x
being a singleton. soStrongApplicative
functors are those for which you can only definepure
in one way