Does anyone know of some good example uses of the Lazy functor? I'm having a hard time wrapping my head around how it ought to be used.
It's just `Unit -> a` - so if you've ever used a nullary function instead of its result, you've used `Lazy` (in PureScript it's not real laziness with sharing, as in Haskell).

I'm familiar with using a thunk to delay computation, but lazy is different. Looking at the `a -> b` instance and the two functions in the lazy documentation: I don't understand the point of a `Unit -> a -> b`. How is that any more lazy than just `a -> b`? In both cases `b` gets computed after you give it `a`. I'm probably just looking at it completely wrong because, honestly, I don't even know what those functions are trying to do.
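To make the "`Lazy` is just `Unit -> a`" point concrete, here is a minimal JS sketch (the name `lazyVal` is made up for illustration, not library code). It also shows the lack of sharing mentioned above: forcing the thunk twice runs the body twice.

```javascript
// A `Lazy a` is observationally just a nullary function, `Unit -> a`.
// Hypothetical sketch -- `lazyVal` is a made-up name, not library code.
let evaluated = 0;
const lazyVal = () => {
  evaluated += 1;        // count how many times the body actually runs
  return 40 + 2;
};

// Nothing runs until the thunk is forced...
const x = lazyVal();     // first force: computes 42
const y = lazyVal();     // second force recomputes -- no sharing
```

After both forces, `evaluated` is 2: unlike a Haskell thunk, nothing is cached.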
Mind the position of the call to `f` - it's after applying `a`, and so `defer f` is actually less strict than `f unit` - e.g. `f unit` evaluates `<<<` right after applying `unit`, but `defer f` only after applying `a`.
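A small JS sketch of that timing difference, under the assumption that `thunk` plays the role of some `f :: Unit -> (a -> b)` whose body evaluates a composition:

```javascript
// Hypothetical sketch: `thunk` stands in for `f :: Unit -> (a -> b)`.
const log = [];
const thunk = () => {
  log.push("composed");  // stands in for evaluating `<<<`
  return (a) => a + 1;
};

// `f unit`: the thunk is forced immediately, before any `a` arrives.
const strict = thunk();

// `defer f` (per the function instance): forcing waits for the last argument.
const deferred = (a) => thunk()(a);
```

Defining `strict` already pushes to `log`; defining `deferred` pushes nothing until it is actually called.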
I mean, I guess it doesn't matter that often, but there's still a difference, and `defer` is a sort of safeguard that makes sure all the computation in the function is deferred until the application of the last argument.

So we could have one version which evaluates `g <<< h` once and then does the computations involved in computing `g <<< h $ a` every time `f a` is called, vs. another which evaluates `g <<< h`, along with the computations involved in computing `g <<< h $ a`, every time `f a` is called?
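A hypothetical reconstruction of those two versions in JS, where `compose(g)(h)` stands in for `g <<< h` and a counter tracks how often the composition itself is evaluated:

```javascript
// Hypothetical reconstruction of the two versions being compared;
// `compose(g)(h)` stands in for `g <<< h`, with a counter on it.
let composed = 0;
const compose = (g) => (h) => {
  composed += 1;                 // counts evaluations of `<<<`
  return (x) => g(h(x));
};
const g = (x) => x * 2;
const h = (x) => x + 1;

// Version 1: `f = g <<< h` -- the composition is built once, up front.
const f1 = compose(g)(h);

// Version 2: `f = defer \_ -> g <<< h` -- the composition is rebuilt on
// every call, but never before the argument arrives.
const f2 = (a) => compose(g)(h)(a);
```

Both return the same results; only the count of composition evaluations differs.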
There's no "once" in PS, only "later" - we're talking about simple functions here (JS functions at runtime), as opposed to actual laziness in Haskell, which includes sharing - a Haskell "thunk" is literally a computation that either computes its value and writes it in place, or returns the already-computed value.

So it will always evaluate `<<<` and the function it outputs; it's just that `defer` can change when the former happens.

In Haskell you can do stuff like building doubly linked lists or lazily streaming files - trying to do the same thing with `Lazy` in PS would result in infinite recursion and duplicated reading on parallel access, respectively.

Alright, so I realized I could just look at the compiled JS and decided to do that, since I wasn't getting it. First off, I think the `g <<< h` example actually was the exact same in both cases (as far as what gets computed when), but I found another simple example where it's not, and looked at what it compiles to.
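A sketch of what that example and its compiled output could have looked like - this is a reconstruction, not actual compiler output, and `bigComputation` is stubbed here for illustration:

```javascript
// Reconstruction, not actual compiler output: `bigComputation` is a stub
// for the example's `bigComputation :: Int -> Int -> Int`.
let calls = 0;
const bigComputation = (n) => {
  calls += 1;                    // count evaluations of `bigComputation 1`
  return (a) => a + n;
};

// `f = bigComputation 1` compiles to a top-level value:
// bigComputation(1) runs once, when the module is loaded.
var f = bigComputation(1);

// `e = defer \_ -> bigComputation 1` compiles to a function:
// bigComputation(1) runs on every call to `e`, but not before.
var e = function (a) { return bigComputation(1)(a); };
```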
If you look at `f` you can see what I meant by "once" (my intuition happened to be correct): `bigComputation(1)` is only executed once (to create `f`), whereas with `e` it only gets executed when `e` is called (but it does so every time).

Okay, you actually don't even need to use `let`, and it's even simpler. And I think `g <<< h` just didn't work because of a compiler optimization - it emitted `g(h(x))` instead of using `compose(g)(h)`, which would have demonstrated the difference.

Ah, I see what you meant - yeah.

I meant "once" as in on multiple accesses to the same reference.
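For contrast with that "once on multiple accesses" behavior, a Haskell-style thunk with sharing would memoize: compute on the first force, then hand back the cached value on every later access. A sketch, with a hypothetical helper name:

```javascript
// Hypothetical helper: a thunk *with* sharing, Haskell-style -- it
// computes once and writes the result in place for later accesses.
function sharedThunk(compute) {
  let done = false;
  let value;
  return () => {
    if (!done) {
      value = compute();   // first force: run the computation
      done = true;         // ...and remember the result
    }
    return value;          // later forces reuse the cached value
  };
}

let runs = 0;
const t = sharedThunk(() => { runs += 1; return 42; });
```

This is essentially what PureScript's `Data.Lazy` provides, as opposed to the bare `Unit -> a` functions discussed above.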