Sem memory leaks - Polysemy

Welcome to the Functional Programming Zulip Chat Archive.

Torsten Schmits

I've noticed something that I might be using wrong.
This:

interpretFoo ::
  Member (Embed IO) r =>
  InterpreterFor Foo r
interpretFoo s = do
  tv <- embed (newTVarIO def)
  runAtomicStateTVar tv $
    interpretFooState $
    raiseUnder $
    s

main1 :: IO ()
main1 =
  runFinal $ interpretOthers $ interpretFoo prog

appears to perform much worse than

main2 :: IO ()
main2 = do
  tv <- newTVarIO def
  runFinal $ interpretOthers $ runAtomicStateTVar tv $ interpretFooState prog

by a factor of 50!

Is that me doing it wrong or is it intended to work that way?
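
(For context, a minimal sketch of the pieces the snippets assume — the real Foo effect and interpretFooState aren't shown here, so the shapes below, including the Int state, are assumptions:)

data Foo m a where
  GetFoo :: Foo m Int
  PutFoo :: Int -> Foo m ()

makeSem ''Foo

-- reinterprets Foo in terms of AtomicState; runAtomicStateTVar then
-- discharges the AtomicState against the TVar
interpretFooState ::
  Member (AtomicState Int) r =>
  InterpreterFor Foo r
interpretFooState =
  interpret $ \case
    GetFoo -> atomicGet
    PutFoo n -> atomicPut n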

TheMatten

Maybe Core could tell us something
BTW, thanks for investigating this! It seems like all of us are busy right now - I can try to look into this next week personally

Torsten Schmits

great!
then I shall look into the Core output.

TheMatten

Use -ddump-simpl -dsuppress-coercions -dsuppress-idinfo -dsuppress-module-prefixes -dsuppress-ticks -dsuppress-timestamps -dsuppress-type-applications -dsuppress-uniques flags

TheMatten

(Everything after -ddump-simpl is just to make output cleaner :big_smile: )
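
(One way to apply them to just the module in question is a per-module pragma — -ddump-to-file here is an extra assumption, so the dump goes to a file next to the module instead of stdout:)

{-# OPTIONS_GHC -ddump-simpl -ddump-to-file
                -dsuppress-coercions -dsuppress-idinfo
                -dsuppress-module-prefixes -dsuppress-ticks
                -dsuppress-timestamps -dsuppress-type-applications
                -dsuppress-uniques #-}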

Torsten Schmits

So, I couldn't reproduce the TVar thing in a test case, but I took a glance at Applicative. This is current master, so the fix that's supposed to eliminate the performance issues is included.

Setup:

progInputMonad ::
  Member (Input (Maybe ())) r =>
  Int ->
  Sem r ()
progInputMonad limit =
  go 0
  where
    go iteration = do
      a <- input @(Maybe ())
      when (iteration < limit) $
        case a of
          Just () -> go (iteration + 1)
          Nothing -> pure ()

progInputApplicative ::
  Member (Input (Maybe ())) r =>
  Int ->
  Sem r ()
progInputApplicative limit =
  go 0
  where
    go iteration = do
      a <- input @(Maybe ())
      when (iteration < limit) $
        traverse_ (\ () -> go (iteration + 1)) a

10k iterations, printing the current memory usage every second, over about 10 seconds.
The first variant, with the explicit pattern match, uses 70 kB of memory and grows by about 300 bytes/s.
The variant that uses Applicative grows steadily by about 500 kB/s, terminating at 5 MB.
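
(Roughly how the numbers were gathered — a sketch, not the actual harness: input is assumed to be supplied with runInputConst, and the live-heap figure is read from GHC.Stats, which requires running with +RTS -T.)

import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forever)
import GHC.Stats (gc, gcdetails_live_bytes, getRTSStats)
import Polysemy (runM)
import Polysemy.Input (runInputConst)

main :: IO ()
main = do
  -- print the live heap size once per second (needs +RTS -T)
  _ <- forkIO $ forever $ do
    stats <- getRTSStats
    print (gcdetails_live_bytes (gc stats))
    threadDelay 1000000
  runM (runInputConst (Just ()) (progInputApplicative 10000))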

While traverse_ has an Applicative constraint, it doesn't show up in the Core. With traverse it does, but the performance is identical.
traverse_ also uses foldr, but traverse @Maybe just pattern matches.
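
Spelled out (a hand expansion, not compiler output — stepMonad and stepApplicative are just illustrative names):

-- traverse_ f (Just x) = foldr ((*>) . f) (pure ()) (Just x)
--                      = f x *> pure ()
--
-- so the two recursive steps reduce to:
stepMonad, stepApplicative :: Applicative f => f () -> f ()
stepMonad recurse = recurse                   -- explicit pattern match
stepApplicative recurse = recurse *> pure ()  -- via traverse_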

The Core differs like this:

              let {
                lvl3 :: m1 ()
                lvl3 = pure ww2 () } in
              ww3
                (w4 lvl)
                (\ (z :: Maybe ()) ->
                   case lvl2 of {
                     False -> lvl3;
                     True ->
                       case z of {
                         Nothing -> lvl3;
                         Just ds -> case ds of { () -> lvl1 }
                       }

traverse_:

             let {
                lvl3 :: m1 ()
                lvl3 = pure ww2 () } in
              let {
                lvl4 :: () -> m1 ()
                lvl4 = \ _ -> lvl3 } in
              ww3
                (w4 lvl)
                (\ (z :: Maybe ()) ->
                   case lvl2 of {
                     False -> lvl3;
                     True ->
                       case z of {
                         Nothing -> lvl3;
                         Just x -> ww3 (case x of { () -> lvl1 }) lvl4
                       }

So while the Applicative variant gets desugared to the same pattern match, there is another bind and two matches on (), and everything else is identical.

Any clue where the difference in the memory footprint comes from, @TheMatten? Are the ()s building up on the stack?

TheMatten

Would be interesting to see with profiling enabled

Torsten Schmits

I tried that a few times before but couldn't make any sense of it. Got any tips on how to get meaningful output for this case?

Torsten Schmits

@TheMatten :red_triangle_up:

TheMatten

You could maybe try hs-speedscope to get some readable output