Does polysemy's async from the Polysemy.Async package have the same gotchas as its equivalent in the async package itself?
The async package recommends using withAsync to avoid thread leaks. Would this be a problem with polysemy's async function?
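For context on the leak the async package warns about: withAsync guarantees the forked thread is cancelled when its scope exits. Below is a hypothetical, base-only sketch of that bracket pattern (withWorker is a made-up name, and the real withAsync also hands back an Async handle for await/poll, which this omits):

```haskell
import Control.Concurrent (forkIO, killThread, threadDelay)
import Control.Exception (bracket)
import Control.Monad (forever)
import Data.IORef

-- Hypothetical base-only sketch of the pattern withAsync is built on:
-- fork a thread, run the inner action, and always kill the thread
-- afterwards, even if the inner action throws (bracket = acquire/release).
withWorker :: IO () -> IO a -> IO a
withWorker worker inner = bracket (forkIO worker) killThread (const inner)

-- Demo: the worker increments a counter until withWorker returns;
-- afterwards the counter must stop moving, i.e. no leaked thread.
demo :: IO (Int, Int)
demo = do
  ref <- newIORef (0 :: Int)
  withWorker (forever (modifyIORef' ref (+ 1) >> threadDelay 1000))
             (threadDelay 20000)
  c1 <- readIORef ref
  threadDelay 50000
  c2 <- readIORef ref
  pure (c1, c2)
```

killThread blocks until the ThreadKilled exception is delivered, so by the time withWorker returns the worker is gone; a bare forkIO with no matching killThread is exactly the leak withAsync exists to prevent.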
What are the main issues with using withLowerToIO? I understand most of the polysemy team doesn't like it, and it has been removed in the v2 draft (as explained by googleson).
I don't doubt the team's decision on this, I simply want to understand what problems came with using the forklift effect.
I'm working on a backend server where I'm dealing with thread leaks after introducing withAsync to it.
Since I'm not sure what else to do, I tried using withLowerToIO and withAsync from the async package itself.
This cancels the threads properly now, but are there any obvious gotchas? This is my reason for asking about the issues that come with using withLowerToIO.
I'm using this as a band-aid solution for now until a proper fix comes out. Looking forward to your thoughts!
Edit: Never mind, looks like I'm already seeing another issue pop up. Still interested in why withLowerToIO is bad, other than it using an extra thread
yeah, it just wraps the base async. there are some similar combinators here: https://hackage.haskell.org/package/polysemy-conc-0.6.0.1/docs/Polysemy-Conc-Async.html
withLowerToIO depends on a global thread that is guaranteed to never be locked, which isn't an invariant we can enforce :)
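To make the "global thread" dependency concrete, here is a hypothetical, base-only sketch of the forklift idea: one dedicated thread owns execution, and everyone else ships it jobs over a channel and blocks on an MVar for the result. This is an illustration of the mechanism, not polysemy's actual code, and runOnForklift/demoForklift are made-up names:

```haskell
import Control.Concurrent (forkIO, killThread)
import Control.Concurrent.Chan
import Control.Concurrent.MVar
import Control.Monad (forever, join)

-- Hypothetical sketch of a "forklift": submit an IO action to the
-- dedicated thread and block until it hands the result back.
runOnForklift :: Chan (IO ()) -> IO a -> IO a
runOnForklift jobs act = do
  result <- newEmptyMVar
  writeChan jobs (act >>= putMVar result)
  takeMVar result  -- blocks forever if the forklift thread is stuck or dead

demoForklift :: IO Int
demoForklift = do
  jobs <- newChan
  tid  <- forkIO (forever (join (readChan jobs)))  -- the "global" thread
  r    <- runOnForklift jobs (pure 42)
  killThread tid
  pure r
```

The single point of failure is visible in the types: if the forklift thread is blocked, killed, or busy with a job that throws before putMVar runs, every caller deadlocks on takeMVar, which matches the "guaranteed to never be locked" invariant described above.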
ugh concurrency with effects is so hacky
I guess that's the result of not having(?) the ability to express thread communication in the type system
but maybe I'm just not familiar enough with the topic
yeah that would be nice
I think the answer usually isn't that you can't represent it using types, but that it would be ugly
Yesterday we ended up talking with Love about how delimited continuations could possibly be made safe in the context of an effect system with lowering, as long as their scope was tracked and lowering required the passed-in operation to be polymorphic over scope - basically, the ST monad trick on steroids
I guess you could similarly parameterize your monad (or part of your effect stack) over the scope of the surrounding async call and provide it with scoped handles for sending messages to threads
But it would be a bunch of additional type parameters to track
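For readers who haven't seen the ST trick: a phantom scope variable plus a rank-2 runner is enough to stop a handle escaping its scope. A minimal sketch, with Handle and withScope as hypothetical names (the dummy handle body stands in for real thread machinery):

```haskell
{-# LANGUAGE RankNTypes #-}

-- 's' is a phantom scope variable; the rank-2 type of withScope forces
-- the callback to work for *any* 's', so a Handle s can never escape
-- the call that created it - the same trick as Control.Monad.ST.runST.
newtype Handle s = Handle (String -> IO ())  -- e.g. "send a message to a thread"

withScope :: (forall s. Handle s -> IO a) -> IO a
withScope body = body (Handle (\_msg -> pure ()))  -- dummy handle for the sketch

ok :: IO Int
ok = withScope (\(Handle send) -> send "hello" >> pure 1)

-- leak = withScope pure   -- rejected by the type checker: 's' would
--                         -- escape in the result type IO (Handle s)
```

Scaling this up to every async scope in a real effect stack is exactly the "bunch of additional type parameters" complained about above.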
Does polysemy have a version of sequenceConcurrently that lets you control the max # of threads?
Edit: Found this thread https://github.com/polysemy-research/polysemy/issues/73
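Outside of any effect system, the usual way to cap that kind of fan-out is a counting semaphore: each worker takes a permit before running, so at most n actions execute at once. A hypothetical base-only sketch (mapBounded is a made-up name, not a polysemy API, and a production version would also propagate worker exceptions instead of deadlocking on them):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Concurrent.QSem
import Control.Exception (bracket_)

-- Hypothetical sketch of a bounded concurrent map: at most n of the
-- actions run at once, and results come back in the original order.
mapBounded :: Int -> (a -> IO b) -> [a] -> IO [b]
mapBounded n f xs = do
  sem  <- newQSem n
  vars <- mapM (\x -> do
                   v <- newEmptyMVar
                   _ <- forkIO (bracket_ (waitQSem sem) (signalQSem sem) (f x)
                                  >>= putMVar v)
                   pure v)
               xs
  mapM takeMVar vars  -- collect in order; caps concurrency via the permits
```

bracket_ guarantees the permit is returned even if f throws, so the pool never shrinks; the MVar-per-result pattern is what keeps the output order independent of completion order.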