From the docs for destroyAllResources:

Destroy all resources in all stripes in the pool. Note that this ignores any exceptions thrown by the destroy function.
This function is useful when you detect that all resources in the pool are broken. For example, after a database restart, all connections opened before the restart will be broken. In that case it's better to close those connections, so that takeResource won't take a broken connection from the pool but will open a new connection instead.
Another use case for this function is when you know you are done with the pool: you can destroy all idle resources immediately instead of waiting for the garbage collector to destroy them, freeing up those resources sooner.
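The restart scenario those docs describe can be sketched with a toy, base-only model of a pool. The `Pool`/`takeResource`/`destroyAllResources` names here only mimic the real resource-pool API; this is a simplified stand-in, not the library's implementation:

```haskell
import Control.Concurrent.MVar
import Data.IORef

-- Toy stand-in for a pool (NOT the real Data.Pool): a list of idle resources
-- plus a creation action. A counter lets us observe when a fresh one is opened.
data Pool a = Pool { idle :: MVar [a], create :: IO a }

takeResource :: Pool a -> IO a
takeResource p = modifyMVar (idle p) $ \rs -> case rs of
  (r:rest) -> pure (rest, r)    -- reuse an idle resource, even a broken one
  []       -> do r <- create p  -- none idle: open a fresh one
                 pure ([], r)

returnResource :: Pool a -> a -> IO ()
returnResource p r = modifyMVar_ (idle p) (pure . (r :))

destroyAllResources :: Pool a -> IO ()
destroyAllResources p = modifyMVar_ (idle p) (const (pure []))

-- After a simulated database restart, destroying the idle resources makes the
-- next takeResource open a fresh connection instead of reusing a broken one.
demo :: IO (Int, Int)
demo = do
  counter <- newIORef (0 :: Int)
  slots   <- newMVar []
  let pool = Pool slots (atomicModifyIORef' counter (\n -> (n + 1, n + 1)))
  c1 <- takeResource pool    -- opens connection 1
  returnResource pool c1     -- returned to the pool, now idle
  -- ...the database restarts here; connection 1 is silently broken...
  destroyAllResources pool   -- drop the stale idle connection
  c2 <- takeResource pool    -- opens fresh connection 2, not broken connection 1
  pure (c1, c2)

main :: IO ()
main = demo >>= print        -- (1,2)
```

Without the destroyAllResources call, takeResource would have handed back the stale connection 1.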
And from the message of the hasql-pool commit linked below:

Before this change, IO-based exceptions in the session (`act`) caused the connection to stay "used" indefinitely, never returning capacity to the pool and eventually depleting it.
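A minimal, base-only sketch of the failure mode that commit message describes. A single MVar slot stands in for the pool's capacity and a throwing action stands in for the session; none of this is hasql's real API:

```haskell
import Control.Concurrent.MVar
import Control.Exception

-- Unsafe: check the connection out, run the session, put it back only on
-- success. If act throws, the putMVar never runs and the slot stays empty,
-- i.e. the connection is "used" forever.
runSessionUnsafe :: MVar String -> IO () -> IO ()
runSessionUnsafe slot act = do
  conn <- takeMVar slot
  act
  putMVar slot conn

-- Safe: guarantee capacity is returned even when the session throws.
runSessionSafe :: MVar String -> IO () -> IO ()
runSessionSafe slot act = do
  conn <- takeMVar slot
  act `onException` putMVar slot conn
  putMVar slot conn

demo :: IO (Bool, Bool)
demo = do
  let boom = throwIO (userError "IO error in session")
      ignoring a = (try a :: IO (Either SomeException ())) >> pure ()
  slot1 <- newMVar "conn"
  ignoring (runSessionUnsafe slot1 boom)
  leaked <- isEmptyMVar slot1    -- True: capacity was never returned
  slot2 <- newMVar "conn"
  ignoring (runSessionSafe slot2 boom)
  restored <- isEmptyMVar slot2  -- False: onException refilled the slot
  pure (leaked, restored)

main :: IO ()
main = demo >>= print            -- (True,False)
```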
When a query (especially a long-running one) is interrupted while using a connection from a Pool, that connection is almost immediately returned to the pool even if the query is still running on that connection.
Most of the details are here:
https://github.com/yesodweb/persistent/issues/1199
The activity level isn't as high there, though, so I thought there might be useful conversation to be had here. But it might be easiest just to look at the code:
https://github.com/codygman/persistent-postgresql-query-in-progress-repro/blob/958e66963c0ad9761149f7ef97fd5ec0d7322f73/src/Main.hs#L18
And reflect on how in the world Pool.destroyAllResources doesn't ensure a connection is fresh here:
https://github.com/codygman/persistent-postgresql-query-in-progress-repro/blob/958e66963c0ad9761149f7ef97fd5ec0d7322f73/src/Main.hs#L29
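The behaviour the repro observes can be modelled with base alone. This toy one-slot pool is an illustrative stand-in, not the real Data.Pool: interrupt a "query" with an async exception (here via timeout), and bracket's release still runs and hands the connection straight back to the pool while the query is, from the server's point of view, still in flight:

```haskell
import Control.Concurrent (threadDelay)
import Control.Concurrent.MVar
import Control.Exception (bracket)
import Data.IORef
import System.Timeout (timeout)

-- A "connection" that records whether a query is still in flight on it.
newtype Conn = Conn (IORef Bool)

demo :: IO Bool
demo = do
  flag <- newIORef False
  pool <- newMVar (Conn flag)      -- one-slot "pool" with one idle connection
  -- Interrupt a long-running query: timeout delivers an async exception, and
  -- bracket's release (putMVar) still returns the connection to the pool.
  _ <- timeout 100000 $            -- cancel after 0.1s
    bracket (takeMVar pool) (putMVar pool) $ \(Conn busy) -> do
      writeIORef busy True         -- query started on the server
      threadDelay 10000000         -- "long-running query" (10s)
      writeIORef busy False        -- never reached: we were interrupted
  -- The connection is back in the pool even though its query never finished.
  Conn busy <- readMVar pool
  readIORef busy                   -- True: returned to the pool while busy

main :: IO ()
main = demo >>= print              -- True
```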
Per the docs for destroyAllResources (quoted at the top of this post), it should: after destroying the broken connections, takeResource is supposed to open a new connection rather than hand back a stale one.
I guess hasql is struggling with issues along these lines too, given this recent commit:
https://github.com/nikita-volkov/hasql-pool/commit/d25249f4bd4b8c2c46f61be7746eca0ed3cd8865
This seems promising: https://qnikst.brick.do/2020-12-28-resource-pool
So the issue is that the connection temporarily leaks?
Yes. The connection goes back into the pool even though a query is still running on it.
My latest theory is that some weird mask/interrupt interaction must be happening, since this issue does not happen with postgresql-simple.
And removing UnliftIO.catchAny in persistent and running my test code partially fixes the Pool interaction, so that Pool.destroyAllResources works.
Maybe masking is happening more than once or something... Not sure.
Today or next week I'm going to try to layer the UnliftIO stuff onto my postgresql-simple example, like Persistent has, until it breaks (assuming it will).
A solution: https://github.com/yesodweb/persistent/issues/1199#issuecomment-796289659