Boring Haskell /= Resenting Types - Haskell

Welcome to the Functional Programming Zulip Chat Archive. You can join the chat here.

Sridhar Ratnakumar

It seems to me that there is this perception that advocating for boring Haskell automatically includes resenting anything complex with regard to the types in Haskell code.

One should be careful to separate this resentment -- which is purely feeling based -- from encouraging an ecosystem of simpler ('boring') interfaces in Haskell. The former is cynical, and kills any sense of learning.

This tweet is a case in point: https://twitter.com/jkachmar/status/1257515468530814976

Sridhar Ratnakumar

More generally, when people talk about 'toxicity' in the Haskell community (something I generally can't relate to), this is the kind of attitude that comes to my mind--the creation and maintenance of a FUD-esque ambiance.

Torsten Schmits

is that Cale from obsidian?

Sridhar Ratnakumar

Yup. Cale is awesome; used to pair with him at Obsidian. You would learn a ton from him.

Torsten Schmits

any idea what the tweet is about? Since the library isn't for public use, did he just browse a random code repo and get mad at it?

Sridhar Ratnakumar

vessel is a part of “Incremental View”, which Obsidian is working on as a successor to rhyolite (which is used in Cerveau for real-time communications): https://www.srid.ca/2012401.html

Here's the type in the tweet: https://github.com/obsidiansystems/vessel/blob/f0c55cbd03304bc37cafb0cf444e3f65ddb34ec9/src/Data/Vessel/Vessel.hs#L55-L68

Functor-parametric containers. Contribute to obsidiansystems/vessel development by creating an account on GitHub.
Torsten Schmits

yeah I've seen it before

Sridhar Ratnakumar

Joe hates Obsidian / Reflex. Even today he was saying that in #nix channel of fpslack. Something to keep in mind.

Torsten Schmits

for other reasons, or only because of the higher-kinded types?

Sridhar Ratnakumar

I'm just trying to warn people not to allow themselves to get infected by such FUD ambiance.

Sridhar Ratnakumar

I don't know; maybe he had an issue in one of his prior jobs, with reflex.

Torsten Schmits

that sounds like a reasonable assumption

Torsten Schmits

I've seen lots of people get mad at abstract code, mostly probably because they don't want to sacrifice their free time to learn how it works

Sridhar Ratnakumar

All I'm saying is - choose what sort of emotional contagion you would let into your affective life https://www.pnas.org/content/111/24/8788

We show, via a massive ( N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.
Fintan Halpenny

Emotional contagion is an interesting phrase. It's sort of resonating with me :thinking:

Torsten Schmits

makes me think of mass hysteria

Fintan Halpenny

Ya actually good analogy. I was just reflecting on how hearing someone's relationship with another might affect how _you_ then start seeing that person

Torsten Schmits

right, for some reason I now hate this Joe guy

Fintan Halpenny

I like Joe. Met him a few times

Fintan Halpenny

But I think shit talking tools and companies isn't helpful

Torsten Schmits

well, I guess it's just venting

Fintan Halpenny

Ya, I blame Twitter really

Torsten Schmits

that's what it's for :rolling_eyes:

Fintan Halpenny

Makes it too easy to vent to a wider audience

Fintan Halpenny

That's why I don't have one. I keep my random thoughts to my trusted circle of friends

Torsten Schmits

but considering that Cale is using the opportunity to build bridges, I guess it's not hopeless

Fintan Halpenny

Ya, that's something I really appreciated. Cale's response was excellent and it made me curious to see what their use case was :grinning_face_with_smiling_eyes:

Sridhar Ratnakumar

astute. yup, mass hysteria is directly related to this phenomenon. emotional contagion is ultimately based on this notion called 'affective vibes', if anybody wants to get deeper into it.

Sridhar Ratnakumar

knock yourself out. but ... "here be dragons"

http://www.actualfreedom.com.au/richard/catalogue/vibes.htm

(The "L" link expands on the topic)

Torsten Schmits

ah, had a hunch it would be related to actualism :wink:

Joel McCracken

Joe is fine, people are allowed to have opinions. I don't even think Joe would consider himself a proponent of "boring haskell" per se

Joel McCracken

real talk: I have heard a LOT of really negative stories about people (people who are already professional haskell devs!) who were entirely unable to maintain the code left over from Obsidian, and this is causing companies who contracted with them to swear off of haskell entirely, full stop

Joel McCracken

this is the actual worst thing that can happen to the haskell community in terms of commercial adoption

Joel McCracken

I understand you had a great time with them Srid and I understand that multiple people can have different opinions, but I just want to be explicit about what other people are saying about their experiences

Joel McCracken

I have 0 real exposure to the obsidian stuff so I am mostly a neutral observer

Joel McCracken

anyway, it doesn't matter anymore, because notice that that is from May -- Joe has basically left Haskell and is doing Rust

Joel McCracken

FWIW, I AM a proponent of "boring haskell", but my take is very nuanced: a huge portion of value is derived from the simple, boring parts of haskell, and those are actually fairly understandable from the POV of someone who is not constantly yak shaving their PLT

Joel McCracken

The complexity of introducing something needs to be proportional to the value received, and TBH I don't feel like that is true in a lot of cases. I feel like you encountered this, Srid, when you removed the path library from Rib -- but perhaps you would attribute it to something else

Joel McCracken

(Please don't take this to mean I am against fancy types -- I have spent a huge portion of my free time for the last like 4 years learning haskell and now finally have a Haskell Job, but this rule I apply is basically the formula for sane engineering. The same problems are felt elsewhere in the industry when people use Kubernetes when they really have no reason to use it and are not driving any value from it, and this is sinking projects. But, since K8s is not super "fringe" in the industry, it doesn't matter if places swear off of it, or if the project sinks they blame it on something else.)

Joel McCracken

Also, one last thing to add, my personal tolerance for abstraction may be a little bit higher than some others, which is why I am still considering using reflex et al on some personal projects; everything is a tradeoff etc, and ghcjs does provide a lot of unique value. But I would really hesitate to use it in a professional setting where I am not in tight control over it, because of all kinds of issues

Vladimir Ciobanu

I wonder what happens to some Haskell folks, but it seems to be a trend that after having reached a certain level (different for each individual), they decide they know enough abstractions, and that using more than an arbitrary level of abstraction is 'fancy' or too complicated and nobody should be bothered learning.

Joel McCracken

That's interesting, I haven't seen that; do you have any examples? Or are you saying that about Joe? "Nobody should be bothered learning" is a very strong statement, especially for a group of people who actually enjoy learning these things. For example, I still learn these things, even though I actually think that it's a safer bet to avoid a lot of them when doing production code. Job/production code is different from research/experimentation/learning code. The fact is that a lot of these "fancy" libraries are actually still research projects, and are by definition not well understood. What downsides are there to using a library? If you don't know the downsides, how can you be confident that it's worth using?

Anyway, I don't think that's what @Joe Kachmar was trying to say. Twitter is not the best place for nuanced discussions. It's necessary to interpret things from there with some charity. It's not the right forum for 10k essays on all the nuances of a topic.

ofc I don't know why he was looking at that library in the first place; I don't know what it's being used for, etc.

TheMatten

From my personal experience, after a few years of doing Haskell, one's perception of the available techniques can skew from "make everything typesafe" to "do the bare minimum to keep things safe" - what we actually want, I guess, is to weigh features based on the mental/size burden they introduce and avoid premature abstraction/specification

I think of using new features/techniques like trying to build a continuous shape from discrete blocks, where you have to understand every block you use/touch - in some situations, to fully fill in the available space, one would have to approach an "infinite" number of more and more precise blocks (which you can't keep in your head), but usually you can fill in most of the space effectively with a few of them. If you value the blocks over the shape itself, you may end up building a wrong approximation that isn't helpful at all, but if you're too shy about making use of the available blocks, your approximation may end up being too rough to be helpful. If you don't understand your given shape, you can completely miss it and may have to rebuild your whole approximation.

TheMatten

Then, some features/extensions/techniques can be of varying "size" and difficulty - a block that looks small at first glance but is hard to understand may not usually be worth it, but it may happen that it precisely fits an empty space of high importance - then it suddenly becomes very useful

Vladimir Ciobanu

I would rather not call people out, but to be a bit more nuanced: the trend seems to be that people figure out abstractions up to some arbitrary level, then they arbitrarily pick a different level (usually lower than the peak of their current understanding) and advocate against abstracting at a higher level than that. I've seen multiple people do this, usually by shaming or ridiculing more abstract code, or by advocating against premature abstraction. Joe is doing that in the twitter thread, but I've seen multiple people do this on Slack and twitter.

At the risk of being a complete fanboy, I am absolutely in love with @Sandy Maguire 's introduction in ADD. We always operate at some level of abstraction when writing and reasoning about code. However, if the abstraction is leaky, we will likely either introduce bugs, or we will have to mentally keep track of these leaks and dance around them (which makes these abstractions a lot less useful, if not even worthless). The key here is finding the right abstraction, regardless of how "abstract" it is. If it so happens that the particular software problem we are trying to solve is precisely described by multi-categories, then that's the thing that should be used. Even if it might take the unsuspecting (the non-CT-initiated) weeks to fully grok.

Ideally these things are commented and links to papers/references are added. Some level of awareness is good to have when introducing complex concepts, in order to make code more accessible. But that's pretty much it -- don't sacrifice the quality and precision of the code for the sake of perceived simplicity.

Math is giving us these amazing abstraction tools, and we're barely scratching the surface. I'm more and more confident that our job as developers is closer to "field mathematicians" than anything else: understand the requirements and figure out the correct abstraction.

Rizary

Cale is active in haskell's discord channel. And there is #reflex-frp channel (although it's not too active)

Fintan Halpenny

From my personal experience, I loved writing Haskell, but it was writing Haskell at a company that made me realise you can still have people write shit code that will be complex and/or unsafe. I've witnessed a load of singleton magic that was unnecessary for the project and caused a lot of people and newcomers (myself included) headaches. I've seen people use fromJust and I would have to request changes.
I've also worked beside Obsidian folks as I was in Formation. Like every org, I've seen some good code from their individuals and some bad code :) I'm sure they could say the same for me :joy:
But I'll tell you what though, I've been writing Rust for the past year, everything is a traverse, and Iterators piss me off

Vladimir Ciobanu

Sure, there are plenty of bad abstractions in code; some of them are complex and others are simple. But just because some people pick the wrong (complex) abstractions, it doesn't mean we should all be avoiding them, or demonizing (complex) abstractions.

It's kind of like when some scientists are wrong, we decide we don't trust science anymore. I wonder if this is just Haskell's version of this particular global trend.

Fintan Halpenny

Oh ya. Sorry I wasn't advocating that abstraction is bad. I was just relaying my experience

Fintan Halpenny

I'm knee deep in 7 Sketches. Abstraction is the road to glory

Torsten Schmits

I've read the first half twice, but on the train, so much didn't really stick for application :smiley:

Fintan Halpenny

I haven't applied it (yet!)

Fintan Halpenny

Doing the exercises is definitely helping me

Fintan Halpenny

And having someone in my org help me out since he studied maths

Torsten Schmits

yeah I've gotta get myself one of those

Vladimir Ciobanu

Sorry Fintan, none of that is directed at you. I was just making sure I get the nuance of my point across.

Joel McCracken

Advocating against premature abstraction is not the same as advocating against abstraction.

Joel McCracken

The wrong abstraction is almost always worse than no abstraction at all, and that's the big thing

Joel McCracken

Haskell is (just) software engineering, it is not fundamentally somehow different

Joel McCracken

again, arguing that these people are against abstraction is to me totally nonsensical; these are haskell people. Language users are self-selecting; nobody is forced to learn Haskell. We've done the work to get to this point, and it's not out of fear of abstraction, it's out of curiosity. So saying that people are categorically against abstraction is just false, full-stop.

The pattern I keep seeing is that people who, like me, want to build things and see haskell as a success in industry are butting against code that we see as being unnecessarily complex and not providing value, and sinking projects and costing jobs. Formation is one company that apparently has sworn off of Haskell, because of the very reason I am talking about.

I find it very worrying that this seems to be such a hard point to get across. None of what I am trying to communicate is ground-breaking. This is (simply) software engineering.

Maybe I am just wasting my time with Haskell, I dunno

Joel McCracken

There is a very big tendency to abstract-abstract-abstract in haskell, which is basically the same notion as what Java folks did with FactoryFactoryFactory. My implicit understanding of what Joe is pointing out in that thread is that that library is another example of such a thing. Perhaps it isn't and I'm wrong, and perhaps all of those abstractions are entirely essential and useful for what they do, but somehow I doubt it

Fintan Halpenny

So to understand, are you saying that people think there's an abstraction ceiling where people get fed up with how abstract it is? :thinking: Or am I misreading?

Joel McCracken

that is not what I was trying to say, huh

Joel McCracken

there is no ceiling; there is only abstractions, sometimes well-applied, sometimes misapplied

Fintan Halpenny

I was digging in on the bit about Abstraction abstraction abstraction and the analogy to factory factory factory

jkachmar

Abstractions form a lexicon within a community or other organization.

The failure of “fancy Haskell” is that the lexicon is not made accessible to people who need to interact with those abstractions at some point in time.

jkachmar

You don’t need to eschew fancy Haskell, but you do need to provide extensive documentation for the things that aren’t in that lexicon. Additionally, it’s good engineering practice to try and look ahead to anticipate what the failure mode may be for a particular set of design decisions you’re making today.

jkachmar

Failure of highly abstract Haskell code, IME, has almost always been a combination of:

(1) the author not being as well versed in the trade offs associated with the abstractions they’re employing
(2) there being an impedance mismatch between the author and audience for the particular set of abstractions being employed without the requisite documentation/teaching/onboarding materials
(3) the author employed a set of overly general abstractions to a problem prematurely, and later (typically after the author left) it becomes entirely unclear as to why that generality exists and what the original problem it solved was

Sandy Maguire

I think everyone is missing the forest for the trees here. The question isn't whether or not abstraction is warranted. It's not about whether we should put abstraction into our codebases. As far as I can tell, there are two different phenomena here that are being muddled.

1) Haskell allows a few orders of magnitude more abstraction than is feasible in any mainstream language, meaning that when people come to Haskell they get their first glimpse of what could be possible. Nobody is good at abstraction at birth; they need to train those muscles, and that involves writing code. This is like going to the gym and doing weird exercises that aren't useful in everyday life. They aren't, but they train your mind and make you better at abstraction. Unfortunately, this leaves behind code artefacts.

2) We all have horror stories about "idiot" coworkers. Everybody writes terrible code all of the time. We're all learning every day, and are hopefully all embarrassed by code we wrote six months ago. This is natural and the way of the world. Our ability to design systems well stems from having put in the effort and gone to the gym of doing unnecessarily abstract things just for practice. All of this is to say, you need to have gone through the insane abstraction tower in order to write abstractions that aren't insane. Unfortunately, like someone at the gym on their first day, anybody who hasn't put in that time isn't in a position to judge whether or not the abstractions are warranted. This is especially so of bystanders trying to dunk on you on twitter.

I think what we'd all like is for people just to do those abstraction exercises on their own time, in their own projects, with big disclaimers as to what it is. If you see the disclaimer, you're not allowed to get mad at someone for overabstracting, like you wouldn't yell at someone at the gym doing a hyperspecialized exercise. Followed up with a rule of thumb that you shouldn't use any technique at work that you've learned in the last six months, because we collectively agree that it takes at least six months to develop the wisdom about "when" after learning "how."

jkachmar

I think it would be nice if people didn’t have to do that sort of thing on their own time, but yeah generally I agree with the 6 month thing.

Joel McCracken

so here's a couple of recent examples I encountered in some libraries.

There was one time I saw a library using a typeclass when the typeclass instances were not going to be used polymorphically. Literally all that was necessary was passing a single a -> IO () value. I cannot think of any reason why the go-to would be "create a typeclass here" except for premature abstraction or a love of complexity for its own sake. This creates problems because I need to add a bunch of boilerplate instead of just passing in a function, and the whole typeclass itself is very confusing because what is the point of it? It just makes me think I am missing something, and then I end up frustrated when it turns out that no, I was not missing something.

Another time, I saw someone using a GADT in a situation where it was entirely semantically equivalent to using a normal sum type. This is the same issue as above; what was the point of using it? Was I crazy or what?

On the flip side, I have heard of people encountering many instances of things like Either (Either Foo Bar) Baz instead of creating a boring sum type, which creates all kinds of additional code complexity and really provides no value.
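(For illustration, a minimal sketch of the first and third patterns Joel describes; the names here are hypothetical, not from the libraries he encountered.)

-- A typeclass whose instances are never used polymorphically:
class Notifier h where
  notify :: h -> String -> IO ()

runWithClass :: Notifier h => h -> IO ()
runWithClass h = notify h "starting"

-- ...versus just taking the function directly, which needs no
-- instances or boilerplate at the call site:
runWithFunction :: (String -> IO ()) -> IO ()
runWithFunction notify' = notify' "starting"

-- Nested Eithers versus a boring, named sum type:
data Foo = Foo
data Bar = Bar
data Baz = Baz

type Nested  = Either (Either Foo Bar) Baz          -- callers match on Left (Left ...) etc.
data Outcome = GotFoo Foo | GotBar Bar | GotBaz Baz -- same information, clearer cases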

jkachmar

I’d say it should be 6 months of using these sorts of things in some active capacity, and not “I learned about recursion schemes at a conference 6 months ago and now I’m putting them into practice on a critical path”

Sandy Maguire

Agreed. Most jobs are stupid and boring, and I suspect most of us are balancing a "what's best for the project" mindset against "what's best for my career" mindset, in which the learning part is always going to win.

Sandy Maguire

@Joel McCracken that first one sounds to me like it was written by someone new to Haskell who is still thinking in OO terms and thinks that haskell classes are how you do classes.

when you say GADT do you mean an actual GADT or just using GADT syntax?

Joel McCracken

and thinking that it was somehow adding magical typesafety

Joel McCracken

I like the syntax fine, i don't like the magical typesafety thinking

Sandy Maguire

hard to say more on the topic without any details :)

Joel McCracken

yeah I don't want to call anyone out. These are hard things to discuss

Sandy Maguire

I think this discussion would be SIGNIFICANTLY more valuable for everyone involved if everyone brought a few examples of perceived too-abstract grievances and we looked at them as a community, rather than just all talking past one another. I suspect everyone would be amazed by just how much people would agree on what's good and bad.

Sandy Maguire

Let's see if we can agree on the object level before working on the meta level. Because if not, the entire argument is moot and unsettleable and we're just wasting our time shouting into the abyss.

Joel McCracken

well, we have people in the haskell community who think commercial adoption is a worthwhile goal

Sandy Maguire

sorry, i fail to see the relevance of that statement

jkachmar

Healthy communities are dependent on the ability for there to be some _norm_ on topics such as these.

Sandy Maguire

what do you mean, joe?

jkachmar

I mean that requiring an object-level analysis about the barriers that certain abstractions pose towards adoption of Haskell in different settings is a potentially useful conversation, but the fact that we (the Haskell community) keep having it is a bad sign

jkachmar

it means that we have difficulty reaching consensus on what good “ground truths” are when using the language in different settings

Sandy Maguire

tbh i have never seen it. I just keep seeing people saying "abstraction is bad" or "no, abstraction is good you idiot" without anyone actually pointing to examples of wtf they mean

jkachmar

Anyone saying “abstraction is bad” and “abstraction is good” outside of Twitter or some shit isn’t having a conversation worth participating in

Sandy Maguire

i mean, look at all the ink spilled in this thread

Sandy Maguire

this is exactly the conversation we're both participating in

jkachmar

Introducing novel abstractions to a setting/community/organization/codebase that needs to be developed on collaboratively over time, without adequate documentation, justification, or common ground among contributors is considered harmful.

Sandy Maguire

which i will take as a good opportunity to get off the internet and go do something more valuable with my day. love you all.

Joel McCracken

so i was referring to a statement early in the "boring haskell" discussion something like "i don't see why we should care about commercial adoption, I care about experimenting with things!" This is relevant because for such a person, arriving at a consensus re: when something is appropriate for a commercial setting is going to be nearly impossible

jkachmar

the thing that concerns me is people ending up in the setting I described in my previous message and coming away with the notion that abstraction is bad

Sandy Maguire

@Joel McCracken absolutely. but that means it is explicitly worth calling out so nobody is talking past one another

jkachmar

Go is my enemy here, not fancy Haskell.

Joel McCracken

Agreed Sandy; part of the problem is that I think a lot of things in this discussion are left unsaid and implicit. I've been meaning to write up something more thorough, but time and constraints exist. In the meantime I'm just trying to help others who may not have a fundamentally dissimilar point of view understand more of the overall discussion from my POV

jkachmar

Anyway, last bit I’ll comment on here and _only_ because it was mentioned upthread when Srid called me out

jkachmar

My frustration with Obsidian stems from the fact that I’ve witnessed their work directly lead to one company abandoning Haskell and Nix entirely and another company abandoning Nix and the developers fighting to keep Haskell, all of this due to poor engineering practice.

Beyond that, they overwork and underpay their employees to take advantage of the fact that people are generally happy to write Haskell, which really rubs me the wrong way in this industry.

TheMatten

Even though I think most of the drama related to simple haskell stems from misunderstanding and subjective experiences, I would still say there's some "design tension" between the practical and academic parts of the community - the practical one wants stability and a tested, well-defined feature set, the other wants quick iteration and possibly feature redundancy that lets them explore new ideas. The problem is, while trying to have both in one language makes people fight, splitting them into two communities/languages can make either of them drift further from the common goal of maintaining a good language.

jkachmar

I’ll defer to Bryan O’Sullivan on this, since he’s very well suited to straddle this academic/industrial divide: https://old.reddit.com/r/haskell/comments/2olrxn/what_is_an_intermediate_haskell_programmer/cmohivk/

This list is utterly bonkers. I agree with about 10% of it, and think the rest is shamefully bad advice. It mixes points that are almost trivial...
James King

What a long and interesting thread! Zulip for the win!

Sandy Maguire said:

2) We all have horror stories about "idiot" coworkers.

Dropping by to point out that these stories make me so mad. Do people who share these stories also believe that their farts don't stink?

It's good to remember that your co-workers are people who are doing their best, just as you are, and deserve every bit as much respect as you do. If you're going to judge someone based on the code they write, don't be surprised if you find someone calling you an idiot some day.

It's important to be constructive when working on a team. Resist the urge to blame the person and honestly demonstrate what the error is in the abstraction and how it could be improved. They might simply not have had the same experiences as you. Don't be stingy with your knowledge and offer them concrete suggestions on what they could change to improve that piece of code that makes you cringe.

And don't let yourself be put down or believe that you're an idiot simply because you hear grumpy people rant about their, "idiot co-workers."

James King

That's not to say that some people aren't simply mean or do things that harm us... but let's not let code be what divides us. :)

James King

And the original tweet... well that sounds like a personal decision.

I don't know that "complex" Haskell is what turns people away in droves. If that was all it took I don't think there would be many employed C++ programmers. :shrug:

James King

Though it would be nice if more people recognized that Haskell code is code and sometimes we can come up with patterns and misuse features like the best of them.

Sridhar Ratnakumar

Joel McCracken That's interesting, I haven't seen that; do you have any examples?

Myself LOL. A while ago I was working on a project that involved type-level programming. I had never done type-level programming; there was a belief in me that "I ain't good for it". I decided to just do it anyway, and learn what was needed in the process.

Our own limiting beliefs prevent us from learning further.

Sridhar Ratnakumar

Joel McCracken said:

yeah I don't want to call anyone out. These are hard things to discuss

If that's from code I wrote, I won't get offended. Neuron uses GADTs, but that's only because you can't do that with ADTs.

Sridhar Ratnakumar

My frustration with Obsidian stems from the fact that I’ve witnessed their work directly lead to one company abandoning Haskell and Nix entirely and another company abandoning Nix and the developers fighting to keep Haskell, all of this due to poor engineering practice.

Just 2 companies? I consider that _success_ - as that means every other client of Obsidian is happy with Haskell!

James King

I've read about teams abandoning Lisp or C++ or X... it happens. There's a lot of interesting context around those decisions.

Sridhar Ratnakumar

I don't think I have permission to talk about the details, but I'll just state that teams decide to give up on $tech for all sorts of reasons. We can't appraise the reasons objectively without the full context, like you say. Otherwise it is just gossip and hearsay.

Joel McCracken

Srid I wasn't talking about you FWIW

Vladimir Ciobanu

@Joel McCracken , I think this is where we fundamentally disagree: "The wrong abstraction is almost always worse than no abstraction at all, and that's the big thing". There's no such thing as "no abstraction", since code is just language to represent _something_, so when there's "no abstraction", it's actually a bad or leaky abstraction, which is pretty much the same thing as a 'wrong abstraction'. One could argue that complex wrong abstractions are worse than simple wrong abstractions, and I'd probably agree with that. But they're both wrong, and both have their cost/toll.

What I'm advocating for is "correct" abstractions, regardless of their complexity. What I perceive simple haskell to be about is a bias towards simpler abstractions even if they might be "a bit" leaky.

Joel McCracken

Really interesting point. Here's how I'd respond:

Abstractions are always built upon other abstractions. So in the choice of
having to deal with abstraction layer N or abstraction layer N + M, and
somewhere in N + M is a bad abstraction, then it is generally preferable to use
N, because what happens is that in order to figure out how to solve a problem,
you need to understand N, N + 1, N + 2 ... N + M layers, comprehend where the
actual issue exists, and then resolve it (often by improving one other
abstraction).

In engineering, it is very hard to know that you are at the "right"
abstraction. We are often discovering what the correct solution should be *as we
are implementing it*. So there is a dialog between the requirements and whatever
the "correct" implementation is. We built something, we show it to stake
holders. They give us feedback, and we iterate.
Hence it is not generally possible to predict what will be the right abstraction
and what will be the wrong one.

I think we've all gone down the path of implementing something we think is going
to be excellent, only to realize down the line that actually we misunderstood
what was necessary. However, sometimes we don't understand this until we've
already had the abstraction in our system for years and years, others have built
upon it, and its incredibly hard to change.

Because of the nature of the way we handle pure FP abstractions, generally we
are trying to limit the possibilities of what code can do. If you look at pure
code, evaluating it should not have side effects. If a function looks like:
foo :: Monoid a => a -> a -> a, you know that there really are very few things
that foo could do. This is actually great because it lifts a huge burden off
of us as programmers, making it much easier to reason about our code. But, by
this nature, it also limits what we might need this function to do in the
future.

IME this is actually one of the things that is most likely to sink a software
project -- an abstraction gets embedded so deeply within a system that it is
very hard to change, and so eventually the entire system needs to be replaced.

By choosing an abstraction, you are hoping that abstraction will be right in the
future. That may or may not be correct, but if it turns out to be incorrect, you
need to change the abstraction, possibly breaking something, or replace the
abstraction, understanding the requirements of all the code that used that
abstraction and changing that code that relied on the abstraction to handle the
new circumstance (Indeed, I have been in situations where
nobody really knew how a certain thing is supposed to behave, so it was
extremely hard to replace it!)

If I had a crystal ball and could see the entirety of the future of a project
and then pinpoint what the right abstraction is, I would do so, and use it.
Sadly, I don't, and can't.

In order to choose a good abstraction, one needs to first understand what
the possible abstractions are in a situation, what
are the relative weaknesses and strengths of each, what will become harder to
handle in the future if requirements change, what your team will understand/what
the cost will be for the team to learn it, etc. This is really tricky and
requires a good bit of wisdom and experience and ability to balance trade-offs.
And it requires a pretty good understanding of the problem you are solving.
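(To make the foo :: Monoid a => a -> a -> a point above concrete: with a modern GHC, where Semigroup is a superclass of Monoid, parametricity leaves only a handful of sensible implementations, roughly combinations of the arguments and mempty.)

fooCombine, fooFlip, fooFirst, fooEmpty :: Monoid a => a -> a -> a
fooCombine x y = x <> y   -- combine them
fooFlip    x y = y <> x   -- combine them the other way around
fooFirst   x _ = x        -- keep one argument
fooEmpty   _ _ = mempty   -- ignore both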

Asad Saeeduddin

idk if this is a tangent that fits into the conversation, but I think we often talk about "the" abstraction that applies to a problem, as if there's only one perspective that is relevant to a given problem (e.g. a super duper general one vs a concrete one, or a "mathy" one vs a "practical" one, or this _one_ vs that _one_). what I wish there was more support for in languages was the ability to view a situation from multiple perspectives, i.e. to change your abstraction (and hence your abstraction level) seamlessly.

e.g. suppose you start with some simple, highly concrete type and write a program in it, and then you realize that if you only tweaked the type a little bit, it would be an instance of some existing typeclass and let you DRY up your code a bit. the problem is you can't just seamlessly do this without breaking the original program, even if it's morally a strict generalization. so every part of the code must be aware of the abstraction in all of its super duper general glory, or you must partition the code by what parts are wrapped in what newtypes. this is a problem if you want to seamlessly move between abstractions (maybe there is more than one useful way of looking at the problem!) and between levels of abstractions (maybe the user of your HTTP library doesn't need to know your get operation is implemented as the doogleberry of the floomenoid algebra!)
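(A tiny illustration of the friction Asad describes, using the standard Sum newtype from base; the example itself is hypothetical.)

import Data.Monoid (Sum (..))

-- Concrete starting point:
total :: [Int] -> Int
total = foldr (+) 0

-- The "morally a strict generalization": reuse the existing Monoid
-- machinery instead -- but now every use site has to wrap and unwrap
-- the newtype, or switch to the more general type outright.
totalViaMonoid :: [Int] -> Int
totalViaMonoid = getSum . mconcat . map Sum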

codygman

@Joel McCracken

It is not generally possible to figure out the right abstraction

I feel like this is kind of the core of the disagreement between camps.

It reads to me like you simply accept you'll likely never get the right abstraction, use N because it's replaceable, and move on to the next problem?

That doesn't seem right to me and seems like one would never exercise their abstraction muscle.

codygman

@jkachmar

I’ll defer to Bryan O’Sullivan on this, since he’s very well suited to straddle this academic/industrial divide: https://old.reddit.com/r/haskell/comments/2olrxn/what_is_an_intermediate_haskell_programmer/cmohivk/

I can't really defer to a position of:

A noisy subset of silly people has collectively gone nuts for abstractions that provide little value other than making them feel special. It is most disappointing.

Unless I already agreed with it.

A statement like that should have at least a single example. What's more, the disagreement that follows looks kinda similar to disagreements we still see.

I was tempted to agree with you, but then I reviewed the list again and I can't find anything "hard to master and hardly ever useful". - sclv

I feel hopeful about @TheMatten's bot and @Joel McCracken's attempts to have more concrete examples in this conversation about the value of abstraction :sweat_smile:

Sridhar Ratnakumar

Yea, +1 for concrete examples. Otherwise, how do we know that we are all on the same page when we are doing cough abstract cough talk.

Sridhar Ratnakumar

From that link (a thread made 5 years ago, the time before I learned Haskell):

I stay away from GADTs and rank N types on purpose in production code.

So here we have an example. Is this a wise approach? Why avoid GADTs, when they are so useful in some cases? There are legitimate cases where you can't use ADTs anymore, and have to reach for GADTs for type safety. A concrete example is neuron's ZettelQuery type, where the result type of the query is unified with the query constructor.

Sridhar Ratnakumar

Now, it is perfectly okay to begin with ADTs in one's project. But there may come a point where ADTs don't work anymore. What would someone do at that point?

Sridhar Ratnakumar

I felt like putting an artificial limit like "I won't use GADTs" would make the code look like Elm.

Sridhar Ratnakumar

Regardless, it would be interesting to see a case study on how real-world code would look with ADTs vs with GADTs, comparing their pros / cons, so as to give weight to such an argument not to use GADTs in production. Otherwise, it borders on a FUD-esque subjective opinion.

TheMatten

One great use case for GADTs I've recently encountered - queries in Halogen, where different queries expect different return types, but you still want to represent them with a single datatype - PureScript doesn't have GADTs yet, so they instead use these funky Query :: (SomeType -> a) -> Query a constructors
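(A rough Haskell rendering of the two encodings TheMatten describes; the query names are made up.)

{-# LANGUAGE GADTs #-}

-- GADT form: each constructor fixes its response type.
data Query a where
  GetCount :: Query Int
  SetLabel :: String -> Query ()

-- The Halogen-style workaround for a language without GADTs: carry a
-- function from the response to the result type a.
data Query' a
  = GetCount' (Int -> a)
  | SetLabel' String a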

Torsten Schmits

my impression, not only in Haskell, is that people are intimidated by unusual concepts. What's usual is grown organically and influenced by what is consensus or popular at the time when a given programmer has their basic knowledge and habits shaped.

Now this intimidation does not correlate with complexity at all, in my experience, even though people see it as such, and I think the purpose of that is to euphemize and give themselves a reason not to commit to changing their views in order to save on cost. Everyone has a limited amount of time in order to earn money and one needs to select what is productive to learn.

I think I experienced some of this earlier in my life, but that changed in the recent years.
For example, when I first discovered Polysemy, I found it to be incredibly intimidating, and not because there was a lot of abstract ceremony involved in order to use it, but because it just used semantics that I hadn't encountered yet and because it seemed so incredibly professional that I had a feeling of inadequacy.
However, since I had already gotten used to this intimidation barrier, I still tried it out and learned it pretty quickly, since it is enormously ergonomic and sensible, and I learned that it offered a kind of simplicity that is on a completely different level than how you would ascribe simplicity to more "basic" code. But if you do not understand that simplicity, it will be perceived as complexity.

This is just a concrete account of how this dissonance might be structured.

codygman

Sridhar Ratnakumar said:

Regardless, it would be interesting to see a case study on how real-world code would look with ADTs vs with GADTs, comparing their pros / cons, so as to give weight to such an argument not to use GADTs in production. Otherwise, it borders on a FUD-esque subjective opinion.

Okay, here's an "Algebraic data type" from Real-World Haskell:

type CardHolder = String
type CardNumber = String
type Address = [String]
type CustomerID = Int  -- also defined in Real World Haskell; needed by Invoice below

data BillingInfo = CreditCard CardNumber CardHolder Address
                 | CashOnDelivery
                 | Invoice CustomerID
                   deriving (Show)

Here it is in GADT form I think I got right (but I'm actually on painkillers from surgery so lol sanity check me):

data BillingInfo a where
  CreditCard ::  CardNumber -> CardHolder -> Address -> BillingInfo a
  CashOnDelivery :: BillingInfo a
  Invoice :: CustomerID -> BillingInfo a

From just this I don't see any pros or cons, maybe I or someone else can add some pattern matching examples and it'll become obvious later.
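(Picking up the invitation above: a hypothetical variant, not from Real World Haskell, where the type index actually does some work -- pattern matching refines the result type, which a plain sum type can't express.)

{-# LANGUAGE GADTs #-}

type CardNumber = String
type CustomerID = Int

data BillingInfo ref where
  CreditCard     :: CardNumber -> BillingInfo CardNumber
  CashOnDelivery :: BillingInfo ()
  Invoice        :: CustomerID -> BillingInfo CustomerID

-- The result type follows the constructor:
reference :: BillingInfo ref -> ref
reference (CreditCard n) = n
reference CashOnDelivery = ()
reference (Invoice cid)  = cid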

Sridhar Ratnakumar

This sounds like a nice example to add to my notes!

For example, when I first discovered Polysemy, I found it to be incredibly intimidating, and not because there was a lot of abstract ceremony involved in order to use it, but because it just used semantics that I hadn't encountered yet and because it seemed so incredibly professional that I had a feeling of inadequacy.

https://www.srid.ca/852310bb.html

However, since I had already gotten used to this intimidation barrier, I still tried it out and learned it pretty quickly

https://www.srid.ca/47ee6284.html

People with a “growth mindset” believe that they can acquire any given ability provided they invest effort[1] as well as actively seek and welcome feedback (positive or negative) especially when stuck, as contrast to having a “fixed mindset”. [1] Effort alone is insufficient. See Carol Dweck Revisi
Georgi Lyubenov // googleson78

@codygman I think your example doesn't have any reason to use GADTs (other than nicer syntax, which is subjective)

Sridhar Ratnakumar

Yes, because a is not used (and nice syntax surely is never a reason to use GADTs?)

Torsten Schmits

Sridhar Ratnakumar said:

This sounds like a nice example to add to my notes!

For example, when I first discovered Polysemy, I found it to be incredibly intimidating, and not because there was a lot of abstract ceremony involved in order to use it, but because it just used semantics that I hadn't encountered yet and because it seemed so incredibly professional that I had a feeling of inadequacy.

https://www.srid.ca/852310bb.html

However, since I had already gotten used to this intimidation barrier, I still tried it out and learned it pretty quickly

https://www.srid.ca/47ee6284.html

yeah, you kind of have to balance the "sunk cost" protection impulse and the inadequacy against the desire to grow, I can empathize that this is hard

Torsten Schmits

Sridhar Ratnakumar said:

Yes, because a is not used (and nice syntax surely is never a reason to use GADTs?)

I think I've seen you get this answer in a github issue :sweat_smile:

Torsten Schmits

it was probably not a good decision to include ADT in the name, suggesting that they're just better replacements for normal ADTs

codygman

Torsten Schmits said:

my impression, not only in Haskell, is that people are intimidated by unusual concepts. What's usual is grown organically and influenced by what is consensus or popular at the time when a given programmer has their basic knowledge and habits shaped.

Now this intimidation does not correlate with complexity at all, in my experience, even though people see it as such, and I think the purpose of that is to euphemize and give themselves a reason not to commit to changing their views in order to save on cost. Everyone has a limited amount of time in order to earn money and one needs to select what is productive to learn.

You and I are on the same page here... the problem comes when someone else starts pruning things you've proven are valuable as "not productive to learn" and starts spreading FUD about them not being productive to learn. I think that's one effect stemming from these basic conversations, and why I don't feel like the "boring Haskell" or "Simple Haskell" movements are "basically harmless".

If it were just a matter of "they want to program with that and say it's the best way" and it didn't negatively affect the view of what I'm beginning to refer to as "first principles thinking", software design in a style like Algebra Driven Design, or software that moves more invariants into the type system... I could simply ignore it.

I think that others see the negative effect of those movements though, and that's why there is a clash. None of this helps me feel closer to a solution; it's just sort of writing out my thoughts, sadly.

I think I experienced some of this earlier in my life, but that changed in the recent years.
For example, when I first discovered Polysemy, I found it to be incredibly intimidating, and not because there was a lot of abstract ceremony involved in order to use it, but because it just used semantics that I hadn't encountered yet and because it seemed so incredibly professional that I had a feeling of inadequacy.
However, since I had already gotten used to this intimidation barrier, I still tried it out and learned it pretty quickly, since it is enormously ergonomic and sensible, and I learned that it offered a kind of simplicity that is on a completely different level than how you would ascribe simplicity to more "basic" code. But if you do not understand that simplicity, it will be perceived as complexity.

This is just a concrete account of how this dissonance might be structured.

Yes! Exactly. The kind of simplicity that seems to be sold in mainstream programming is very much simplicity in the small and super locally. Ironically, things like Polysemy focus on the more pragmatic real world concern of simplicity across the majority of the application. I have a blog post along these lines that'll "be done any day now" and one thing I might float or turn it into is "haskell from the beginning with Polysemy" and I think that could be dramatically simpler... I do wonder if this will end up on r/programmingcirclejerk though :D

I think there is more to discover and discuss about simplicity in software in general... I don't feel satisfied with current definitions.

codygman

Georgi Lyubenov // googleson78 said:

codygman I think your example doesn't have any reason to use GADTs (other than nicer syntax, which is subjective)

I think I agree, but I'm actually not sure :smile:

I was attempting to create a "pointless usage of GADTs" like I thought Bryan O'Sullivan might have been talking about, or that critics might take issue with. That way I could understand the comment "I stay away from GADTs and rank N types on purpose in production code.". I've also seen other unsupported "avoid GADTs" comments throughout the internet and I'd like to be able to understand their criticism.

Torsten Schmits

@codygman yeah, teaching that to beginners is probably an order of magnitude more efficient than to full-stack developers.

In any case, my experience with the Hackage ecosystem has suggested that what lots of libraries are missing is good UX: not only the docs, but how the integration points are designed. Not addressing this at Polysemy, since it has a pretty reasonable proportionality of age, accessibility and complexity, but maybe more basic stuff – for example the regex libraries are just impossible to use quickly.

Torsten Schmits

codygman said:

You and I are on the same page here... the problem comes when someone else starts pruning things you've proven are valuable as "not productive to learn" and start spreading FUD about them not being productive to learn. I think that's one effect stemming out of these basic conversations and why I don't feel like the "boring Haskell" or "Simple Haskell" movements are "basically harmless".

I totally agree, but I always have the impression that engaging with those discussions only aggravates the situation.
At least I personally don't feel up to the task, and before I just try to fix conflicts without a good plan, I'd rather say nothing and try to improve my libraries

Torsten Schmits

reminds me of how news media keep on reporting on every single stupid thing that Trump says, without any effect whatsoever on his base

TheMatten

Just to add - that example above would work with GADTSyntax alone - at the same time, you can write GADTs without GADTSyntax using ExistentialQuantification:

data GADT a = a ~ Int => GADT a
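(For reference, a sketch of the same declaration in GADT syntax; the constructor simply carries an equality constraint that pattern matching brings into scope.)

{-# LANGUAGE GADTs #-}

data GADT a where
  GADT :: a ~ Int => a -> GADT a
-- ...which is essentially the same constructor as
--   GADT :: Int -> GADT Int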
TheMatten

What's interesting about this is that GADTs aren't really about some type-level trickery as much as they are about carrying constraints in constructors - which are just sort of implicit arguments at the end

TheMatten

From this perspective, it sounds strange to say that we should "avoid carrying some specific values in datatypes"

Joel McCracken

Georgi Lyubenov // googleson78 said:

codygman I think your example doesn't have any reason to use GADTs (other than nicer syntax, which is subjective)

Well, maybe not, but based upon the Haskell principle of least power (don't use a Monad when a Functor will work, don't use IO when State could work), I think that GADT syntax is actually somewhat counterproductive, explicitly because it is more powerful, and hence makes it a little harder to notice. Anyway, I think GADTs are generally fine, I don't care too much about them; actually the only thing I wish they had was a single, obvious, clear, beginner-oriented tutorial to GADTs, explaining what they do and when you might want them. Can anyone point me at one? I've looked several times in the past and couldn't find one.

To me the more problematic things are adopting whole-program abstractions which "encapsulate" your application, where if some flaw is later discovered, your whole program is in trouble. But this generally does speak to the problem of not speaking about concrete abstractions, so I have one I'll bring up soon, but I want to finish reading what everyone said

Joel McCracken

Sridhar Ratnakumar said:

Regardless, it would be interesting to see a case study on how real-world code would look with ADTs vs with GADTs, comparing their pros / cons, so as to give weight to such an argument not to use GADTs in production. Otherwise, it borders on a FUD-esque subjective opinion.

The best example IMO is typesafe ASTs; try writing a PL in Haskell and you'll quickly run into issues where you need GADTs.
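(The classic shape of what Joel means, sketched minimally: the index tracks the type an expression evaluates to, so ill-typed terms can't be constructed and eval needs no error cases.)

{-# LANGUAGE GADTs #-}

data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e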

Sridhar Ratnakumar

GADTs are also useful in routes and request handling. Here is how the real-time websocket requests, and their specific response types, are modelled in Cerveau:
image.png

Sridhar Ratnakumar

Saving a zettel, for example, must return the new blob sha from git. This sort of constraint ("unification"?) cannot be modelled by ADTs. Believe me, I tried hard lol as I didn't want to use rhyolite at the beginning.
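(The actual Cerveau code was shared as an image, so this is only a hypothetical sketch of the shape Srid describes: the request constructor pins down the response type, so saving a zettel must come back with a git sha.)

{-# LANGUAGE GADTs #-}

newtype ZettelID = ZettelID String
newtype GitSha   = GitSha String

data Request a where
  SaveZettel :: ZettelID -> String -> Request GitSha  -- response is the new blob sha
  GetZettel  :: ZettelID -> Request String

-- Any handler is forced to produce the right response type per request:
handle :: Request a -> IO a
handle (SaveZettel _ _) = pure (GitSha "0000000")  -- placeholder
handle (GetZettel _)    = pure "zettel contents"   -- placeholder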

Joel McCracken

So, one big issue that i see over and over again is that I have a different idea of what "boring haskell" means. To me, it means identifying a set of best practices and high-quality libraries to help make it easier to navigate the Haskell ecosystem, along with identifying which weaknesses exist in each. The goal being to have a "golden path" of haskell so that people can make successful projects.

Now, part of this meaning is also identifying the most highly useful parts, which IMO are clearly the basics; you can get super super far with Haskell 98, and if the whole industry adopted it, this would transform the economy IMHO.

The benefits of such a thing would also make it easier for beginners to figure out how much they need to learn. I learned haskell from haskellbook.com and knowing that basically that book would get me to the point where I could get a job made it much easier to navigate my way into Haskell.

I believe it is fine to expect your coworkers to learn something new when there is a good reason for it. We all have to do that in this industry, this world moves very quickly. But, I strongly believe that in terms of production code, using an unusual technique should have a good reason, or at least a reason. Some folks use things just because they are curious about them (like graphql , for example), without doing the work to understand what the tradeoffs are inherent in the tech. Except, years down the line, you learn that the tradeoffs you made were bad, and suddenly your project is dead.

(of course, I am not making a statement about anyone's personal fun coding time -- go wild if you want to! If you expect others to contribute, realize that doing very unusual things is perhaps counter-productive, but again, personal time is play time is fun time is go wild time)

It seems that some other people think you should NEVER stray from e.g. haskell 98, which i do disagree with, simply because these features are useful at times and can make code a lot better. But, I can see their point, as a community we do for example need better training materials.

A Haskell book for beginners that works for non-programmers and experienced hackers alike.
Joel McCracken

codygman said:

Joel McCracken

It is not generally possible to figure out the right abstraction

I feel like this is kind of the core of the disagreement between camps.

It reads to me like you simply accept you'll likely never get the right abstraction, use N because it's replaceable, and move on to the next problem?

That doesn't seem right to me and seems like one would never exercise their abstraction muscle.

When I said that, I meant it in the sense that you don't know when you will have the right one and when you won't. It's not that you can't EVER know; for example, once a thing is stable, perhaps it's time to add more type safety to prevent errors, add phantom types, etc.

I'm really just asking for an ounce of humility, and the ability to reflect on your past decisions and realize which ones are fruitful, and which aren't.

Joel McCracken

Since everyone is asking for concrete examples of problems I see, I will point out a few. Please understand that I just want the community to improve; it would really be easier for me to just shut up, which I have considered doing several times in this thread, but since the conversation is generally constructive I will continue because I'd like to be shown that either I am wrong, come to a shared understanding, etc. I really love Haskell and want to keep seeing the ecosystem grow in usefulness and make a larger impact on the world in general.

So, here is a concrete example: the performance of free monad systems. I've heard that folks have had free monads sink their apps because of performance issues. Then I've seen other people say "well actually that's FUD and here is a benchmark that proves it is" and well what is real?

If you have been paying attention, you probably know where this is going. Relatively recently the community "discovered" that typeclass instance function calls do not inline across module boundaries:

YouTube - Alexis King - “Effects for Less” @ ZuriHac 2020

Years and years of discussion about the relative value of microbenchmarks etc. and real-world stories about how free monad libs have real, whole-application performance problems are suddenly explained. It seems, based upon that talk, that the folks claiming "FUD!!" were wrong, and not only were they wrong, they hadn't bothered to try benchmarking a reasonably complex program. I mean, how many real Haskell applications only have a single module (or, that is, don't split things across modules like you would want to in any reasonably complex app that uses free monad libraries)?!

I cannot possibly overstate how much that talk blew my mind, and it exposed IMO a deep problem in the community. Personally, I'm working hard to develop a Haskell presence in Pittsburgh, making the place where I currently use it successful so we can have more people writing Haskell. I have invested a lot of my career into Haskell, avoiding other, likely higher-paying opportunities. Learning this and other things has only made me MORE conservative when it comes to choices because I DO NOT want my efforts to be wasted. So now for example I know to stick to concrete monad stacks and not adopt MTL-style typeclass constraints (except in circumstances where this problem won't matter).

For what it's worth, I actually would LOVE to learn more about free monad libs. It's still on my todo list, because I think they are so elegant and easier to understand than monad transformers. But, I haven't gotten to them yet.

Am I wrong? Did I somehow misunderstand the main point of that talk? Is Alexis wrong?
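(Not a claim about what sinks applications -- just the shape of the two styles contrasted above, using mtl; whether the polymorphic version gets specialized can depend on cross-module inlining, which is the issue the talk discusses.)

import Control.Monad.State (MonadState, State, modify)

-- MTL-style: polymorphic in the monad, so the MonadState dictionary
-- has to be specialized away for good performance.
bumpMtl :: MonadState Int m => m ()
bumpMtl = modify (+ 1)

-- Concrete stack: the monad is fixed, so there is no dictionary left
-- to specialize at the call site.
bumpConcrete :: State Int ()
bumpConcrete = modify (+ 1)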

Joel McCracken

Another example: I was recently working with the Haskell path library (and @Sridhar Ratnakumar this IS related to Rib). Now, I like the library. It makes a lot of sense. Track the "type" of a file path so you don't use absolute file paths when a relative one is needed, and you don't use a filename when you want a directory, etc. Excellent.

Except, I found the actual library was missing really basic features. For example, a basename function. Soonish after I was working with it, apparently a function was added that could have suited me: the splitExtension function added here:

https://hackage.haskell.org/package/path-0.7.0/changelog

But, at the time, that function did not exist. I ended up spending a lot of time scratching my head, trying to understand why I couldn't do this super simple, normal thing. I just ended up using the toFilePath escape hatch and then using the function there, see https://gitlab.com/JoelMcCracken/joelmccrackencom/-/commit/53cd3c8301ddadeb015c183c76ba3059ca659632#a622bfabc24287a5a213ceae5010aef0b3de0ac4_9_6

Additionally, the types I was using were all monomorphic (Path Rel File, basically), so I really never felt the benefit of the type system telling me that I made a mistake like using a file path when only a directory would be valid.

I DID spend a lot of time trying to figure out how to do really basic other things, too, like here:
https://gitlab.com/JoelMcCracken/joelmccrackencom/-/commit/53cd3c8301ddadeb015c183c76ba3059ca659632#903e782aede1cf232dbcff3090f737e3bc276546_44_44

(the types of absfile and relfile are QuasiQuoter, which is not so obvious when looking for how to use something; I probably spent a good half hour stumbling on this)
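
To show what I mean (a small reconstruction from memory with a made-up file name, not the actual Rib code), the quasiquoter literal plus the toFilePath escape hatch looked roughly like this:

```haskell
{-# LANGUAGE QuasiQuotes #-}

import Path (File, Path, Rel, relfile, toFilePath)
import qualified System.FilePath as FP

-- A checked path literal; the quasiquoter rejects malformed paths at
-- compile time, but finding out that relfile *is* a QuasiQuoter took a while.
postSource :: Path Rel File
postSource = [relfile|posts/hello-world.md|]

-- The escape hatch: drop down to FilePath to get the combinator that path
-- was missing at the time (a basename, in my case).
baseName :: Path Rel File -> String
baseName = FP.takeBaseName . toFilePath

main :: IO ()
main = putStrLn (baseName postSource)  -- prints "hello-world"
```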

I recall running into other problems using the library, but only the few things I mentioned made it into the source. Overall, though, I was really happy when Srid decided to remove Path and use FilePath; the resulting code was all a lot simpler, and it removed a big pain point.

Torsten Schmits

Joel McCracken said:

Am I wrong? Did I somehow misunderstand the main point of that talk? Is Alexis wrong?

That's right, the module boundary thing is something that _should_ have been clear, pointing to some kind of problem that should be addressed, but I don't know what it is. However, Alexis has worked it out and in the future it will all be peachy :sweat_smile:

Joel McCracken said:

I recall running into other problems using the library, but only the few things I mentioned made it into the source. Overall, though, I was really happy when Srid decided to remove Path and use FilePath; the resulting code was all a lot simpler, and it removed a big pain point.

path is certainly lacking in several ways, and its abstraction is quite notoriously questionable (see https://hackage.haskell.org/package/hpath ), but it's really a very tame specimen regarding its complexity.

Joel McCracken

Yes, it's certainly not the worst; I'm just trying to give an example of something I experienced that went for a more abstract solution, where I felt that was more of a pain than a benefit. But it's interesting to see how things can be bad at such a minor level.

Sridhar Ratnakumar

I really never felt the benefit of the type system telling me that I made a mistake like using a file path when only a directory would be valid.

Well, I still think this is a really useful thing to have the type system check for us; however, it definitely is not worth having at the cost of a sucky path interface, like you illustrated above. Which is why it was removed from rib. The costs were not worth it; and, as you say, filepath is not that bad.

codygman

Joel McCracken

It is not generally possible to figure out the right abstraction

I feel like this is kind of the core of the disagreement between camps.

It reads to me like you simply accept you'll likely never get the right abstraction, use N because it's replaceable, and move on to the next problem?

That doesn't seem right to me and seems like one would never exercise their abstraction muscle.
When I said that I meant it in a way that you don't know when you will have the right one and when you won't. It's not that you can't EVER know; for example, once a thing is stable, perhaps it's time to add more type safety to prevent errors, add phantom types, etc.

Thanks for your comments and examples! I'll look at them again tomorrow when I'm less tired. I'll try addressing some parts of what you say to keep the conversation going and hopefully get other perspectives though :)

When the thing is stable, you likely won't have the time nor the motivation to add more type-safety, prevent type errors, or add phantom types. This method presumes that more type safety for instance slows down or otherwise impedes the development process and is merely a nice to have that should be done after the fact if at all.

Joel McCracken
I'm really just asking for an ounce of humility, and the ability to reflect on your past decisions and realize which ones are fruitful, and which aren't.

That sounds reasonable on the surface, but the problem is that the methodology actively steers one away from ever concluding "I'm going to use type-safety up front here to speed up development and prevent mistakes".

I really want the ability to reflect on past decisions to figure out which are fruitful; the concern I have is that the pattern you describe doesn't allow for ever considering anything but that which you're already convinced of.

I'm not claiming I have a better answer either by the way.... the only solution within these assumptions is "randomly prefer type-safety sometimes" to give it a chance of proving "it could be better".

I see two schools of thought in the Haskell community as I've been mentioning recently:

"Real-world thinking" vs "First-Principles thinking"

"Real-world thinking" absolutely dominates industry, and it seems simple Haskell advocates want it to dominate the Haskell industry too.

I think you might see "first-principles thinking" as I'm describing it as an unrealistic (pie-in-the-sky, even) notion that you can figure out ideal abstractions in advance, or even that it's necessary to solve the subproblem (or more likely group of larger problems) at hand.

The problem (or solution, depending on who you ask) is that things like static type-safety, finding super-powerful general abstractions, and the other tools that make Haskell powerful aren't possible under a mostly or exclusively "real-world thinking" framework.

I think that the original "Boring Haskell" blog post by Michael Snoyman didn't say anything one way or the other about "first-principles thinking", but that any advocates of "boring Haskell" since then have utterly disregarded the value of anything but "real-world thinking".

Then the recent "What killed Haskell, could kill Rust, too" confusion seemed to be largely based on interpreting a preference for "first-principles thinking" as "ignoring truths from the mainstream just because they came from that "dirty" mainstream" - source.

The point being that I now know I don't see as much of a problem with Boring Haskell if it comes with first-principles thinking, but the boring Haskell that is being pushed for seems to come along with a "prefer only real-world thinking" attitude, like other languages in industry, which I find alarming, unjustified, and frankly pandering to a sort of "hey, we aren't that different" idea that will ultimately hurt more than help.

What killed Haskell, could kill Rust, too. GitHub Gist: instantly share code, notes, and snippets.
Sridhar Ratnakumar

I was going to use hpath on Rib, but I was afraid of having a repeat experience.

Sridhar Ratnakumar

Also the library was still WIP at that time

Torsten Schmits

in the case of path, I think that there are quite a few aspects of abstraction that need to be examined separately.
The absence of basic combinators that you mentioned doesn't seem to me to match our current discussion, so since this was your main pain point, what can we say about it? How did the abstraction make it hard for you to implement an operation?

Some relevant aspects of the lib, as far as I can conjure up now, are:

  • having MonadThrow control errors, which is a slight improvement over System.FilePath insofar as the latter uses exceptions, which is pretty unergonomic, while in path you can just type the result as Maybe or Either (rough sketch after this message). Since this is a straightforward situation, I think a concrete Either would be better suited, though, so I would consider this to be overabstracted, maybe?
  • having a type for Rel/Abs, which is basically a safety mechanism for passing a path to third-party code. Not sure I would rank this as one of those "complex abstractions" either; and I consider it to be pretty helpful.
  • having a type for Dir/File, which is similar, but since the path is purely logical I don't really see that there is something substantial to back this up. I consider it more as a newtype replacement, intended to help you keep track. This is removed in hpath and that kinda makes sense to me.
  • construction has a safety mechanism, to prevent "bad" paths with stuff like multiple slashes or something; I don't think it's very useful.
  • using MonadIO, which should be obvious.

So if you want to use path, you should have some requirements from that list to be satisfied. In your case, you were forced to use it, so how would you characterize your experience given my description?
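
Rough sketch of the first two points, with made-up names (not advocating anything, just to make them concrete):

```haskell
import Control.Exception (SomeException)
import Path (Abs, Dir, File, Path, Rel, parseRelFile, toFilePath, (</>))

-- Because parsing runs in any MonadThrow, the caller picks the failure mode:
notRelative :: Maybe (Path Rel File)
notRelative = parseRelFile "/definitely/not/relative"      -- Nothing

notRelative' :: Either SomeException (Path Rel File)
notRelative' = parseRelFile "/definitely/not/relative"     -- Left ...

-- The Rel/Abs and Dir/File indices then keep the pieces straight when
-- combining: (</>) only accepts a relative path on the right.
resolve :: Path Abs Dir -> Path Rel File -> Path Abs File
resolve = (</>)

main :: IO ()
main = print (toFilePath <$> notRelative)  -- prints Nothing
```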

Torsten Schmits

my opinion, though, is that the low quality of path's abstraction can be avoided by following design rules. As I said before, I consider this to be the main problem.

codygman

Joel McCracken said:

Please understand that I just want the community to improve; it would really be easier for me to just shut up, which I have considered doing several times in this thread, but since the conversation is generally constructive I will continue because I'd like to be shown that either I am wrong, come to a shared understanding, etc. I really love Haskell and want to keep seeing the ecosystem grow in usefulness and make a larger impact on the world in general.

:heart: :heart: 10000% agree here and that's all I want as well.

I want to let you know I value your input and the effort you put into things... those examples aren't easy to recover from memory and recount! Thank you!

I'm really enjoying the direction the conversation is going in, and it can't go in a very interesting direction with only one opinion.

As a heads up, I often feel it would be easier for me to just "shut up" as well... but I feel like at least everyone in this thread has the shared value of:

I really love Haskell and want to keep seeing the ecosystem grow in usefulness and make a larger impact on the world in general.

James King

One thing I've been thinking about lately is... what if boring Haskell (whatever subset of Haskell you mean) is not unique or interesting enough among programming languages for late adopters to switch? Do late adopters move to Haskell because it's as boring as Java? Java already has lambda functions and is getting pattern matching. C# is getting pattern matching, eliminating nulls, getting algebraic data types, etc. C++ isn't far off either. And they're only getting better, with bigger communities and bigger budgets and many, many more people invested in their continued success.

I was thinking more about Gabriel's marketing talk and how Haskell bridges the gap, so to speak, within an ecosystem where its most easily understood features are already commonplace in more established languages. Is it the RTS and lazy evaluation and a handful of acceptable "boring" libraries that bridge the gap to the late adopter?

I do hope that boring Haskell finds a way to weaken the dichotomy. Calling it "Simple Haskell" and "Fancy Haskell" creates tribal boundaries. The C++ community goes through this at least once a decade, where one group decries the new features and tells everyone to adopt their subset of the language. It's not bad to want to use and promote a subset, even if you think it's a better one, but does it have to come at the cost of making sure that people who use a different subset are in some fancy out-group?

Joel McCracken

codygman said:

When the thing is stable, you likely won't have the time nor the motivation to
add more type-safety, prevent type errors, or add phantom types. This method
presumes that more type safety for instance slows down or otherwise impedes the
development process and is merely a nice to have that should be done after the
fact if at all.

So by stable I mean "you know what your program needs to do",
or you are confident that
you are going to be able to address whatever downsides/risks are
involved. I don't mean that you should do this once "this thing is no longer
being worked on".

A great example is RDBMSs. At this point, even though I generally really dislike
SQL (for all sorts of reasons), they work well and are a great "backbone" data
store. Use them for most things, and if for some reason they don't work well in
a given situation, THEN examine tradeoffs.

... the concern I have is the pattern you describe doesn't allow for ever
considering anything but that which you're already convinced of...

Perhaps you're right, and I am too conservative. But I think there is a whole
range of what is acceptable, and I myself am not extremely conservative in my
habits, for example since I am a big proponent of Haskell, I am sure my
non-haskell programmer friends think I am extreme. But let me tell you a
few stories about hype, belief, and disbelief.

  • A friend of mine from college (who has since become an extremely well-known
    person in the programming community) once mentioned, while we were talking,
    that he really liked Mongo, which was the new hotness. I remember seeing
    articles about it, but it didn't seem "ready" to me yet. I asked "how does
    Mongo deal with transactions, record locking, etc.?" His answer was basically
    "it doesn't worry about them" (which I believe was true at the time, circa 2009).

    This set off alarm bells in my mind. How can a community replace something
    which has this sort of critical functionality without addressing it?
    For this and other reasons (which I think of in my mind as "magical thinking"),
    I was wise enough to avoid the Mongo hype train, and watch it as it pulled
    into the station and exploded, killing all the projects on board.

  • Yet, in another situation, I drank the koolaid and it made me sick.
    I believed some other mistaken people about mockist-style testing, and it
    significantly impacted a project I was working on and the timelines for it.
    Basically, I bought into: test each class individually, carefully mocking out
    the interactions it has with other classes.

    In my experience, this style of testing leads to worthless tests. You end up
    having meaningless tests, because what you care about is how something works
    within the context of business requirements and behavior, but that is not
    reflected in the tests. Each class was too small, and any interesting
    business behavior was actually an interaction between many classes. So,
    testing each class individually was basically a waste of time.

    In hindsight, I realise this style never made much sense to me, because I
    couldn't see the connection between mocking out each interaction and making
    your code easier to modify, or knowing you didn't break the business
    requirements. But I trusted it, as it was being promoted as the "one true
    way" at the time.

    What happened? I think I was too inexperienced and did not have enough
    context with writing tests, so I kept going on the "promised" right way, and
    just kept pushing. I also did not have enough confidence in myself to "trust
    my gut", being afraid of showing that I didn't really "get" testing.
    Today, I wish I had asked more questions and not acted like the expert that
    I wasn't. I think perhaps if I had, I wouldn't have made those mistakes.

    FWIW, if you are interested in the results of what I settled on with testing,
    this talk was a very effective distillation of the lessons I learned over the years:
    YouTube - 🚀 DevTernity 2017: Ian Cooper - TDD, Where Did It All Go Wrong
    I found myself "amen"ing this talk continually as I watched it.

  • Since then, I did gain some additional wisdom regarding things when another
    fad came around, this time it was the microservices fad.

    Microservices never really made sense to me as a technique for anything but
    large, hundred-or-thousand-person teams. I think the downsides are fairly well
    known now, though many are still using them inappropriately. I'll speak to
    this if anyone cares, but the entire idea is just extremely complex and this
    is a huge topic. But one of the biggest advocates for microservices has recanted:
    https://changelog.com/posts/monoliths-are-the-future

    What blows my mind is that most of the problems that microservices introduce
    are fairly obvious.

I'm not claiming I have a better answer either by the way.... the only solution
within these assumptions is "randomly prefer type-safety sometimes" to give it a
chance of proving "it could be better".

Every situation is different. When making any decision to use some technology,
ask yourself "if this is wrong, how hard will it be to fix?" If fixing it
means "replace a few lines of code", then fine, use it. If it's "throw out the
entire program", perhaps you should think about how to mitigate the risks, or
just not use it in this project. Your professional reputation is at stake, after
all.

I see two schools of thought in the Haskell community as I've been mentioning
recently: "Real-world thinking" vs "First-Principles thinking"

"Real-world thinking" absolutely dominates industry, and it seems simple Haskell
advocates want it to dominate the Haskell industry too.

I think you might see "first-principles thinking" as I'm describing it as an
unrealistic (pie-in-the-sky, even) notion that you can figure out ideal
abstractions in advance, or even that it's necessary to solve the subproblem (or
more likely group of larger problems) at hand.

I actually don't think of myself in either way. IMHO, all I am talking about is
a cost/benefit/risk analysis. This applies equally to "real world" thinking and
"first principles" thinking.

I tend to find that folks who are "real world" thinkers tend to make a lot of
mistakes, like the legions of Microsoft people who say "well, open source is
fine, but in the Real World we do Real Work and solve problems for Enterprise".

This causes you to fall back on whatever the norms of our culture are
and resist any change at all, even if what you are doing is obviously worse.
Stating "in the real world we use mockist testing to get things done" is not in
any way good.

The right way to go, in terms of everything, is somewhere in between. Or,
perhaps, my preference is to develop confidence that a "first principles" system
will work out when put in a "real world" situation. Or at least mentally examine
the upsides and downsides before proceeding, and consider whether it is
worthwhile or not.

The problem (or solution, depending on who you ask) is that things like static
type-safety, finding super-powerful general abstractions, and the other tools
that make Haskell powerful aren't possible under a mostly or exclusively
"real-world thinking" framework.

I actually really agree and think these are very valuable. Remember, I'm a
Haskell programmer, not a golang dev trolling everyone. I have accepted the
value of static types and abstractions. I just want to ensure they work
before I "bet the farm" on them.

...but that any advocates of "boring Haskell" since then have utterly
disregarded the value of anything but "real-world thinking".

This is so far from my actual feelings on this that it's clear I have failed in
some way to communicate what my actual opinions are.

Let's look at one example, that is, pure functions. I bet if you asked most
programmers, they would say that restricting IO to the
outermost layer of your program is entirely disconnected from the "real world".
But I was open to the idea, and proved it to myself enough that it actually
works. In fact, not only does it work, it works so well that taking a big
chunk of my free time to learn Haskell well enough to get a job was clearly
worth it.
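
In case it helps anyone following along, the shape I mean is as trivial as this (a made-up example, nothing from any real codebase):

```haskell
import Text.Read (readMaybe)

-- Pure core: the decision logic is an ordinary function that never touches IO.
data Decision = Approve | Reject String

decide :: Int -> Decision
decide score
  | score >= 700 = Approve
  | otherwise    = Reject "score too low"

-- IO shell: only the outermost layer talks to the outside world.
main :: IO ()
main = do
  input <- getLine
  case readMaybe input of
    Nothing    -> putStrLn "not a number"
    Just score -> case decide score of
      Approve  -> putStrLn "approved"
      Reject r -> putStrLn ("rejected: " <> r)
```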

Then there are abstractions that seem "complicated", but if you want proof of
how powerful they are, I think looking at Traversable is a wonderful example.
Take a few simpler abstractions (Functor, Foldable, and Applicative or
Monad), and combine them so that you are able to do something that would
otherwise be a lot of manual, error-prone code, and would have to be implemented
for every pair of types.
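
A tiny illustration of the kind of win I mean (my own toy example):

```haskell
import Text.Read (readMaybe)

-- One traverse covers what would otherwise be hand-rolled plumbing for every
-- container/effect pair: parse every element, and fail as a whole if any
-- single element fails.
parseAll :: [String] -> Maybe [Int]
parseAll = traverse readMaybe

main :: IO ()
main = do
  print (parseAll ["1", "2", "3"])   -- Just [1,2,3]
  print (parseAll ["1", "two", "3"]) -- Nothing
```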

"What killed Haskell, could kill Rust, too"

I don't have much of an opinion on this piece beyond what I've already stated,
except that, in the comments, Alexis took his post as being rude/disrespectful,
and I just want to make it clear that I value all this cool stuff people are
building. I really want to use it. That's why I am here. If I didn't want to use
it, I'd go write golang for more money. I just want to make sure projects don't
fail because of technical choices that should have been obvious, such as the
one dealing with persistent, which I mention later in this post.

Unpopular opinion! Monoliths are the future because the problem people are trying to solve with microservices doesn’t really line up with reality. Just to be honest - and I’ve done this before, gone from microservices to monoliths and back again. Both directions.
Joel McCracken

The point being that I now know I don't see as much of a problem with Boring
Haskell if it comes with first-principles thinking, but the boring Haskell that
is being pushed for seems to come along with a "prefer only real-world thinking"
attitude, like other languages in industry, which I find alarming, unjustified,
and frankly pandering to a sort of "hey, we aren't that different" idea that
will ultimately hurt more than help.

I keep hearing this kind of opinion, and I really don't understand it. What
haskell programmers are advocating for this kind of thing? We are a
self-selected group of people who took our own free time to learn something that
probably isn't going to improve our job prospects or our pay.

Ultimately, Haskell isn't that different, at least not as different as many
seem to make it out to be. Maybe you think that's pandering; I
honestly don't really care. Haskell is still software engineering.
Haskell projects can fail for the same reasons golang projects can fail. It is
important to recognize this and account for it. For example, the reasons for
KISS, DRY, etc. are still applicable in the world of Haskell.
Of course, Haskell is different, and so you need to adjust your principles,
but most still apply.

Another story time. A couple of weeks back I had a twitter thread with another
haskeller who worked
at one of the companies who has since abandoned Haskell. He blamed persistent
for the problems they had. I was curious because I generally find Persistent
adequate and not-very-problematic, so I asked them about it.

It seemed to me that the crux of their problem was that the data model used in
the database was being shared with the frontend code via GHCJS, and they had a
bunch of problems with scaling their Postgres database.

From here it is EASY to fill in the blanks with what happened, because elsewhere
in the industry the same idea has been tried, and the same problems arose. I
have encountered the idea before.
If you reuse your db data model in your frontend, you probably will save some time
upfront, but you are also very tightly coupling these two very different things.
You WILL have a problem if you ever get into a situation where the frontend and
the backend data models need to diverge. For example, if the frontend needs to
have data organized a little differently because the browser freezes for 5
seconds while rendering happens, or perhaps because your database schema is not
scaling well and you need to reorganize how the tables are stored, perhaps even
splitting some specific thing into a NoSQL data store.

And, this is what happened to them. Their application was tightly coupled to
these few data models and they weren't able to make changes to their database to
make it scale better.

To reiterate, tightly coupling very disparate things
can be faster if those things happen to mirror each other, but once they don't,
you are potentially in for a lot of pain. This is, like, known, man. So why do
people think that this wouldn't apply to Haskell? This is an example of magical
thinking.

As a side point, if someone had done this kind of thing in JavaScript, I think
management would have said "oh that specific thing was a mistake". But since
Haskell is "unusual", Haskell itself was seen as being the problem.

If you want to convince me that Haskell is different and that the same
rules don't apply, please tell me how I am wrong here.

I am not trying to "pander" to people who come in wanting to write Java in
Haskell. I am trying to help more projects succeed by not getting hamstrung by
things that have some fundamental problems.


Torsten Schmits said:

in the case of path, I think that there are quite a few aspects of abstraction that need to be examined separately.
The absence of basic combinators that you mentioned doesn't seem to me to match our current discussion, so since this was your main pain point, what can we say about it? How did the abstraction make it hard for you to implement an operation?
...
So if you want to use path, you should have some requirements from that list to be satisfied. In your case, you were forced to use it, so how would you characterize your experience given my description?

I'm not really sure what you're asking me for. But my main point is that I want
the Haskell community to use things that are more fully baked in production
code. Before picking something up that is supposed to be more type-safe, make
sure it satisfies basic
requirements. That's the thing about path. Folks are using it because it's
more type-safe, but meanwhile from my POV it didn't bring any benefit (to what
I was working on), and it had downsides that were not nothing. To be clear, it's
not that I want to use FilePath. I want path to be better!

"Too abstract" is just one way in which a project might have problems. I don't
think path suffers from that, except IMO it is not obvious what the value
proposition is from looking at its hackage page (or what else I should look at
instead!), but that's still a matter of documentation, probably.

If you're writing code that other professional Haskell programmers would have
problems with, perhaps consider not doing this.
This is just normal engineering.
What will be confusing for others is a function of cultural norms.
I guarantee many in here have mocked FactoryFactoryFactory abstractions in
Java for being too complex and devoid of value.
Additional complexity is worthwhile in a project if it has accompanying value.
Could another teammate jump into this code and
make contributions? Could you hire a Jr dev and have them productive in a
reasonable amount of time? All of this is part of being a professional software
developer. Writing a lot of code that only YOU understand is not something that
is considered acceptable in any other area of software engineering. Haskell
isn't magically different.

A lot of this discussion comes down to cost/benefit/risk analysis. If
you deem that the cost/risk is low enough and the benefit high enough, fine, use
it. Just realize what might happen if you are wrong.

Now, back to path: As I said before, I like a lot of things about path.
I'd also like to try hpath, perhaps. I don't know how that would work out,
but I probably wouldn't
intentionally use them on a production codebase without more investigation.
The problem with path is that it is not "fully baked" in
several ways, as you can see from the diff I linked to:

  • deficiency of documentation, specifically I stumbled for a while trying to
    find out how to make a path literal.

  • deficiency of combinators; if I were writing a library like this, personally I
    would look at the existing FilePath abstraction and add comparable combinators
    for each of its functions. Since path didn't, I don't have a ton of confidence
    that other standard tools won't also turn out to be missing.

I do quite dislike that FilePath has partial functions. But it also actually
did what I needed it to.

I generally have a very bad memory when it comes to certain things, especially
the details of what I did. People are always telling me "remember when you..."
and no, actually, I don't remember. This literally just happened today. So it is
challenging to remember these things, and while path was not a perfect example
by any means, it did satisfy certain criteria. But the problems with free monads
are much, much more significant.


James King said:

...but does it have to come at the cost of making sure that people who use a different subset are in some fancy out-group?

If I have at all made anyone feel like they are part of an out group, I
apologize. I really want to establish a good set of practices that people can
use and be reasonably sure their projects will not have massive problems because
of them. I want to minimize Haskell job erasure.

You can always program some other way if you want. If you have different values
than I do, that's fine too. I just see certain problems again and again, and I
think I've discussed how some basic software engineering lessons
are missing from people's thought processes.

Personally, since I am really hoping that Haskell will succeed at my employer, I
am going to keep using the techniques that I have a reasonable confidence will
yield a successful project. I will mess around on my own time. If, at some
point, I believe the techniques I have learned can responsibly be used as a
professional, I will adopt them when I have a good reason to, but not before.


I kinda feel like I am repeating myself over and over, and this is absurdly
long. It feels like my point isn't really coming across clearly, so I added
some personal stories, for example, which added to the length.
But I feel like I am just reiterating what I have already said, so I don't know
how much more there is to discuss.

Joel McCracken

Sorry for how long these are; I'm trying hard to make things clear, and it seems like I haven't been too successful previously in this thread...

Torsten Schmits

@Joel McCracken the purpose of my question was to connect your criticism of the lib to the concept of Boring Haskell, since it seems to me that you're just stating that the library has overall low quality, which I agree with, but not that it is hard to understand how it works. I'm not trying to make a point, just an attempt at discussing concrete examples of what is too fancy. If that doesn't apply here, no worries!

codygman

Joel McCracken said:

Sorry for how long these are; I'm trying hard to make things clear, and it seems like I haven't been too successful previously in this thread...

I've read through your responses, but haven't fully understood them yet :) I'll try with more energy tomorrow.

Don't be sorry for how long these are; I think this is a valuable conversation with lots of moving parts, which makes it hard to communicate.

Your effort in making things clear is very appreciated, please don't take any of my challenges or disagreements to mean you weren't successfully communicating... I could still just disagree (whether I should or not :big_smile: ).

Joel McCracken

Gotcha, thanks for clarifying. Boring Haskell is not just about easy-to-understand stuff; it's also about identifying the high-quality libraries etc. so that it's easier to build successful projects

Torsten Schmits

well, I guess there just isn't one for file paths :shrug:

Joel McCracken

I actually think that’s a super valuable observation, and the question becomes “how do we fix it”

Torsten Schmits

I think you just volunteered to build a better one :sweat_smile:

Torsten Schmits

tbh, I wanted to do that a few times when dealing with path, but obviously it isn't done in a day

Joel McCracken

Perhaps hpath is it now?

Torsten Schmits

afaict it's just a shallow fork of path

Torsten Schmits

@Joel McCracken just a note, after reading the entirety of your essay, I have the impression that you're suffering more from haskellers in the industry knowing too little about non-haskell topics in software development, like large-scale architecture design, and not from too-advanced code.
I can't judge that situation, but that's what it reads like to me.

Joel McCracken

I think I agree, but overly complex abstracted code may also cause problems. People need to be able to maintain your code after you're gone.

Rizary

@Joel McCracken If someone couples their db (in this case Persistent) data model to the frontend via GHCJS and the project is shut down, then instead of blaming Persistent and GHCJS, would the boring haskell initiative people find a solution to address the problem? Like using another library besides Persistent? (I ask this for my own knowledge.)

I feel that the people behind this initiative are just a group of library maintainers who really wish that Haskellers, or beginner Haskellers, would use their libraries, so that we end up depending on them a lot. And I know some people who are always bashing other libraries of a similar type to their own. Like, instead of Persistent, why not postgresql-simple? Instead of stack, why not promote cabal? If someone decides to join boring haskell and prefers cabal, are they not a member of boring haskell anymore?

Joel McCracken

I think most people would consider Persistent to be "simple haskell"; the issue is less the library and more the way it was used. You really can't prevent someone from using your library in a bad way, or at least I don't know how you'd be able to do that.

The right way to do it IMO is you would have your persistent types, and then you have "view model" types with ToJSON/FromJSON instances which are sent down to the client and then parsed by the client. IMO the client/server would be able to share these "view models", and I don't think it would be too terrible, but I'm not totally sure and it would be something I would watch. It's quite likely that the client might need to do additional work to convert them to types that work well for it, with the view models just used for data exchange. You need to be aware of this coupling, and when you start to feel the pain, come up with ways to address it. I don't know the specifics of what happened so I can't speak to what went wrong, but sometimes if you let a problem get too bad without addressing it, it becomes essentially impossible to fix: too many things are coupled, too many things behave weirdly to work around this other thing that is not quite perfect, etc.
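
Roughly the shape I have in mind, with made-up types (UserRow standing in for whatever persistent's TH block would actually generate):

```haskell
{-# LANGUAGE DeriveGeneric     #-}
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (FromJSON, ToJSON, encode)
import Data.Text (Text)
import GHC.Generics (Generic)

-- Stand-in for a persistent entity; in a real app this would come from the
-- persistent TH block and carry DB-specific fields.
data UserRow = UserRow
  { userRowName         :: Text
  , userRowEmail        :: Text
  , userRowPasswordHash :: Text
  }

-- The "view model" that actually crosses the wire: only what the client
-- needs, with its own JSON instances, free to evolve separately from the
-- table layout.
data UserView = UserView
  { userViewName  :: Text
  , userViewEmail :: Text
  } deriving (Generic)

instance ToJSON UserView
instance FromJSON UserView

-- Conversion happens on the server before anything is sent down.
toUserView :: UserRow -> UserView
toUserView row = UserView (userRowName row) (userRowEmail row)

main :: IO ()
main = print (encode (toUserView (UserRow "Ada" "ada@example.com" "secret-hash")))
```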

The reason stack is promoted is because it solves a lot of problems that trip beginners up. For example, it installs the correct version of GHC. A beginner can install stack and then git clone a project and just run stack build and then be ready to go. Cabal used to not have certain features, such as sandboxes, which made certain things not work correctly, and IIRC basically the invention of stack made cabal evolve better features. I don't have a big stake in this debate, I just know that stack has always worked for me and never really caused me any grief, whereas I find working with cabal directly to be generally quite confusing. But that's just me.

"Boring haskell" isnt like a club. Its just an idea that a lot of people have been thinking. Its just a name for an idea. People aren't part or not part of boring haskell; they just like or dislike the ideas. and re: people just promoting their own libraries, well, you can feel that if you want. Perhaps someone is urging others to their library in bad faith; i don't know. I don't have any haskell libraries so that criticism wouldnt apply to me

Joel McCracken

Anyway, there isn't any reason there couldn't be multiple libraries that do a similar thing and are both considered "boring haskell". The idea is basically to say "the haskell community has said these libraries are good enough that you can use them without worrying that you are the first person who actually used this library in a significant way". At least, that's how I feel.

Torsten Schmits

Joel McCracken said:

I think I agree, but overly complex abstracted code may also cause problems. People need to be able to maintain your code after you're gone.

So to bring this back to the OP, do you think that it might be possible that the overly complex part is a direct consequence of the lack of general software engineering skills, and basically orthogonal to abstraction?

Joel McCracken

Absolutely it is possible; it's a skill to know how to write code that is consumable by other engineers

Torsten Schmits

So I guess the question is: is there a correlation between the use of abstraction and a lack of accessibility skill, or is it rather between academics/hobbyists/etc. and that lack, or something else? And how does that translate demographically to the community?

James King

I don't know that it does. It's the same old tribal hostility you see elsewhere in programming communities. When I hear people describe what they do as "real world programming," I'm turned off. Qualifying how others behave or think is a sign that they're gate-keeping their in-group.

I'm down, as much as anyone else, for higher levels of adoption of Haskell in industry. I disagree that "Boring Haskell" is the way. C++ had similar movements as well, and which subset of C++ you end up with differs depending on which team you talk to. Game engine developers are famous for writing their own template libraries; others only use C with templates. It's okay to use a subset at a company... Haskell is a big language! But I think it's harmful to bless a particular subset. It's great for debates but not for progress and adoption. I think we could benefit from uniting as a whole community and focusing our efforts on understanding markets better, if adoption is what we want to see.

I realize there are stories of "teams falling apart due to inappropriate use of Haskell feature X," or because the team lacked the skills to maintain a "fancy" Haskell code base... but is that Haskell's fault or the context of the business? What if those developers aren't given the appropriate time and space to learn because they were always expected to be shipping new features? What if their manager wanted the project to fail because they didn't believe in Haskell to begin with? I don't think I can recall a project failing because of a programming language feature. It's usually because the social structures of the business were already falling apart or poor to begin with. A strong team committed to their goals and working on the same page can make things work.

Joel McCracken

Can you point at any message that indicates that "boring haskell" people are claiming that only they do "real world programming"?

Can you point at any message that indicates someone is trying to establish an in-community?

At this point I don't think you are arguing in good faith. I have made it clear that none of those things you say are what I believe.

Also, I think it is rude to say:

I realize there are stories of "teams falling apart due to inappropriate use of Haskell feature X," or because the team lacked the skills to maintain a "fancy" Haskell code base... but is that Haskell's fault or the context of the business?

Is it so impossible to believe that these people are actually right? They are, after all, talking about their experience.

Joel McCracken

If you have someone from one of these teams who says differently, and blames their dumb coworkers for not understanding their code and their bad management for not letting them put the effort in, OK, at least we would have someone who did it. But I don't think I've seen anyone ever make a counterclaim like that

James King

Sure, from https://www.simplehaskell.org

Commercial software is a team endeavor. Fancy Haskell is costly to teams because it usually takes more time to understand and limits the pool of people who can effectively contribute.

I'm not saying it's an extreme problem but it's not an uncommon opinion in my experience.

And no, I can't say definitively that a software project collapsed because someone used GADTs, but I have heard stories from people who blamed language features (this isn't unique to Haskell), and they leave out the context of everything else that might have been going on.

I'm not trying to argue in bad faith or anything -- I'm just trying to point out that adoption is more complicated than a subset of language features.

I myself tend to stick with the most simple features I can get away with before I reach for more interesting/powerful things. :)

Joel McCracken

I don't think GADTs are an issue; I honestly think they would probably be blessed by such an initiative because they are so easily comprehensible and clearly useful, and it would be very hard to imagine a person walling themselves into a box with GADTs alone. Really, the only thing they need is a small, easily comprehensible blog post or wiki page, IMHO

Joel McCracken

Gotcha, sorry I reacted a bit out of anger

James King

It's okay! I wasn't trying to be rude and would like to avoid being rude. :)

Joel McCracken

So can I say, first of all, that I really dislike that Simple Haskell site; I thought it was very poorly worded

Joel McCracken

It is NOT anything that any of the people from the Snoyman mailing group would have written, IMHO. I actually don't know who is behind it

Joel McCracken

So maybe now I understand where the resentment is coming from

Torsten Schmits

James King said:

I realize there are stories of "teams falling apart due to inappropriate use of Haskell feature X," or because the team lacked the skills to maintain a "fancy" Haskell code base... but is that Haskell's fault or the context of the business? What if those developers aren't given the appropriate time and space to learn because they were always expected to be shipping new features? What if their manager wanted the project to fail because they didn't believe in Haskell to begin with? I don't think I can recall a project failing because of a programming language feature. It's usually because the social structures of the business were already falling apart or poor to begin with. A strong team committed to their goals and working on the same page can make things work.

To me, this is the best point made so far. Management in companies is notorious for this kind of effect.

James King

Sorry for getting you upset! I think a lot of the spirit of "Boring Haskell," makes a lot of sense.

Joel McCracken

Maybe I'm wrong, but I generally think it's your job as an engineer to try to sense what management expects and work within those confines. It is certainly possible to have impossible expectations, though. But in terms of the one case re: the postgres db being unable to scale, it does sound like it was a technical problem, or at least that person perceived that having their DB model coupled tightly throughout their codebase (including their frontend) was a big factor in making it "impossible to scale"

Joel McCracken

I'm trying to find the tweet atm just so y'all have context

Joel McCracken

there was something someone chimed in with about some service being deployed in production in a tmux session

Joel McCracken

https://mobile.twitter.com/rossabaker/status/1296542252433641495?prefetchtimestamp=1599664608257

Joel McCracken

this whole thread is worth a read

James King

I think I've seen bits of that. It's hard to see what factors led up to that. I'd have so many questions!

Joel McCracken

Ditto; I'm not sure how much everyone is willing to discuss that stuff though

Georgi Lyubenov // googleson78

all the issues that Kris has in the thread seem to be unrelated to persistent (and how complex or simple it is) to me, as other people have pointed out in the thread itself (using db types as domain types, every db lib forces you into IO at some point)

Joel McCracken

Yep. I agree with Kris FWIW; it's better to have your business logic in pure types, and have a monad transformer stack "at the edges". I'd like to keep PersistT out of things as much as possible, just because of the principle of least power!

James King

I did something similar in Rosby with separating the serialization types/primitives from the domain types/objects. Separation of concerns is a nice idea. :)

Torsten Schmits

thought at first you were referring to Ross Baker (tweet above) with Rosby

James King

Haha, no it's the name I gave the database I'm building on my stream: https://github.com/agentultra/rosby

A simple, reliable key key-value database. Contribute to agentultra/rosby development by creating an account on GitHub.
James King

It's a reference to Lord Gyles Rosby in A Song of Ice And Fire: https://awoiaf.westeros.org/index.php/Keeper_of_the_Keys

Joel McCracken

@Sandy Maguire posted a quote in the FP chat slack that I thought perfectly captures my feelings on this whole topic:

nice quote from hamming that I think applies well to our community
"In science if you know what you are doing you should not be doing it.
In engineering if you do not know what you are doing you should not be doing it."

codygman

@Georgi Lyubenov // googleson78 I'm becoming increasingly suspicious of db types as domain types.

I recently did a refactor of many functions in a system that passed around tons of arguments, because that feels like the simple thing to do each time, and because the db type was very large.

These parameters could be meaningfully separated into 2-4 sensible types across these functions.

As a first pass I've just passed along the large db type at each step, which alone simplified things.

It also made it obvious what fields encoded as Maybe were being used for very important domain logic and would have been better described as a sum type.

Most of the fields in the large db type had to become optional, which made calling code needlessly difficult.

As a next pass I'll create more purposeful types and refactor the functions through the system to use those.

Then in the read/write parts just use toDBType/fromDBType functions.

This entire exercise on a medium sized piece of code was way harder and took way longer than stopping for a second to think up meaningful types would have been.

My key takeaway, though, is how "db type as domain type" effectively took this more ideal option off the table and discouraged even thinking in the direction of better abstractions.
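
To make those last two passes concrete, here's a sketch with made-up Order types (not the actual code, just the shape of the refactor):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)
import Data.Time (UTCTime, getCurrentTime)

-- Stand-in for the wide DB row: everything optional, because one table has
-- to accommodate every state at once.
data OrderRow = OrderRow
  { orderRowId          :: Int
  , orderRowShippedAt   :: Maybe UTCTime
  , orderRowTrackingNo  :: Maybe Text
  , orderRowCancelledAt :: Maybe UTCTime
  }

-- A purposeful domain type: the Maybe soup becomes an explicit sum, so an
-- impossible combination like "shipped but no tracking number" can't even
-- be constructed.
data OrderStatus
  = Pending
  | Shipped UTCTime Text
  | Cancelled UTCTime
  deriving Show

data Order = Order
  { orderId     :: Int
  , orderStatus :: OrderStatus
  } deriving Show

-- Conversions live only at the read/write edges.
fromDBType :: OrderRow -> Maybe Order
fromDBType row = Order (orderRowId row) <$> status
  where
    status =
      case (orderRowShippedAt row, orderRowTrackingNo row, orderRowCancelledAt row) of
        (Just at, Just trk, Nothing) -> Just (Shipped at trk)
        (Nothing, Nothing, Just at)  -> Just (Cancelled at)
        (Nothing, Nothing, Nothing)  -> Just Pending
        _                            -> Nothing  -- inconsistent row

toDBType :: Order -> OrderRow
toDBType order =
  case orderStatus order of
    Pending        -> OrderRow (orderId order) Nothing Nothing Nothing
    Shipped at trk -> OrderRow (orderId order) (Just at) (Just trk) Nothing
    Cancelled at   -> OrderRow (orderId order) Nothing Nothing (Just at)

main :: IO ()
main = do
  now <- getCurrentTime
  print (fromDBType (OrderRow 1 (Just now) (Just "TRK-42") Nothing))
```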

Torsten Schmits

I would say that using very large data types, especially when you already know that significant parts are optional for a subprogram, is an antipattern in itself

Torsten Schmits

optimally, use one datatype per function :upside_down:

Georgi Lyubenov // googleson78

yeah I think it's not that you can't change them, but rather that the simple act of using the different types with a restriction on them limits the "way you think"

Georgi Lyubenov // googleson78

kind of like how people in other languages often don't use types to encode more correctness, even though they theoretically could, just because they're more cumbersome

Rizary

I hate Slack! I had linked, in one of my threads, something that has the XtoY function using persistent.

Anyway, @codygman, if any of your effort is in the open, I would love to look into the code to learn more!

Magnus Therning

Thanks @codygman for the terms "first-principles thinking" and "real-world thinking". I have a feeling these are at the heart of some discussions I've had with the Product Owner at work regarding hiring. I've never quite been able to put into words what it is that rubs me the wrong way with her arguments, but these terms might just have unlocked that.