It's time for this terrible "Objects in the large, functions in the small" idea to be put to bed.
FP can do side-effects, asynchrony etc. just fine; it's about controlling them so you can still reason about your program and compose its elements easily.
Kay-style message-sending object graphs are neither modular nor composable; state & effects leak transitively by observation, and they cannot be recombined in self-similar patterns. Local reasoning is impossible.
We can talk about specific situations where say, Actors models might have pros and cons, but a blanket recommendation that "it's probably time for an object layer" is rubbish.
Why not design for composability and modularity top to bottom?
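To make "controlling them" concrete, here's a minimal Haskell sketch (my own illustration, not anyone's quoted code): the effectful step is an ordinary value, so it composes with the same combinators as the pure code around it.

    import Data.Char (toUpper)

    -- The pure core is trivially testable; the IO action is a
    -- first-class value that nothing runs until main does.
    shout :: String -> String
    shout = map toUpper

    greet :: IO ()
    greet = fmap shout getLine >>= putStrLn  -- pure + effectful, composed

    main :: IO ()
    main = greet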
I went into more verbal detail in the actual presentation; the argument is something like:
1) Programming is maths
2) Programming is about abstraction and finding patterns
3) Category Theory (CT) describes abstract patterns in maths, and is therefore directly applicable to everyday programming.
4) (Exposition of Categories, Functors, Monoids)
5) Comparison of the mathematical concepts to equivalent programming concepts
6) Composability is a huge advantage for software in the large and the small; categories and monoids capture the essence of composable patterns (a sketch follows this list).
7) Abstraction is hugely important for software; it allows us to know only what we need, and no more. Category Theory gives us deeper, and dare I say, simpler abstractions. We can find underlying patterns and commonalities that we might have missed otherwise.
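As promised in point 6, here's a minimal sketch of the composability monoids buy you (the Check type is my own example, not from the talk): validation rules combine with <> and mempty, with no bespoke glue code.

    -- A check maps a value to a list of error messages (empty = passed).
    newtype Check a = Check { runCheck :: a -> [String] }

    -- Functions into a monoid ([String]) form a monoid themselves,
    -- so composition of checks comes for free.
    instance Semigroup (Check a) where
      Check f <> Check g = Check (\x -> f x ++ g x)
    instance Monoid (Check a) where
      mempty = Check (const [])

    nonEmpty, shortEnough :: Check String
    nonEmpty    = Check (\s -> ["must not be empty" | null s])
    shortEnough = Check (\s -> ["too long" | length s > 20])

    username :: Check String
    username = nonEmpty <> shortEnough  -- two rules composed into one

    main :: IO ()
    main = print (runCheck username "")  -- ["must not be empty"]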
Firstly, thanks for all your great stuff on Math/Programming, it's been a great source of inspiration!
I would have liked to include Universal Properties, but I decided it was just too much to grasp for one session.
The audience were professional programmers, so there was a strong underlying message of "this is directly applicable to what you do". UMPs have got something important to say about abstraction -- what it means to have only what you need and no more. It just would have popped heads by the time I'd built up to it.
There's certainly great value in reading application source code, but it has nothing to do with physical architecture. I do wish people would let go of this tired and completely wrong metaphor.
Personally, I think the metaphor is quite apt. Obviously it is a metaphor so there isn't a complete mapping, but where do you feel the impedance mismatch is between the traditional architecture metaphor and software architecture? I'm genuinely curious to know, because I have found the metaphor to be strong enough that I read traditional architecture books to help me understand architecture and design in general in software development.
Other than the top 3, these seem to be differences between software and buildings rather than software designers and architects. In addition, I don't think the first three are true. Lastly, these are distinctions without any actual difference in the context of the metaphor, where "architect" could have been replaced by 'novelist' or 'cabinetmaker' and no information would have been lost.
edit: to be more specific about the top three; the first rests on the word "minute" which can be as large or small as you want, depending on what you're trying to prove. The second may be true now, but that's largely because we lack a specific language of high-level software abstraction, so the only way to learn it is to build a lot of things (the general point of the original passage, btw.) The third is just wrong - plenty of people are useful for building parts of software who would have no ability to design a large application. I suspect that those people are a majority of the industry.
The vast differences between software and buildings correspond to the vast differences in designing them.
In context here, "minute detail" is obviously a relative term comparing the requirements of software and architecture design.
You just made up the "specific language" thing. The reason we don't get unicorns to write software for us is they don't exist either.
There are many incompetent software devs out there, but I don't see how anyone can possibly build any amount of software _well_ without having an appreciation of how to design it. This is why I used the word "capable".
The thing is, even if some of these things were similar to architecture, it would be by accident. They are, on the surface, totally different fields. On a deeper level, they're still totally different. The onus is on you to show the linkage, if you believe it to be applicable.
Architecture is about designing buildings that not only serve their function, but are beautiful to look at. I think that's exactly why some people like "software architect." But, to me, "software architect" evokes grandiose, ornate software design, which serves no purpose, because no one sees it. Users see the software's UI, and the UI ought to be beautiful, but trying to make the internal design beautiful is not only unnecessary, it's counterproductive, because it gets in the way.
It's a shockingly bad metaphor for software development.
I used to work for a top-5 consulting engineer. Architects sometimes just do the very high-level design; the consulting engineers actually turn this into a practicable design, which is then built by the contractors and the navvies.
And I am sure that my dept boss Dr Shair (one of the pioneers of cable-stayed bridge design) would consider himself an engineer and not an architect.
"Weak" and "strong" are not well-defined terms, and you'll find different defs around the place.
I deliberately used "weak" in the title as a loose umbrella term for the sundry malpractices detailed below. The article is hopefully otherwise quite specific.
In particular, by "types", I always mean "static types" or "propositions". What you mean by "dynamic types", I call "tags"; they are not the topic of the article. These usages are standard, if not universally adhered to; I hope I made this clear in the article.
Hi Manicdee! The point is that with any unityped representation you can come up with, the name tells you nothing certain; any conclusions you think you can draw from it are pixie-dust and moonshine. Even with nonsense names, the argument and return types give you solid proof not only about what the method does, but importantly what it _doesn't_ do.
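To illustrate the "what it _doesn't_ do" point (my own example, not from the article): with a fully parametric signature, the type alone is the proof.

    -- Whatever this function is named, its type proves it can only
    -- rearrange, drop, or duplicate elements: it cannot inspect them,
    -- fabricate new ones, or perform any I/O.
    mystery :: [a] -> [a]
    mystery = reverse  -- one of the few behaviours the type permits

    main :: IO ()
    main = print (mystery [1 :: Int, 2, 3])  -- [3,2,1]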
There's no criticism implied of my colleagues at all -- I'm as culpable as anyone. The point is though, we can make big improvements with types, without paying a big cost.
My apologies, I didn't intend to suggest that laziness and sloppiness were attributes of the programmers, but evils imposed by time pressure.
Do I fight management to get the two weeks I'll need to write feature X correctly, or just take the obvious shortcuts to get it done in 1 week?
Of course, my laziness shows through: I haven't been in management's ear long enough beforehand, so they decided to give me one week without consulting me ;)
I display this "lack of bottom-up management" failure mode consistently; I'm more interested in writing code :\
As for my OOD teachers, I've had mountains of bad Perl and PHP code to wade through, and the benefit of Stack Overflow and the Django Project to guide my thinking on the matter.
Type systems do provide some compiler-level assistance in the march towards coherent, well-designed software, but they won't solve problems like
def position_sprite(top_left: point, sprite: sprite)
when you provide the top_left of the wrong element (e.g. confusing the top left of the drawing space with the top left of the window or display area).
I believe phantom types should help with your example. At its most naive, you need a type for the points in window space, and a second type for the points in drawing space. If you give the sprite a window_point, the compiler will complain that only drawing_points are accepted. Now such errors are confined to a conversion function.
With phantom types, the Point "type" would be a function from types to types. It helps when some operations work on all kinds of points: they can be polymorphic with respect to the additional type.
data Drawing  -- phantom tags distinguishing the coordinate spaces
data Window
data Point a = Point Int Int
draw_sprite :: Point Drawing -> Sprite -> IO ()
draw_button :: Point Window -> Sprite -> IO ()
translate_point :: Point a -> Point b -> Point a
translate_point (Point x y) (Point xt yt) =
  Point (x + xt) (y + yt)
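To see the payoff end to end, here's a self-contained sketch (the Sprite stub, the draw_sprite body, and the to_drawing conversion are my own illustrative assumptions):

    data Drawing
    data Window
    data Sprite = Sprite  -- stub sprite for illustration
    data Point a = Point Int Int

    draw_sprite :: Point Drawing -> Sprite -> IO ()
    draw_sprite (Point x y) _ = putStrLn ("sprite at " ++ show (x, y))

    -- Hypothetical conversion: the one place where mixing spaces is legal.
    to_drawing :: Point Window -> Point Drawing
    to_drawing (Point x y) = Point x y  -- real code would apply the offset

    main :: IO ()
    main = do
      let wp = Point 10 20 :: Point Window
      -- draw_sprite wp Sprite            -- type error: wanted Point Drawing
      draw_sprite (to_drawing wp) Sprite  -- OK: the conversion is explicit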
A superset of ... what the? Why do you keep trotting out this garbage thread after thread?
Every language takes influences from others, if that's what you meant. Scala is influenced by Java, C# and Haskell, in that order. Kotlin is influenced by Scala and Java, etc.
The "superset" and "Lisp, Haskell and Javascript in the same compiler" comments are unhinged from the reality of any language I recognise.