Probably not exactly on topic, but I really want to say that this guy has been the #1 source of pain in my developer career so far. There have been so many projects where I had to implement a solution a certain way because the "architect" said "that's how Fowler recommends it, so that's how we're doing it", even when the solution was so bad that everyone saw it coming back to bite us from miles away. I read his books and found nothing impressive. To me the guy is just a glorified salesman whom I'd hire to sell BS to my agency's clients.
Just like Uncle Bob. I am dismayed by the fame of these two.
If you want to actually learn what they call "clean coding" (aka proper design; these guys are great at creating buzzwords, and great public speakers), the way to go is "Systematic Program Design" (on YouTube [1], the same material as edX's How to Code), which is based on the HtDP book ("How to Design Programs"), followed by MIT's OCW 6.005 [2] ("Software Construction").
Both are free. But who knows this? There is no one to hype them, no "movement" behind what are tried and trusted techniques, no flashy CQRS SOLID acronyms.
These two courses, instead, teach you timeless concepts and techniques that will survive all fads.
You can't blame Fowler because your software architect decided to implement CQRS; here are some of the warnings in the article he wrote about CQRS:
* "CQRS is useful in some places, but not in others"
* "Beware that for most systems CQRS adds risky complexity"
* "CQRS is a significant mental leap for all concerned, so shouldn't be tackled unless the benefit is worth the jump."
* "So far the majority of cases I've run into have not been so good, with CQRS seen as a significant force for getting a software system into serious difficulties."
* "CQRS should only be used on specific portions of a system and not the system as a whole"
* "Suitability for CQRS is very much the minority case"
* "Using CQRS on a domain that doesn't match it will add complexity, thus reducing productivity and increasing risk."
* "You should be very cautious about using CQRS"
* "Adding CQRS to such a system can add significant complexity"
* "I've certainly seen cases where it's made a significant drag on productivity, adding an unwarranted amount of risk to the project, even in the hands of a capable team"
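For readers who haven't met the pattern these warnings are about: CQRS splits the write side (commands that mutate state) from the read side (queries served from a separately maintained model). A minimal sketch of where the extra moving parts come from; all names here are invented for illustration:

```python
# Minimal CQRS sketch: commands go through a write model that emits
# events; a separate read model is updated from those events and serves
# all queries. All names are hypothetical.

class AccountWriteModel:
    def __init__(self):
        self.balances = {}   # authoritative state, touched only by commands
        self.events = []     # events emitted for the read side

    def deposit(self, account_id, amount):  # a command
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balances[account_id] = self.balances.get(account_id, 0) + amount
        self.events.append(("deposited", account_id, amount))

class AccountReadModel:
    def __init__(self):
        self.summaries = {}  # denormalized view optimized for queries

    def apply(self, event):  # projection: keeps the view in sync
        kind, account_id, amount = event
        if kind == "deposited":
            s = self.summaries.setdefault(account_id, {"balance": 0, "deposits": 0})
            s["balance"] += amount
            s["deposits"] += 1

    def summary(self, account_id):  # a query; never touches the write model
        return self.summaries.get(account_id)

# Wiring: every emitted event must be propagated to the read side.
write, read = AccountWriteModel(), AccountReadModel()
write.deposit("acc-1", 100)
write.deposit("acc-1", 50)
for e in write.events:
    read.apply(e)
```

Even in this toy version there are now two models plus a projection to keep in sync; that synchronization is exactly the "risky complexity" the quotes above warn about.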
My main point was that we should focus first on learning and applying well ageless fundamentals and mature solutions, rather than on shoehorning the latest fad. Only in extreme cases should that new idea from a FAANG be adopted.
Martin Fowler is dangerous in this regard: there isn't a bandwagon he doesn't jump on, making it sound like it is the new normal, and although, as you say, he does mention contraindications, his presentations are still very unbalanced, hyping what is still immature and extremely risky. He is an excellent popularizer, and I actually enjoy listening to him, but too many buy it uncritically.
I mentioned CQRS; the wider context is microservices. Thank you for presenting so much evidence, you are right, but the thing is, the wider context in which it was said, all the hype, is still missing.
Here is one example from a few years ago on YouTube [1], this time at the height of the NoSQL craze. He proudly mentioned how his mates at The Guardian adopted it, because "a news article is a natural aggregate", or something like that.
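The "natural aggregate" argument, roughly stated: everything needed to render an article travels together as one document, so a single key lookup replaces several joins. A hypothetical illustration (all field names invented):

```python
# A "news article as a natural aggregate": the whole article, its tags
# and its contributors live together in one document, so one key lookup
# replaces the joins a normalized relational schema would need.
article = {
    "id": "world/2012/example-story",   # hypothetical identifier
    "headline": "Example headline",
    "body": "Article text...",
    "tags": ["world", "politics"],
    "contributors": [{"name": "A. Reporter", "role": "author"}],
}
# The trade-off: queries that cut across aggregates ("all articles by
# A. Reporter") now need secondary indexes or scans that a relational
# schema gives you almost for free.
```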
He does indeed add later, in passing, that "some NoSQL databases are immature; we don't have the tools, the experience, the knowledge to work with them well; we've got decades of experience with SQL databases".
It turned out that they were burnt at The Guardian, and wrote about their painful migration to a RDBMS and to safety [2].
Now, one can argue, rightly, that this is just anecdotal evidence against NoSQL. The problem is that he used it as evidence for NoSQL. Anecdotal too. The difference is that the former evidence benefits from hindsight, whereas the latter was premature, a space he regularly finds himself in.
> It turned out that they [The Guardian] were burnt, and wrote about their painful migration to a RDBMS and to safety
It's not really fair to judge technical decisions retroactively.
PostgreSQL in 2012 wasn't the same as in 2018. PostgreSQL was uncommon back then: there was a limited talent pool, it was painful to operate/failover/scale/shard, and it wasn't yet available on RDS. JSONB support only arrived in Postgres in December 2014.
The alternatives (e.g. MySQL) were also problematic, e.g. locking tables every time a new column was added was likely a deal-breaker.
They only migrated to PostgreSQL because they didn't want the pain and downtime of self-hosting MongoDB, but there didn't seem to be any major architectural issues. In this case it seemed like specific issues with MongoDB in production rather than NoSQL in general.
He briefly touches on the downsides during the talk, but the purpose of a talk is to excite and pique somebody's interest.
In the book he uses more nuanced language and warns that it's not something to be used on every occasion.
* "It’s essential to test your expectations about programmer productivity and/or performance before committing to using a NoSQL technology."
* "Most applications, particularly nonstrategic ones, should stick with relational technology—at least until the NoSQL ecosystem becomes more mature."
* "But we also realize that there are many cases, indeed the majority of cases, where you’re better off sticking with the default option of a relational database."
* "There’s no shame in doing the assessments for programmability and performance, finding no clear advantage, and staying with the relational option."
> I mentioned CQRS; the wider context is microservices.
Regarding microservices, he states that you shouldn't start with complex distributed architecture even if you're confident you'll need it in the future.
> Martin Fowler is dangerous in this regard, there isn't a bandwagon he doesn't jump on making it sound like it is the new normal
While he talked about hyped technologies, I don't think you can blame him for bad decisions other people have made after watching his talks.
> My main point was that we should focus first on learning and applying well ageless fundamentals and mature solutions, rather than on shoehorning the latest fad. Only in extreme cases should that new idea from a FAANG be adopted.
Indeed, nowadays I usually choose boring technology instead of the latest fads.
I really appreciate your counter, I think that we have both made our points.
My intention wasn't really to bash on Martin Fowler, the problem is much wider than that, and he is certainly not the worst example, just happens to be the OP's subject.
Let me end by quoting MIT software engineering professor Daniel Jackson; I think he pinpoints the essence of the problem beautifully (in his book "Design by concept", where, BTW, he credits the influence of Martin Fowler's book "Analysis Patterns"):
In my work as a consultant, I've been involved in discussions about future products and strategic directions, usually under the rubric of "digital transformation". Companies may be keenly aware of what they're trying to achieve (better customer experience, increased customer engagement, differentiation from competitors, etc.), but much less certain of how to do it, and especially how to explore new possibilities and get results and feedback quickly. Too often, the options are cast in terms of technology adoption (selecting from the latest shiny new things, whether mobile, cloud, blockchain, machine learning, Internet of Things, etc.). These technologies may have great potential, but they are only platforms, and choosing one with the hope that it will transform your business is no more plausible than expecting such an impact from the adoption of a new programming language or web application framework. A better approach is to focus on functionality, which is the source of real value.
Postgres definitely wasn't uncommon in 2012. It wasn't even a weird choice in 2003, when Arbor shipped products on it. Maybe there's subtext I'm missing, like, Postgres was uncommon for sites like The Guardian?
Fowler's article that made the most impact on me is the one about Monolith First, and how you shouldn't start with a complex distributed architecture, even if you're confident you'll need it in the future.
Fowler shares ideas of architecture, but in a nuanced way, and clearly states when it might be appropriate, and what are the costs and complexities involved.
The problem in your case seems to be more of "Architecture Astronauts".
What's remarkable to me is that Fowler is telling an interesting story about a technical decision, includes an entertaining diagram, but in the end doesn't provide any proof either way.
There's not a single piece of hard data in that article, not even how many of his colleagues are making the specific recommendation.
What bugs me most about Fowler is his reception, not his writing. If we take his writing in the spirit you've outlined (an experienced developer sharing his anecdotes), there's no problem. But that just isn't how he's seen in the industry. There is a whole class of developers who take his writings as gospel, without the critical sense to wonder whether Fowler's experience/insight actually matches the project at hand.
What data? Each project is building a different product with a different team. Even if he had data like "teams that use monoliths complete their projects 20% faster", it'd still be useless because of the confounds.
But let's imagine that through some miracle he did have data that conclusively proved 75% of the time it was better to build a monolith. This might suggest you should more seriously consider building a monolith first, but it still doesn't make the decision for your project. You have to decide based on your project, team, goals, and budget whether microservices are right for you.
I think you're 100% right that it's the reception. Our industry is filled with people who take the practice of software as a matter of dogma. But we do that with everything; we'll turn any technique into a cult: TDD, microservices, unit testing, immutability, functional programming, BDD, etc.
If he provided the data you think he lacks, you could still complain about architects who didn’t apply “critical sense” to see whether their project’s data matched his projects’.
I'd expect to see numbers: roughly how many projects he was using to draw his conclusions, how many people he talked to and general details about their level of experience and domain of activity. Additionally I'd be curious to read what the concrete consequences of the monolith vs. microservices decisions were for those projects in delays, budget overruns, personnel churn, etc. Finally I believe that the types of projects are highly relevant - industry, domain, budget and so on.
I would need the above to make up my mind, precisely because I can't find any indication that Fowler is a technical expert. The fact that they're a software book author to whom I can't attribute any well-known software is a big red flag for me. Likewise that they've been working as a consultant for a very long time.
You suffer from bad architects. If there wasn't Fowler, they would just misinterpret somebody else.
> that's how Fowler recommends it, so that's how we're doing it
Fowler would be the first to point out that there are no hard & fast rules / recommendations and each case should be evaluated individually since there's just too many variables.
I echo this sentiment. For some reason his writing is very popular among so-called "software architects", which is especially painful when they are in a position of power in the organization and start forcing it down everyone's throat.
As is always the case, I feel the exact opposite. The #1 source of pain in my career is people not understanding that certain concepts / principles are really the key to surviving in a complex system.
But, we probably don't build the same kinds of software. Shooting him down because he isn't talking about the kind of software you build is pretty ignorant.
I think the problem is that people who aren't building these kinds of systems learn these "best practices" without the context, and then you get frustration like the OP's. And you can often blame the people selling these approaches (understandably, they want to promote their work; no doubt it works in their scenario).
But I've seen this in every direction possible at this point:
- people talking about horizontal and vertical scaling, citing Google and Facebook research - on an unreleased MVP with 0 users
- people using the quick and dirty approach with stuff like RoR and then 5 years down the line getting stuck in a mess that's hard to reuse and hard to refactor, and being unable to react to quick market changes fast enough
- people implementing a distributed lock system to synchronise some state - meanwhile the entire system is built on top of one DB instance and if the DB is down the system is down - so use the DB locking
.... so many "best practices" or "good approaches" without context
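On the distributed-lock example above: if the whole system already sits on one Postgres instance, its built-in advisory locks give you mutual exclusion with no new infrastructure. A sketch under that assumption (the connection object and lock name are hypothetical; `pg_advisory_lock`/`pg_advisory_unlock` are real Postgres functions):

```python
# Sketch: lean on Postgres advisory locks instead of building a bespoke
# distributed lock service, given the system already depends on a single
# Postgres instance anyway. `conn` would be a DB-API connection (e.g.
# psycopg2); only the key-derivation helper runs without a database.
import hashlib

def lock_key(name: str) -> int:
    """Map a lock name to the signed 64-bit key pg_advisory_lock expects."""
    h = int.from_bytes(hashlib.sha256(name.encode()).digest()[:8], "big")
    return h - 2**63  # fold the unsigned hash into the signed 64-bit range

def with_lock(conn, name: str, fn):
    """Run fn() while holding the session-level advisory lock `name`."""
    key = lock_key(name)
    with conn.cursor() as cur:
        cur.execute("SELECT pg_advisory_lock(%s)", (key,))
        try:
            return fn()
        finally:
            cur.execute("SELECT pg_advisory_unlock(%s)", (key,))
```

If the database is down, the lock is down too, but per the point above, so is everything else; no new failure mode is introduced.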
Yes, I must say, him and Eric Evans. It's not their fault, but still ;)
I remember it well, starting 2016 suddenly everyone wants to do DDD, CQRS etc. Let's shoehorn new paradigms in our framework of choice. Boilerplate, lots of boilerplate...
And then Martin handwaves the project's failure away, claiming he wasn't that involved, and that his methodology is blameless: https://www.martinfowler.com/bliki/C3.html
100% agree. The fact he's still relevant is surprising. I have never come across a single article from this guy which was useful... and even when he expounds on an existing tech, I have always found articles from other bloggers which are way more precise and to the point. He is just a salesman under a tech garb.
I sympathize (I like saying I don't do BDD, Buzzword-Driven Development), but I'm not terribly up on Fowler's work -- can you provide some concrete examples of things he recommends that were over-engineered or just plain bad for your use cases?
(Disclosure: I'm at Thoughtworks, but my opinions are my own. In fact, we're so broadly dispersed across the world that we can't claim to represent a "Thoughtworks" opinion).
He peddles whatever is the fad du jour. Back in 2001/2003, when RUP was at the peak of its hype cycle, Fowler made a number of posts and talks praising its virtues.
Everybody reads Fowler and does horrible things thereafter for a while. Hopefully, you come out of that phase and learn what is worth bothering with and what is not, without too many years and without too many scars and too many failed projects.
Some never do though...
The phase of my career where I drank the SOLID kool-aid and studied and tried to apply the hip architectural patterns is one of the darkest and most regrettable. I'm still dealing with bad decisions made in code bases from that period.
> The phase of my career where I drank the SOLID kool-aid and studied and tried to apply the hip architectural patterns is one of the darkest and most regrettable. I'm still dealing with bad decisions made in code bases from that period.
Does this come from the architectural patterns themselves, or from applying them everywhere (including where they don't fit)?
Or maybe something else?
(I have my own experiences, but I'm really keen to hear from others.)
I think the best use in studying patterns and approaches is thinking about where they are applicable and where they aren't and reflecting that on what you've done.
You can't really use architectural patterns as shortcuts before you've accumulated enough experience to internalise their requirements and caveats, but just knowing about them gives your brain more to work with when you design systems, allowing you to think more broadly.
One of the largest financial institutions in the world based some of their main trading systems on Event Sourcing with serialized object storage. It's a massive pain. But the pull of dogma/mantra and the sunk-cost fallacy mean any attempt at moving away from it is outright mocked.
Want to see the current state of a trade? Open and deserialize 50 objects, then manually make those classes merge all of their fields.
Not like there are multiple database products out there with temporal support. Let's reinvent the wheel, baby!
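For context, the "deserialize 50 objects" complaint maps onto the core of event sourcing: the current state of an entity is a fold over its event history. A toy sketch with invented trade events:

```python
from functools import reduce

# Event sourcing in one line: current state = fold(apply, events, {}).
# The pain described above is that for a real trade, "apply" means
# merging dozens of serialized snapshots field by field, per class.
# These trade events are entirely hypothetical:
events = [
    {"type": "booked",  "notional": 1_000_000, "currency": "USD"},
    {"type": "amended", "notional": 1_250_000},
    {"type": "novated", "counterparty": "BANK-B"},
]

def apply(state, event):
    # Each event overlays its fields onto the accumulated state.
    merged = dict(state)
    merged.update({k: v for k, v in event.items() if k != "type"})
    return merged

current = reduce(apply, events, {})
```

Three events with one generic merge rule is trivial; fifty events with hand-written merge logic per class is where the pain comes from, and it's exactly what a temporal table would hand you for free.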
Yes, I once said at work that "the sad testament to the sorry state of our industry is that Fowler can still make good money rather than be chased with pitchforks". It did not go over well with the rest of the architects. I still stand by that.
Martin, if you are reading this, here's something you could do:
Revisit REST (and the Richardson Maturity Model[1]) and explain why REST has failed so spectacularly in the JSON API world.
It's worth doing because REST-ful hypermedia systems are using an interesting and distinctive network model, and most younger web engineers don't know much about it and end up using the client-server approaches that have become so common for modern web applications.
You are in a great (perhaps the best) position to respark interest in hypermedia and the REST-ful approach.
I have some thoughts on the matter[2][3][4], and would be happy to discuss.
He can start by actually developing a major (non-toy) application using HATEOAS and then once we all marvel at his success, we might consider doing things this way.
To me, HATEOAS always seemed like it wants me to basically re-invent the web browser. You know, the fact that basically no one does things this way may give you a hint as to how useful it actually is.
HATEOAS doesn't want you to reinvent the web browser.
HATEOAS is describing how the browser and HTML works.
It was the JSON API folks who (inappropriately) appropriated REST terminology that made folks feel like they had to shoehorn that concept into what is really RPC.
Yes, but there were some people that were pushing HATEOAS for APIs. I think the fact that pretty much no one does it is good enough evidence to see that it's just not a good fit. HATEOAS is good for web sites and human operators, but not for APIs being executed by software.
Essentially no one implements REST APIs as defined by Fielding and Fowler, because what software developers and architects actually find valuable is an HTTP CRUD API. The widespread adoption of microservices architectures required the selection and standardization of a format for the transfer and modification of resource representations between services, and CRUD over HTTP serves that purpose pretty well.
Unfortunately, REST is the buzzword under which HTTP CRUD APIs were proposed and promulgated, so now we have a schism between theorists who consider HATEOAS a requirement but can't explain any benefit to the common real-world use case, or give any practical examples, and engineers who now just roll their eyes when the theorists say they're not really building REST APIs.
We know we're not, but if we tell our managers that we've implemented HTTP CRUD APIs for the backend services, they'll ask us why we didn't choose REST...
Someone please help here: I'm reading all the links and still drawing a blank on what is and is not REST! Can someone link to an API that's truly RESTful? And perhaps one that's barely not?
REST has 6 constraints, the 6th constraint has 4 elements, the last of those four is a useless thing called HATEOAS.
If you call your API REST, some zealots will open up your API and look if your beautiful response is polluted with any "links". If it is instead pristine, they will claim your API is not REST.
It's really ridiculous, the links system is only barely useful in some GUI data exploration tools, that are not even that comfortable to use.
If you use a JSON API that feels weirdly antiquated because of the superfluous fields, this is why. If you disregard HATEOAS basically every JSON API on the web is REST.
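Concretely, the "links" the zealots look for are state-dependent next actions advertised inside the response itself. A toy comparison (all URLs and fields hypothetical):

```python
# The difference being argued about: a plain JSON response vs. a
# hypermedia (HATEOAS) one, where the legal next actions on the
# resource are advertised as links so the client needn't hard-code them.
plain = {"id": 42, "status": "open", "total": 9.99}

hypermedia = {
    "id": 42,
    "status": "open",
    "total": 9.99,
    "_links": {
        "self":   {"href": "/orders/42"},
        "cancel": {"href": "/orders/42/cancel", "method": "POST"},
        "pay":    {"href": "/orders/42/payment", "method": "POST"},
    },
}
```

Once the order's status flips to "paid", the server would simply stop including the "cancel" and "pay" links; a true hypermedia client only follows what is offered. Most JSON API clients instead hard-code the URLs, which is why the `_links` block feels superfluous to them.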
I think HATEOAS is basically requiring a client that looks a lot like a web browser. Which is no surprise, since Fielding is basically describing the way the web works. This is not necessarily a good prescription for APIs manipulated by code and not humans.
Unfortunately, a second idea appeared, that embraced the label "REST" but used it to mean something else (roughly, a set of conventions for how to organize web pages and the edits one makes to web pages). And the second idea was written and shared with a much larger audience than the first (in addition to being an "easier" idea), so it grabbed a super majority of the mind share.
There are very few JSON-based hypermedia APIs being built and the few that were are almost never used as a uniform interface. They are treated as a normal, RTFM API.
In REST-ful terms (real REST, not the "REST is JSON with hierarchical end points, kinda" sense), REST has failed spectacularly in the JSON world.
I wonder about this too. I regularly read that REST is bad or has failed, but it appears to have been widely adopted in a soft way, beating any competing ideas.
I think this is the main thing that prevents HATEOAS from being implemented. I'm very interested in it, but many "web apps" now need more polished interaction and network resilience. I'm not saying it can't be done with an HTML-render-centric codebase, but it's still uncommon.
CRUD covers all possible interactions with data, and, in that sense, this description is correct.
“CRUD application” is sometimes used to refer to an application which presents only the simplest data operations to users, and saying REST is for CRUD applications in that sense would be the opposite of the truth. In fact, it's the opposite: it's a mechanism which facilitates creating applications of arbitrary complexity from components exposing simple, self-describing interfaces.
This is a category error. REST is a novel network architecture, the most distinct aspect of which is the uniform interface[1], and has nothing to do, conceptually, with CRUD apps.
It may be associated with web 1.0 CRUD apps, but the actual REST/hypermedia model generalizes and HTML, with a bit of help[2], can produce sophisticated UIs well beyond "CRUD".
> I’ve always been nagged by my conviction that I’m not working as diligently or effectively as I ought to be. Sadly I’m not getting any better at not letting that bug me.
Maybe this has been discussed in depth elsewhere, but my question in response to this is: how does one/how do we end up feeling like this? And how can you work toward not feeling this way?
> how does one/how do we end up feeling like this? And how can you work toward not feeling this way?
It's a broad question because there are many underlying causes.
Some people have unrealistically high expectations for what can be accomplished. As engineers, it's easy for us to visualize the big picture of how things would look when they're finished, but forget all of the details and difficulties that must be overcome along the way.
Some people spread themselves too thin across too many projects, such that none of them see any rapid progress. I know a lot of brilliant people who could accomplish great things if they'd just make the choice to stop all of their side projects, learn to say "no" to new requests, and pick one initiative to be their main focus.
Some people think they want to deliver results and finish things, but when it comes down to it they actually prefer the craft part of the process rather than the finishing part. They'd rather write and re-write the same code 20 times until they've perfected it and generalized it for every possible situation, rather than ship an imperfect version that gets things done.
And some people genuinely aren't working diligently or effectively, usually because they haven't yet learned how. I see this a lot in younger people I've mentored: they aren't getting much work done, but they also can't figure out where all of their time is going every day. In one case, I worked with someone who couldn't figure out why they were so unproductive and exhausted every day until they enabled a time tracker on their phone and discovered that they were losing over 8 hours every day to their phone screen. The worst part was that they weren't really enjoying it, either. They had just gotten into the habit of wasting time on social media and it spiraled out of control. Needless to say, getting a handle on that was life-changing in terms of what they could accomplish everywhere else in their lives.
I have workaholic tendencies. It bothers me a lot less than it used to.
I've resolved a lot of personal issues that are not supposed to be fixable and that makes it easier to feel like "Whatever I accomplish today, it's enough." I set goals for what I need to do next and I try to be realistic about how much I can actually do and I have made my peace with the fact that I'm only one person, I can only do so much.
Either the road rises up to meet me or it doesn't. There are parts of it I have no control over and if the world just doesn't want me to succeed at certain things, I simply won't through no fault of my own.
There has to be fertile ground for a thing. Thinking otherwise is often an oversized ego talking or some kind of baggage from broken mental models people get fed.
For me, the reason is that things which are straightforward to conceptualize often take a surprising amount of work to actually implement (many nitty-gritty details, unforeseen complexities, suboptimal tools). In addition, in hindsight one often realizes that there was a quicker or more effective way.
As a result there tends to be a mismatch between the expected and actual efficiency of the work. Even if the complexities are not one’s fault, overly optimistic expectations are. Furthermore, it’s simply frustrating that implementing seemingly straightforward ideas often takes so long. There is a conviction that there must be a better, more efficient way, and that one could obviously do better.
I suspect that there is a correlation with an affinity for abstract thinking, because abstracting from the details causes underestimating the required effort, perpetuating the mismatch.
Embracing the mismatch and giving up the expectations also means giving up one’s aspirations ("I should be able to do this") to a certain extent, which is hard if they are an integral part of one’s self-image.
I get there when I feel unsatisfied with the problems I'm solving. Maybe I ended up fighting fires for a day or two, say, and fighting fires doesn't bring me pleasure while my main body of work does. So, even though the fire fighting has immense value to those who benefit from the fires being put out, and I'm really good at doing what it takes to put them out quickly, I didn't start the fires myself and I didn't get my work done. I feel like a rescuer rather than an effective contributor and then I have to start trying to compensate for it.
This is also touching on burnout. There's just a kind of programming work that is mentally draining and unfulfilling because the underlying problems or mistakes are entirely self-inflicted. Spending time fixing shit that didn't have to be broken.
I have been working on a large side project which is no longer just a side project. I am not getting as many hours of work in as I would like, but at the same time I know that if I tried to do more, I wouldn't be productive. In fact I'd do negative work. Years ago I would push through and work anyway (I am a real goal-oriented person). Knowing what I know now, all I can do is shrug my shoulders. It's like distance running: for all the desire to work harder and do better, speeding up won't get you to the finish line faster.
If you have some sense of your limitations, you no longer can really feel guilty about being lazy for this. It may be worse though, because then you can feel bad about your capability. Hopefully that is something you can just accept.
I had a discussion with a friend in college about this. We were both doing dual bachelors in Music and Physics, graduation was approaching, and it was coming time for each of us to choose whether we'd pursue research, art, or other.
He was perhaps smarter than me and worked harder. I asked him what it was that motivated him, and he said that it was a sense of guilt that had been instilled in him by his parents. I told him that guilt is a destructive motivator, at least in the long run. He replied that it actually worked well for him and he was glad to have it.
I also am sometimes motivated by guilt. Does it bother me because I'm more aware of it, or am I right in thinking that there are better motivations within reach?
You make different choices during the day, and you do it against your own will at first. You won't feel like it, and your mind will tell you "tomorrow", but since you expected that, you do it today.
You have to ignore both societal and organizational pressure. These come in sometimes subtle ways, both internally from your own psychology and in the form of stated and unstated expectations from coworkers, management, and partners.
Time alone not working helps, meditating helps, anything that gets you away from the fracas of life. But the best way is to develop things you care deeply about that have nothing to do with your job. It may be a sick family member, could be a community organization, could be time to play the piano.
Don't live by other people's expectations. Always live by your own, and always reassess your expectations. One day at a time. Do something, or don't, no biggie.
For me personally, this feeling arises when I become overly focused on an end-state. My mitigation is focusing more on the process and trusting that, over time, the end-goal becomes a natural outcome of an effective process.
Maybe it's just that he notices, as any introspective person might, that you just start doing the same damn thing over and over after 20/30/40 years in harness.
My personal issue was undiagnosed adult ADHD. Learning about it and adjusting my life around that fact has been immensely useful so far. Though I’m far from being done with that process (and likely to never be done…).
I was blown away that everyone suddenly offered so much support. We have people filling out notion docs (http://wiki.tensorfork.com/) and an active Google Chat chatroom with 12 interesting people in it. And to top it all off, Nat Friedman (Github CEO) DM'ed me last night and I'll be talking to him in approximately three hours.
You'd think I'd feel thrilled and like I'm extremely effective. I am thrilled, but all I feel is that I haven't done as much as I could. Talking and walking are two different things, and now that we've talked, it's time to walk the walk. But the truth is, I felt guilty sleeping most of today while everyone else has been talking excitedly. I felt that if I pushed myself a little harder, I could've been more effective, or more diligent, or lived up to my responsibilities more.
But that path leads to burnout. Good work is a marathon, and things take time. (Unfortunately it sometimes seems ~impossible to internalize that, and the temptation to sprint is always there.)
If you're looking for a mental hack to make the feeling go away, the only thing that helps me is to remind myself that even small deltas have value. My personal goal is to make imagenet2012 accessible to every programmer. It's a small, obscure itch that I am scratching for myself. But it's mine, and I can fix it, which is good enough.
So just do what you can, where you can, and be authentic. No one can know how things will play out. Doing your best is all you can ask of yourself.
I find there's a perpetual equilibrium between feeling confident in your output / yourself, and recognizing that much of what you do will likely be incomplete, messy, sub-optimal. This seems to be a fundamental part of being human as it extends across all types of creation.
Authors have stream-of-consciousness to overcome writers' block. Athletes get the 'yips'. Developers have probably heard "if you're not ashamed of your MVP, you took too long to ship." (does anyone know who originally said this?)
Doubt is a massively powerful and underrated tool that tends to signify the presence of many new variables. It's like a beacon saying "hey this is unprecedented" and the pursuit of any challenge will naturally yield unprecedented outcomes. In your context, you went from "hey let's do a thing" to "the Github CEO wants to call me and talk about this thing and there are multiple people that want to do this thing" and that is pretty damn unprecedented.
I've experienced >3 truly remarkable situations like you, and wandered between inflated ego, utter confusion, guilt, and pure existentialism. Honestly, I'm glad I felt such a spectrum of emotions because in retrospect those lenses provide me a lot of clarity when I invariably encounter another entirely baffling scenario.
I think your self-assessment is as balanced as it can be, and I'm certain that as you continue, you'll adjust the weights (heh) and become more comfortable... or maybe not. Maybe a hallmark of being a creator, founder, leader, is to double down on the discomfort, and rest later. Kind of like a marathon.
I like The Agile Manifesto. I feel the jury is still out on how effective it is in practice; not because it's bad, but because I think it has problems in application in the regular-schlub world.
Note that I like the Manifesto, but have been underwhelmed by what I've seen of its expression. I feel that Mr. Fowler and his compatriots came out with a great "vision statement" that may not be applied particularly well.
Maybe working on "paving the bare spots"[0] of the Manifesto would be useful. I really want to see it work. I know that when I designed a fairly major infrastructure system, it took ten years before it was really good enough for worldwide consumption (that might be a bit of an overstatement, but it took at least eight). The software, and the accompanying "vision," were ready by Version 2.0 (about 18 months in), but it took all that additional time to figure out how to get skeptical and stubborn people to accept it, win over friends, and make some embarrassing mistakes.
One of the big issues with coming up with "A New Way™" is that it usually implies that "the way we've always done it" is wrong, and that's a real hard sell. I know it was the biggest issue that I faced, and dealing with it was tricky. In some cases, I needed to make adjustments and walk back my rhetoric to avoid getting people's hackles up. That kind of thing is very important, and often neglected.
I've always wished the Manifesto folks had followed through with the shepherding process. If their experience is anything like mine, it will be a long, frustrating, but ultimately incredibly rewarding endeavor.
This resonates a lot. I like this idea of a "shepherding process" and digging deeper into what it takes to make big projects successful over time.
That said, I think distilling down this hard-won experience tends to lead to cargo-culting. Just look at "Agile" itself and how it's been embraced and then twisted by consultants into the exact opposite of what The Agile Manifesto intended. This stuff sells because people want clarity about what to do on a daily basis; they don't want to confront the reality that there is no guaranteed path to making large-scale software successful. Certainly some process and structure are helpful, as long as they are specific to the problem at hand, focused on the needs of the people actually doing the solving, and give them whatever air cover and cross-pollination they need to get the reps in on the actual software.
Typically these types of projects happen in large orgs where there are incentives for middle management to avoid culpability for failure rather than to do what it takes to succeed (especially if it's a hard problem). In this kind of environment, process is often used as a cudgel for one group to protect itself by claiming "I did my job".
So to your question: how effective is The Agile Manifesto in practice? I think of it sort of in the necessary-but-not-sufficient bucket. Basically if you have buy-in to give a team this flexibility and autonomy you have a crack at solving a hard problem. Now it depends on the team and the problem: is the problem ultimately solvable and does this team have the chops?
> how effective is The Agile Manifesto in practice?
That's the rub. I feel as if it works extremely well in my own work, but I currently work mostly alone (I am part of a loosely coupled team, but I completely "own" my part of the project).
Getting it to work for a team is not so easy.
Organizations run on Process (note the capital "P"). I feel as if true Agile is something that we need to personally inculcate into our own lives, as Practice, Habit, and Understanding. We need to be constantly working within our methodology, which, in my case, is equal parts Habit and Experience.

I don't think it's something that can be taught. It needs to be learned. The reason I separate these two words, which seem to mean the same thing, is that "teaching" is an outward-facing practice, while "learning" is an inward-facing one. I feel that we must -each and every one of us- make the effort to learn how to code as a craft. It starts within each of us, and that's not something that can be foisted upon anyone, or dictated with process.

We also need to completely understand our craft, and the work we do. It should not be simple rote. Habit and rote are two different things.
There's no "buzzwordy" or consultant-ready answer to that.
Probably not necessarily on topic - but I really want to give a shout-out to Mr. Fowler. It's really hard to find anyone in contemporary, practical software engineering who has been as impactful as he has. Many of the modern concepts that improve software development have a seminal article authored, or made visible, by him.
I have no idea which side of the bed HN woke up on today, but the comments here disparaging Martin Fowler's body of work really trouble me.
I really enjoy Martin's (and more recently his collaborators') blog posts on various trends. More often than not, they distill key concerns and references into a quickly readable summary. His books on software patterns collect a breadth of problem/solution pairs that have now been baked into so many frameworks that you probably don't even realise you are benefiting from them.
If you are totally new to Martin Fowler's books and writing, I would urge you to not brush them aside because of some snide comments on HN. There's a wealth of material that can surely enrich your thinking.
Lots of comments here opposed to Martin Fowler's advice. I'm curious if anyone who is anti-Fowler has software design books they _do_ recommend? (Asking because I'd love to read them)
Since the problem usually isn't with the software design advice itself, but with people reading software design books and then trying to force whatever they read into their projects, I guess what would make a book "good" here is that its author isn't popular enough to make something trendy. IMHO Fowler's writing is just fine as "these are things people have done that you might consider" (and what I've read mostly seemed to be written that way, not overly pushy), but it's so popular that when he writes about a new thing, too many people jump on it as the next big thing, whether it matches their problems or not, and that gets painful to work with. Although for Fowler to write about something, I think it already has some trendiness going on; he's not usually at the forefront of new ideas.
People like Martin Fowler or Robert Martin seem to be more famous for the books that they write about software instead of being famous for the software that they write. I don't think it should be possible for somebody to become an authority on software design without them showing their designs to the world. If they work exclusively on proprietary software, then they should provide some other kind of proof that the recommendations that they make are beneficial and demonstrate how helpful they are. E.g. anonymous metrics from their client pool, detailed analyses of various OSS projects, etc.
When e.g. John Carmack talks about a technical topic there's a very long and very public record of the kind of experience that he has. It's reasonable to trust that they are correct, although one should still verify before betting their project/company/career on that piece of advice.
The people that don't like him don't enjoy any books. They just enjoy "getting stuff done." That's really what I find when talking to people about why they don't like concepts that he introduces. They have no alternative, they just don't like people who come across as dogmatic.
Fowler states: I’ve long known that when you’re doing very creative work, such as writing or programming, the useful hours you can do in a day is rather less than the accepted industrial eight.
I don't want to disagree with him as this is quite subjective, but when I am totally engrossed in solving a problem, I get into a flow and the time flies by. If the problem is both interesting and challenging, I can't wait to get back to working on it and I hate having to stop!
I do agree if the work is joyless and the product is tedious, one hour spent on it can certainly seem like eight.
That you can be "engrossed in solving a problem" is a luxury. One that I enjoyed when I was responsible only to myself. Once my wife and daughter entered my life, it was gone. And even before, from time to time, I had to make space for other obligations that came before my interest/work.
The problems to solve still interest me, but changing diapers, or getting something from the supermarket for my daughter before it closes, has become important enough to "disrupt"[1] my life anytime.
[1] in quotes, because I would not want to live without these "disruptions" anymore. The accomplishment of calming my baby daughter in a few seconds dwarfs any of the hacks people still know me for.
That is a good observation. I am the primary care-giver for a sick spouse, so I have also seen the luxury of time spent on programming diminish.
I am glad you pointed this out. Now that I am older, and perhaps wiser, I can look back at some of the unhealthy attitudes that I had about co-workers who had to balance the much more important demands of life against completing a certain feature or getting a demo ready by a certain date.
When I look at my younger self, I can see how this attitude can lead to ageism and the lack of respect and support some employees see in organizations that only value how much code you can crank out.
> [W]hen I am totally engrossed in solving a problem, I get into a flow and the time flies by. If the problem is both interesting and challenging, I can't wait to get back to working on it and I hate having to stop!
I agree that it is possible to remain in a state of flow for "the standard industrial eight", but on occasions where I've remained in a state of flow for eight (or more) hours, the quality of the work suffers, and the following day is a total loss (as in "me no think good, think meat broken"), so it is better to keep to a more sustainable pace.
In my experience if I'm in flow for 14 hours and time flies it's because I'm doing a high-value refactor at roughly the right time (not premature)
If I'm doing a more open ended problem like modelling something from scratch (I have boxes and arrows on a piece of paper but the edge cases are still a mystery) then it's something I'll exhaust myself on much sooner
I can only speak for myself, but I find the quality of output decreases pretty steeply after around six hours of flow. Determination can keep me going but I’ve learnt that it’s probably more efficient to just rest and return to it after a good night's sleep.
I agree, but the thing is that not every day in the week will be like that. So if you have this "hyperfocus" for two days a week and don't do that much work on the other days, it might average out to pretty much what he's describing.
I tend not to have flow in programming for more than 3 or 4 hours. In writing, my experience is that you can have flow for days, but at some point it starts to peter out and then you grind to a halt. Because you have been running on flow for so long, it is difficult to force yourself to just write: you hope to get the flow back, so you spend a few days doing nothing but a paragraph or two until you realize you have to do the work, and the flow is just not coming back by itself.
We talk about this a lot on HN, and I'm curious: Are there good studies on this stuff /in the programming context/? I'm aware of the work with athletes and musicians (an excellent start, but requires a lot of generalization to corporate knowledge work).
Even better, given that people are going to vary by at least as much as we vary in physical stamina, advice about how to learn what my own limits are, given that it's so hard to introspect?
Even so, you should provide some evidence to back up your claim. If you said "Fowler is the master of overcomplicating things, like in his work on A, or when he argued B", then people could engage with those instances or provide counterexamples. As it stands, your original comment is just flamebait.
Somewhat flippant, but also a serious question - do people like Martin Fowler and uncle bob actually do anything or do they just describe systems that do not (and probably will never) actually exist? Have they built real world software, or do they just describe some ideal state that has never actually occurred in reality - some kind of geek porn?
I’d much rather listen to people who have actually done things than people who have theoretically done things (people who do both are best).
I ask because I thought Rob Pike was one of those people until I found out what he does / has done.
Frankly, Bob Martin (let's stop calling him "uncle", please) is a joke. He has a very expensive lecture series called Clean Coders which is goofy, quirky, annoying, and just a plain waste of time and money. He wears goofy costumes, rambles on about irrelevant things and has too many jump cuts. He also peddles things based on zero actual hard evidence... pure opinion.
Not to judge anyone by a single repo (I certainly have worse things on my github) but that isn't exactly a shining beacon of good software practices...
One small C file (with... creative indentation) and a 2000+ line Lua file apparently written by someone unaware of for loops.
Not bad per se, but definitely not the kind of project that makes me think I should listen to author's advice.
About half of "Patterns of Enterprise Application Architecture" reads like an introduction to (or documentation for) Hibernate (Java's object-relational mapper); you can also see traces of early ASP.NET there.
I don't know what was first, but this certainly describes frameworks that actually exist.
Can you link to any resource? What I read from Fowler is that he has lots of reservations about it.
To quote:
Any architectural style has trade-offs: strengths and weaknesses that we must evaluate according to the context that it's used. This is certainly the case with microservices. While it's a useful architecture - many, indeed most, situations would do better with a monolith.[1]
He even states that you should never start building your application as microservices, but as a monolith instead.
So my primary guideline would be don't even consider microservices unless you have a system that's too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don't try to separate it into separate services.[2]
Fowler also wrote about "cookie-cutter" scaling of monolithic applications. Seems to me that the fault lies with the people who cargo cult approaches rather than reflecting on them and adopting them if and when they make sense.
Maybe it's the right pattern for some problems, and got overused when it was high on the hype curve? I don't have a horse in the race, but have seen many other technologies/patterns go that route
Indeed. It seems that it's useful to have descriptive patterns to understand what people are already doing. But then the hype cycle kicks in, and people treat them as prescriptive patterns. MVC, Agile, Design Patterns, monoliths, microservices, trunk-based, spaces vs tabs, Emacs vs Vim...
I find it particularly stupid compared to most other hyped technologies, as the pattern pervades the entire engineering superstructure and has to be entirely burned down in order to fix the core rot. The foundation of any microservice-based system is inherently shoddy in a way other poorly designed systems aren’t.
Attacking another user like this will get you banned here, regardless of how bad other comments are or you feel they are. Please review the rules and stick to them: https://news.ycombinator.com/newsguidelines.html.
If you just want to vent go talk to a therapist, don't waste precious bytes.
Nobody holds comments they agree with to this high of a bar. You are being way too harsh, and this borders on personal attack.
One of the ways modern society is breaking down is that nobody has a community they can vent to and relate to. Everything is somebody else's problem. If someone has something to say, let them say it. If they didn't say it clearly, maybe ask for more. Don't make it personal, and if it's irrelevant, just downvote and move on.
All they're doing is repeatedly insulting this guy with absolutely no meaningful content across multiple posts. It's garbage, it's nothing but unsubstantiated attacks. I don't see how my post was a personal attack either, I said to vent elsewhere, in an appropriate venue.
The value of my post is that a charlatan with very damaging views has somehow been promoted as a great thinker, when I find his entire engineering philosophy not only incorrect but totally moronic and damaging. I'm calling it out because it's really stupid, and we need more people to feel OK calling bullshit when some idiot carts out the "we need 50 services to power the authentication pipeline!" (which I have seen multiple times during my career, as other enthusiastic engineers cheer them on). It's a dead idea and it is pointless, at least when operating at the scale of 99% of tech companies. I'm not going to keep pretending this is a good idea when it's not.
> we need 50 services to power the authentication pipeline!
If the architect is an adherent of Fowler, you can then ask where to get the 50-400 engineers needed to build said pipeline, citing Fowler on team sizes. ;)
Microservices are the simplest solution to counteract Conway's law.
If you have two organizations writing software, the approach with the least communication overhead and release coordination is for each organization to release its own package, with well-defined APIs between them.
There are certainly other approaches to achieve similar outcomes, but microservices are the lowest-tech mechanism.
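To make the "package with a well-defined API" idea concrete, here is a minimal sketch (names like `OrdersApi` and `invoice_total` are hypothetical, and the network transport is stubbed out): Team A publishes only a contract, Team B codes against that contract and never against Team A's internals, so the two teams only need to coordinate when the contract itself changes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    order_id: str
    total_cents: int

class OrdersApi:
    """The published contract: the only coupling point between the teams."""
    def get_order(self, order_id: str) -> Order:
        raise NotImplementedError

class InMemoryOrdersApi(OrdersApi):
    """Team A's implementation; free to change internals
    as long as the contract above still holds."""
    def __init__(self) -> None:
        self._db = {"o-1": Order("o-1", 1299)}

    def get_order(self, order_id: str) -> Order:
        return self._db[order_id]

def invoice_total(api: OrdersApi, order_id: str) -> int:
    """Team B's code: depends only on OrdersApi, not on Team A's storage."""
    return api.get_order(order_id).total_cents
```

In a real microservice split the `OrdersApi` boundary would be an HTTP/gRPC endpoint rather than a Python class, but the organizational property is the same: each team ships behind the contract it owns.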
That's where most shifts to microservices fail, right? Sometimes because they don't define their APIs well, sometimes because the breakdown into packages/services is broken (for example: defined by company structure instead of sane software architecture), sometimes because the evolution of each package's features and functionality is at odds with the stability of their API's semantics and guarantees (and documentation), sometimes because the development overhead of communicating between APIs is too high (debugging for example)...
> Microservices are the simplest solution to counteract Conway's law.
I don't think that "counteract" is a useful frame of mind. I interpret Conway's Law more as a lesson: Build your software to reflect your org chart, or else you're going to have a really bad time. If you have two loosely-coupled teams, their respective services better be loosely coupled as well. And vice versa for tightly-coupled teams: If you have some good cross-team communication going on, take advantage of it in your architecture.
Maybe you and they just did not understand that it's a pattern that, like all patterns, only makes sense for certain applications.
Maybe you could also present some references that back up your claims?
It's always easy to judge people who tried to solve the problems of the past. Mistakes will always be made, but that is how we learn and make progress.
So what I am trying to say is: maybe appreciate that he at least tried to solve problems. All these shitposts here on HN against people like him or Uncle Bob are so unreflective and immature... like, really, what the heck is going on with you people?
If you build your whole life around didactic speaking gigs where you tell everyone how to do it despite apparently not writing any software yourself, you need to be prepared to be open to some criticism when it turns out your ideas are shit.
True but then please try to back up your hard claims with some references. Fowler himself made the restrictions of the microservice pattern very clear in several posts…
So don’t blame him for stupid architectural design decisions!
I’m surprised by this comment given I use this post by him to convince people to start with a monolith and only break down to microservices when actually needed. Agree it can be a huge pain if you do it before you need it. https://martinfowler.com/bliki/MonolithFirst.html