Prolific Engineers Take Small Bites (gitprime.com)
251 points by tbassetto on Dec 13, 2016 | hide | past | favorite | 94 comments



This post is a good example of my current issues with modern data journalism as content marketing: you can say anything you want without evidence to back it up, but hey, there's a pretty chart, so it doesn't matter. (In this blog post, the data points don't correspond to actual data!)

This is also the primary reason I have switched to Jupyter/R Notebooks for my blog posts: if I make ridiculous claims, people can check my work and the evidence behind it. This post doesn't even provide any quantifiable metrics, just "we analyzed millions of commits."


I just got done staring at some code written by a prolific upper-quadrant coder, code that I'm going to have to noodle on to figure out how to disentangle some bad design decisions. I might be able to do a fairly short commit to refactor and extend, in which case my metrics will be crap. Or I might wind up looking at all the outstanding issues with this codebase and rewriting every class in this directory (in which case my LoC metrics will improve dramatically). Right now I'm probably one of those "stuck" coders who needs "help". Kinda? But most software developers hit problems like this and run away in terror, and looking around me IDK who can really help me that much. I'll run my ideas by some colleagues, but the other ones who work on problems this gnarly are also neck deep in their own issues...


Mikado Method might offer you some clues, or at least confirm some of your existing habits.

Basically, start a top-down refactor. When you find the bottom you stop, revert your code, and refactor from the bottom up.

You probably won't find the actual bottom, but in my experience the moment when I was about to give up and call it quits was usually just before I reached the bottom. After taking a break and starting over from the spot where I stopped, the solution usually became obvious, and far less invasive than I expected.
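
In git terms, the loop might look roughly like this (my own illustration of the method, with made-up branch names, not anything canonical):

  # explore top-down on a throwaway branch
  git checkout -b mikado-probe
  # ...attempt the big change, noting every prerequisite it exposes...
  git reset --hard          # revert the exploratory edits completely
  # then implement the deepest prerequisite first, as its own small commit
  git checkout master
  git checkout -b mikado-step-1
  # ...make the leaf-level change, commit, and build upward from there...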


Do I understand this correctly:

You start refactoring at a high level -- essentially you set out to rewrite the whole chunk of code in which your refactoring work lies -- and focus on the most important parts until you get to the inner-most structure.

Then when you are down to the "sum(a,b)" level, you start over with the original code base and refactor bottom-up, with the knowledge of how it works all the way up to the top? Re-creating your original refactorings?

It seems interesting. Like tackling a math problem or puzzle from two directions. Are there certain situations in which it is most effective? Architectures? Paradigms?


Most refactoring is from the bottom up. Make the change easy, then make the easy change. I said "start a top down refactor" but redesigning from the top or all at once isn't really refactoring. Some of the actions look similar but you get few of the benefits of refactoring.

When the architecture problem has gotten away from you, or only becomes apparent late in the project, it's hard to see the trees for the forest. The only way to decompose the problem is to start somewhere and see what happens. With a top down rewrite the number of bits in flux becomes overwhelming, which is the anxiety I picked up from the previous commenter.

Mikado is just an exploratory development trick to help you find a way to make the change with refactoring. You try rewriting pieces until the number of concerns starts to multiply; you keep going essentially until your brain starts telling you, "this is nuts, you should stop". Then you take that detail that broke the camel's back, do just that part, and build up and out from there.


Sounds like "Discovery" quadrant more than "Prolific" to me.


> you can say anything you want without evidence to back it up

Actually, you can say anything you want with evidence to back up the opposite of it ;)

I have a hate-love relationship with journalists. It's both incredibly surprising and incredibly frustrating to discover what they end up publishing after an interview.

http://www.smbc-comics.com/?id=1623


That is one of the reasons companies train the people who speak to the press. Having a message and being articulate is often not enough.


The thing is, it doesn't matter what you say. They'll just write anything they want ^^


They could retitle this "In Defense of Cowboy Coding". Engineers that code this way become the owners of their codebase because nobody else wants to deal with their insane decisions.


Forget cowboy coding, this is cowboy data analysis. At best, it's misleading; at worst, it's dangerous.

This is embarrassing from a YC co.


They sell metrics, and this blog post has us talking. This is a blog post, a PR piece, not a newspaper story.

Source metrics are the new panopticon of Software is Eating the World.


Is this even journalism?


> Is this even journalism?

I wondered this also, when the popups in the lower right kept presenting a chat box with "Can we help you with something?" (from gitprime). Do they want to sell me something, or do they want me to read their publication?


Am I missing the sarcasm at some point in this thread, or something? It's very clear to me that this is not journalism, and shouldn't be read with the expectation that it conforms to journalistic standards, any more than some observation on distributed systems on a personal blog should be read with the expectation that it conforms to scientific standards.

It's a different type of content, which has a different kind of value. Are people trusting the claims the way they'd trust journalism from a known good media organization (or a paper from a known good research organization)?


I wasn't being sarcastic; I was questioning the labeling of the use of charts and numbers as data journalism ("This post is a good example of my current issues with modern data journalism as content marketing") in the comment I replied to.

If it had said something like "tools of data journalism" I wouldn't have been inclined to reply. A small difference, but enough for me.


If it looks like a duck and quacks like a duck, it should follow journalistic standards. The fact that it obviously isn't is irrelevant. If you're going to pretend to be a journalist, do it right or don't do it at all.


So, should I not post things about distributed systems to my blog without rigorous peer review, lest I get accused of pretending to be a scientist?

(I'm genuinely confused about how this person is pretending to be a journalist. It doesn't look like journalism at all to me, starting from the part where it's on some startup's blog. What am I missing?)


I wouldn't argue the journalism point, but it's very clear that the blog post is an attempt to (1) present solid statistics and (2) bolster their own credibility because "they have good statistics".

Except what we really have is a lack of data backing up the fancy, non-informative charts (which clearly in no way, shape, or form correspond to the so-called millions of data points).


The quality, source, and rigor of your data need to be very clear from what you're presenting and how you're presenting it.

This blog post is pretending to be true.

I want to contrast it with this: another startup(ish) blog post about their own data

https://www.backblaze.com/blog/hard-drive-reliability-stats-...

That is a wonderful, useful, trustworthy source of interesting data.

The source post seems like made-up marketing fluff pretending to be real information. I'm looking at graphs I entirely believe were made up, I have nothing to support the conclusions, and I'm very concerned that a less wary reader would think it's true.


There's a difference, IMO, between a blog post from a person in their spare time and a blog post by a startup whose main purpose is to get people to sign up for it.


Less of a difference, IMO, than that between either of those blogs and a reputable journalistic outfit.


If you publish something, it's your responsibility to ensure what you write is true, to the extent reasonable within your skills and time available to you.

Doing less is harmful to your own reputation at best, and at worst to whoever believes the things you got wrong.

Now that applies to mistakes. But for those who intentionally write and publish misinformation - I hope there's a special circle in hell waiting for them.


> skills and time available to you

Lots of plausible deniability, here.


When a page starts popping up unsolicited chat boxes, the only conclusion I tend to reach is that they want me to go away. I'm happy to oblige.


Would love to better expose our data set. It’s a bit tricky however: we’re specifically interested in enterprise software engineering (the patterns of which differ radically from the open source world).

In order to dig into enterprise data, we sort of need to have a ToS that only allows us to talk abstractly about aggregate data.

Any thoughts on how we might navigate that?


You don't have to expose the raw dataset explicitly, but you do need to describe your methodology (more so than an arbitrary "impact" score; give a formula) and give actual numbers for the analysis, not just assert a "strong correlation" with a highly arbitrary matrix of attributes.


It's a marketing article - trying to explain their product to potential customers, not a scientific publication.

I'm not saying that you are necessarily wrong, but I do wonder whether making an article like this more like a formal publication would actually make it better or worse at its real goal, which is presumably to sell more.


Yes, the standards of a marketing article are nonacademic, but you can't make an article containing a data-driven argument and skimp on both the data and the argument.


There is no downside for a legitimate marketing article in making the underlying data and analysis available to those who want to drill down. If it is valid, you stand a chance of persuading the skeptics.

More importantly, calling something a marketing article does not immunize it from critical analysis, and especially not on a forum such as HN.


I don't know if it's respectable to doll up a marketing article like journalism, let alone scientific research.


They go into slightly more detail on factors that influence the impact rating in a linked post. It still doesn't explain what formula or weightings they use. https://blog.gitprime.com/impact-a-better-way-to-measure-cod...


How do you publish them? I had always heard that publishing notebooks was a pain in the ass.


GitHub can render Jupyter Notebooks natively.

For R Notebooks (which are HTML files), you can use GitHub Pages, which is super easy to set up nowadays, as the static files can now be located in the master branch.
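
For example, a minimal publishing flow could look like this (file names are placeholders, and it assumes Pages is configured to serve the docs/ folder on master):

  # render the notebook to static HTML and publish it via GitHub Pages
  jupyter nbconvert --to html analysis.ipynb --output-dir docs
  git add docs/analysis.html
  git commit -m "Publish rendered notebook"
  git push origin master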


Github pages can also work with exported org pages rather easily. (For an example, here is one of my blog posts: http://taeric.github.io/Sudoku.html)


That's misleading though — GitHub can render pre-run Jupyter notebooks but does not allow you to actually run them. IMO that's only half the battle.


When would you actually want to run a Jupyter notebook, rather than just view a pre-rendered one?


It's also easy to convert R notebooks to PDF, which are easily viewable on GitHub, if you don't value the interactivity highly.


To actually run notebooks, as opposed to rendering pre-run ones (which is what GitHub, Gist, etc. do), you need a host that serves live Jupyter sessions.

A few hosts that currently support that are PythonAnywhere and, as of this month, Azure [1].

[1]: https://notebooks.azure.com


They talk about 'small bites', but if their first chart means anything, its linearity shows that commits tend to be of the same size. As to whether those commits represent progress or repeated attempts to get it right, we have no idea.

As for their size measure, they are vague, but the comment about 'normal' commits raises the question of whether they have crafted a metric for which the linearity is built-in by definition.


Non-technical co-founder of Git analysis tool writes blog post that makes assertions about all sorts of things and then justifies this with mockup charts whose axes are unlabeled. Then offers tools to help micro-manage engineers.


On the one hand, I have this depressing vision of the future where a non-technical micromanager is imploring me to 'code smaller' because a tool the suits are paying $1250 / month for told them to.

On the other hand, the irony of being told what to do by a tool coded by some 'out-of-touch know-it-all tech people' is not lost on me - some of my work has surely been used to inflict this sort of micromanagement on others.


Makes me wonder what qualifies a micromanager for micromanagement.


@trevyn Co-founder here.

You’re right: this post is a narrative about product development and about a strong correlation we found between two variables across 20 million+ commits, which we thought was fascinating and which supports general 'kitchen logic' around best practices. The axes are not labeled, but if you like we can set you up with a demo account and walk through your data with you.

One other note: the typical use case for the product has been stakeholder management, something we have doubled down on in product development. Any specific critique about how we can improve is most welcome!


Specific critiques about how you can improve your blog post:

1. Data is always necessary to back up claims like these. Almost _any_ kind of quantitative data will do: just a simple average! Just give me something I can replicate on other data sets. Similarly, just saying "we made up these variables as a combination of these other variables and called them Flavorfulness and Musicality, look how nicely our circles line up!" is not useful to anyone; a formula or methodology would be.

2. As a software engineer, I came to this post expecting to find a way to improve myself. I invested time in reading it because I expected a payoff in the form of, for example, a metric I could apply to my own work. I suspect that the author of this post intentionally gave the impression that the post contained tools like this. I didn't find anything remotely like that.

3. More generally, this post does not show signs of being written with the reader in mind. You have not given me any new information that I can apply; instead you have given me a marketing pitch (we did this data analysis, we promise! Don't you want us to work with you?) dressed up like information.


> The axes are not labeled, but if you like we can set you up with a demo account and walk through your data with you.

Really? Rather than going back and labeling your axes, you are converting this into a sales call???


My actual suggestion was to hop on a call with the product team/co-founder. This article is a narrative about something we found pretty interesting during product development.

Similarly, the offer here is to take a deeper look at what we're building and a (quite genuine) offer to incorporate any suggestions you might have into our roadmap.


Congratulations, you have completely missed the point.


Sure, specific critique: It appears that you are making fancy metrics that are more complex but fundamentally no better than a LOC metric. Is there a way that you could create metrics reflecting actual monetary business value created?


@trevyn We've got some things in that vein that are working in the app today — and are working toward even more. If you’d like, we could hop on a call and give you a tour of the app and show you where we’re headed?

Seems like you've given some thought to how this should be done right; we would love to include your ideas in our product development discussions.


This post and the follow-up "talk to sales" comments cost your company a lot of credibility.

Your post gives prescriptive conclusions not supported by the data presented. Of the data that you did analyze, so few numbers are actually published that everyone is forced to question the fundamental methodology, which itself is not clearly stated. The specific critique is: make well-founded claims.


Neither in the article nor in the referenced "Impact" blog post is impact defined.

https://blog.gitprime.com/impact-a-better-way-to-measure-cod...

What I found is:

  Impact takes the following into account:

    The amount of code in the change
    What percentage of the work is edits to old code
    The surface area of the change (think ‘number of edit locations’)
    The number of files affected
    The severity of changes when old code is modified
    How this change compares to others from the project history

No exact or detailed formula for "impact" is given. "Takes the following into account" is extremely vague, as it could indicate any relationship. Is impact higher with more files changed in the commit? Why would it be better to change more files? Or is it lower? Why would it be better to change fewer files? Is it some totally unobvious non-linear function, such as a trigonometric sine?
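
To make the objection concrete, here are two invented toy scores (shell arithmetic, with hypothetical variables; nothing to do with GitPrime's actual formula) that both "take the number of files into account" yet reward opposite behavior:

  # both scores "take files into account"; the phrase pins down neither
  score_a=$(( loc + 10 * files ))     # more files touched => higher score
  score_b=$(( loc / (files + 1) ))    # more files touched => lower score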

Based on the vague description above, nothing in "impact" is directly related to the actual end user/paying customer experience or a reasonable proxy such as systematic end user testing by a QA team.

This lack of a direct relationship to the desired end result is the same problem that lines of code (loc or LoC) and many other metrics of software engineer output have.

The "impact" metric, whatever it precisely is, looks suspiciously like it would naturally be positively correlated with a large number of commits/high frequency of commits.

Also the plot is labeled with "volume" on the horizontal axis and not the mysterious "impact" metric. The text implies this horizontal axis is the "impact" metric. Why is the horizontal axis not labeled impact?

Even more peculiar, "impact" is claimed to measure cognitive load:

> Impact attempts to answer the question: “Roughly how much cognitive load did the engineer carry when implementing these changes?”

A good engineer will attempt to find a low- or no-cognitive-load solution to a problem! In general this will be faster, less error prone, and cheaper! Reinventing the wheel has a very high cognitive load.


Not to mention that it's a pretty easy metric to game. By that rationale, renaming a variable across a few files is going to make me quite impactful.
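
For instance, a single mechanical rename (a sketch using GNU sed; the identifiers are made up) touches many files and edits plenty of old code, scoring well on every listed factor while adding essentially no value:

  # rename oldName to newName across the whole repo in one shot
  git grep -l 'oldName' | xargs sed -i 's/\boldName\b/newName/g'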

If my job were being judged by this metric, I'd sure be thinking a lot about that.

This measurement also seems to favor overengineering, which is one of the biggest things I've learned to stay away from. Early in my career, I thought I could implement anything myself, and so I often did. It wasn't until years later that I understood the implications of maintaining those implementations.

A true measure of impact, to me, would be the ratio between how concise (but still understandable) the code is and how well it solves the requirement it's built for. It's impossible to measure that without the context of the problem you are trying to solve.


The title, well actually the entire article, is one of those things where A -> B is stated, knowing full well (and almost expecting) that readers will understand it as B -> A or B <-> A. It's the developer version of the typical "10 Things Successful People Do" clickbait.

Yes, a lot of (although far from all!) good coders have the habit of taking small bites. That doesn't mean though that by taking smaller bites you are (or are becoming) a good coder. I've had the past displeasure of dealing with code that hit all the superficial checkmarks (small commits, unit-tested, peer-reviewed, you name it...) yet completely stunk.


Jesus, that site was annoying. I go to read the article, and a popup beeps at me, distracting me. I think "what the hell is this crap" and move my mouse towards the "about" page. A pop-up pops up (as they do) and distracts me from even that.

Both times I tried to get information out of their site they found a way to interrupt me.

Then I leave without reading the article.


Heard of Adblocker much? I didn't have your experience on that page :)


I've got uBlock installed under FF, and there weren't popups in the sense of a new window, but there were in-page elements.


A non-sarcastic phrasing might be:

"Do you use an ad blocker? I do, and didn't have that experience."


I largely agree with this, although with some provisos.

First of all, sometimes it's most useful to take a step back and think about problems in a wider context. Often the biggest impact comes from the code you don't write. There's really no way those kinds of contributions show up in any kind of metrics, but the team will sure remember them.

More commonly, I prefer to commit often to aid my dev process, but then rebase those commits into more cohesive units before merging. How chunky to make those final commits is somewhat a matter of taste, and probably shaped by the domain one is working in. Personally I like the finest grain possible without any build breakage, so that git-bisect still works, but with enough granularity to help with trickier merge/rebase scenarios around dependency upgrades and/or db migrations. But pardon the digression: my point is that the output seen by the rest of the team may not represent the original frequency of commits.
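
A minimal sketch of that squash-before-merge flow (branch names invented):

  # work in small WIP commits on a branch, then fold them into cohesive units
  git checkout feature-branch
  git rebase -i master      # mark WIP commits as "squash"/"fixup" in the editor
  # each surviving commit should still build, so git-bisect stays usable
  git checkout master
  git merge feature-branch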


I tend towards larger commits, large enough that the commit message can be more "why" than "what". That way git blame works better.

(My issue tracker isn't coupled to my repo, and bisect is mostly useless for me because my builds take so long. So like you said, depends on the codebase.)


The title of the article uses the word "Engineers" rather than software engineers, and if you lump together virtually anybody who makes changes to a code base (testers, sw architects, sw leads, etc.), it seems like it's easy to have the data skewed by people who essentially spam the commit process. I think most people can look at a commit log and sort of eyeball who really makes shit happen on a project, and who is probably causing more grief than anybody else. I agree it's hard to quantify, but to be honest I didn't take away much from this article. I like the goal of the article; it just didn't really hit the mark.


Extrapolating a "prolific engineer" from one's commit log is no less absurd than from lines of code.

Their methodology fails to account for so many things: commit-squashed feature branches, for example. A fundamentally flawed "study".


The methodology also suggests a particular workflow is better: a workflow where you commit and merge, instead of commit, rebase, then merge.


What are they measuring? Public git repositories? The article doesn't say anything about what data is being analyzed. If it's public git, that's hardly comparable to a large corporate entity.


Analyzing churn is tricky and can be easily misinterpreted by management. This reminds me of what was revealed from studies in behavioral sciences and presented by Dan Ariely: https://www.youtube.com/watch?v=Q92BqouxyX4


Can you make any claims about the number of commits? That's another metric that changes depending on tools and processes.

If you're using a code review tool well (because you are more experienced with it), then you'll automatically be making lots of small commits, compared with people who may not be well versed in working that way.

If you're working with large or prolific teams then you'll be committing all the time with eg feature toggles to ensure everyone is as close to the current code with their changes as possible.

So making general claims about more commits, without taking into consideration the ways that metric is bent and changed by tools, processes, and different experiences, seems dangerous.


Unless I'm missing something, how is 'Impact' derived here?

Impact as a metric is fine, but you have to show what it is and how you arrived at that conclusion, and have verified it against a test set of data; otherwise it's turtles all the way down.

https://en.m.wikipedia.org/wiki/Turtles_all_the_way_down


This article and company can be safely ignored.


We have a hard rule that you do not commit code that doesn't pass unit tests.

This leads to bad coders not being able to commit as frequently as good coders.

However, this seems like a very, very loose heuristic.
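
For what it's worth, a rule like that can be enforced mechanically with a hook; a minimal sketch, assuming the tests run via "make test":

  #!/bin/sh
  # .git/hooks/pre-commit -- reject the commit when the unit tests fail
  make test || {
    echo "unit tests failed; commit rejected" >&2
    exit 1
  }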


Huh, strange. My made up charts say something completely different.


Concepts missing from the article: 'staging' and 'commit size'.

People who have worked in code review shops tend to stage their commits, i.e. do a bunch of work but commit it with git commit -p.
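
For example, one working session can be carved into several logical commits (the commit messages here are invented):

  # interactively pick hunks for the first logical commit
  git commit -p -m "Extract validation helper"
  # repeat, committing the remaining hunks as a separate logical change
  git commit -p -m "Use validation helper in signup flow"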

The article also doesn't look at deployment frequency, and 'merge' appears only once. We don't know if these 'high impact' devs are in their own branch for six weeks cooking up the PR from hell. That's one way to have a high impact.


I'm curious where the general vehemence against quantifying and understanding this information comes from. It's right there in your commit history. I've run my own analyses out of curiosity in the past... statistics just isn't my favourite subject. I think it's an interesting metric and I'd certainly like to know more about how it's calculated.

(Although I'm sure that's part of the secret sauce).


I think people get uncomfortable because people will run with the data and enact policies that are potentially detrimental, and then people will game it. (You see this happen all the time with "velocity" at agile shops.) Data itself is great; it's just scary what happens when it falls into unwise hands.


We tried this. It gave us an unmaintainable code base to which we could no longer add features unless we threw a lot of people at it.


Well I guess I'd be labelled a poor programmer since I wait to commit until my code has passed a few core specification tests.


We tried this. Fast commits for small features gave us an unmaintainable code base that eventually didn't work anymore.


This reminds me that one of the things I loved about working on server-side efficiency is that for any given diff, you can measure the performance improvement, multiply by hardware costs, and say, "That diff saved $X." Makes it really easy to measure impact and feel that my salary is justified.
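
As a back-of-envelope illustration (all numbers invented): a diff that cuts fleet CPU usage by 5% across 2,000 servers, at roughly $3,000 per server per year of amortized hardware cost, saves:

  # 5% of 2,000 servers at $3,000/server/year
  echo '0.05 * 2000 * 3000' | bc   # => 300000.00, i.e. ~$300k/year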


That is one side of the coin; however, if you made a change that was a performance improvement yet the code was highly cryptic and hard for anyone else to understand, then this wouldn't be a good thing.

Essentially what I'm getting at is that peer review from team members is the best metric for what is good and what isn't.


Absolutely true. Still, it helps when one side of the coin is precisely quantifiable, since it helps frame the more subjective questions like code complexity.


This is misleading; however, I would like to point out that if Small Bites == Kaizen, then Prolific Engineers, like any other prolific professionals, may be Kaizen-ing the crap out of their codebases. No data here either to back the smart ones up, though.


Unfortunately I know some bottom-right-quadrant people. We got rid of some of them not too long ago; I wouldn't want them on my team again. Super hard to manage and work with.


How do you finish a project? One step at a time!


There's a great, fun, programmer-centric website called thedailywtf.com (sorry, I just scheduled the rest of your afternoon for you). Developers can submit WTFs that they've found in others' code; as the site admin says, "curious perversions in information technology".

One thing that strikes me about the majority of the submissions, as funny as they are, is that they mostly boil down to "so-and-so didn't know that such-and-such feature existed, so wrote reams of code to implement that feature in a complex way". It also strikes me that exactly this article's sort of analysis of "prolific" (aka "good") engineers/programmers drives this same sort of behavior. If every developer is supposed to be committing code all day, every day, there's no time left over to read the product documentation, try out a new feature, review a reference implementation, or read a blog post: to be "good", you must be spending as much time as possible _typing_, because that's what you're paid to do. This (ubiquitous) management mentality is how we end up with roll-your-own crypto, five competing Javascript frameworks, or parsing with regular expressions... it's not so much that what they did was wrong (and trust me, if it works, it won't be removed) as that it's pointless.


It can also work the opposite way. You can think you understand the requirements and hole yourself away for a couple of days pumping out code. When you finally merge your work, another developer asks, "Why didn't you just use function X or package Y?"

If you're doing your work in small, sharp bursts, you actually end up with more time in between work for talking with other devs, reading up on technology, etc.


I think this is the better insight.

A lot of DailyWTF posts are about people not knowing a function existed and then reimplementing it horribly. Many of them include a note like "the other 10,000 lines were similar".

Ignorance is not embarrassing, it's the default state when all of our tools grow and change daily. But a lot of the worst wtf moments are clearly cases where something wasn't sanity checked by anyone until it was presented as a finished product - sometimes too late to keep out of production. That's the benefit of frequent commits, reviews, and research breaks; they might not prevent our ignorance, but they keep it under control.


I think you missed what this article is about. They're comparing against "impact", not LOC. From what I understood, "impact" is actually LOWER for high LOC; it's highest for a few lines across many files.

But, the overall conclusion - small bites, many times - definitely fits both my experience and my intuition.

You either have to a) create a library/module or b) use a library/module. "A" is rare, and it's easier to see the impact of it - "you're the person that made that thing". "B" is the most common, and it usually involves many small bites. It's also harder to notice who's accomplishing what this way, since you have to point at the whole team to say "you made that thing".

Impact sounds like a decent way to see who's meaningfully contributing to a group project, although it definitely sounds like it has major blind spots. They could do with more detail on how "churn" relates.


Some rolled-their-own solutions may have been done before feature/library X emerged. Ironically, the alternatives that came later may have been passed over at first precisely to follow a "take small bites" approach.


You write the product, you influence 1000 users. You write the library, you influence 1000 users who each have 1000 users of their own.


That assumes an equal likelihood of success for product and library. It may feel that way to us, but there is a survivorship bias embedded in that assumption. I'd hazard a guess and say the incremental value added per unit of effort is greater for product development, due to the inherent bias developers have toward building libraries under the false assumption that a domain can be conquered for good.


But there are only so many documents and so many relevant APIs to read. At some point, the ratio of research to writing flips.


Are you joking? I've worked as a programmer for 13 years, mostly in the same two languages. But my reading list is longer than ever and my thinking-to-coding ratio is higher than ever. Unless my employers are regularly lying to my face, they are pretty happy with the outcome.


No, I'm not. Most of the time I spend writing code, not checking documentation or APIs. I've been a programmer for 27 years.


This is going to be domain-dependent. The lower in the stack you go, the slower things move and the less likely it is you're going through wholly unfamiliar territory. Top-of-stack, user-facing code (of which web browser stuff is the poster child) churns endlessly; UI code you write today is likely to have a lifespan of a few years at best, if the system continues adding features and staying up to date with new platforms.

This is true, at least, until you hit code that touches hardware directly, and then you still get to encounter lots of new problems because the hardware is doing things that break your code.


I think that if you are reading more than 50% of the time, versus coding, then you are still learning your trade. I write for macOS and iOS; there's lots of new material each year, but it doesn't take that long to get through the relevant updates.

The times I've spent the most time reading are when I've transferred across industries: to capital-markets trading systems, to web media, to web apps, to mobile apps, to big data, to VR/graphics apps. Each transfer involves an initial period of furious research, but even though the ratio of reading to coding stays high initially, I am generally able to start writing more than reading within the first few weeks.

I did spend more time reading in my first years of programming. But I think you hit a watershed where it becomes harder to find a treasure trove of ideas about software architecture, for example, in a new book, and instead it becomes more helpful to advance your art by experimenting, developing, and analyzing your own output.

For what it's worth, my main point was about the ratio of coding to research, and although the previous commenter disagreed, I think he also introduced a new point about the ratio of thinking to writing. I remember I used to spend more time staring at walls of code or paper notes. Not anymore. If before my work was more chunky, now it is more consistent and smooth. That is a skill/discipline I developed. The article resonates with me because my commit frequency has increased since I improved my analytical process in this way.



