The Continuous Delivery Test (sourceless.org)
87 points by sourceless on July 10, 2022 | 67 comments



Extremely web-centric list. Anyone doing other kinds of development (mobile, desktop, games, embedded, …) will find a lot of these steps very weird.


The whole idea of "Continuous Delivery" is not always one that meshes well with some shrink-wrap workflows.

I worked for hardware manufacturers, for most of my career, and software was always just a "sidekick" to The Main Show. We just got the "Supporting Actor" nods.

I'd say that 90% of HN seems to be Web/SaaS (and, these days, crypto), which is an excellent workflow; just not the one I do. Nowadays, I have some integrated stuff, but it's mostly native iOS/TVOS/WatchOS/MacOS deliverables.

CD is nice, but I feel that CI is better, for a team. Even that is overkill for the way I work. I'm spoiled; I tend to work alone, or very loosely coupled. That gives me a different workflow from what many experience. I had to spend a couple of years "un-learning" a lot of stuff from my Big Team days.

The way I work results in astonishingly good stuff, very quickly, but the scope is much narrower than what a lot of folks here do.

As such, I find little utility in telling others that the way they do things is wrong, and that they should be doing it my way. I do talk about how I do things, but I'm not judgmental about it.

I do feel that software quality, in general, is fairly problematic, but don't claim to have the silver bullet. I know what works for me, and I try to stay in my lane.


I work on a very large team and solo for my side hustle. Love working solo: no red tape to fight, no waiting 24 to 48 hours for a teammate to review my code per iteration and up to 24 more for it to deploy. Just signed 2 contractors on for my solo project, though, and it's really messing up my process. All the overhead is so high I'm not even sure it's worth it even if they were free.


I like to imagine it as a sort of volume vs surface area problem.

The amount of service you can deliver to customers is your surface area, but the amount of work you put in only contributes to the volume.

As the number of employees grows you will necessarily be less efficient, but there aren't really any other great ways to gain the required surface area.


There are perhaps interesting options in bringing in partner-level labor instead of buying someone's time-in-seat, and in the relative personal investment involved in each.

The big point glossed over is that wage labor, where the value workers produce is taken from them, isn't a great incentive for great work. That's IMO a far bigger issue than any fundamental challenge with collaborating, and why so many managers need to continuously trick their employees into productivity.


I have been thinking about _how_ to build good (and stable) software a lot in the last 10 years. Tried many approaches and verified them one by one. In my experience this is the ultimate fix. It may not work for big tech companies, but for most projects it is much better to choose this approach.

Collaborative efforts to maximize good results, and predictable gains for each participant.


Thanks for sharing your experience; I feel like this is not very well-resourced territory for our industry.

Btw you may find Graeber/Wengrow's recent Dawn of Everything to be an inspiration for how these sorts of structures could scale up to larger groups of people than conventionally thought historically possible.


What's the ultimate fix?


TBH this is a big weakness of the left (in terms of its marketing, at least) that I'm trying to understand better. One idea is that whatever solutions we conceive of now are constrained within the understanding we've gained living under capitalism/state power etc., and that we need to dismantle the current system (exploitation, keeping the majority of people's time/energy locked up in wage labor, the systemic lack of freedom to disobey an order or to freely relocate, etc.) before we can explore a wider range of solutions. But I get how this is unsatisfying and sounds like it leaves a vacuum for bad/regressive solutions to come in; this is something I'm only starting to learn about.

That's why I appreciated the new Graeber/Wengrow: it finds a wealth of previously overlooked ways that humans have lived, after the discovery of agriculture, in large societies and with greater freedoms and prosperity defined in new ways beyond the terms of economics. This gives me optimism that capitalism/states aren't necessary or inevitable, and that we might be stuck on this shitty plateau having convinced ourselves it's the only realistic one.


> no waiting 24 to 48 hours for a teammate to review my code per iteration and up to 24 more for it to deploy

This probably depends on the team and practices that are used within it.

Most code changes in my current team are reviewed within approximately an hour; deployments are a bit more tricky, especially if you have external clients with their own environments and deployment procedures.

Honestly, CI/CD cycles for the internal dev/test environments take anywhere from 5-20 minutes (with the heavier tests being run separately) and the technical aspects of delivering software (build, test, package, scan, generate docs, send) take around another 10-20 minutes.

It's usually when you have to deal with manual procedures and people that it all slows down.

So, it can be really good to automate as much as you can: code style and warning checks, code coverage checks, dependency date checks, dependency CVE checks, unit tests, integration tests, performance tests, packaging and storing build artifacts, redeployments to whatever environments are permissible (usually automated to dev/test, manual approval against prod).
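For illustration, here's a minimal sketch of the kind of gate script a pipeline stage might run; the specific tools below (ruff, pytest, pip-audit) are just stand-ins for whatever your stack actually uses:

  # Hypothetical CI gate script: run each check, fail the pipeline on the first failure.
  import subprocess
  import sys

  CHECKS = [
      ["ruff", "check", "."],           # code style / warning checks (stand-in tool)
      ["pytest", "--maxfail=1", "-q"],  # unit and integration tests
      ["pip-audit"],                    # dependency CVE check (stand-in tool)
  ]

  for cmd in CHECKS:
      print("running:", " ".join(cmd))
      if subprocess.run(cmd).returncode != 0:
          sys.exit(1)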


Ya.. my company does all of that. It's just the people process that makes everything slow. Well, mostly the people process. Having millions of dependencies doesn't help with build and test times either. Usually up to 20 min to run something.


> Having millions of dependencies doesn't help with build and test times either. Usually up to 20 min to run something.

I feel that pain! Honestly working with monoliths and older and larger projects is very demotivating. Now, I'm not saying that you need to go full on microservices either, but working with a few smaller services instead of a single huge one has been a game changer in my experience!

I'm not sure I ever want to go back to the loop of changing some code and then having to wait minutes for the app to launch locally just so I can test something and realize that it's still wrong.


We do use microservices, but even then it's still slow AF. I'm not entirely sure why; I guess the code dependencies just end up pulling in nearly everything. It's all C++, which probably doesn't help.


> I worked for hardware manufacturers, for most of my career, and software was always just a "sidekick" to The Main Show.

Which is an enormous, catastrophic, fantastic mistake that should be leaving everyone breathless with shock.

Realising that the software matters is why Tesla is worth more than most of the other car manufacturers combined.

This is why Apple is the biggest company in the world.

This is why every time some "hardware" has to be deployed, every enterprise admin rolls their eyes and groans.

This is why IoT, medical, and factory automation security is a trash fire.

Smart televisions aren't, and you can waste $5000 on one just to have a substantially better experience by plugging in a $150 box from Apple.

And on, and on, and on...

I literally told the local Toyota rep that I wouldn't buy a 2019 model-year vehicle specifically because it didn't have Apple CarPlay. The built-in system is simply garbage. Maps that are 4+ years out of date!

Apple has nearly monthly updates for iOS, which means if I plug my phone into my car, effectively my car gets monthly updates. With Toyota? Maybe once in a decade they'll release an update, and then never again, slowly but surely letting its software capabilities degrade to "worthless".

Similarly, Nikon releases updates for their existing cameras once in a blue moon. Recently they announced "Firmware 2.0" for their flagship Z9, and I was shocked. This is likely a one-time aberration, probably caused by their software division not being ready in time for the initial shipments. I guarantee that there will never be a Firmware 3.0. Never! Where I live, this camera with one lens and typical accessories will set you back $10K and is depreciating at an exponential rate because Nikon does not give a s%*t about software. Meanwhile my iPhone and its camera will keep getting updates.

So yes, CI/CD is a web-centric notion.

It ought not to be.


Well, hardware is a different world from software. I’ll bet that Tesla doesn’t do “sight unseen” updates. They probably wouldn’t be allowed to.

They likely have huge batteries of tests that the software needs to pass (CI), but the actual release build and “sign-off” involves a human.

Who will get their ass chewed off, if the update borks.

But everything before that point is 1000% better and more agile than at most hardware companies.

I’ll bet SpaceX has a lot more meatware in their process.

I always liked CI, as a basic infrastructure, for my team. I very much believe in early integration testing[0], but automated testing can be a trap. It should not be the only testing, for firmware.

If you push a bad release to a Web server, you have one point of failure, but also, one point of recovery.

If you push out a bad firmware release, you have a million $10K bricks. You may also have fires, explosions, and crashes.

Although I often had real disagreements with the hardware folks, I am entirely sympathetic to their priorities.

The main issue was that they considered software developers to be "cowboys," and judging from the general quality level of even enterprise software, I can understand their bias.

However, I am not a “cowboy.” I am absolutely anal about Quality, and I’m regularly attacked as being “too uptight,” by software developers.

Software is a different animal from hardware, and needs to be done differently. Quality, however, should not suffer.

As a standalone developer, I’ve learned to eliminate “concrete galoshes”[1], and CI tends to be that, but only in my case. What works for me, may not work for others. Just as importantly, what works for others, may not work for me.

I’ve spent the last few years, refining a personal process for my software development. It works great. You can see for yourself. Most of my work is open-source, or source-available[2].

[0] https://littlegreenviper.com/miscellany/testing-harness-vs-u...

[1] https://littlegreenviper.com/miscellany/concrete-galoshes/

[2] https://github.com/ChrisMarshallNY#browse-away


This is all solvable.

For the people who are downvoting me, you do all realise that I'm not comparing a pure-software website to some embedded IoT thing, right?

Most of my examples are hardware with software as "necessary evil" vs hardware with software "being taken seriously."

Apple TV is a hardware appliance that takes the software seriously.

My Samsung "flagship" TV is a hardware appliance that does not.

They both get updates. One gets frequent updates that make the product noticeably better. The other gets infrequent updates that have made it worse.

Cars from most manufacturers are hardware with trash software.

Tesla sells the cars, but unlike their competition their cars are regularly updated with new software. They have weekly (fortnightly?) updates rolled out to their beta testers! Not exactly daily CI/CD, but compare this to Toyota. They literally never release updates for most models, ever. And it's not like their 1.0 release is perfect! Mine has a bunch of small bugs and irritations that they should have patched... but never ever will.

It's not a question of "alternate process" or a "different workflow". They have no process! Their release strategy is "don't"!

Apple is about to release a complete car software + hardware suite. So not something you plug in, but the entire "avionics," as it were, will come as an OEM part from Apple instead of the car manufacturer.

They're going to wipe the floor with their competition. The screens and software from GM, Ford, Audi, BMW have nowhere near the quality, commitment to updates, backporting of new features, etc, etc...

I will literally stand in line outside of the dealer to get a new car that has this style of Apple-made hardware+software instead of a lump of metal with paint on it and fabric on the inside.

Because I know it will get updates, and that those updates won't make things worse.


As a fellow embedded dev, I think that any system where you run a regular, meaningful risk of bricking with updates is a badly designed system. Other than that, no disagreement. CI is a cheap, fast first step in validation. It's not the stopping point.


Well, I don't do embedded anymore. I enjoyed it, but it can be nerve-wracking.

I write end-user application code, for Apple devices, in Swift. I really enjoy that.

I also do some backend stuff (in PHP). It's not my forte, and I like to avoid it, if possible, but I'm highly skeptical of a lot of backend stuff, these days, and like to know who I'm letting in the back door.

I'd like to do some Bluetooth stuff. I've written a bunch of BLE stuff (even given a class in it[0]), but I haven't found a venue that gives me an excuse (the Meshtastic stuff looks like it might be a good bet, though).

[0] https://github.com/ChrisMarshallNY/ITCB-master


Indeed. Does your deploy process self-heal? Err, no, I need to wait two days for App Store review.


Really? I challenge you to find any items other than 7 and perhaps 8 that don't apply to the types of development you mention.

Essentially the only thing that's different with the types of development you mention is that "final deployment to production" looks different, as it usually involves more or less physically transporting artifacts to your customer.

But the rest is just the same. You should be doing trunk-based development, practise something like code review, gate on integration tests, feature flag, spin up virtual production clones, etc.

In other words, the fact that getting software to your end users is a clumsy process does not preclude you from doing every other step of the process with short feedback loops.


Using distributed version control isn't applicable to most people working in games.

9) is also debatable - requiring someone to clone the entire application to make an infra change to a testing environment.

Adding tickets to commit messages isn't necessarily a requirement - some work (at least in my area) is prototypey and maybe ill defined (the task might be to define it).

Being able to deploy from your own machine is a double edged sword; the situation you need this is an absolute last resort. Enabling deployments from dev machines means credentials to environments, write access to infra, and likely skirting around normal processes.


> Using distributed version control isn't applicable to most people working in games.

Are you sure about this? I mean, code absolutely should be versioned and if you can afford the storage, you should also version all of your assets, like models and audio.

Even Git has specialized functionality for storing binary assets: https://git-lfs.github.com/

Unity also seems to be pushing their Plastic SCM for this more specialized use case: https://www.plasticscm.com/

Either way, not using version control for any collaborative project is just asking for issues.

Not using distributed version control in particular might just make things more annoying, as anyone who has ever worked with SVN might attest to.


> Are you sure about this? I mean, code absolutely should be versioned and if you can afford the storage, you should also version all of your assets, like models and audio.

Yes, and I never said you shouldn't use version control, simply that distributed version control isn't necessarily applicable (which is what the claim is).

> Even Git has specialized functionality for storing binary assets: https://git-lfs.github.com/

Let's be honest, LFS is duct tape on a pig. It doesn't support SSH, for one. Mirroring a repository is riddled with landmines, and probably most importantly it breaks the decentralised model of git by centralising the storage of your binary data.

> Either way, not using version control for any collaborative project is just asking for issues.

Which is not what I said, at all. The article says _distributed_ version control.

> Not using distributed version control in particular might just make things more annoying, as anyone who has ever worked with SVN might attest to.

By distributed version control you mean git here, right? The advantage of git isn't its technical merit, or the advantages of DVCS (particularly if you're using something like LFS, which turns it into a centralized VCS). The advantage of git is that it's well supported in many tools (CI, merge bits, code review, deployment, package managers). Frankly, my experience is that the complexity brought on by git (and that's after quite a few years using it alongside Perforce and, more recently, Plastic) often outweighs the benefits over something like Perforce, particularly on large repos.


What?

#2 and #5 exist only to support #7.

#4 usefulness is completely dependent on the context.

#9 is a joke, right? It's an anti-pattern for most web-based SaaS applications, and poisonous to anything that isn't SaaS.

And #12... Try that in a large regulated company and people may very well end up in jail.


#2 is to improve quality, shorten feedback loops, and simplify merges. You get those benefits whether or not you're able to push things to your customer at will.

Both #2 and #5 really just set a quality bar high enough to reduce long-term maintenance costs. I don't see why you wouldn't want that in a non-web scenario.

#12 is not talking about deploying to production, and I don't see why you shouldn't be able to deploy to a dev environment from your machine. It's nice not to have to run everything locally. Even for non-EU companies.

So aside from #9, which is of questionable utility for any type of development, no, I'm not convinced you've pointed out valid examples.


Pretty much everyone with more than one production environment will. Imagine developing Firefox. How do you do a "self-healing deployment" automatically whenever trunk changes?


You take the standard course of action, sending the PR team to state how Firefox is there as a beacon of hope and aspirations to privacy.


9. Does your Infrastructure as Code live alongside the service it hosts?

No, the IaC is configuration, and configuration should be separated from the application.

In organizations that have implemented GitOps, the IaC files in git are not only a copy of the configuration; they are the source of truth from which the configuration is copied to the systems. Unless all the developers who can commit code should also have permission to change production configuration, the IaC repo should be separate from the code repo.


> 11. Do you include ticket IDs in your commits or branches?

This is one of the things that sound 'good' on the surface and are totally worthless in practice (especially when practicing minimal commits).

In the wild this devolves into a meaningless umbrella issue (Deliver Feature Foo) or one-liner issues that sometimes don't even match the PR contents.

Just use the commit message, and enforce commit message norms in PR review.


I think it can be useful to encourage (not require) this practice since auto-linking between Jira and GitHub/Gitlab is a nice feature. Jira can now show you what releases a given feature is deployed in, etc.

If you are in a regulated environment or selling to large enterprises you may find that having a SDLC policy of “no changes without ticket to track” is a more easily defensible control against unauthorized code changes (you’ll get asked about this in SOC2 and security questionnaires). The policy “anyone can make a change if they find a reviewer to approve” might not float.

Mandating ticket IDs can be really annoying though, for example if I want to make a few-line no-op refactor to clean up some code, I don’t want to have to create a ticket. So it’s definitely a tradeoff.


It's incredibly easy to simply add the ticket number to branch names (or commits if you're rebasing your branches when you merge). Many times the ticket isn't helpful, but "just make the commit/branch self-explanatory" ignores the fact that non-developers - product folks, designers, etc. - are far more likely to leave useful context in the ticket itself.


Yes, exactly. I treat my commit history like I'm preparing for a detective to investigate years down the line :)

Great commit messages are just one part of the timeline. Ensuring there's a link between the commits/PR and the ticket lets the future detective get more information (frequently, that's "why did we do this").


When there is a ticket, yes, adding the number is trivial and probably worth it. Where the rule breaks down, in my opinion, is when you have to create a ticket and replicate the commit message in it any time you stumble over a code maintainability issue and fix it in stride.


This is the practice we use - link to a ticket if it exists.

There's nothing wrong with fixing a bug you discover upon inspection. Forcing tickets for every change discourages code maintenance.


This is also where automated code quality checks can shine. You can say something like, if it's more than 5 or 10 lines we expect a ticket ID, otherwise we don't.
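A rough sketch of how such a rule could be enforced in CI; the 10-line threshold and the JIRA-style ticket pattern below are assumptions, not anything prescribed:

  # Hypothetical check: changes larger than 10 lines must reference a ticket ID.
  import re
  import subprocess
  import sys

  msg = subprocess.run(["git", "log", "-1", "--pretty=%B"],
                       capture_output=True, text=True).stdout
  stat = subprocess.run(["git", "diff", "--shortstat", "HEAD~1..HEAD"],
                        capture_output=True, text=True).stdout
  lines_changed = sum(int(n) for n in re.findall(r"(\d+) (?:insertion|deletion)", stat))

  if lines_changed > 10 and not re.search(r"\b[A-Z]+-\d+\b", msg):
      sys.exit("change exceeds 10 lines but no ticket ID found in the commit message")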

See DeGrandis's "Making Work Visible": the hazard of "link to a ticket if it exists" is that sometimes this causes us to treat tickets as an external process, but we absolutely want to surface tickets that track our cleanup of tech debt and any other procedural tasks that we are stuck doing.

If automated tooling sounds nice but you are at the sort of place where "there are three priority levels: hot, Red Hot, and DO IT NOW," this can also be part of a pull request template. One extremely effective template we used at a previous job just had a quick checklist, and you'd check off, for example, "I tried this code on my dev cluster." Caught a lot of "whoops, I am moving too fast and need to slow down a bit" issues, hah!


> This is one of the things that sound 'good' on the surface and are totally worthless in practice (especially when practicing minimal commits).

I feel like I disagree here.

Turning on Git Annotations in any JetBrains IDE and seeing who made changes, when, and which Jira issue necessitated those changes, right there, is pretty useful. All of a sudden, you can easily say: "This method has been changed in 5 different commits in the last 3 years; what I probably need to read before making my own changes are issues ABC-123, ABC-236, ABC-455, ABC-855 and ABC-1233, to understand the business context and historical limitations behind all of this."

> Just use the commit message and in pr review enforce commit message norms.

I fully agree with this, but unless you squash your commits or rebase, finding out which MR/PR a particular set of changes belongs to isn't always immediately doable, especially when looking at things locally instead of the GitLab/GitHub UI.

Furthermore, the barrier of entry is higher: I always describe my changes and add a bit of context in the merge request/pull request description, sometimes even with GIFs or MP4 video demonstrations of what happens as a consequence of them. And yet, the majority of other developers have no desire to do that - I've regularly seen people leave those empty, and even commit messages are sometimes like: "try fix" instead of "make Ansible ensure that file permissions are set to have the newly created directory be readable by the group that will run the app".

In contrast, everyone is capable of adding a simple identifier to the issue management system and there are very few arguments that they can make against adding a few characters to the beginning, as opposed to: "But adding descriptions would take a lot of time and slow down development, I don't think we really need those because you should be able to read the code and understand everything."

Sure, you can enforce it in a top down manner, but it can be like pulling teeth at times, so you might as well ensure that at least the lowest common denominator is in place, before trying to find better approaches.

Exceptions to this might be small changes that don't really correspond to an issue, then you have two choices:

  - create an issue yourself so the work doesn't become untraceable in the issue management/billing system you use
  - just leave a short text description without issue identifier


> ABC-855 and ABC-1233, to understand the business context and historical limitations

This should be what a commit message is. Commit messages should describe the rationale behind the change.

Ticket numbers are great for linking commits together. But they should not have to be relied on to find out why a change was made. Ticketing systems may come and go but commit messages persist.


> Ticket numbers are great for linking commits together. But they should not have to be relied on to find out why a change was made.

In theory I agree. In practice, I've yet to see it work out that way - there are always discussions with clients/other departments in the ticketing system which will never appear in the commit messages. References to logs, screenshots/animations of the issue and other things that give the full context to why that change matters.

That's also why I believe that code alone is not enough - you need code comments to explain not what the code does, but why it does it that way etc., it's just that ticketing systems can provide even more of the surrounding information which also doesn't fit either in commit messages, or in code comments.

> Ticketing systems may come and go but commit messages persist.

I'd say that everything comes and goes, but in most cases you won't have to worry too much, since most information will be kept around in some capacity, like migrating SVN revision messages to Git commit messages (though even that wasn't always quite possible, in the case of non-standard repository layouts, since most migration scripts broke) etc.

Though with how deeply integrated some companies out there are with solutions like Jira, I doubt they'll ever rid themselves of it.


Having one branch for every story in Jira has been incredibly helpful in teams I've been in.

You can check in git if work has actually begun, and you can get from an old commit to Jira by virtue of the merge commit.

I don't know how you'd do it in a non-monorepo setting.


I like this, but it violates one of my norms, haha.

Stories can last for a while, but long-lived feature branches have to be kept in sync with the main branch, otherwise they develop nasty merge conflicts and you are not deploying continuously.

I think the better solution is for your versioning and your issue tracker to be integrated: we still expect every developer to merge to mainline once or twice a day[1], keeping merge conflicts shallow... But they should mention what issue they are working on in the commit, and the integration dumps that information into the tracker automatically.

[1] This is not hard to make safe, though it makes a lot of devs skittish at first. See Three-Flow for a non-CD version, https://www.nomachetejuggling.com/2017/04/09/a-different-bra... as a stepping stone. One key thing is that code review needs to become much lighter than it is at many organizations, “I don't care how you did it as long as it can be turned off with a feature toggle and does not break prod or introduce security vulnerabilities when I merge it: those key things are what I am looking for.”
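As a sketch of what that versioning/tracker integration might boil down to (the tracker endpoint below is made up; substitute whatever API your own tracker exposes):

  # Hypothetical post-merge step: find ticket IDs in the merge commit and
  # post a note back to the issue tracker.
  import json
  import re
  import subprocess
  import urllib.request

  msg = subprocess.run(["git", "log", "-1", "--pretty=%B"],
                       capture_output=True, text=True).stdout
  for issue in sorted(set(re.findall(r"\b[A-Z]+-\d+\b", msg))):
      req = urllib.request.Request(
          f"https://tracker.example.com/api/issues/{issue}/comments",  # made-up endpoint
          data=json.dumps({"body": "Merged to mainline: " + msg.splitlines()[0]}).encode(),
          headers={"Content-Type": "application/json"},
          method="POST",
      )
      urllib.request.urlopen(req)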


> I don't care how you did it as long as it can be turned off with a feature toggle and does not break prod or introduce security vulnerabilities when I merge it

Does this not lead to poorly-written features that drag down future development yet cannot be turned off because they have grown important for the business?

At least that's my experience with not regulating at least a basic level of code quality in the review. What are you doing differently to prevent that?


There's a difference between "every story has a branch" and "you need to create a story for every commit", which is what's implied by TFA.


My previous team used to have ticket IDs on every commit but just decided to drop that requirement for commits that are truly self-contained and documented properly in the commit message.

That approach makes a lot of sense to me. Link the commit to the ticket when it enhances future reading, but not out of blind application of principle.


I disagree.

I have dealt with bugs that were thought to be fixed a long time ago, only for them to mysteriously pop up again later. Ticket is created and the old ticket is linked.

It's really helpful to see what was done 2 years ago, including the attempted fix that is now still live in the code base but apparently doesn't work properly.

That said, as always, it's not black or white. There are definitely cases where it doesn't make sense, but I don't think you should call it 'totally worthless'.


Not remotely true in my experience. I can't tell you how many times I've dug into the commit history to find where a change happened, then wanted to know the full context, and been glad I could just pull up the years-old ticket directly and see a detailed description of intent and requirements, discussions in the comments, mock-ups, etc etc


I've yet to work at a company that has managed to figure out a strategy to get the majority of devs to write clear and descriptive commit messages the majority of the time. At least if they include the ticket number I can look at the ticket to understand what they intended to do. Of course whether they actually did it or not is another matter.


> 9. Does your Infrastructure as Code live alongside the service it hosts?

That means minor changes to some test infrastructure have to go through the strict review process, because the gitlab-ci.yaml is part of the main codebase. Last time this happened to me I found it annoying, and I don't think the code quality guardians care about some CI config anyway.


I will be first in line to suggest not splitting repos unless required. It is my experience that deployment, admin and node provisioning (and everything that is generally put under the not-always-useful IaC umbrella term) is one of those exceptions. Often documentation is, too.

It all depends on what the release flow looks like. A useful rule of thumb for when components can share a repository is when their branch models are identical. If that's the case for the codified release and provisioning processes and the code that is shipped, then it is likely that you are not taking full advantage of what that deployment and provisioning code can do. If nothing else, test versions must at least deploy production code, and test code must deploy production versions.

Again, in my experience across multiple organizations, it is a good idea to keep that code both forwards and backwards compatible. It should be natural that it sometimes says things like "if build number greater than x then set parameter y". While it may feel a bit dirty to some, that type of logic is what deployment code does, and it is much more maintainable than keeping branches around for the same result.
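A toy example of that kind of version-aware deploy logic; the build number and parameter names are made up:

  # Hypothetical deploy helper: newer builds get the new cache backend,
  # older builds keep the old default, so one deploy script serves both.
  def deploy_params(build_number: int) -> dict:
      params = {"heap_size_mb": 512}
      if build_number > 1400:  # made-up build that introduced the new backend
          params["cache_backend"] = "redis"
      return params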

If I get to say one thing to new devops/release engineer type people, that's probably it. That, and the value of clear and concise commit messages. Which goes double for this type of code.


Changes to CI config for release automation are probably vetted and manually tested more than anything else in the projects I work on.


7. 7! This is by far the one that resonates with me the most. But for some reason SREs think the opposite is true: the longer it takes to get to prod, the better. What a pain.


I'm an SRE and currently on a mission to get code into prod as fast and safely as possible. Instead of painting such a broad, generic negative picture, maybe articulate an actual position to argue against.


You are right -- I was referring specifically to SREs in FAANG, who enforce a mandate of at most 1 deploy to prod per day -- in some cases even 1 per week, or less.

To be honest, I think there is a threshold, perhaps expressed in terms of risk of losing money, at which it is more beneficial to delay deploys.

But applying the same deployment rules to a multi-billion-dollar revenue machine and to a smaller project with perhaps a few thousand weekly users is ridiculous.


Thanks for the clarification, I can see how that might be frustrating.


This feels like yet more cargo-culting.

Even the first item on this list is hotly contested, with Facebook/Microsoft/Google and Co. using centralised mono-repos, Google itself using a server model similar to Perforce called Piper.

To be clear here, I'm not saying they're right, but it feels unscientific to make a blanket statement that they're wrong.


You can easily have a monorepo with a distributed version control system.


Yes. But I’m referring to the article:

> 1. Do you use a distributed version control system?

Edit: the parent originally said “distributed build system”


I'm not sure what you're getting at here. Mono-repo is an entirely different concept than centralized (eg: SVN) vs distributed (eg: git).

I'd also seriously question an org not using git, specifically, despite claiming "cargo cult". Literally every developer knows it, it's got a huge choice of tooling, and every service/tool (CI/CD, issue tracker, etc) has an integration. IMHO, you better have a damn good reason to be on something else to justify the headaches involved in not being on git.


> I'd also seriously question an org not using git

That's a very damning statement. Many places use perforce, for example.

Git doesn't handle binaries well, and frankly git struggles to scale. Try checking out a _large_ git repo and running some common operations on it.


Even Microsoft manages Windows using Git today


> Microsoft developed the Git Virtual File System to be able to get the benefits of using Git without having to wait hours for even the simplest of Git commands to run.

https://techcrunch.com/2017/05/24/microsoft-now-uses-git-and...

Sounds like it's git but begrudgingly.


Just because it's a monorepo doesn't mean it's centralized?


Yes. It’s the accompanying suggestion of using github or gitlab that means it’s centralised.


Read this as “Do you use git?” This is opposed to svn, cvs, something home-grown, or passing thumb drives back and forth among developers.


Hg, svn, perforce, plastic are all feasible alternatives to git.


I'd like to see a workflow that meets all the criteria. I was surprised to see the item about deploying directly from your own machine; that seems to contradict the other goals which point to automated pipelines that deploy on merge.


I feel like a lot of people misread that point. It's not about deploying to production (that sounds nuts!). It's about spinning up and deploying to a full dev environment. This is in part to not have to run everything locally, but mainly to exercise deploy automation regularly.


This is a very common pattern, especially with the CDK. Having a locally deployed stack makes development in the cloud so much easier.
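For example, a minimal per-developer stack in Python with CDK v2 (the stack body is left empty on purpose); running cdk deploy against it from a laptop gives each developer their own sandbox without touching shared environments:

  # Minimal per-developer stack; real construct definitions would go inside DevStack.
  import getpass

  from aws_cdk import App, Stack
  from constructs import Construct

  class DevStack(Stack):
      def __init__(self, scope: Construct, id: str, **kwargs) -> None:
          super().__init__(scope, id, **kwargs)
          # add the same constructs your real environments use here

  app = App()
  DevStack(app, f"dev-{getpass.getuser()}")  # e.g. "dev-alice", one sandbox per developer
  app.synth()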



