
I think that modern CI systems are actually too simple. They all boil down to "get me a Linux box and run a shell script". You can do anything with that, and there are a million different ways to do everything you could possibly want. But it's easy to implement, and every feature request can be answered with "oh, well just apt-get install foobarbaz3 and run quuxblob to do that."

A "too complex" system, would deeply integrate with every part of your application, the build system, the test runner, the dependencies would all be aware of CI and integrate. That system is what people actually want ("run these browser tests against my Go backend and Postgres database, and if they pass, send the exact binaries that passed the tests to production"), but have to cobble together with shell-scripts, third-party addons, blood, sweat, and tears.

I think we're still in the dark ages, which is where the pain comes from.




Pretty much.

Docker in my experience is the same way: people see Docker as the new hotness, then treat it like a Linux box with a shell script (though at least with the benefit you can shoot it in the head).

One of the other teams had an issue with reproducibility on something they were doing, so I suggested that they use a multi-stage build in Docker and export the result as an artefact they could deploy (rough sketch below). They looked at me like I’d grown a second head, yet they have been using Docker twice as long as me, though I’ve been using Linux for longer than all of them combined.

It’s a strange way to solve problems all around when you think about what it’s actually doing.
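For reference, the kind of thing I was suggesting looks roughly like this; the stage names, project layout, and Go build are all made up for illustration, and it assumes BuildKit is available:

    # Hypothetical two-stage Dockerfile, written inline for the example
    cat > Dockerfile <<'EOF'
    FROM golang:1.21 AS build
    WORKDIR /src
    COPY . .
    RUN go build -o /out/app ./cmd/app

    FROM scratch AS artifact
    COPY --from=build /out/app /app
    EOF

    # Export the final stage to a local directory as a deployable artefact,
    # instead of leaving it buried inside an image
    DOCKER_BUILDKIT=1 docker build --target artifact --output type=local,dest=./dist .

The build environment is pinned in the first stage, so the exported artefact doesn’t depend on whatever happens to be installed on the host.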

Also feels like people adopt tools and cobble shit together from google/SO; what happened to RTFM?

If I’m going to use a technology I haven’t used before, the first thing I do is go read the actual documentation. I won’t understand it all on the first pass, but it gives me an "index"/outline I can use when I do run into problems. If I’m looking at adopting a technology, I google "the problem with foobar", not success stories; I want to know the warts, not the gloss.

It’s the same with books. I’d say 3/4 of the devs I work with don’t buy programming books, like at all.

It’s all cobbled-together knowledge from blog posts. That’s fine, but a cohesive book with a good editor is nearly always going to give you a better understanding than piecemeal bits from around the net. That’s not to say specific blog posts aren’t useful, but the return on investment on a book is higher (for me; for the youngsters, they might do better learning from TikTok, I don’t know..).


I personally love it. RTFM is pretty much the basis of my career. I always at minimum skim the documentation (the entire doc) so I have an index of where to look. It's a great jumping-off point if you do need to google anything.

Books are the same. When learning a new language for example, I get a book that covers the language itself (best practices, common design patterns, etc), not how to write a for loop. It seems to be an incredibly effective way to learn. Most importantly, it cuts down on reading the same information regurgitated over and over across multiple blogs.


lol... yeah.... I've become "the expert" on so much shit just because of RTFM :D

It's amazing how much stuff is spelt out in manuals that nobody bothers to read.

The only issue is that so few people RTFM that some manuals are pure garbage to try and glean anything useful from. In those cases, the best route is often to just read the implementation (though that is tedious).


I have someone in my network who is very active in the PHP scene. Tutorials, tips & tricks, code reviews, you name it. He pretty much abandoned his very popular website and went all in on YouTube. Why? Apparently watching a video is so much easier than reading a 3000-word article.


>He pretty much abandoned his very popular website and went all in on YouTube. Why? Apparently watching a video is so much easier than reading a 3000-word article.

Learning from videos is lazy? Content you've learned is only valid if you read it? I'm not sure what exactly you're getting at, but I'm a visual learner and I much prefer (well-made) videos over text. There's a visual and audio aspect to it, enabling so much more information to be conveyed at once. For example, Dr. Sedgewick's video series on algorithms and data structures has animations as he steps through how algorithms work. He's overlaying his audio while displaying code and animating the "data" as it's processed by the algorithm. I have a physical copy of his book Algorithms, but I go back to his videos when I need a refresher. https://www.youtube.com/watch?v=Wme8SDUaBx8


I do have a prejudice against people who tell me they learn something through a video. There are good video creators and good educational videos, but there’s just too much trash.

Besides that, I default to text content because of the monetary incentives on YouTube; it’s absurd that a 10-minute read can be a more than half-hour video that never gets to the point.


>I do have a prejudice against people who tell me they learn something through a video. There are good video creators and good educational videos, but there’s just too much trash.

Bad content has absolutely nothing to do with the format. What a ridiculous statement. As if there aren't thousands of even worse articles, blog posts, and incomplete/outdated documentation for every bad video. Does W3Schools ring a bell?

>it’s absurd that a 10-minute read can be a more than half-hour video that never gets to the point

It's absurd you're blaming bad content on the format, which is also ironic because you missed where I explicitly wrote in text that I prefer well made videos. Do you expect every single piece of text you come across to be a concise and up to date source of truth?


It depends on what you're learning. Try to learn to chop veggies and a video is superior to text. Try to learn all the inputs to a function, return values and thrown errors and a video is far inferior to text. The crossover point probably varies from person to person, and my guess is snobbery about videos is based around the personal differences in how a person learns and where what they already know or what they learn is on that spectrum.

There's also a wide (wider than text?) difference in video quality. How many videos out there are just badly narrated versions of sources available in text format?


There are pros and cons to each medium; I'm not arguing against it. What I mean is that there's a general tendency to migrate from longer formats (books) to more compact, bite-sized content (videos). It's one of the reasons there are countless videos on YouTube on how to create a dictionary in Python, even though the documentation covers it wide and deep.


I agree that something like a Python dictionary probably doesn't need a standalone video, but simple concepts like that also wouldn't be a 3000-word article unless you're looking for literally every nuance and weird quirk the language has to offer.


like an explanation of what 'this' is in JS:)

https://www.amazon.co.uk/You-Dont-Know-JS-Prototypes


Haha, I'm genuinely intrigued to see the contents of that book. It could certainly be copy/paste from the Mozilla docs, but who knows! I was pleasantly surprised by this video about the JS event loop, although in fairness it is a lecture at a conference rather than a made for youtube video. Regardless, the length of the video made me question how necessary it was but I ended up watching the whole thing and enjoyed it. https://www.youtube.com/watch?v=8aGhZQkoFbQ


>Also feels like people adopt tools and cobble shit together from google/SO; what happened to RTFM?

Sometimes it's easier to google because TFM is written by people who are intricately familiar with the tool and forget what it's like to be unfamiliar with it.

Look at git for example; the docs are atrocious. Here's the first line under DESCRIPTION for "man git-push":

>Updates remote refs using local refs, while sending objects necessary to complete the given refs.

Not a single bit of explanation as to what the fuck a "ref" is, much less the difference between remote and local refs. If you didn't have someone to explain it to you, this man page would be 100% useless.


I think that reading the man page for git-push is not reasonable if you don't understand git first.

That being the case, the first thing you need to read is the main git man page. At the bottom of it (sadly) you find references to the gitrevisions and gitglossary man pages. Those should provide enough information and examples to understand what a ref is, though even those could probably be better.
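A minimal trail for the curious (the branch name here is just an example):

    man gitglossary    # defines "ref", "refspec", "remote-tracking branch", and so on
    man gitrevisions   # how names like HEAD~2 or origin/main resolve to commits

    # A ref is just a name pointing at a commit; push copies local refs to remote refs.
    # This is (roughly) the long form of "git push origin main":
    git push origin refs/heads/main:refs/heads/main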

I'm in full agreement that this is terribly undiscoverable, but if you really want to RTFM, you mustn't stop at just the first page.


> treat it like a Linux box with a shell script (though at least with the benefit you can shoot it in the head)

To be fair, that by itself is a game-changer, even if it doesn't take full advantage of Docker.


Any CI books you can recommend? I have been completely treating it as a linux box with a shell script and have cobbled together all of my shit from google/SO.


Every programming book I've ever bought has been way outdated to the point of feeling kind of useless.

I mean yeah there's a lot of algorithms and other fundamental concepts you can learn from books, but specific tooling? I'm not sure.


>Also feels like people adopt tools and cobble shit together from google/SO; what happened to RTFM?

I find that documentation has gotten worse and harder to find. It used to be trivial to find and long to read. Now, between SO and the SEO of pages designed to bring people in, it's 95% examples and not enough "here's all the data about how x works". It's very easy to get a thing going, but very hard to understand all the things that can be done.


> Docker in my experience is the same way: people see Docker as the new hotness, then treat it like a Linux box with a shell script (though at least with the benefit you can shoot it in the head).

"Adapting old programs to fit new machines usually means adapting new machines to behave like old ones." —Alan Perlis, Epigrams in Programming (1982)


I think there is a difference between taking a technology, framing it in your own mindset, and drawing parallels, versus diving into a presented story full of unicorns. Unfortunately, the second scenario is often what the business wants: the technology is mostly the same, but the story is specific to the company.


I am very similar. I read through the documentation the first time to get a ‘lay of the land’, so I can deep-dive into the various sections as I require.


This is something that I love about using Bazel. It allows you to do this. Bazel is aware of application-level concepts: libraries, binaries, and everything that glues them together. It has a simple way to describe a "test" (something that's run, whose exit code determines pass/fail) and how to link/build infinitely complex programs. Do you build a game engine and need to take huge multi-GB asset folders and compile them into an efficient format for your game to ship with? You can use a genrule to represent this, and now you, your CI, and everyone on your team will always have up-to-date copies of it without needing to worry about "Bob, did you run the repack script again?"

It also provides a very simple contract to your CI runners. Everything has a "target" which is a name that identifies it.
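As a sketch of what that contract looks like from the runner's side (the labels below are made-up examples, not from any real repo):

    bazel build //game/assets:packed              # genrule that repacks the raw assets; only re-runs when its inputs change
    bazel test //server/... //web:browser_tests   # each test target's exit code decides pass/fail
    bazel build //server:release_binaries         # the exact binaries the tests ran against

The runner doesn't need to know what any of these do internally; it just asks for targets and gets cached results when nothing has changed.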

A great talk about some things that are possible: https://youtu.be/muvU1DYrY0w?t=459

At a previous company I got our entire build/test CI (without code coverage) from ~15 minutes down to ~30-60 seconds for ~40 _binary and ~50 _test targets (~100 to ~500 unit tests).


I agree with you in principle, but I have learned to accept that this only works for 80% of the functionality. Maybe this works for a simple Diablo or NodeJS project, but in any large production system there is a gray area of “messy shit” you need, and a CI system being able to cater to these problems is a good thing.

Dockerizing things is a step in the right direction, at least from the perspective of reproducibility, but what if you are targeting many different OSes / architectures? At QuasarDB we target Windows, Linux, FreeBSD and OSX, and all of that on ARM as well. Then we need to be able to set up and tear down whole clusters of instances, reproduce certain scenarios, and whatnot.

You can make this stuff easier by writing a lot of supporting code to manage this, including shell scripts, but to make it an integrated part of CI? I think not.


I'm curious, what's a Diablo project? I've never heard of such a technology, unless you're speaking of the game with the same name.

Did you possibly mean Django ?


Argh, it was indeed Django. I was on mobile and it must have been autocorrected.


While it comes up, I think it's more of a rare problem. So much stuff is "x86 Linux", or in rarer cases "ARM Linux", that it doesn't often make sense to have a cross-platform CI system.

Obviously a db is a counter example. So is node or a compiler.

But at least from my experience, a huge number of apps are simply REST/CRUD targeting a homogeneous architecture.


Unless we're talking proprietary software deployed to only one environment, or something really trivial, it's still totally worth testing other environments / architectures.

You'll find dependency compilation issues, path case issues, reserved name usage, assumptions about filesystem layout, etc. which break the code outside of Linux x86.


You're vehemently agreeing with the author, as far as I can see. The example you described is exactly what you could do/automate with the "10 years skip ahead" part at the end. You can already do it today locally with Bazel if you're lucky to have all your dependencies usable there.


I dunno... it seems important to me that CI is as dumb as it can be, because that way it can do anything. If you want the test mechanism you just described, that shouldn't be a property of the CI service; that should be part of the test framework (or the testing functionality of the app framework) you use. The tragedy of this super-thick service is that CI services suddenly start using your thick integration with your entire development stack as lock-in: right now they are almost fully swappable commodity products... the way it should be.


Hard agree. I've been using GitLab CI/CD for a long time now. I almost want to say it's been around longer than, or as long as, Docker?

It has a weird duality: it runs jobs as Docker images, but it doesn't really understand how to use container images IN the process. Why don't volumes just map 1:1 to artifacts and caching? Why not always cache image layers to make things super fast, etc.?


"I almost want to say it's been around longer or as long as docker?"

I had to look it up but GitLab CI has been around longer than Docker! Docker was released as open-source in March 2013. GitLab CI was first released in 2012.


I don't know if I'd say they're too simple. I think they're too simple in some ways and too complex in others. For me, a ton of unnecessary complexity comes from isolating per build step rather than per pipeline, especially when you're trying to build containers.

Compare a GitLab CI build with Gradle. In Gradle, you declare inputs and outputs for each task (step), and they chain together seamlessly. You can write a task that has a very specific role, and you don't find yourself fighting the build system to deal with the inputs / outputs you need. For containers, an image is the output of `docker build` and the input for `docker tag`, etc. Replicating this should be the absolute minimum for a CI system to be considered usable, IMO.

If you want a more concrete example, look at building a Docker container on your local machine vs a CI system. If you do it on your local machine using the Docker daemon, you'll do something like this:

- docker build (creates image as output)

- docker tag (uses image as input)

- docker push (uses image/tag as input)

What do you get when you try to put that into modern CI?

- build-tag-push

Everything gets dumped into a single step because the build systems are (IMO) designed wrong, at least for anyone who wants to build containers. They should be isolated, or at least give you the option to be isolated, per pipeline, not per build step.

For building containers it's much easier, at least for me, to work with the concept of having a dedicated Docker daemon for an entire pipeline. Drone is flexible enough to mock something like that out. I did it a while back [1] and really, really liked it compared to anything else I've seen.

The biggest appeal was that it allows much better local iteration. I had the option of:

- Use `docker build` like normal for quick iteration when updating a Dockerfile. This takes advantage of all local caching and is very simple to get started with.

- Use `drone exec --env .drone-local.env ...` to run the whole Drone pipeline, but bound (proxied actually) to the local Docker daemon. This also takes advantage of local Docker caches and is very quick while being a good approximation of the build server.

- Use `drone exec` to run the whole Drone pipeline locally, but using docker-in-docker. This is slower and has no caching, but is virtually identical to the build that will run on the CI runner.
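Condensed into commands (the image name is just an example, and the drone flags are exactly as above, so check them against your drone-cli version):

    docker build -t myapp:dev .          # fastest loop, full local layer cache
    drone exec --env .drone-local.env    # whole pipeline, proxied to the local Docker daemon
    drone exec                           # whole pipeline in docker-in-docker, closest to the real runner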

That's not an officially supported method of building containers, so don't use it, but I like it more than trying to jam build-tag-push into a single step. Plus I don't have to push a bunch of broken Dockerfile changes to the CI runner as I'm developing / debugging.

I guess the biggest thing that shocks me with modern CI is people's willingness to push/pull images to/from registries during the build process. You can literally wait 5 minutes for a build that would take 15 seconds locally. It's crazy.

1. https://discourse.drone.io/t/use-buildx-for-native-docker-bu...




