Hacker News

> Everyone hates YAML. Everyone writes a lot of YAML.

I don’t know a single engineer (ops or not) who enjoys writing YAML, yet it is utterly unavoidable.

I’ve lost count of the number of bugs and broken deploys that have happened because of YAML, or because of a type error it caused.




What people really hate is having to write complicated configuration. No matter what format you put it in, it is still complicated configuration. It's hard if not impossible to test, there's only one right way to do it, and it is likely interconnected with other complicated configuration.

Whatever format it happens to be written in (YAML apparently being the trendy way now), that format is guilty by association.


Pure YAML will never DRY.

https://en.wikipedia.org/wiki/Don%27t_repeat_yourself

If you write procedural configurations in a real Turing-complete, but not Turing-tarpit, language like Python or JavaScript, then you don't need to repeat yourself a ridiculous number of times, or manage thousands of lines of hand-written, eyesore, almost-but-not-quite-entirely-unlike-tea YAML. Plus you can implement validation, warnings, and error checking, and support multiple input and output formats.

But so many DevOps people dogmatically go out of their way to avoid writing any code, even when the alternative means writing a hell of a lot more brittle, unmaintainable YAML, where you have to carefully check every line for correctness, make sure you're actually repeating the same thing in every required location without any typos, and not let any of those repetitions slip through the cracks when you make changes.

With real Turing-complete code, macros, and templates, you can accomplish the same task as with pure un-DRY YAML data much more easily and efficiently, with orders of magnitude fewer lines of code that you can actually understand, maintain, validate, and test, so you can be sure it actually works without meticulously checking every line of output by hand.
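A minimal sketch of the idea (service names and registry are made up). Since every JSON document is also valid YAML 1.2, the standard library alone is enough to emit output that YAML-consuming tools will accept:

```python
import json

# Hypothetical example: three near-identical services that would each need
# a hand-copied YAML block. Generate them from one template instead.
def service(name, port):
    return {
        "name": name,
        "image": f"registry.example.com/{name}:latest",  # made-up registry
        "ports": [{"containerPort": port}],
        "env": [{"name": "LOG_LEVEL", "value": "info"}],
    }

services = [service(n, p)
            for n, p in [("api", 8080), ("worker", 8081), ("cron", 8082)]]

# JSON is a subset of YAML 1.2, so this output needs no third-party library.
print(json.dumps({"services": services}, indent=2))
```

Change the port scheme or the registry once, in one place, and every generated block stays consistent; there is no repetition to drift out of sync.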

It's better to combine different non-Turing-complete data formats like CSV, YAML, JSON, INI, and XML, using the most appropriate format for each type of data. Spreadsheets are much better than JSON or YAML or XML for many kinds of data with repeating structure (so you don't repeat key names); JSON and YAML are much better for irregular tree-shaped data (since you're not restricted to rigid structures); and XML works for tree-structured data and documents including text.

You end up needing different formats as input as well as output. So you need a real Turing Complete language to read them all in, combine and validate them, and shit out the various other formats required by your tools and environment.


Almost every piece of software running on Linux or Unix requires some kind of basic configuration file. Whether it's INI or YAML. You can't just 'code all the things'.

If you want an example of a dynamic configuration, look at RPM's .spec. That's a monstrosity. What you're asking for is more of that, and that's insane.

You could also use something like Python to do all your typing, build a dictionary, and just dump it to a yaml file if you think writing yaml by hand is too error prone (which I personally disagree with).


I think RPM .spec files are sunk way deep into the "Turing Tarpit" area I was referring to.

https://en.wikipedia.org/wiki/Turing_tarpit

Any time you take a language like bash that's ALREADY deeply muddled in the Turing Tarpit, and then try to make up for its weaknesses by wrapping it up in yet another arbitrary syntax that you just pulled out of your ass like .spec, you're even worse off than you were when you started. Why pick a terrible language like bash that's no good for reading and writing and manipulating structured data formats like CSV, JSON, YAML, INI, or XML, and then try to "fix" it, when there are so many much better well supported off-the-shelf alternatives that don't require re-inventing the square wheel, like Python, JavaScript, or Lua?


I'm sure when RPM first came about, it was probably less of a monstrosity. But like most things in the software world, people start bolting on new features, and you have a big mess.

For most programs though, I would prefer a YAML config file. It's easy to serialize/deserialize for many languages, and you can adjust your init scripts / systemd units to spit out a new config on startup if you so choose. Or you can use something like ansible and some templates to generate that config once when you deploy your application (we're all using immutable infrastructure now, right?), although trying to template YAML files in jinja2 is a real PITA; I'd probably just write an application specific ansible module to dump out my config and skip the yaml jinja template part.

That's the really nice thing about ansible, you can make it do all sorts of interesting stuff.


https://dhall-lang.org/ No tarpit here (no Turing Completeness or recursion), and as DRY as you want.


> But so many DevOps go out of their way to avoid writing code

Well, this is the real problem: DevOps people should be writing code, in my opinion, especially code to automate deployments and handle configuration. But many times it's just a new job label for people with the same ol' ops and sysadmin skillset who don't want to write code.

It doesn't help that when they make unmaintainable piles of configuration that nobody understands, it typically adds to their job security.


I totally agree!

There are good DevOps and bad DevOps. Personally, I'm a Dev who necessarily knows how to do Ops, because nobody else is there to put out the fires and wipe my ass for me. Good DevOps should not have such disdain for writing code, and should not be so territorial and focused on job security, and should work more closely with developers and code.

And good developers should understand operations, and shouldn't be so helpless and ignorant when it comes to deploying and maintaining systems themselves, so they can design their code to be easily configurable and dovetail into operational systems.

For the same reasons, it's also important for programmers developing tools and content pipelines for use by artists to understand how the art tools and artists work, and how to use them (enough to create placeholder programmer art to test functionality), even if they don't have any artistic ability.

And for artists and designers to have some understanding of how computers and programming works, and how to use spreadsheets and outliners and databases to understand and create specifications and configurations, so they don't design things that are impossible or extremely inefficient to implement, and make intractable demands of computers and programmers.

https://en.wikipedia.org/wiki/Programmer_art


I'm with you on that. I come from a background of Dev and have been called DevOps (by others), although I just call myself a problem solver.

The realization that I really needed to understand what happens in operations for me came around '09, when the Xbox Operations Center called me and told me my code wasn't working, and we had such a wall between us that I couldn't see what was going on, and they couldn't describe it either.

I ended up writing automated publishing pipelines for them, taking the riskiest parts of their dozens-of-pages Word doc and writing tools to do those steps automatically. Most people didn't even think this was a thing that could be done, let alone should be done. Problem solved!

I think people who are territorial are inherently insecure in their skills and therefore fear getting out of their comfort zone. Generalists are far better than specialists in my opinion. You want someone to go where the problems are, rather than people who invent new problems for others in their own little empire. I think a lot of big companies are so big they can have people silo'd all day, so people don't even think about the people and systems they are affecting.


I've used a lot of JSON. It's OK. No comments, not many data types. But the spec is only a couple pages, and I've never been in doubt about how something should be escaped, or parsed. I could probably write a bare-bones parser in an afternoon, if I needed to.

I've tried to work with YAML a few times. The tree structure and extra data types are great. Everything else is a huge pain. There's at least 3 versions of the spec, and the latest one is nearly 100 pages. The parts I need are always in some "extension", so there's even more that I need to support. It has a system for serializing native Objects, so you have to be careful with untrusted data because there are some interesting security issues. It's so complex, I have trouble knowing what to quote, or how. It's not feasible to write your own parser in any reasonable amount of time. Worst of all, every parsing library is slightly different, so (not unlike SOAP) you kind of have to know that it's going to be parsed with (say) PyYAML.

Complicated configuration is indeed a problem in any format, but YAML makes even simple things complex. From the beginning, I really wanted to like YAML. Unfortunately, I think their goals (human-readable text, language-agnostic, rich data types, efficient, extensible, easy to implement, easy to use) are impossible. You simply can't achieve all of them at once.


I'm launching a CI service [0] which instead of using YAML configs to run builds on a third party platform, will let you run the builds yourself on your own machines so you can just use a script or whatever you want to do your builds/deploys.

I share your frustration and was motivated by it to build this. Why should I spend ages writing up everything as config files when I have a script that already works, is easy to change and debug, and can handle any custom thing I need?

I think config files to describe devops processes are a good approach for huge companies with huge teams, lots of churn etc. The approach perhaps has simplicity & stability benefits - works for everyone everywhere without understanding any detail, changes are a bit easier to track, etc. But for small teams wanting control, speed and the flexibility of just writing code to do what you want it can often be an inefficient approach. At least in my experience.

You should check Box CI out. Launching very soon!

[0] https://boxci.dev


True, though YAML is the most "human readable/writable" of the usual suspects (YAML/JSON/XML)


I'd say that spreadsheets are vastly more readable/writable/editable/maintainable than YAML or JSON or XML (i.e. no punctuation and quoting nightmares), and they're easy to learn and use, so orders of magnitude more people know how to use them proficiently, plus tools to edit spreadsheets are free and widely available (i.e. Google Sheets), and they support real time multi user collaboration, version control, commenting, formatting, formulas, scripting, import/export, etc. They're much more compact for repetitive data, but they can also handle unstructured and tree structured data, too.

To illustrate that, here's something I developed and wrote about a while ago, and have used regularly with great success to collaborate with non-technical people who are comfortable with spreadsheets (but whose heads would explode if I asked them to read or write JSON, YAML or XML):

Representing and Editing JSON with Spreadsheets

I’ve been developing a convenient way of representing and editing JSON in spreadsheets, that I’m very happy with, and would love to share!

https://medium.com/@donhopkins/representing-and-editing-json...

Here is the question I’m trying to answer:

How can you conveniently and compactly represent, view and edit JSON in spreadsheets, using the grid instead of so much punctuation?

My goal is to be able to easily edit JSON data in any spreadsheet, conveniently copy and paste grids of JSON around as TSV files (the format that Google Sheets puts on your clipboard), and efficiently export and import those spreadsheets as JSON.

So I’ve come up with a simple format and convenient conventions for representing and editing JSON in spreadsheets, without any sigils, tabs, quoting, escaping or trailing comma problems, but with comments, rich formatting, formulas, and leveraging the full power of the spreadsheet.

It’s especially powerful with Google Sheets, since it can run JavaScript code to export, import and validate JSON, provide colorized syntax highlighting, error feedback, interactive wizard dialogs, and integrations with other services. Then other apps and services can easily retrieve those live spreadsheets as TSV files, which are super-easy to parse into 2D arrays of strings to convert to JSON.
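A simplified illustration of the idea (not the article's full format, which also handles nesting and typed columns): treat the first spreadsheet row as keys and each following row as one JSON object, using the TSV that Google Sheets puts on the clipboard:

```python
import csv, io, json

# Hypothetical TSV grid, as copied from a spreadsheet: header row first,
# then one object per row.
tsv = "name\treplicas\timage\napi\t2\tregistry/api\nworker\t3\tregistry/worker\n"

rows = list(csv.reader(io.StringIO(tsv), delimiter="\t"))
header, body = rows[0], rows[1:]

# Zip each data row against the header to build plain dicts, then emit JSON.
objects = [dict(zip(header, row)) for row in body]
print(json.dumps(objects, indent=2))
```

Note that everything arrives as strings; the real format needs declared types (or per-column conversion) on top of this to round-trip numbers and booleans.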


That is super cool, please don't overcomplicate it with utility features. I have been considering a project to manage a Kubernetes cluster via Google spreadsheet. Google Docs has great features relating to user authentication and permissions. The project would need to visualize the JSON state representation for the k8s cluster... your project is ideal.

e.g. calling another google service with the JSON using a token minted BY THE USER CURRENTLY USING THE SHEET


Thanks for the encouragement! I agree, I'd like to keep it from becoming complicated. My hope is to keep it simple and refine it into a clean well defined core syntax that's easy to implement in any language, with an optional extension mechanism (for defining new types and layouts), without falling into the trap of markdown's or yaml's almost-the-same-but-slightly-different dialects. (I wrote more about that at the end of the article, if you made it that far.)

The spreadsheet itself brings a lot of power to the table. (Pun not intended!)

There are some cool things you can do using spreadsheet expressions, like make random values that change every time you download the CSV sheet, which is great for testing. But expressions have their limitations: they can't add new rows and columns and structures, for example. However, named ranges are useful for pointing to data elsewhere in other sheets, and you can easily change their number of rows and columns.

For convenience and expressivity, I've defined ways of including other named sheets and named ranges by reference, and 2d arrays of uniformly typed values, and also define compact tables of identical nested JSON object/array structures by using declarative headers (one object per row, which I described in the article, but it's not so simple, and needs more examples and documentation).


yeah my eyebrows are fairly raised at the thought of embedding a templating language in it. For production use of a spreadsheet, I imagine pulling the source code out of the spreadsheet using https://github.com/google/clasp and synchronising with a repository using Terraform.

At which point, Terraform has a weak templating engine already, but it's generally enough for building reusable infra. Additional features can be provided within the spreadsheet using reusable libraries. One pain point with embedding functional data processing in a spreadsheet for JSON data is a decent way of writing tree expressions, for which I would turn to the de facto JSON tooling, jq, for inspiration.

if you want to take this further, I am up for building some infra for continuous deployment of spreadsheets through Terraform. tom <dot> larkworthy <at> futurice.com

But I would not embed stuff inline with the JSON. I would have a pure sheet dedicated to stuff going in, and a compute sheet for stuff going out. And the definition for stuff going out should basically be a jq expression that can "shell out" to sheets expressions https://github.com/sloanlance/jq/issues/1


TOML is a worthy contender. It is my favorite simple but powerful-enough data language.

Here is a side-by-side comparison of data in TOML versus YAML: https://gist.github.com/oconnor663/9aeb4ed56394cb013a20

And some comments that resonate with me:

  The yaml spec is overly complex and parsing it properly
  is a nightmare. I rather prefer TOML because of its
  simplicity. Unless one really needs the gazillion extra
  features which yaml provides (which one probably doesn't),
  I'd say sticking with TOML seems to be the saner choice.

  I've recently kind of changed my mind on unquoted strings.
  They're nice when you're editing config files by hand, but
  they run into parsing issues in simple cases like when the
  string looks like an int, or of course when the string
  contains quotation marks itself.


I disagree. Yes, it is human readable if you just want to read the words (like you would with Markdown), but with a configuration file you want to understand the structure. YAML makes that quite confusing IMO. It seems like a random array of dashes and indentation.

JSON is much more human readable in that respect because the structure is explicit so there's no ambiguity. I'd say TOML is somewhere in-between. But both are vastly preferable to YAML because they don't have Javascript-style type insanity.


I’ve yet to see an editor that works well with YAML, whereas JSON and XML are simple. Add comments and trailing commas to JSON and it would be perfect to write. And XML isn’t that evil, really.


To this day I am baffled why people choose YAML. Personally, I think it's more error-prone, less flexible, and harder to read than JSON. Not to say JSON is a perfect format, but it sure feels better than YAML.


Reason number 1: aliases and anchors. Reason number 2: allowing comments.

JSON is a terrible config format, and an ok data interchange format.


Agreed. It amazes me that we got out of the XML-for-everything era with a lot of people thinking the problem was XML, and not the for-everything part. JSON-for-everything is just as maddening to me.


Comments was the reason for me to move from JSON to YAML in config files. I remember perfectly fine what every option does, but adding comments to each option is a must when it comes to sharing my code with anyone else.


If it's a config file, I'm not sure why you'd need comments. Comments / documentation should be in the system you do the config for. If your set of options is that quirky / needs documentation, why not explain them in a README?

But maybe I misunderstood. Can you give an example of a comment that makes sense / is required in a configuration JSON?


> If your set options are that quirky / need documentation; why not explain them in a README?

Mostly because I don't know whether the person customizing my code will read the README, but they'll definitely see the comments I wrote right before the configuration option itself.

With comments and some extra whitespace, they can go through the file line by line, read what a specific option does, configure it properly, and move on to the next one. No back and forth to the README required.


JSON's finicky about commas. Quotation marks everywhere are visual noise. Comments are accepted by many parsers, but not by all of them.

Something like http://www.relaxedjson.org/ is the JSON we need. An implicit root object would make it almost perfect for writing configuration.


Yeah, what I want is JSON with comments and nice multi-line string support. I don't like how much syntactic magic YAML does (I don't need or want the country code for Norway to be parsed as a boolean False value). I still don't know what the exclamation point does (e.g., !Ref).
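For illustration, a hypothetical snippet showing both gotchas. The implicit typing is YAML 1.1 behavior (what PyYAML implements by default), and the exclamation point is YAML's tag syntax: CloudFormation registers `!Ref` as a local short tag that its parser resolves to the Ref function:

```yaml
# YAML 1.1 implicit typing in action (hypothetical inventory file)
countries: [GB, NO, SE]      # NO is parsed as the boolean false
quoted: ["GB", "NO", "SE"]   # quoting keeps it the string "NO"
version: 3.10                # parsed as the float 3.1, not "3.10"
port: "8080"                 # quoted, so it stays a string

# "!" introduces a tag; CloudFormation maps !Ref onto its Ref function
bucket: !Ref MyBucket
```

YAML 1.2 dropped the `NO`/`yes`/`off` booleans from the core schema, but most deployed parsers still speak 1.1.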

And clearly YAML is the wrong tool for infra-as-code since CloudFormation has to build a macro system, conditionality, referencing, and a couple different systems for defining and calling functions (templates being one and their implicit functions being another). We also see tools like Troposphere and CDK which are effectively different ways to generate CloudFormation YAML via programming languages (or more precisely programming languages that were designed for humans).

And it's not just limitations inherent to CloudFormation--Helm has long had templates for generating YAML, but those also weren't sufficiently powerful/expressive/ergonomic so Helm3 is supporting Lua as well. And as I understand it, Terraform is constantly adding more and more powerful features into HCL.

So what's the solution? It's pretty simple--we should keep the YAML around, but it should be the intermediate representation (IR), not the human interface. The human interface should be something like a functional language[^1] (or an imperative language that is written in a functional style) that evaluates to that YAML IR layer. The IR is then passed to something like Kubernetes or Terraform or CloudFormation which understand it, but it's not the human interface.

As for the high-level language, something like [Starlark][0] would work well. It's purpose-built for being an evaluated configuration language. However, I would argue that a static type system (at least an optional static type system) is important--it's easy enough to imagine someone extending Starlark with type annotations and building a static type checker (which is much easier for Starlark since it's a subset of Python which is intended to be amenable to static analysis).

This, I think, is the proper direction for infrastructure-as-code tooling.

[^1]: Functional in that it is declarative instead of imperative--not necessarily that the syntax should be as hard to read as OCaml or Haskell. Also, while YAML is also declarative, it doesn't have a notion of evaluation or variables.

[0]: https://docs.bazel.build/versions/master/skylark/language.ht...
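To make the "evaluated configuration language" idea concrete, here is a hypothetical Starlark-style config (all names invented). Because Starlark is roughly a subset of Python, this sketch runs under both interpreters:

```python
# Hypothetical Starlark-style config: evaluation produces plain declarative
# data (the "YAML IR" described above), with loops and defaults handled in
# the language rather than in a template engine.
def bucket(name, versioned=False):
    return {
        "type": "storage_bucket",
        "name": name,
        "versioning": {"enabled": versioned},
    }

resources = [bucket("logs-" + env, versioned=(env == "prod"))
             for env in ["dev", "staging", "prod"]]
```

The tool consuming `resources` only ever sees static data; the conditionals and repetition live in the evaluated layer.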


I do not understand the need for all of these different new language implementations and data formats. GuixSD vs NixOS already showed that Scheme is a superior solution as a configuration language, scripting language, template language, and intermediate representation. A single language that has 30+ years of successful production use, tons of books and documentation. Why re-invent the wheel in four different, incompatible ways?


Is NixOS a scheme? Anyway, we moved away from Nix because of all its problems (the language being only a medium-sized one). Also, I detest CMake, but by your own standard (longevity, popularity), it is better than Nix or Guix. Frankly those tools haven’t shown themselves to be “superior” in any meaningful way. Yes, they have been around for a while, but having been around for a long time and not enjoying any significant adoption is not very compelling.


> Is NixOS a scheme?

That question does not make any sense. I am talking about Scheme the programming language: https://schemers.org/

NixOS is a GNU/Linux distribution built on top of the Nix package manager. The Nix package manager has its own custom configuration language. GuixSD is a GNU/Linux distribution built on top of the Guix package manager. GuixSD uses Guile Scheme as the package configuration language, system configuration language, scripting language, and implementation language for many of the system services (such as init). GuixSD does more things better with an existing standard programming language than NixOS does with its own custom programming language.

> Also, I detest CMake, but by your own standard (longevity, popularity), it is better than Nix or Guix.

What does CMake have to do with anything?

> Yes, they have been around for a while

What are you talking about? The first stable version of NixOS, 13.10, was released in 2013. GuixSD 1.0 was only released this May.

Your post is hard to make sense of.


Yeah, I misunderstood your post—sorry about that. I thought you said “Guix and Nix show us that scheme is the answer”. In any case, I don’t know how you conclude that Guix won over Nix nor how you conclude that the winner is superior to all other configuration languages. It’s especially counter-intuitive that it should be an intermediate representation—the IR should not be executable, it should only be descriptive. The IaC technology shouldn’t need to interpret anything, and the language you pass into it therefore shouldn’t support evaluation/execution.


> I don’t know how you conclude that Guix won over Nix

There is no contest that I am aware of. I used GuixSD vs NixOS as an example of how adapting existing standards can provide a lot more benefits with a lot less effort than coming up with incompatible new languages.

> how you conclude that the winner is superior to all other configuration languages

Here is a condensed list:

1. 60+ years of successful use of S-expressions in all of the needed roles (programming language, intermediate representation, configuration language, template language, network protocols).

2. Many proven Scheme implementations available for use in any scenario, from clusters to microcontrollers.

3. Easily amenable to formal analysis and verification. Excellent tools such as ACL2 available and in use for many decades.

> It’s especially counter-intuitive that it should be an intermediate representation—the IR should not be executable, it should only be descriptive.

S-expressions do both.

> The IaC technology shouldn’t need to interpret anything, and the language you pass into it therefore shouldn’t support evaluation/execution.

That is an unexpected thing to say about an acronym that stands for "Infrastructure as Code." You cannot get automation out of static data.


> Here is a condensed list:

I don’t find that list very compelling. Longevity in particular isn’t very interesting given that Scheme has gained very little ground in 60 years. That seems like an indicator that there is something wrong. I would sooner use Python or JavaScript, which are familiar to programmers in general and which have gained lots of traction in their respective lifetimes.

> S-expressions do both.

Right, that’s the problem. :)

> That is an unexpected thing to say about an acronym that stands for "Infrastructure as Code." You cannot get automation out of static data.

It’s a matter of architecture and separation of responsibilities. The IaC technology takes static data and automates the creation, mutation, and deletion of the resources specified therein. The client generates that flat data by evaluating a program over some inputs.


I replied with this elsewhere in the thread, but maybe Dhall? https://dhall-lang.org/

Functional? Check. Type annotations? Check. Not Turing complete? Check. Compiles to multiple config formats? Check.
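A small sketch of what that looks like (the names here are made up): functions, record types, and type annotations, but no recursion, so evaluation always terminates:

```dhall
let Service = { name : Text, replicas : Natural }

let mkService =
      \(name : Text) -> { name = name, replicas = 2 } : Service

in  [ mkService "api", mkService "worker" ] : List Service
```

`dhall-to-yaml` or `dhall-to-json` then renders the normalized expression as plain config for whatever tool consumes it.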


Yeah, I think Dhall is the right idea, but I think it’s going to have the same syntax/ergonomics issues that Haskell and OCaml suffer from. Our company invested heavily in Nix, but the developers really struggled with the expression language which seems quite similar to Dhall both syntactically and ergonomically. While Dhall might be great for Haskell/OCaml/F# shops, a configuration language isn’t the right place to push for new idioms or ways to think about programming.


Have you looked at jsonnet[0]?

It has multiline strings and comments among other features...

[0]: https://jsonnet.org/ref/spec.html


We’ve been using starlark for kubernetes configs at my prev gig and I quite liked it. Open-sourced some of that stuff as https://github.com/cruise-automation/isopod (which is based on https://github.com/stripe/skycfg). I hear stripe are also using their thing with terraform although not sure to which extent.


TeamCity actually lets you set job configurations with kotlin in source control. The config is actually a kotlin dsl, so not only can you import settings, you can also have TeamCity generate projects, jobs, and settings from your code.


Gradle also uses Kotlin these days.


I’ve done “devops“ at both large and small organizations. I don’t write YAML. I use Pulumi and TypeScript for system provisioning; it’s all syntactically and type-checked as I write it, and reusable besides. I refuse to deal with the maintainability disaster that is Ansible (I used to use Chef but now pretty much everything can be handled via Fargate) and I don’t have a need for Kubernetes (except in my home lab where I use it with Pulumi’s providers).

The one place I could write YAML is AWS CodeBuild buildspecs. I write them in JSON, when they’re not being assembled on the fly from Pulumi.

You can do it too—just pick better tools for it.


Seeing it mentioned here twice, I went and checked out their website/GitHub. What I see is lots of 20-line examples setting up a Docker container / very basic VM / AWS Lambda. While TypeScript might have better "testability", I'm not sure where this goes if you just copy/execute bash scripts or inject JavaScript code into AWS Lambda. To me it seems like it just reinvents the "classic" sysadmin, but instead of writing/copying/executing bash scripts, dev(op)s are now churning out bash scripts wrapped in tested wrapper code - well...

In contrast, ansible has a lot of declarative building blocks (iirc all are unit-tested!), which either fail or do the specified job. And yeah, the specified job might be copying and templating a config file for some oldschool service (and you might be shocked: people still use these!), which is inherently not really testable (except by integration tests, or maybe you write a test to fetch the file and check whether you got the name typed right twice) - I'm not sure how Pulumi can help you with that?

And yes, I would love a concise, descriptive DSL (compare for example spack, which does this nicely) over/as an alternative to the loops-crafted-on-YAML-mess of ansible but I take the latter any day over some "we-can-call-cloud-providers-apis-and-kubernetes-but-why-would-you-copy-files?"-stuff like pulumi.


I wouldn't be "surprised" that somebody wants better instance CM. I also don't care, and I can't particularly fault the Pulumi folks for putting it on the back burner too; as each of the major instance CM providers is in the process of demonstrating, it doesn't make money and is being eaten by a container-centric and pervasively disposable approach. I've stopped caring about machines enough to write even a systemd unit file, and it's a better world to live in.

You can write instance CM yourself, if you want. Pulumi isn't a "cloud thing", it's a lifecycle management tool. But it's typed, and that absolves it of many, many sins given the incredibly positive surface it presents to a developer.


Interesting, never heard of Pulumi, thanks.

You might be interested in https://www.jetbrains.com/teamcity/

It has a Kotlin strongly typed DSL for pipelines. I absolutely despise Jenkins and Groovy, so many bugs causing huge issues due to weak typing and lack of testing :(


I’ve used TeamCity. It’s totally fine. I don’t want to host anything, though, and I want them inside my VPC. We’ll use AWS until we can’t stand it anymore and then we’ll fire up a Jenkins (just because there’s more internal expertise).


Enjoying it might be an overstatement, but I prefer YAML over JSON, XML, INI files, bash scripts and most other config mechanisms.

Maybe the only comparable thing that comes to mind that I prefer is jsonnet.


I recommend taking a look at TOML [1] as an alternative to YAML for many situations.

  Objectives: 

  TOML aims to be a minimal configuration file format 
  that's easy to read due to obvious semantics. TOML is
  designed to map unambiguously to a hash table. TOML
  should be easy to parse into data structures in a wide
  variety of languages.
A reddit thread titled "YAML vs TOML" has some merit [2].

  YAML downsides:
  - Implicit typing causes surprise type changes. (e.g. 
    put 3 where you previously had a string and it will
    magically turn into an int).
  - A bunch of nasty "hidden features" like node anchors
    and references that make it look unclear (although to
    be fair a lot of people don't use this).

  TOML downsides:
  - Noisier syntax (especially with multiline strings).
  - The way arrays/tables are done is confusing, especially 
    arrays of tables.
Rust uses TOML to configure its build system, Cargo [3].

[1]: https://github.com/toml-lang/toml

[2]: https://www.reddit.com/r/devops/comments/6f82nu/yaml_vs_toml...

[3]: https://doc.rust-lang.org/cargo/getting-started/first-steps....


I find it interesting that it's always called "infrastructure as code" but then everything is basically "infrastructure as config files" where any control flow is application-specific constructs for a language not built for the task.

Why not just read Python, like how webpack uses JavaScript as its "config"?


We probably haven't met, but I think YAML is great. If you're doing a lot of work in python and ansible, then you just start using YAML everywhere.

I actually struggled with YAML at first when starting with Ansible because I didn't know it was YAML, and I didn't understand lists vs dictionaries in YAML (seems obvious in hindsight, but just didn't click for whatever reason).

> I’ve lost count of the amount of bugs and broken deploys that have happened because of YAML, or because of a type error caused by it.

YAML can be linted. Also, golang might be helpful to read in and deserialize your YAML files against a known type so you can catch these type errors. This should probably be done during CI when you create a pull request or merge request. This obviously requires doing things on your end to make it all fit together.
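The comment suggests Go, but the same lint-time idea can be sketched in a few lines of Python with dataclasses (the config shape here is invented, and `json.loads` stands in for whatever YAML loader you use):

```python
import json
from dataclasses import dataclass, fields

# Deserialize the config against a known shape and fail CI on type errors.
@dataclass
class ServerConfig:
    host: str
    port: int
    debug: bool

def load_config(text):
    raw = json.loads(text)          # stand-in for a YAML loader
    cfg = ServerConfig(**raw)       # unexpected/missing keys raise TypeError
    for f in fields(cfg):
        value = getattr(cfg, f.name)
        if not isinstance(value, f.type):
            raise TypeError(
                f"{f.name}: expected {f.type.__name__}, "
                f"got {type(value).__name__}")
    return cfg

cfg = load_config('{"host": "0.0.0.0", "port": 8080, "debug": false}')
print(cfg)
```

Run it as a CI step and a config where `port` arrives as a string fails the build instead of the deploy.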


I was wondering if anyone has started using cue[1] and has an opinion about it, seems like a super interesting project aiming to solve this

[1] https://cuelang.org/



Part of my job is introducing people to Ansible. For the people not intimately familiar with YAML, it's the biggest obstacle.


It's a hell of a lot better than JSON and I like that everything uses it.


I have hardly written a line of YAML in my career. It's hardly unavoidable, no one is forcing you to use bad tools. Especially considering that Kubernetes is massively over-engineered for most systems that aren't themselves over-engineered.



