Otto, the successor to Vagrant (hashicorp.com)
772 points by agonzalezro on Sept 28, 2015 | hide | past | favorite | 177 comments



Hello everyone! I just wanted to note that I'm running around at HashiConf and likely won't be around to answer as many questions or comments as I'd like. But thank you for all the activity around this and we're excited to show this to you today.

I want to just give a few key notes, though many other folks around here are right.

* For Otto 0.1, we focused on developer experience. We don't recommend deploying for anything more than demos. Future versions of Otto _are_ going to make this production-ready though, and we already have plans in place to do so.

* As others have discovered, Otto is built on top of our other tools and executes them under the hood. We didn't reinvent the wheel, and this has huge benefits. We're dedicated to making all our tools better, and as we do, Otto naturally improves as well.

* Otto is a lot of magic; we're not trying to hide that. It is a magical tool. But we also give you access to the details (OS, memory, etc.) using "customizations." We'll improve customizations and increase the number of knobs over time.

* Vagrant development will continue, we have some major Vagrant releases planned. Please see this page for more info: https://ottoproject.io/intro/vagrant-successor.html

* Remember that Otto is 0.1 today and Vagrant is 6 years old. We have a long way to go to reach the maturity and robustness of Vagrant. We're committed to making that happen, but it will take time.

Thanks everyone, sorry for not being able to be more active in here. Have a great day!


Why isn't Otto named Vagrant 2.0 or something? It's confusing to be "abandoning" Vagrant for a new system, "Otto", when really all that is happening is that Otto is fixing issues that have come up with Vagrant over time. Why not a migration strategy, some backwards compatibility during the transition, and then move into the future?


> Why isn't Otto named Vagrant 2.0 or something?

While Otto may be replacing Vagrant as the preferred directly-used tool for most users, it's a higher-level tool; Vagrant still exists and is used under the covers.

Since Vagrant still exists and is maintained and is used by Otto (as well as being usable independently), it would be extremely confusing if Otto was called Vagrant 2.0.

> Its confusing to be "abandoning" Vagrant for a new system "Otto" when really all that is happening is that Otto is fixing issues that have come up over time with Vagrant.

The main issue Otto seems to be fixing with Vagrant is that Vagrant, alone, isn't a complete solution; typical teams need a suite of other tools as well. Otto incorporates Vagrant and those other tools, puts an abstraction layer over them, and lets you use them together.


I commonly use Ansible with Vagrant: the two play very well together. Otto seems to be an alternative to Ansible.


> it would be extremely confusing if Otto was called Vagrant 2.0

But that is the closest thing anyone thinks of when you say "successor." Then having to explain that it's not a successor, well, you've done it yourself.


Windows was the successor to DOS, built on DOS. Maybe this is like that?!


It's pretty depressing to see Hashicorp adding more layers to a system that already has way too much accidental complexity.

Ruby on Rails, for instance, is a beautiful example of an internal DSL, but the Vagrant DSL is awful; you could write a much easier-to-read/use internal DSL in Java.

Then there is the issue that Packer and Vagrant are two different tools. Why should you need to change anything about your provisioners AT ALL when you are trying to burn an image? Doesn't that just defeat the whole point of devops?

And then there is the issue that when Vagrant can't talk to the mothership, it doesn't work right.

It goes on and on. People are screaming out today that "devops is a big waste of time" and I think 80% of that is that Packer and Vagrant are so awful and putting another layer is going to make it 95% awfulness from Hashicorp.


I don't agree with everything you've said, but I do feel like otto is in danger of being another teetering piece on top of the base-os/virtualbox-os/vagrant(+ansible/chef/puppet)/guest-application tower, which are all leaky abstractions and all cause problems in practice and make debugging complicated. "Accidental complexity" is the right phrase.

I can't help but think that what the DevOps world really needs isn't another thin layer of magic trying to shellac over the issues of everything below it, but rather to have a hard think and rebuild everything from the ground up with more appropriate primitives. Sort of like what NixOS is trying to do (haven't used it, so can't attest to how successful it has been).


Replace DevOps with "web technologies" in general and your statement rings even more true.


I think the difference is that in the case of web technologies there are a huge number of competing products: we've had everything from CGI to ColdFusion to PHP to Ruby on Rails to Node.js and we are always learning from what people got right and wrong in the past.


By "web technologies" do you mean all the abstractions away from pages using HTML, CSS and JavaScript if required?


IMO, DevOps should not be afraid of reinventing their own wheels. The closer you are to the service layer you don't actually manage, the better; i.e., if you use a public cloud you should really make use of the APIs and build your own wheels. But of course, use existing tools to do your job first, and slowly create your new wheels. Over time, the new wheels will function better because they can fit your ever-changing requirements.

That's my two cents.

Sometimes, as I am working with boto (the Python SDK for AWS), I wish I had the time to write some of the boto modules myself, because they are inconsistent and harder to use than other modules. I can easily create many AWS services with Ansible modules, but I find it easier to hack and integrate more tightly with my environment by writing my own Python code using boto directly. What's inside the machine remains Ansible, because it does a really good job.

Another classic example is logstash and AWS logs like CloudTrail, flow logs and alarms. I can easily write a parser in Python or in C and get my job done, and over time the code becomes reusable. But with logstash I can't guarantee that existing filters and plugins will always work, and they get really, really messy no matter how good you are with logstash. And that's a layer I have to reinvent, for simplicity and total control.

I'd rather pipe to Elasticsearch myself than relying on logstash in cases like that.
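The "pipe to Elasticsearch yourself" approach can indeed be small. Here's a sketch of what such a hand-rolled parser might look like for VPC Flow Log records (the 14 field names follow AWS's documented version-2 format; the `flowlogs` index name and the sample record are made up), emitting bulk-API lines you could POST to Elasticsearch yourself:

```python
import json

# Field names for a version-2 AWS VPC Flow Log record (the standard
# 14-field, space-separated format).
FLOW_FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_record(line):
    """Parse one flow log line into a dict, converting numeric counters."""
    parts = line.split()
    if len(parts) != len(FLOW_FIELDS):
        raise ValueError("unexpected field count: %d" % len(parts))
    record = dict(zip(FLOW_FIELDS, parts))
    for key in ("srcport", "dstport", "protocol", "packets", "bytes",
                "start", "end"):
        record[key] = int(record[key])
    return record

def to_bulk_line(record, index="flowlogs"):
    """Render a record as an Elasticsearch bulk-API action/document pair."""
    action = json.dumps({"index": {"_index": index}})
    return action + "\n" + json.dumps(record) + "\n"

if __name__ == "__main__":
    sample = ("2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
              "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")
    print(to_bulk_line(parse_flow_record(sample)), end="")
```

Fifty-odd lines of code you fully control, versus debugging someone else's grok filters.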


I played with logstash for a day or two, and it looked quite neat, but again, nothing that couldn't be accomplished in a small Perl script.


It is nice, up to the point where you have to really sit down and sweat through the filters and plugins. The documentation is really horrible. A lot of Internet digging and GitHub source code digging...


'People are screaming out today that "devops is a big waste of time" and I think 80% of that is that Packer and Vagrant are so awful and putting another layer is going to make it 95% awfulness from Hashicorp.'

Awful compared to what?

'And then there is the issue that when Vagrant can't talk to the mothership, it doesn't work right. It goes on and on.'

I'm curious, which package management tool have you seen that works when it can't talk to the mothership?

Hashicorp has a particular philosophy of how it builds and ships its ecosystem: small, standalone, composable tools. That leads to some overlap and inconsistent quality, as they evolve independently. Atlas is supposed to smooth over the overall experience, but software is hard.

Alternative ecosystems tend to be large pills to swallow (i.e. PaaS), though they might have a better overall experience.


> Vagrant can't talk to the mothership, it doesn't work right

I'm going to assume this is about not being able to detect/handle version upgrades of base boxes if they're not in Atlas (or if you're offline).

This is my issue with this ecosystem too - Atlas is a "free" service but to use/run an alternative "Atlas" would mean reverse engineering it (if there is a spec detailing the endpoints, calls, payloads expected/accepted I'd love to hear about it!).

As for "which package manager works when it can't talk to the mothership":

Debian's Apt/Dpkg can work from a purely offline mirror on the local disk if you want. RPM can too. But more likely you'll want to use your own private repo. Possibly just a mirror of the upstream repo. Maybe your own packages. Maybe a mix of both.
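For instance, pointing apt at an offline or private mirror is just a matter of a sources.list entry (the paths and hostname below are illustrative):

```
# /etc/apt/sources.list -- a purely local, offline mirror
deb file:/srv/mirror/debian jessie main

# ...or a private in-house repo over HTTP
deb http://apt.internal.example.com/debian jessie main
```

No mothership required; the repo structure is fully documented and trivially self-hostable.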


"Atlas is a "free" service but to use/run an alternative "Atlas" would mean reverse engineering it (if there is a spec detailing the endpoints, calls, payloads expected/accepted I'd love to hear about it!)."

No official spec to my knowledge, but there are a couple of attempts at this https://github.com/hollodotme/Helpers/blob/master/Tutorials/... https://www.nopsec.com/news-and-resources/blog/2015/3/27/pri...

"Debian's Apt/Dpkg can work from a purely offline mirror on the local disk if you want"

Okay, but Vagrant caches boxes on the local disk too.... if one runs "box update --box --provider" in a scheduled job.


Re: apt, I meant there are well-documented ways to host a complete apt repo either on your own server or even offline (e.g. the Debian CD/DVD images have the same structure).

Thanks for the references - I did find a reference to an environment variable in the vagrant source somewhere so maybe it can work reasonably well with a private reverse engineered "atlas"


> People are screaming out today that "devops is a big waste of time"

These people have obviously never managed fleets of servers pre-devops. Forgive me if the opinions of some devs who have never managed a server in their lives ranks lower than those who have been in the trenches for a while now.


> Doesn't that just defeat the whole point of devops?

> devops is a big waste of time

Devops is about culture, not about tooling.


Culture and tooling go together.

Software managers are always bitching that their developers get it into their heads that they need reproducible build environments, and then three of them go screw off for two weeks trying to get Vagrant to work 100% right.



Otto uses Vagrant under the hood.


Indeed, which makes the tagline "successor to Vagrant" a dubious choice. It would be like referring to fleet as the successor to systemd. They're stacked layers driving existing utilities.


That makes sense. The announcement seems to imply that the real problem was that Vagrantfiles were too restrictive, so make a Vagrant 2.0 that can read the old files (and will whine) and uses the new super "Appfile" format as its native format. Maybe there could be a tool to convert a Vagrantfile to an Appfile automatically (as Appfiles are supposed to be able to do everything Vagrant can do and more).

Anyway, in a more typical software development environment you'd provide such transition tools and carry your users along, so that the next version already starts with a huge base of users. Rather than throw out the old way and come in with a new completely incompatible way to do things.

Very confusing.


I think the idea is that Otto is the (new) interface and Vagrant is a potential implementation. A 2.0 would hide this distinction.


It sounds more like a new product, new direction than Vagrant 2.0....


My take: I think the notion is that it will make sense to use Otto directly from the get-go, the way it was for Vagrant in the past.

They still work at different levels, with Otto more likely trying to satisfy needs for things like "Ruby" and "Redis" while Vagrant is still more explicit.


Given how different Otto and Vagrant are, and that Vagrant development will continue independently of Otto, I think it would be very confusing to name them the same thing.


It is bad to name different products using the same name. See gnome2 vs 3. Few would be hating on gnome3 if it wasn't named gnome.


What does Otto do that Vagrant doesn't? I mean, Vagrant is pretty good! What do I need Otto for that I can't do with Vagrant and a few scripts (that I'm likely to have to write anyway)?


The project's Getting Started guide does a really good job at answering that question: https://ottoproject.io/intro/getting-started/install.html


Otto lets you create infra and deploy to it, so it covers production, not just development. On top of that it offers default stacks for PHP/Ruby/etc. that will be maintained by the community. From what I understand, if your demands aren't too specific, you'll profit from infra/OS/apps all set up as a foundation for your app. It detects what app you have, and hence you can even have an empty Appfile (similar to a Vagrantfile, does more, requires less, hence the magic).
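For a sense of scale, a minimal Appfile looks something like this (based on the Otto 0.1 getting-started docs; the name and version values here are illustrative, and the customization block is optional):

```
application {
  name = "my-app"
  type = "ruby"
}

customization "ruby" {
  ruby_version = "2.1"
}
```

Everything not stated, like OS, memory, or dependencies, falls back to Otto's detection and defaults.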


I dunno, I have yet to find a real-world situation where the demands/requirements aren't "too specific". There are always obnoxious little kinks unless you are using some tool from scratch as the basis of your entire operation. Particularly in production!

And I don't see why we need default stacks, don't we already have that with Vagrant? Isn't that the entire point of the Vagrant file and the setup file? You grab an image, you grab the post-setup file, and off you go? Is it really that much work to write a post-setup script to run apt-get and do some config work?

It just seems like we're reinventing the wheel, again. And again, and again. And we invent all these new tools just to go through and spend another 6 years fixing them, improving them, debugging them, cajoling others into using them, etc., instead of just improving the tools we already have.

Or perhaps I'm just in a shitty mood, I dunno. All I know is that the only reason I can use Vagrant in my professional life is PRECISELY because it's not for production deployment, and that's awesome, because it fills a very specific and needful spot that was vacant before. Why do we always have to keep expanding? Can we not be happy with really good tools for a really good specific purpose?

P.S. Thanks for the info.


"Why do we always have to keep expanding, cannot we not be happy with just really good tools for a really good specific purpose?"

Because that tool, along with the beer in their fridge, is paid for by VC money, which needs to recoup its investment. $99 Vagrant licenses for use with VMware aren't going to cut it.


True, but to be clear, Vagrant can also cover production.

https://docs.vagrantup.com/v2/push
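Per the Vagrant Push docs linked above, a push strategy is declared in the Vagrantfile; a sketch of the Atlas strategy (the box name and "username/appname" are placeholders):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"  # placeholder box

  # Atlas push strategy; "username/appname" is a placeholder Atlas app.
  config.push.define "atlas" do |push|
    push.app = "username/appname"
  end
end
```

Then `vagrant push` deploys, which is exactly the kind of workflow Otto is folding into one tool.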




I'm not sure whether I find it astonishing or expected that somebody finds all that magic good enough both to build this tool and to be thrilled that somebody did. I, for one, find it absolutely unacceptable. The whole point of virtual containers is to have a development environment closely resembling the production one (not entirely possible, but better than nothing). Your promise that your tool gives me the "best possible environment for a Python web app" isn't enough for me, not even close. In fact, I don't really care what you consider the "best possible environment", because I already know that there isn't one, and I have experienced multiple times those unpleasant moments when you find out that for your app it actually matters whether you use Amazon S3 or GlusterFS on real hardware, how many nodes there are, what the exact settings in php.ini are, or something else you'd be glad not to care about, but you suddenly do. I don't need magic software, I don't want magic software, I don't like magic software, I'm afraid of it. I really struggle to imagine somebody who isn't a total newcomer and feels otherwise, but apparently there are such people.

What I ideally want is virtual machine based (like vagrant), immutable configuration (like NixOS) approach, with reasonably simple configuration (like docker+fig) and file-based settings (as opposed to docker, where your image is pretty much separate from Dockerfile) with 1 common repo for your "best possible environment" config examples, where every somewhat important decision is explicitly listed and can be changed by user. So something similar (in some sense) to vim-pathogen: git clone, maybe run some other magic command and your env is up and running in several minutes. If contents of config get changed, so does virtual machine.

I understand that what I'd like to have is a bit utopian in today's reality. But nevertheless, Otto is pretty much the opposite of what I consider perfect; I cannot imagine anything farther from desirable than that.


Yes - one size doesn't fit all.

Yes - sometimes you need something very specific.

But, please - it sounds like you've just damned us to repeat the same low-level tasks again and again.

80% of websites ARE the same. If you're in the 20% (or 10% or 1%) then good luck to you. But for those of us deploying another typical webapp - I'd really like to draw on community knowledge. I never wanted to learn devops same way as I never want to learn cryptography, oAuth, SQL internals, how nginx works etc. I just want to use tools that solve these problems for me.


Amen to that. We copy/paste the same Vagrantfile from project to project, making minor adjustments and improvements (typically after some part of the previous file failed on us in a strange way and we dug around for a fix).

The odd time we have some special requirement (a work queue perhaps?), but most of the time it's language + store + web server and we're off.


Couldn't agree more. When I was young magic software seemed magic. Then I found out it's 10x harder to debug when it breaks.


> it actually matters if you use Amazon S3 or GlusterFS on real hardware, how many nodes there are, what are exact settings in php.ini or something else you'd be glad not to care for, but you suddenly do care

It actually matters... at scale. No project starts at scale. Most projects never need to scale; most projects die before they scale.

Magic software is for prototyping. Sensible defaults and convention-over-configuration mean trying (and failing, and trying again) quicker. Even though I have software in production with a million users, I still build my new experimental projects on Heroku, because relying on "magic software" is one less barrier in the way of getting to work. (Not a technical barrier, mind you; a barrier of choice paralysis about what my architecture is going to look like.)

At scale, meanwhile, you have a separate thing, a magic piece of strong-AI-equivalent software called a "dedicated ops team." When you get there, the task of refactoring your idiotic prototyping decisions becomes their (hopefully-well-paid) problem.


Key takeaways for me:

- __Right now__, Otto appears to be Vagrant++. I wouldn't use this for prod, at least not for a while.

- Otto is written in Go. Source is here: https://github.com/hashicorp/otto

- Otto uses a plugin model for different applications. Plugins aren't supported yet. https://ottoproject.io/docs/plugins/app.html

- Built-in plugins don't appear to consume a sane plugin interface; how they work is non-obvious.

- Under the hood, Otto appears to be using Packer, Terraform, and Vagrant.

- I would consider Otto, Nomad, and Terraform all to be "provisioner tools". They all seem directly related to tools such as Ansible provisioning, Chef provisioning, Fog, or other direct management tooling, like the AWS CLI or the PowerShell CLI for VMware.

- Otto, Nomad, and Terraform all promise to solve the same problem in prod in different ways:

-- Otto is a one-off push to set up infrastructure and deploy to prod.

-- Nomad is for pushing jobs to help maintain long standing infrastructure in prod.

-- Terraform is for periodic pushes to prod to create idempotent infrastructure.

In other words, from least to most robust IMO:

Least Robust -------> Most Robust

Otto --> Terraform --> Nomad


I think the Terraform/Nomad/Otto hierarchy is supposed to be like this:

  - Terraform defines that you have X servers with certain specs.
  - Otto defines that those servers are running Docker or whatever.
  - Nomad defines that your application is running in X containers in your infrastructure.

So, it's a layered approach (Infrastructure -> System -> Application), akin to the layers of your network. Each one is isolated enough to not worry about the others. At least, that's how I'm hoping this all shakes out :)
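The bottom (infrastructure) layer of that hierarchy, sketched as a Terraform config; the AMI, count, and instance type are illustrative placeholders:

```
# Terraform layer: "you have X servers with certain specs"
resource "aws_instance" "web" {
  count         = 3
  ami           = "ami-abc123"      # placeholder image
  instance_type = "t2.micro"
}
```

Otto and Nomad would then each layer their own declarations on top of instances like these without restating them.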


"Notice that the Appfile makes no mention of OS, memory, disk space, etc. Otto has built-in knowledge of best practices and picks smart defaults for you."

Sounds like a big bag of "nope."


Why? This is what I love about Heroku. I get to build the app I want without worrying about all the other aspects of hosting that make me want to never launch an app ever again.

I love the idea of focusing on the development part and letting something / someone else worry about hosting.


Have you ever dealt with a large app deployment on Heroku? Unless you're willing to throw money at the problem it sucks.


Yes and if you're a small team with limited resources and you don't have an operations person/team throwing money at the problem works.

There will come a time when you will need to setup a more customized hosting solution but hopefully by then you can hire someone to do operations that knows what they are doing.

I don't think Heroku or Otto are designed for big projects anyways, they're a great way to get up and running and help you grow without having to worry about infrastructure upfront or for a couple years.


> I don't think Heroku or Otto are designed for big projects anyways.

If you're right about that in Otto's case, that's a bummer. It's frustrating that there is this seemingly intractable divide between things that are great to start with for a new project (Heroku, etc.) and things that scale well as a project grows huge (Kubernetes, etc.). Every time a devops tool comes out, I read about it in hopes that it has both the easy-start and but-scales-as-needed stories, and inevitably find people saying it's actually one or the other.


I think the truth underlying that observation is that scaling is always hard. It's relatively easy to pick some defaults that work to get 99% of projects off the ground when they are small, but once you approach real scale, inevitably hard decisions need to be made, and those decisions will depend on seemingly minute details of your service's workload. Therefore, tools like Kubernetes necessarily have a threshold of expertise required to use them effectively. I think all of these things are trending towards more scalable and easier to use, but fundamentally there will always be a tension between those two goals.


A Heroku-like PaaS layer that serves as a manager for—but crucially, doesn't attempt to abstract away—a set of IaaS components would be interesting. Sort of a convention-over-configuration CLI tool for interacting with some CloudFormation-like API.

The "non-abstraction" part doesn't seem to be in any cloud provider's interest to sell, though; even with AWS, when you allocate a database, it doesn't result in a new EC2 instance for the DB being dropped into your bag of instances, such that you just get charged instance fees for the instance. Instead, it all gets packaged up so that you can be charged higher, separate, value-based database fees. It's a bit ridiculous.


What works for me is

    cf scale my-app-name 4
Or 10, or 100, or however many copies I need.


As opposed to throwing money at a person to solve the problem?


Note that it appears the Appfile can specify these things (or will be able to before a 1.0 release, at any rate; the documentation is incomplete and not entirely consistent, and it's not clear whether this is just undocumented or not yet present), but the intent is to provide basic defaults so that the "get something minimally working" workflow is as simple as possible.


I think that's why they're defaults? Not hardcoded unchangeable values?


Everything having a default is sometimes not sensible.

In such cases, explicit and declarative is much better than implicit and hidden.


I don't understand this sentiment. What's the issue with having defaults that are used if no values are specified? Assuming the documentation is good and you can find a list of all options (and default values).


Indeed. Sometimes it's worth making people do a little bit of extra work so that they can actually start to grok a system.

A little (but not quite) like this: https://github.com/eggheads/eggdrop-1.8/blob/master/eggdrop....


Or a huge bag of "finally!"


I use Vagrant regularly for development work and it is a good tool, however, it does still have a lot of issues. I have also gotten the sense over the years that HashiCorp developers could have been more responsive to users and addressed more of the issues that mattered most to the user base.

And now, with a new, complex, and broad product introduced...I just don't have a lot of confidence that the quality is going to be there or that it will ever fulfill the very large goals that are outlined.

I would prefer they focused on Vagrant and made it a really outstanding, polished tool.


Maybe they thought the best path to a really outstanding, polished tool was to approach the problem slightly differently and Otto is their solution? If that's the case (I really have no idea), maybe rather than devoting too much time to fixing vagrant they decided to limit the time spent on that to further what they thought is a better overall solution?


Yeah unfortunately I think it's more about tackling entirely new problems in addition to the problems Vagrant already solves.


According to the blog post Otto is actually built on other tools including Vagrant (which provides the dev environments) and Terraform (which provides the infrastructure management), so I think they will keep actively developing Vagrant in the future.


Seeing that it uses Vagrant behind the scenes, and other HashiCorp tools as well (meaning it does things beyond just local development environment configuration... which is what 99% of people use Vagrant for), how is Otto "the successor to Vagrant"?

To me, it seems like it's just a bit of word play to try to get more interest in a product that is less interesting to some people. Vagrant is generic enough and helpful enough that it has become one of the two or three preferred tools for building local environments for developers.

AFAIK (anecdotal evidence here, to be sure), other HashiCorp tools are nowhere near as dominant. So is the tagline just to try to get more people interested in the tool?

It wouldn't sound as interesting to _me_ if it were "Otto, something like Heroku but a Go app that uses a bunch of HashiCorp products to deploy apps locally and in the cloud".


I think you've pointed out exactly the reason why this product exists.


Yes, but I was making the point that the tagline under which this news story was released ("Otto, the successor to Vagrant") makes it sound as if Vagrant is no longer going to be supported, or as if Otto will focus on local development environments, something like that.

In reality, it seems the "successor to Vagrant" line is more to attract attention, as it's not at all a _replacement_ for Vagrant, just a tool you can glom on top of Vagrant and a bunch of other HashiCorp tools.


I'm kind of glad to see this, as I wrote https://github.com/mpdehaan/strider for the reason that Packer and Vagrant used different config files.

This appears to address that.

OTOH, it says it's executing Vagrant and Packer under the covers, so I really need some of the Packer limitations I have (like https://github.com/mitchellh/packer/issues/409 ) addressed more than I want glue on top.

Anyway, if people want to hack on Strider, pull requests are welcome.

I'm not using it actively (yet), but it's a very very tiny amount of code to supply both. All the work gets done by boto.

Back to Otto: I am curious what Otto means when it says it's going to start to talk to infrastructure, and whether that means it's going to be more of a workflow engine that can also invoke Terraform, or what.

The DSL changes appear to maybe be a step in that direction?

Would probably benefit from more than one liners on the homepage, to show what is really involved more quickly.


> OTOH, it says it's executing Vagrant and Packer under the covers, so I really need some of the Packer limitations I have (like https://github.com/mitchellh/packer/issues/409 ) addressed more than I want glue on top.

Ahhhh that Packer issue. I have been following that one for what seems like years too.


So I feel like the reason I like Vagrant is because it does its one job well, then gets out of my way. Doesn't sound like this shares that philosophy.


Probably because they did such a good job with Vagrant that they realized they were running out of things to do and decided, in the interests of job security, to ride the train a little longer by "pivoting."

Then again, who am I to complain? Vagrant is a superb tool and the endless and pointless reinventing of standards every few years keeps like 70% of us employed.


This addresses a bigger picture that some folks have been (ab)using vagrant to accomplish and requesting features that belonged in a new project, rather than on top of vagrant.


Such as?


Such as provisioning and deployment I guess... I've seen those requests over and over.


Which is why this uses Vagrant and doesn't replace it.


> So I feel like the reason I like Vagrant is because it does its one job well, then gets out of my way. Doesn't sound like this shares that philosophy.

Seems to me it might, if you view its "one job" as coordinating a bunch of lower level tools that each do their own "one job".

Well, and assuming it does it well.


All Ruby development environments look alike, all PHP development environments look alike, etc.

Did they do any user research? Doesn't feel like it based on the above statement.


A lot of PHP applications or websites (especially small ones) just require a basic LAMP stack at a bare minimum to work fine - e.g.: the success of XAMPP/WAMPP/CRAMP/TRAMP/BAMP/CLAMP whatever. (Yes, there's plenty of extras that could/should be added for doing things the right way)

For example, a lot of Vagrant LAMP users don't require much provisioning at all. Of the top 10 most downloaded Vagrant boxes [1], two of them are pre-provisioned (or nearly) Vagrant boxes. Homestead [2]: 2,769,045 downloads and Scotch Box [3]: 275,963 downloads.

The biggest problem with these setups is deployment. Vagrant Push [4] requires a little bit too much overhead for this audience. The blog announcement even admits this. Hell, Laravel/Taylor Otwell (the Homestead guys) even built a full on deployment service called Forge [5].

If I had to guess, Otto basically is a hybrid of all this: like an easy Vagrant and a basic Heroku all-in-one, to help push HashiCorp's Atlas product.

I'm definitely looking forward to testing it out. Personally loving Hashicorp.

[1] http://vagrantcloud.com

[2] http://laravel.com/docs/5.1/homestead

[3] https://box.scotch.io

[4] https://docs.vagrantup.com/v2/push

[5] https://forge.laravel.com
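To the point about pre-provisioned boxes: with something like Scotch Box, the entire Vagrantfile can be a few lines (box name and IP as given in the Scotch Box docs; treat this as a sketch, not a verified config):

```ruby
# Minimal Vagrantfile for a pre-provisioned LAMP box
Vagrant.configure("2") do |config|
  config.vm.box = "scotch/box"
  config.vm.network "private_network", ip: "192.168.33.10"
end
```

That's the audience Otto's "empty Appfile" story seems aimed at.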


Mitchell Hashimoto has been supporting Vagrant users for several years now. I wouldn't be surprised if that experience has led him to such a conclusion regarding the use of Vagrant, e.g. all uses have a dependency on a version of Ruby/PHP and some number of libraries, and so on. The same abstractions allow control despite differences at a finer grain.


The number of variations on PHP applications is infinite:

1. Apache or Nginx+PHP-FPM
2. INI configuration
3. PHP extensions and their INI configuration
4. Vhost configuration, especially rewriting rules
5. Docroot in app root, or dedicated directory

This can't possibly be defined in a common/generic way while still supporting more than 50% of the use-cases.

There is a reason why complex click-to-configure interfaces exist for PHP and Vagrant, like Phansible http://phansible.com/ and PHPuppet.


Doesn't matter. There's always the 80/20 rule. It doesn't have to cater to every exotic need. One could always customize further (even script that) after Otto's initialization.

Besides, a lot of this variety exists because people don't follow best practices, and instead have a hodgepodge of this and that technology, with this and that settings.


Ruby is pretty standard and I assume PHP is as well, being VERY mature. However, outside of the core, things vary wildly: people using NoSQL or simply using rails-api, for example. Maybe they can manage this well and allow for variance, but even somewhat granularly, no environment is the same. I have never used Docker or containers, but this seems to be the selling point: you don't use a service to help you configure your environment; you configure it for Docker, and you can run Docker anywhere a bit can go.


Right, but since this is for microservices, which shouldn't care where your servers come from, just looking in your ENV or generated config, you let Otto create the NoSQL services and so on.

edit: Or at least it should in theory.


What do you mean? How is the statement false?


Ruby: MRI vs. REE vs jRuby vs Rubinius, for starters.

And this choice is solved through a variety of mechanisms that people rely on for other parts of their infrastructure, and feel strongly about.

For my part, this is also a solution that is way too late: We have Docker/Rocket and a range of similar tools. Why do it yet another way, when if you instead build a Docker image, you can take that Docker image and deploy it without having to translate your dependencies to a different format and re-test everything?


Docker is a wrapper around Linux. Vagrant is a wrapper around the entire VM [e.g. Virtual Box or VMware]. Thus Vagrant can be used to manage Windows environments. See the list of Vagrant boxes here:

http://www.vagrantbox.es/


Docker can wrap KVM/Qemu, so you can in fact use Docker as a wrapper around the entire VM...

But I take the point - the idea that anyone deploys stuff to Windows is just so foreign to me that it didn't even occur to me.


As far as looking alike goes, I think it could end at "uses Ruby" or "uses PHP". A lot of modules will differ for different use cases. I may not need any of the modules another user needs.


Nothing here mentions installing modules for you. This is about the environment, not the actual project.


PHP/Apache modules, which are very much environment based.


I think HashiCorp did themselves somewhat of a disservice by presenting Otto as a "successor" to Vagrant. Vagrant is a great technology that solves complex problems. Otto uses Vagrant to solve a different set of problems.

Otto is designed to automate the provisioning of local dev environments and production environments. While some use Vagrant to solve this problem, it's typically an un-standardized, home-grown solution. Otto is an attempt to standardize and automate the process.

FULL DISCLOSURE: I've been working on a project for the last year-ish that solves the exact same problems: Easily provisioning and configuring local dev environments and finding consistent parity between dev and production environments. I'm interested to really dig in and see the differences between Otto and the project I've been working on, Nanobox. Would love some outside feedback so feel free to take a look: https://nanobox.io

But I digress. I think Otto is less a "successor" to Vagrant, and more of a natural offshoot that solves a different problem. I don't ever see it replacing Vagrant, especially since it uses Vagrant behind the scenes.


Looks really cool, excited to see some more detail on how it actually works!

Why the new custom config file format? I've mostly found that these homegrown formats (logstash? nginx?) suffer from inconsistency and lack of flexibility, and don't have any obvious benefits. Why not use one of the following like other Hashicorp tools?

- JSON/YAML, possibly with support for templating like Ansible

- A Ruby DSL

- A limited, but well-tested and understood format like .ini files


> JSON/YAML, possibly with support for templating like Ansible

When you are writing templates for your configuration files, what you've wanted all along was a real programming language.


Yes and no, a lot of the ops folks I know peter out at pretty basic scripting and find a templating system easier.


So we should build our most essential tools to suit the lowest common denominator? A proper programming language would allow the people that can only do basic scripting to do that, whilst allowing more advanced users to extend the system further and take full advantage of all the language's features and libraries. When those more basic users got more accustomed to the system, there would be room for them to move up that they wouldn't have if they were stuck with a nasty string-based templating language.


Actually, I'd love to be able to drop down to Python in my Ansible playbooks/configs, but writing Ansible modules is a very different effort from editing YAML/template files.


I have some misgivings about Ansible's fundamental design choices. Its main claim to fame seems to be that it sucks less than Puppet and Chef.

I used to really enjoy using Fabric. I only ever tolerated Ansible. I wonder if building on Fabric might have resulted in a better devops tool than Ansible provides.


For smaller setups, I've been very happy with Fabric and the declarative configuration addon Fabtools - https://github.com/ronnix/fabtools


Answering my own question after looking at the code:

https://github.com/hashicorp/hcl#why

It's JSON-compatible, at least (valid JSON is valid HCL).
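To sketch what that compatibility looks like, here's a minimal HCL fragment in the style of an Otto Appfile (the attribute names here are illustrative, not a guaranteed match for Otto's actual schema):

```hcl
# HCL form: braces for blocks, = for attributes, comments allowed
application {
  name = "my-app"
  type = "ruby"
}
```

Because valid JSON is valid HCL, the equivalent JSON document (`{"application": {"name": "my-app", "type": "ruby"}}`) should parse with the same HCL tooling.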


The config format looks like the same format as Terraform.


It's HCL. Same format used by Terraform and the just-announced Nomad.


Fails on Windows:

		the following errors and try again:

		vm:                                                                                                            
		* The host path of the shared folder is missing: C:Projectsotto-playground                                     
		* The host path of the shared folder is missing: C:Projectsotto-playground.ottompiledppoundation-consulpp-dev

		Error building dev environment: Error executing Vagrant: exit status 1

		The error messages from Vagrant are usually very informative.                                                  
		Please read it carefully and fix any issues it mentions. If                                                    
		the message isn't clear, please report this to the Otto project.


Easy to reproduce, latest everything: otto, Vagrant, VirtualBox


"Hah" says the person who is trying to reproduce this in about a month


Same, despite running under Cygwin (babun).


So the supported environments are available at https://www.ottoproject.io/docs/apps/index.html, and they're Go, PHP, Node.JS, Ruby, and Docker.

I hope Python gets added to that list as a first class environment in the future.


I thought the omission of Python to be... strange.


Probably punted because they couldn't decide between 2 or 3.


Do you have an axe to grind? It's not terribly hard to support both.


I don't understand why so many people are so scared of / not interested in system administration. It's not that hard. Why does it have to be abstracted? Bringing in more and more layers makes everything slower and more complex, for sure. The only thing that is really hard is configuring and managing an email server.


I agree, in spirit. Setting up an app server, a web server, a MySQL database, or a cron job is pretty easy once you get the hang of it. There are a lot of complex sysadmin tasks -- running a BIND server, an email server, a Cassandra cluster, etc. -- but that's not what the majority of developers are doing.


I don't understand it either, but as an SA I also see a lot of people who are incapable of thinking 5 minutes ahead to the next problem, which is baffling because it's pretty much the same as programming.

You set up a cronjob: did you code the script well, who does it notify, how do we know when it fails, what's the recovery strategy?


You mean like an in-house metal server? Well because I need to adjust resources on the fly sometimes, or experiment with different configurations, or test upgrades on my own.

That's why you use a virtual machine.

If you're wondering why to use a scripted provisioning system, it's to have that code under version control and shared between developers.
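As a sketch of that idea, here's a minimal Vagrantfile that could be committed to a repo so every developer boots an identical machine (the box name and package list are placeholders):

```ruby
# Vagrantfile -- checked into version control alongside the code
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"    # base image, pinned for the whole team
  # Inline shell provisioner: runs once on `vagrant up`
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update -y
    apt-get install -y build-essential git   # whatever the project needs
  SHELL
end
```

With this in the repo, `vagrant up` gives each developer the same environment, and changes to it are reviewed like any other code change.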


Which tells me that you have never worked on a platform with multiple servers and a complex network and storage situation. At least not for the few years it takes until your infrastructure bitrots away and you're fighting to keep it all together most of the time. But by then you're used to the mess. No big deal.


I love Vagrant. At this point, every repo I contribute to has a Vagrantfile. I've turned dozens of coworkers onto Vagrant. That said, I am not a fan of the full Hashicorp ecosystem. Reading through the Otto site, and comments on HN, my first reaction is that my favorite tool has been selected for planned obsolescence.


I had sort of the same reaction. I just learned a bit of Vagrant recently. I needed something to set up a simple three-node environment for a tutorial I'm doing, and the best starting point happened to be a Vagrant project. It worked well enough that I've been thinking of other uses for it, especially as a basis for automated tests on the distributed system I work on (our current infra for that frankly sucks).

Then I read this, and it doesn't look like it's a Vagrant successor in any way that's useful to me. I don't want it to try and figure out service dependencies for me, because I can absolutely guarantee that it's unable to do that for the component I care about. I don't want it fiddling with DNS. I think using the same description but different commands for dev vs. production is a terrible idea. They say Otto does application-level instead of machine-level configuration, but machine-level is what I want. They say multi-VM is too heavyweight but that's also what I want. It's opinionated in all the wrong ways. Everything in https://ottoproject.io/intro/vagrant-successor.html makes it clear that Otto is fundamentally different from Vagrant, which totally belies their claim (at the end) that it will replace Vagrant in any significant way.

They should just come right out and say that they've created something different on top of Vagrant. Maybe it's cool, but it's not a successor. This brand hijacking just makes them seem fickle or shifty. Now I think I'll just leave Vagrant behind while my investment in it is still small, and learn one of the bazillion other tools that I could use to accomplish the same thing.


Otto uses Vagrant under the hood. Vagrant isn't going anywhere as far as I can tell.


There's a lot of magic here. I'd love to know what's going on beneath the hood.


That magic will surely prove to be problematic. I see lots of potential issues. It automatically installs other Hashicorp tools, presumably outside of your system's package manager, which means that it is a package manager in addition to whatever else it does. Where does it install software? How does it verify integrity? Does it build from source or use pre-built binaries? If they are pre-built binaries, how do I get the corresponding source code and build scripts? Does it bundle all other dependencies? How do I make sure I get security patches for the whole dependency tree? I don't expect that Hashicorp would have satisfactory answers to all these questions.

Furthermore, automatic dependency installation sounds like it will make reproducibility difficult, and I imagine automatic application type detection will fail in spectacular ways. What happens when the magic doesn't work? The Appfile looks like yet another ad-hoc domain specific language that you need to learn that has the usual major deficiencies compared with using an existing general purpose programming language.


I love the idea in general, especially using git URLs as dependencies and using the Appfile from their repos to bring in those services (and their dependencies). That keeps ownership with the people who know best (the developers of a given service) instead of centralizing it. But all this sounds very ambitious.

I'm especially curious how to configure dependencies. You might need to create tables in a database which is set up as a dependency, but also need to support restoring from backups or setting up replication. Besides that, some of this configuration should belong to the owner of the service using the db (db names, etc.), and some to the owner of the db (global server settings, limits, etc.). On top of that, some configuration needs to happen at runtime, so you can't just update the Appfile.

Anyway, this sounds like an awesome "UX" - let's see how it plays out in reality.


Does this use Heroku buildpacks or do they reinvent the wheel?

Edit: Otto appears to do a lot of things. But the part that everybody's complaining about is the "magic" part, the part that sets up a system automatically based on the language of the app. Heroku buildpacks also perform this magic, and have been open sourced and have a large community that helps maintain them. They're useful outside of Heroku -- for instance you can use them with docker via https://github.com/progrium/buildstep

It seems crazy to me that Hashicorp would try and reinvent this wheel. It's a problem that's fairly easy to do as a proof of concept, but it's the niggling details and number of combinations that can really explode.


I've worked on the Cloud Foundry buildpacks team. Several of Cloud Foundry's default buildpacks are downstream from Heroku's (Ruby, Python, NodeJS) and several aren't (Java, Go, PHP).

Heroku's buildpacks are open, but they build them for their own purposes. In particular a large part of my work involved recreating behaviours added to support this or that Heroku change (particularly STACK, god what a mess that was), dealing with binaries being silently substituted, the messy statefulness of their staging architecture.

Oh, and most importantly, we made it all work in a disconnected environment. Which Heroku never intended.

Speaking individually, not as a Pivot, if you didn't have to, I wouldn't recommend starting with Heroku's buildpacks. They solve a lot of problems, but no small part is solving Heroku's problems.


It's interesting you say that, because Cloud Foundry recently built a requirement for the environment that runs in their Warden containers (https://github.com/cloudfoundry/stacks) into their buildpacks, which caused applications on CF distributions that used other stacks to break when using their buildpacks.


I was partly responsible for that work. We did it for the same reason as Heroku: Ubuntu 10.04 passed out of LTS.

CF buildpacks explicitly state which binaries should be used with which stack in the manifest.yml file. If you had a breakage, feel free to report it on the relevant github repo.


In that analogy Heroku buildpacks are more like the boot the police put on your car.


Why not choose a name that is not already used by other project?

There is already a pretty popular library for android named Otto http://square.github.io/otto/


"If your application depends on other services (such as a database), it'll automatically configure and start those services in your development environment for you." -- Can you guys set up some tutorials that show this? You have a set of instructions for a Rails AWS deploy, but it would be nice to see how to set up Postgres etc. with the dependencies.


For the love of all that is holy, why do their sample code fragments have `smart' quotes in them?


If I understand correctly, this is actually a bridge between several existing Hashicorp tools like Vagrant, Terraform and Consul (as hinted in "What is Otto?") integrated into a one-stop solution.


It's an abstraction layer that uses them behind the scenes. If it can avoid being a leaky abstraction, it could move away from the current underlying tools to future tools and built-in functionality.


> Vagrant is a mature, healthy project that is continuing to grow every day. We are committed to supporting Vagrant for the foreseeable future and will continue to release new versions. Otto is our vision for the next generation and will be developed alongside Vagrant.

Yeah, I agree. That's why there will be a gap, maybe 5 years, until Otto becomes mainstream. The ecosystem around Vagrant is so good that I even got it supported in my IDE (WebStorm). I don't plan to use Otto before I can enjoy all the benefits of Vagrant that exist today.


The real world has taught me that 'magic' only works in some specific cases, and mine usually isn't one of those. That is why anything that claims to be smart gets on my nerves. Instead, I prefer tools that are clearly scoped, do their own thing extremely well, and are designed to complement each other and work well together, AND have good documentation. Yes, Unix command-line tools are good examples (though not all of them have good docs).

There, I will vote for that option over any other tool that claims the ability to do things automagically.


Wow, there goes my business idea. Kudos.


Why? If someone else is doing it that probably means there is a market for it.


Sure, but could I compete with free?


If you or someone created something that was basically Heroku in a box that could then be hosted anywhere. I would be interested.

I want to build apps, not worry about hosting and servers.


Did you try http://www.openshift.org? The self-hosted version is basically what you described, at least for the JVM languages.


I have tried it, but it just doesn't hold up as well as Heroku. I still use and love Heroku, but it would be nice to have a similar option I could host anywhere on my own terms.


Cloud Foundry. It runs on AWS, vSphere and OpenStack. Apache 2 license, IP belongs to an independent foundation, not a single vendor.


So you seem to actually worry about hosting and servers, despite you saying you do not ;)


If you are talking single server solutions, then https://github.com/progrium/dokku


Yes. Especially if you have a good tool that you pair with 'paid for' consulting. Consider RedHat or Canonical (among others).


I feel like I am in the minority here, but I am insanely excited for this.

It is precisely what I have wanted for a long time and the ability to customize and override defaults where wanted seems to handle most of the complaints that people in this thread are mentioning.

I can't overstate just how much I don't want to have to stay up to date with all the various best practices for deployments. I have zero interest in that, and it is currently a fairly expensive problem for many to fix.


Otto looks great, and is probably a substantial improvement over Vagrant.

My biggest concern is that it still relies on VirtualBox for local dev. VirtualBox is unreliable and slow, especially its file mounting driver.

I'd love to see this evolve to use a different virtualization solution, maybe something based on the OS X Hypervisor Framework (ala https://github.com/mist64/xhyve)


Vagrant works with Parallels Desktop (free plugin, requires Parallels Pro in v11+) and VMware Fusion (with a paid plugin).


If Otto's Appfile mimics Heroku's app.json file (https://devcenter.heroku.com/articles/heroku-button#creating...), that'd be a huge win to me in having more sustainable Rails open source projects.


Seems like a good start to a useful project! I'm a bit turned off by the lack of customisation options for the servers, though. I would like more options in terms of what gets installed and what doesn't (I don't need Bazaar, Mercurial _AND_ git installed, for example, and I'd like nginx instead of Apache).

I'll definitely be keeping an eye on the project!


Heads up blank page for docs on custom types: https://ottoproject.io/docs/apps/custom-plugin

Eagerly waiting to see what goes there eventually - very exciting!

Also, will there be a way to create custom Infra Types?


Never used Vagrant that much. How does Vagrant/Otto compare to Docker/Docker Compose/Docker Swarm?


Really nice to see this. My co-worker and I had this feeling of needing something like Otto when we first used Vagrant... I remember saying that it should definitely be a product. We ended up writing a Python script for our use case that makes life really easy for new developers joining us.


Does this mean that development focus will shift from Vagrant to Otto?

Is Otto meant to replace Vagrant?


Maybe, eventually? But not right now.

https://ottoproject.io/intro/vagrant-successor.html


Maybe Vagrant with Docker as a backend will get some love now? That is, native Docker (i.e. on a Linux host), not just a separate VM to host Docker. The fact that one can't set a static IP address using the Vagrantfile is annoying.
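For context, with the VirtualBox provider a static address is a one-liner in the Vagrantfile; the complaint above is that the native Docker provider has no working equivalent:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  # Static private-network address: honored by the VirtualBox provider,
  # but not by the native Docker provider at the time of writing.
  config.vm.network "private_network", ip: "192.168.50.4"
end
```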


Unfortunate name, there's a popular Java open source library called Otto.


There are a lot of things called otto - https://github.com/search?q=otto&type=Repositories

doesn't mean it's a bad name


Yeah, but look at the number of stars and forks. Square's Otto is a pretty popular library.

Side note: I don't know why someone is downvoting people for pointing this out. I mean, yeah, they're not going to change the name, I guess, but the next big project really shouldn't choose a name that's already taken by a popular project.


One thing Vagrant does right is to use a DSL. I have no idea why this has been abandoned in favor of the common error of inventing Yet Another XML/JSON/YAML/TOML/INI.


there is already go project named otto: https://github.com/robertkrimen/otto


there's also an android event bus by square http://square.github.io/otto/


The last commit was a year ago. Unless you trademark a name, or at least have active development, I don't see anything wrong with using the name.


Nothing wrong with the last commit being a year old when the project's pretty mature.


I think I'll stick with Vagrant and Ansible.

I'm not going to trade it for magic to be honest. Unless there is a compelling reason to do so that I'm not seeing/reading.


Wonder if this attempt will work properly with Windows, unlike Packer and Vagrant. Path issues, unreliable builds, ugh. This isn't the portability promise I expected.


Strange. What issues were you having?

We've been using Packer and Vagrant with Windows builds for the better part of a year now and it's been rock solid.

We actually leverage it into our Windows Deployment Server to create automatically updated OS Images to be deployed to production machines.


WinRM dropouts when building images in Packer, random unexplained failures, the network not coming up in Vagrant, general inconsistency.

I've partially written my own version of it in powershell and Hyper-V now.


It seems that my ISP is poisoning my Internet connection once again :(

    Error building dev environment: Get https://checkpoint-api.hashicorp
    .com/v1/check/vagrant?arch=amd64&os=linux&signature=&version=: x509:
    certificate is valid for www.example.com, not checkpoint-api.hashicorp.com
    https://checkpoint-api.hashicorp.com/v1/check/packer
    ?arch=amd64&os=linux&signature=&version=


How does this compare to azk http://www.azk.io/ ?


I'm not seeing anything about Python/Django. Will that be added eventually?


"The creators of Otto are also the creators of Vagrant. After working on Vagrant for over six years, we've learned a lot and we believe Otto is a superior tool for development plus so much more." - another successful project written on Go! Nice!


Does this work on Digital Ocean droplets? Vagrant doesn't.


How so? The DigitalOcean provider for Vagrant works great: https://github.com/smdahlen/vagrant-digitalocean
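For reference, a minimal Vagrantfile using that plugin looks roughly like this (the token, image, and region values are placeholders; check the plugin's README for the current option names):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider :digital_ocean do |provider, override|
    override.ssh.private_key_path = "~/.ssh/id_rsa"
    override.vm.box = "digital_ocean"       # dummy box required by the plugin
    provider.token  = ENV["DO_TOKEN"]       # API token, kept out of the repo
    provider.image  = "ubuntu-14-04-x64"
    provider.region = "nyc2"
  end
end
```

After that, `vagrant up --provider=digital_ocean` creates the droplet instead of a local VM.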


Something Else!!!


[flagged]


Do you mind backing up your opinions? Vagrant is a very widely used and beloved tool.

edit: I just noticed your profile indicates that you work at Docker, Inc. I'm trying to figure out whether I feel that plays into your opinion or not but I would expect a bit more professionalism at least.


No this isn't Docker speaking. This is me.

I've always felt that Vagrant was a solution in search of a problem, and no, being at Docker hasn't changed that either way. I have colleagues who love Vagrant and they are certainly entitled to that opinion, as much as I'm entitled to think that it's horrible.

I honestly haven't seen any value brought from Vagrant or its ilk to anyone. There is temporary relief from some very real problems of transmitting development environments around the place. But the tool ends up creating pets that you have to care for over long periods of time. You have to deal with entire operating systems when you really want to abstract away the OS in favor of getting some real work done.

Configuration files as infrastructure aren't particularly interesting to me. I don't see how building annoying bits of configuration will fix anything in the long run.

Also note that while I think the HashiCorp infrastructure-as-code attempts are quite lacking, their Consul and Vault tools are really nice. :)


> Configuration files as infrastructure aren't particularly interesting to me.

How is this any different than a Dockerfile? Vagrantfile is certainly more abstract I guess.


I agree. Last time I used it in mid 2014, it was unusable. I spent more time trying to fix and work around its bugs than I would have building a VM from scratch, so eventually I gave up and went back to my plain VM. It's great for trying stuff out, but anything that needs to work for more than one session was extremely problematic with a lot of networking problems that required destroying the VM and rebuilding it constantly.


Thanks for the insightful and well argued opinion.





