This looks like a great little system. Congrats to all the team. So many questions!
The most important one: Is that Patrick Stewart doing the voice over in your video? :-)
My only concern with the design is that it's not clear clustering gives you true redundancy: the 48-port 10 Gbps switch appears to share a PSU with the controller, which could make it a SPOF for a rack of servers?
I assume the switch is essentially running its own control and forwarding planes, completely separate from the x86 hardware, and can be controlled by other appliances in the cluster?
I think the idea is to have multiple Nebula controllers, one per rack or so? From the second page of The Register article:
"You can run a cloud with a single Nebula One controller, but the system was designed to have multiple controllers for high availability and resiliency, says Kemp. The Cosmos operating system can currently span as many as five controllers in a single OpenStack controller domain and automatically load-balances work across controllers and the five racks of servers attached to them. With those five racks, you can have on the order of 2,500 cores and 5PB of storage, depending on the servers you pick."
I'm saying that a rack full of servers is all hooked up to a single controller. You then have multiple, interconnected controllers, each one connecting a rack of servers. This is meant to give you HA and resiliency.
However, that is only true if the switch control plane is separate from the x86 hardware running in the controller (because the HA and redundancy here are protection against controller hardware failure). I'd hope or expect that to be the case, because Nebula has smart people, but I don't see any material to that effect yet.
(Typically a DC server farm will have two switches per rack of servers, with each dual-homed server having a connection into each. That way, if one switch dies, you still have a working data path, though only half the bandwidth.)
This is correct. The switch will continue to work if the x86 hardware fails. In this case the other controllers will take over management of the servers that are plugged into the controller with the failed x86 hardware.
Hi, first off...the Nebula One looks great, really hope to try it out someday at work.
Regarding the underlying framework, I'm not very familiar with OpenStack (we use CloudStack right now), but is this a proprietary fork of OpenStack or are you guys moving the project forward upstream?
Hi, I'm a Nebula employee and I have been a key contributor to OpenStack since its founding. We have not forked OpenStack and we are committed to making the upstream project a success. We have some proprietary pieces around the control plane, storage and UX, but we will continue to participate heavily in upstream development and plan on integrating more OpenStack pieces as they stabilize.
CloudStack as it stands currently is a bit scary to me. They moved from being fully supported by Citrix to an Apache project which may or may not be fully supported by Citrix? Given that the project has few contributors, unlike OpenStack, I would see losing Citrix support as essentially the project dying.
This looks very compelling. There is huge demand from large (enterprise/govt) companies for cloud solutions they can run on in-house hardware. I'm actually not that familiar with what you get from OpenStack out of the box, after the Nebula has provisioned everything, can the customer just start writing over HTTP? NFS? Is Active Directory integration supported? What is done to ensure data integrity on the storage long term?
OpenStack provides virtualization, storage and networking services. You have the basic gist: once Nebula provisions everything, you can interact with the system immediately, provisioning virtual machines and storing data.
You can use a browser-based web dashboard or REST APIs to interact with the system. Object storage (think S3) is exposed externally; block storage is supported, but it isn't currently exposed outside the system (it's for VMs only). AD integration is in the works.
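To give a sense of what "interact with the system" via the REST APIs looks like, here's a minimal sketch in Python. It assumes a stock Keystone v2.0 identity endpoint and the Nova compute API; the URL, tenant and credentials are made-up placeholders, not anything Nebula has published.

    # Minimal sketch: authenticate against Keystone v2.0, then list VMs via Nova.
    # The endpoint URL, tenant, and credentials are hypothetical placeholders.
    import requests

    KEYSTONE = "http://controller.example.com:5000/v2.0"
    USERNAME, PASSWORD, TENANT = "demo", "secret", "demo"

    # Get a scoped token plus the service catalog.
    resp = requests.post(
        KEYSTONE + "/tokens",
        json={"auth": {"passwordCredentials": {"username": USERNAME,
                                               "password": PASSWORD},
                       "tenantName": TENANT}},
    )
    resp.raise_for_status()
    access = resp.json()["access"]
    token = access["token"]["id"]

    # Pull the compute endpoint out of the service catalog.
    nova_url = next(svc["endpoints"][0]["publicURL"]
                    for svc in access["serviceCatalog"]
                    if svc["type"] == "compute")

    # List this tenant's servers through the Nova REST API.
    servers = requests.get(nova_url + "/servers",
                           headers={"X-Auth-Token": token}).json()["servers"]
    for s in servers:
        print(s["id"], s["name"])

The same token works against the other endpoints in the service catalog (object storage, images, and so on), which is roughly what the dashboard is doing under the hood anyway.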
I was under the impression that Nebula's sole mission was to further OpenStack, but I suppose that was a misunderstanding. To clarify, is it Nebula's long-term goal to sell hardware?
Having a hardware device reduces the number of things you have to worry about not being under your control.
For example, what networking are you going to use? Arista? Great. What's their latest supported API for Quantum? Well, it just changed (as in, last I heard they had a pull request open to OpenStack Quantum for support). OK. So Arista's API is now stable, great. Do they actually support what you want to do with your end devices?
What are you going to install Glance onto? Is the hardware going to be good enough? SSD? How fat will the pipe be into the switch? Dual-homed? Quad? Is that going to be fast enough if you want to spin up 100 VMs at once?
Yeah, many people are furthering OpenStack by 1) building on it, in an open-source (or combined OSS + commercial) fashion, and 2) selling consulting and services. We are doing it by making an appliance that you plug servers into and get OpenStack.
Developing the hardware gives us full control over orchestration and updates and many many other areas.
We do have many of the contributors of OpenStack under one of our roofs (Mountain View, Seattle, remote) and we do contribute a ton to OpenStack, both in code and in the Foundation. As far as the company goes, yes, we are a computer systems company, making hardware and software.
Plus a per-server charge after the initial 5(?). The Register article was questioning the cost of the additional server license (5-10k per node), but having seen a lot of the pricing out there, it seems to be in line with, or a little cheaper than, what the big boys will charge for their management platforms (without the services).
Honestly, if I were CTO at a start-up: OpenStack, a stack of cheap machines maxed out with memory and disks (if required), Open vSwitch, and a standard 48-port 10 Gbps switch (though you can probably get away with 24 ports), open to all staff.
Setup of OpenStack may take a little longer, and it won't scale without serious work, but it will be cheaper, and easier to control costs, than spinning everything up on AWS.
Oh for sure. OpenStack is going to be way cheaper than AWS in the near AND long term, if you know what your growth rate will look like. I've said it time and time again on here when 'Cloud' stuff comes up: you MUST, as a company, do the math on whether 'Cloud' or in-house services (cloud?) will be more reasonable.
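To make that concrete, here's a rough back-of-envelope sketch of the kind of math I mean. Every figure in it is a made-up placeholder; plug in your own hardware quotes, colo costs, and AWS pricing.

    # Back-of-envelope comparison: amortized in-house rack vs. always-on AWS.
    # All figures below are hypothetical placeholders.
    HOURS_PER_MONTH = 730

    # In-house: hardware amortized over 36 months, plus monthly opex
    # (power/cooling, colo space, a share of an ops engineer).
    hardware_cost = 150000          # one-time: servers + switch + controller
    amortization_months = 36
    monthly_opex = 2000 + 1500 + 4000

    inhouse_monthly = hardware_cost / amortization_months + monthly_opex

    # AWS equivalent: N always-on instances at a flat hourly rate.
    instances = 40
    hourly_rate = 0.50              # per instance-hour

    aws_monthly = instances * hourly_rate * HOURS_PER_MONTH

    print("in-house: $%.0f/mo  vs  AWS: $%.0f/mo" % (inhouse_monthly, aws_monthly))

The crossover depends entirely on utilization: steady, predictable load favors the in-house numbers, while bursty or uncertain growth favors paying by the hour.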
I think the main pain point that is being fixed here is the initial setup of OpenStack. I've been doing this stuff for many, many years and it gives ME a headache. It truly is that bad (and deploying with Puppet or Chef only lessens the pain while adding different layers of complexity).
Really awesome. This sounds like an awesome market to get into. I just spent the last three days toying with various data center virtualization solutions and they all suck (even OpenStack) in terms of usability and accessibility. It would be awesome to plug in a server and have all the power that OpenStack promises up and running; maybe Nebula can do it.
I've spent some time deploying OpenStack on Cisco UCS and this box would seem to replicate much of that functionality while adding versatility.
Pretty much exactly what I've wanted.
The benefit of converged infrastructure while leaving a choice in compute vendor and integrating the controller, interconnect and orchestrator in a single unit.
I really love this. I have been waiting for somebody to do this to OpenStack and for it to be a team of original contributors makes me even more excited. I would love to "cloud-enable" our server infrastructure.
Another random question: does this require any special software on the managed servers? For example, HP iLO Advanced, for additional server management features? (Don't think so, but worth asking).