> This address is only accessible from other servers in the NYC2 region that have private networking enabled.
Probably a really stupid question: is that interface only accessible from other servers in the same Digital Ocean account (in the NYC2 region w/priv networking enabled), or from every machine across all DO accounts (in the NYC2 region w/private networking enabled)?
It might be worth noting that in the tutorial. While this has a lot of great benefits for people that move a lot of data around between their servers, it doesn't really improve security at all.
It's worth noting that more clearly on the main page, actually. Your parent question wasn't stupid at all. Not to mention the fact that the sales page hasn't even been proofed very well; part of it reads:
"Each new Droplet spun up in NYC2 can include a second interface on a network with no public internet access that is accessible from other Droplets have the private networking interface. You can enable shared private networking on your Droplet on the Droplet create screen.
Traffic sent between Droplets on across the private network"
Specifically "droplets have" and "droplets on across"
Additionally "droplets" is not an industry standard term. It's a term (afaik) that DO invented for their marketing. They might want to define that as well for anyone who lands on that page and doesn't want to explore the rest of the site. That's the type of thing that will stop people dead in their tracks when trying to understand what is going on. It's cute but I'd really rather read industry standard terms for things.
In one sense the security is "added". But in another sense it's a false sense of security. Because if someone wants to get at you they simply have to get a DO server in the same place and potentially exploit the fact that people have their guard down. (The closest example I can think of is people who have a firewall and don't spend as much time locking down the machines behind the firewall because they think they are covered.)
The real security this provides is that your firewall access policies are now much simplified. You can maintain a very reasonable back-end network of hosts that aren't exposed to the public Internet, spin up a droplet to be your jump/bastion box, run certificates, and lock SSH down to a sane source on each individual host (only the jump/bastion, not the public Internet).
Beyond that it adds no functional security - in fact port scanning on the inside will be much more fruitful with regard to services that default to starting on 0.0.0.0. With that in mind - make sure you're not exposing things that you don't mean to be on the backend.
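If you want to see the difference in practice, it comes down to which address a service binds. A minimal Python sketch (the private address is a made-up example of a DO-style 10.x one, and the port is arbitrary):

```python
import socket

PRIVATE_IP = "10.128.23.45"   # hypothetical private-interface address on this droplet
LISTEN_ON_PRIVATE_ONLY = True

# "0.0.0.0" answers on every interface, public and private alike.
# Binding to the private address keeps the service off the public side,
# but other droplets on the shared private network can still reach it,
# so it still needs auth and/or iptables.
bind_addr = PRIVATE_IP if LISTEN_ON_PRIVATE_ONLY else "0.0.0.0"

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((bind_addr, 6379))
server.listen(5)
print(f"listening on {bind_addr}:6379")
```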
According to Digitalocean the routing issue wasn't on their side. From what I could see, the cause of the latency was asymmetric routing. Some of my Amsterdam -> Amsterdam traffic had its return traffic sent via NY. This was resolved when they got more bandwidth and were able to disconnect Cogent some time last week.
When will they introduce IPv6? They could then allocate a /64 to each customer and the customer can then firewall off their own little corner. 2013 is not a year when IPv6 is optional.
"Once the initial rollout of [private networking to] the first region is finished we'll be moving to get IPv6 enabled in our NY2 region first and are targeting an October ETA for the first public beta!"
Just FYI - you can't technically carve up a /64 without breaking EUI-64 rules if you're talking about subnets.
With that said, you can still technically do just that or just break it up logically. You can have some interesting fun with routing between a few internal hosts. I've had some fun with Quagga on Linode slices in this regard.
I am not suggesting carving up a /64, but rather just allocating a /64 to every customer. Then I can allocate addresses out of the /64 to my own instances. I can further instruct them to not communicate on certain ports with anything outside of my /64. For example, I could tell my DB servers to only serve my /64, etc. I suppose if you are doing something cross data center then you want a larger allocation.
Linode currently allocates 1 address per instance I think? It's better than nothing, but really they should do a /64.
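If they ever do hand out a /64 per customer, the bookkeeping I'm describing is trivial with Python's ipaddress module. A rough sketch, with the documentation prefix standing in for a real allocation:

```python
import ipaddress

# Hypothetical per-customer allocation
customer_net = ipaddress.ip_network("2001:db8:1234:5678::/64")

# Hand addresses out of the /64 to my own instances
hosts = customer_net.hosts()
web_ip = next(hosts)   # 2001:db8:1234:5678::1
db_ip = next(hosts)    # 2001:db8:1234:5678::2

# "Only serve my /64": a simple membership check, e.g. in the DB server's allow-list
def allowed(client_ip: str) -> bool:
    return ipaddress.ip_address(client_ip) in customer_net

print(web_ip, db_ip)
print(allowed("2001:db8:1234:5678::10"))   # True
print(allowed("2001:db8:ffff::1"))         # False
```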
> Then I can allocate addresses out of the /64 to my own instances.
I think that would call for a /48† if you're going to split it further. Handing out a /64 per server regardless of how many servers a client has seems just easier to me.
One thing they could do, given the virtually unlimited number of addresses, is let you keep them (instead of reusing them for other customers) even if the servers are offline. You could create servers with IPs known in advance that way.
Which actually makes it even more convenient to reserve /48† per customer and then carve out /64 per server. So maybe that's a better route.
† or /56 but I don't know if the savings are worth the potential hassle.
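Either way, carving per-server /64s out of the customer block is a one-liner to enumerate. Again purely illustrative, with the documentation prefix:

```python
import ipaddress

customer_block = ipaddress.ip_network("2001:db8:abcd::/48")   # hypothetical per-customer block

# One /64 per server; a /48 yields 2**16 of them, so running out isn't a concern.
server_subnets = customer_block.subnets(new_prefix=64)
for i, subnet in zip(range(3), server_subnets):
    print(f"server {i}: {subnet}")
# server 0: 2001:db8:abcd::/64
# server 1: 2001:db8:abcd:1::/64
# server 2: 2001:db8:abcd:2::/64
```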
"and thought they wouldn't survive, but here they are and constantly upgrading."
I wish them well and they seem to be doing a good job as you are saying. But don't confuse being able to "survive" for 2 years approx (founded 6/2011) as a result of raising over 3 million dollars (ref: crunchbase) with long term survival.
One of the first things I learned back in the mid 90's was the expression "the great provider today can be the shit provider tomorrow" (that was in reference to bandwidth providers btw.)
I was getting really close to switching to another cloud hosting provider due to the lack of private networks. Great to see that Digital Ocean is staying a step ahead.
A cloud is not defined by high availability. Just look at Amazon's cloud - so many single points of failure. Amazon says to distribute your application across multiple availability zones for redundancy.
Cloud is defined as:
1. On demand automated provisioning
2. API interface for programmatic control
3. Infinite resources from the customer's perspective
4. Virtual addresses for physical location
5. Utility measured service
When people have multiple servers, often they need/want those servers to communicate. For example, I might have a web server and a separate database server. So anything on my web server that needs to make a database query would have to communicate over the network to the database server. Prior to this change, Digital Ocean users would have to do that communication over the public internet interface (public IPs), and the traffic would count towards their quota. Now, the servers can have private-side interfaces that aren't internet-facing, allowing this internal communication to take place on the internal network, avoiding the public interfaces and not counting towards the traffic quotas.
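Concretely, the only thing that changes in the application is which address it dials. A minimal sketch with made-up addresses, assuming a Postgres database and psycopg2 (any driver works the same way):

```python
import psycopg2  # assuming a Postgres database; substitute your own driver

# Before: the app dialed the database droplet's public address, so queries
# crossed the public interface and counted against the transfer quota.
# PUBLIC_DB_IP = "198.51.100.23"     # hypothetical public IP

# After: point it at the database droplet's private-interface address instead.
PRIVATE_DB_IP = "10.128.0.7"         # hypothetical private IP

conn = psycopg2.connect(host=PRIVATE_DB_IP, dbname="app", user="app", password="s3cret")
```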
The latency will be similar in uncongested conditions (it will stay inside the network edge), but private networks allow the switched network to broadcast less, allowing it to handle "more" traffic across switch backplanes and area networks.
Well, the nature of routing is such that, for communication between two VPS in a given datacenter, the traffic wouldn't really "go over the internet", but it ought to still help to have the internal traffic separated from the public side.
Say you have 3-4 cache servers that need to talk to one another. Rather than having to give each a public IP (even though the traffic isn't routed outside of DO), you can give them internal IPs so all of that traffic goes over the private interface instead of the public one. That's slightly better for security, though not perfect: everyone with a DO box can see your private network IP, so you still need to secure it.
Not everyone is born with intrinsic knowledge of systems administration. Deriding someone for attempting to expand their knowledge is disgusting and needs to stop.
There are lots of people who hack electronics or do desktop programming who have no idea how servers work even though they might be interested. We can't know everything and reminding people of their ignorance when they're asking to know more is discouraging. Stop it.
I spun up my first unmanaged server this morning on digitalocean and found the experience difficult (having next to no sysadmin experience) but now I've got it all set up it's fast and powerful.
I love the work these guys are doing and can't wait to see more from them.
Awesome, glad to hear that the experience turned around for you. We do have a ton of articles in our community section to help you get started as others have mentioned:
Let me preface by saying: I LOVE DIGITALOCEAN... However...
For the past two days I have been evaluating a production system for my client base (which is primarily in the North Texas and Oklahoma area). Here are my ServerBear results:
All in all, I decided to go with a managed Linode server. I'll be paying out the ass for it... but I think the bandwidth to my client base is more important.
EDIT: I host just about all of my other projects on DO and I love it :)
That's correct. DO does some filtering to prevent traffic from leaking to a different droplet's interface. However, we recommend that users protect both public and private interfaces with iptables filters and use encryption where the data stored or transferred is sensitive.
Great news for the NYC2 datacenter. I'm just waiting for SFO to have this feature, among other things. One other critical feature I think should be implemented is the ability to deploy instances to different physical hosts in master/slave setups, either automatically or manually[0].
Needs VLAN support; at least it's not on by default (yet), or someone might have the inkling to scan all internal IPs for people who didn't secure that networking interface.
Just signed up for DO mostly just as a layer of separation for when I am on IRC. Very impressed with them and thinking about many other possibilities now. Keep it up DO!
DO is really nice for this, and the bandwidth helps when you need to proxy traffic through it. For example, twitch.tv is very slow here, yet routing through DO I can reach 2.1mb/s (using twitch-dl I can reach full SSD disk speed!).
Essentially, yes. Sometimes as a proxy and sometimes straight from shell (irssi). My long term goal is to tailor an IRC bot to represent me virtually while I am unavailable (at work, etc).
Is this not bounded to a particular tenant? Meaning, if I have a droplet, can I hammer, DoS, or exploit-test other tenants? Obviously people should be giving these private IPs the same care and concern that they give their "public" IPs, but with many other vendors these are by default externally limited to images that you own, effectively providing layers of defense.
EDIT: Note that I ask this specifically because the term private networking may be misleading to some. These are non-publicly routed, but they most certainly aren't private.
That's not entirely true. While Rackspace does provide a shared private network for intra-DC communication, it also provides the Cloud Networks product that is capable of creating tenant specific networks. Think VLAN tagging for Cloud.
On that private network, you can use your own addressing, use multi-cast, etc. Much less limited and more secure than a shared private network. It's also free.
The last time I looked in the Rackspace docs, it looked like this was in the process of being rolled out ("production ready but will be available to customers in a phased release"). Is Cloud Networks considered fully supported now?
RackConnect is a product that allows us to link cloud servers in our public Cloud environment with servers in a dedicated configuration.
We are currently using RackConnect 2.0 which achieves this by attaching the shared private network to the dedicated environment and configuring the cloud servers network stacks to use the dedicated load balancer and firewall as their default gateway, so that all traffic flows through the dedicated config. Incoming traffic (or traffic from the dedicated configuration) will be routed out to the Cloud servers by the dedicated load balancer.
RackConnect 3.0 (coming soon) will provide the same service, but the connection from the cloud servers to the dedicated configuration will be provided by Cloud Networks, our SDN product. This simplifies the configuration and provides additional security to the traffic.
>Most other providers do not restrict private network either
EC2 has security groups, and the default behavior is that non-tenant machines cannot access your services (this is external to your image, presumably at the hypervisor or networking level; what you do in ufw/iptables is above and beyond this). I don't see any similar mechanism in the Digital Ocean world.
AWS supports VPC (Virtual Private Cloud). It lets you set up subgroups of VMs that are only network-accessible to each other, with explicit endpoints open (ex: just HTTP open to an ELB). It's recommended for all new deployments. We use it in our cloud deployment of JackDB and it's really pleasant to use. Plus it makes it really easy to set up a bastion SSH proxy as an endpoint (vs. having all your instances publicly accessible).
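For anyone who hasn't used it, the security-group side of that setup is a few API calls. A rough boto3 sketch (all the IDs are placeholders, not anything from an actual deployment):

```python
import boto3  # assumes AWS credentials are already configured

ec2 = boto3.client("ec2")

# Placeholder IDs: a VPC, plus the security groups of the ELB and the bastion.
VPC_ID, ELB_SG, BASTION_SG = "vpc-11111111", "sg-22222222", "sg-33333333"

# App-tier security group: nothing is reachable unless a rule below allows it.
app_sg = ec2.create_security_group(
    GroupName="app-tier", Description="app servers", VpcId=VPC_ID)["GroupId"]

ec2.authorize_security_group_ingress(GroupId=app_sg, IpPermissions=[
    # HTTP only from the load balancer's security group
    {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
     "UserIdGroupPairs": [{"GroupId": ELB_SG}]},
    # SSH only from the bastion's security group
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "UserIdGroupPairs": [{"GroupId": BASTION_SG}]},
])
```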
Assuming it's similar to Linode: nope, not bound. I ended up doing the very not-scalable thing of dropping packets at the iptables layer from any IP that wasn't my other Linode.
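The same blunt approach works on DO's private interface. Roughly, something like this to generate the rules (the addresses and interface name are made up; review before applying as root):

```python
# A (non-scalable, as noted) allow-list for the private interface:
# accept traffic only from known peer droplets, drop everything else.
TRUSTED_PEERS = ["10.128.0.7", "10.128.0.9"]   # hypothetical private IPs of my own droplets
PRIVATE_IFACE = "eth1"                          # commonly the second, private interface

rules = [f"iptables -A INPUT -i {PRIVATE_IFACE} -s {ip} -j ACCEPT" for ip in TRUSTED_PEERS]
rules.append(f"iptables -A INPUT -i {PRIVATE_IFACE} -j DROP")

print("\n".join(rules))   # review, then run with root privileges
```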
Sounds like something VXLAN is well equipped to handle - VLANs tunneled over IP to create private networks in large cloud environments: http://en.wikipedia.org/wiki/Virtual_Extensible_LAN
I wonder if DO will be using something like this to mitigate direct attacks (though VXLAN wouldn't solve link oversubscription; there are other mechanisms in place to solve that).