GoBGP: BGP Implemented in Go (github.com/osrg)
168 points by ryancox on April 18, 2016 | 126 comments



This is the sort of thing Go really shines on: network and infrastructure services that would ordinarily be provided by big ugly C programs, where the latency requirements are significant but not as bad as raw packet forwarding. If your current best alternative is a C program, I'm not sure why you wouldn't seriously consider replacing any of the following with Go (or Rust) programs:

* Authority DNS

* DNS caches

* ntp

* SMTP

* SSH

* IMAP (added later)

* SNMP

* PBX/Telephony

Fortunately, as time goes on, fewer and fewer people need to run these services at all.


...well, because of the primary shortcoming of the Go system: lack of standard infrastructure to manage the inevitable updates.

Let's say I have replaced bind, unbound, ntpd, postfix, openssh, dovecot, snmpd and asterisk with Go-written equivalents. Three weeks later, there is a bug found in the standard Go TLS library.

My distro ships all the packages noted above, but not their Go-equivalents, so my work load now includes monitoring security-announce lists for eight different products, where before I monitored the security-announce list for my distro.

I need to be able to rebuild all eight systems myself, rather than getting automatic package updates to my test systems, and then promoting the packages through alpha and then production. Go is nicer than some other languages about that, but it builds binaries, not packages.

Next:

I'm pretty sure you can't build an snmpd without ASN.1 parsing, and ASN.1 parsing is the very model of a fraught and perilous splatter-fest. Will the Go ASN.1 parser be better maintained than libtasn1? Maybe, maybe not. Repeat this for everything else.

Can these problems be solved? Sure. Are they ready right now? Not that I'm aware of. Please enlighten me, if you have good answers.


No. I've written SNMP from scratch in Ruby, Python, and C++. DER ASN.1 for X.509 might be treacherous (simply in the sense that any mistake you make at all will be ruinous), but that's just not the case for SNMP's BER.

The whole point of using Rust or Go instead of C is that the "peril" of implementing things like ASN.1/BER is pretty much eliminated.

As for your former point: I don't follow. Go's deployment infrastructure is a superset of C's, and, if you're a masochist, almost everything in C's deployment toolkit is available to Go projects as well.


The viewpoint you were espousing was, if I understand correctly: I should replace all my existing services with Go/Rust equivalents, unless they handle packets directly.

My objection is that you are advocating this in the same narrow-focused way that people advocate node with npm, python with pip, ruby with gem: little or no cooperation with the whole system is available yet. This is perfectly fine from the point of view of a group which does one thing, but not from my point of view, running large numbers of diverse systems.

When libfoo gets updated, all N packages on the system which use it via dynamic linking get the benefit as soon as the packages restart. This is highly desirable.

If Go-libfoo is updated, each of those N packages needs to be rebuilt, but I don't have a programmatic way of finding out.

If there are N teams developing those packages, some of them will be faster off the mark than others, and now I have a window of vulnerability that is larger than the one I had when I could update libfoo on day 1.

You have multiplied my workload. I won't do that without a really good reason.


If the Debian people don't want to include Go for some logistical or religious or religiously logistical reason, that's fine with me. I don't think people should run critical infrastructure from Debian releases --- when things go wrong, you want to be prepared to patch source on a moment's notice, rather than waiting for the upstream synchronization dance --- but hardly anyone seems to agree with me on that, either.

But I notice you didn't respond to my SNMP point, which is disappointing, because I was hoping that at least some fake Internet points might accrue to my otherwise fruitless efforts at implementing SNMP from scratch three separate fucking times. Can I at least be rewarded for that by winning a dumb message board argument!?

There's even a cool trick to implementing BER encoders I could have talked about!

Instead, it looks like the thread is going to be about dynamic versus static linkinnzzzzzzzzzzzzzzzzzz.


> when things go wrong, you want to be prepared to patch source on a moment's notice, rather than waiting for the upstream synchronization dance

This is IMHO the most backwards logic ever. Everything about dpkg and apt makes this process easy, from running an internal custom packages repository, through to "apt-get source" for any system package on a moment's notice, through to having everything just magically revert back to Debian-patched versions as part of the normal upgrade process assuming you versioned your custom packages carefully.

A well-run Debian shop is a thing to be seen; unfortunately it's not cohesively documented in any one location on the Internet. If there's any problem encountered in the wild on the Internet, after 22 years there is almost certainly a solid process built into Debian to handle it.

Compare that to home directories full of tarballs of binaries with dubious compiler settings and god knows what else. I have no idea why someone would advocate against the Debian approach, assuming of course they've actually done sysadmin work anywhere aside from the comfort of an armchair.


It's super easy to install a patch using dpkg and apt. I didn't question that.

The problem is that you have to wait for the patch to be bundled. I've watched that take a long time, while services I knew to be vulnerable had to sit there and be vulnerable because the organization deploying the service didn't have any infrastructure to apply a custom patch.

Consider the degenerate case, where you have to wait for a Debian patch because you paid for the research that found the vulnerability. More than one of my clients wound up in that situation. But that's not the only way to learn about a simple, critical source patch that won't land in a Debian patch for days.


You bundle or create it yourself:

    $ apt-get source bash
    $ cd bash*/
    $ quilt new my_urgent_patch
    $ quilt add file1 file2     # snapshot the files before modifying them
    $ patch -p1 < ~/my-urgent-patch.diff
    $ quilt refresh
    $ dpkg-buildpackage ...
    $ dupload ../*.changes
    # trigger apt-get upgrade on target machines


That's fine. I don't care what you do with your self-built binaries once you manage to build them yourself. But too many firms have no infrastructure in place to do that. They wait for upstreams to synchronize to fix security flaws that they could fix directly.


scp and "dpkg -i" are readily available, but it's really not that much work to setup a repository (aptly, reprepro, apt-ftparchive etc.)

I know I'd personally choose maintaining the system packages and, where possible, put extraneous language dependencies in packages too (fpm comes in handy, as it can deal with a variety of package formats: gem, npm, etc.). It makes life a lot simpler when it comes to administering a bunch of systems and trying to keep things consistent.


> when things go wrong, you want to be prepared to patch source on a moment's notice,

That's a misunderstanding. What Debian, and every other mature Linux distribution, gives you are the tools not only to rebuild a package on a moment's notice (try to build any non-trivial third-party package sometime, and compare that to rebuilding the Debian package) but also to keep track of those patches over time (where did it originate? bug id? upstreamed yet?) and to keep a bird's-eye view over deployment (which nodes? when?). You need to ask yourself those questions, because your auditor will.

Good for you for implementing SNMP, and for using the f word in writing, but maintaining infrastructure is something else. Your reason not to use Debian for critical infrastructure should be contractual liabilities and/or support reasons; its build tools and associated policies are solid. It's not the only way to roll, but it's a perfectly valid one.


What is the cool trick to implementing BER?


you encode it back-to-front

i don't even

care

anymore.


> you encode it back-to-front

sorry, but i still don't grok it. for example, if you take a person 'object' defined as :

    Person {
        name string (or equivalent asn.1 type-name, with type-identifier == 1)
        age  int    (or equivalent asn.1 type-name, with type-identifier == 2)
    }

since b.e.r is basically a tlv (type-length-value) encoding, a person with name "james" with age '10' i.e.

james_person = Person(name = 'james', age = 10)

gets hex-encoded as :

"james" : 01 05 6a 61 6d 65 73

"10" : 02 01 0A

so the whole thing looks like this:

"01 05 6a 61 6d 65 73 02 0A".

ofcourse this would be prepended with appropriate type-number for 'Person' with corresponding length.

if we assume that 'Person' gets a type-identifier == 3, then 'james_person' instance would be encoded as:

"03 06 01 05 6a 61 6d 65 73 02 0A"

where '06' == total length (6 bytes) of this instance of person object.

may you please elucidate your trick with the above example? thanks for your insights!
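
If the trick is what I think it is (guessing here), "back-to-front" means you fill a buffer from the end and prepend: children get encoded before their parents, so every length is already known by the time you write the header, and you never have to go back and patch lengths in. A rough Go sketch, using the made-up tag numbers from the example above (not real ASN.1 universal tags) and short-form lengths only:

    // Back-to-front BER encoding sketch: fill a fixed buffer from the end
    // toward the beginning. A constructed type's length is simply "how far
    // the cursor has moved since its content started".
    package main

    import (
        "encoding/hex"
        "fmt"
    )

    type berEncoder struct {
        buf []byte
        pos int // next free byte, moving toward 0
    }

    func newBerEncoder(size int) *berEncoder {
        return &berEncoder{buf: make([]byte, size), pos: size}
    }

    func (e *berEncoder) prependByte(b byte) {
        e.pos--
        e.buf[e.pos] = b
    }

    func (e *berEncoder) prependBytes(p []byte) {
        e.pos -= len(p)
        copy(e.buf[e.pos:], p)
    }

    // prependHeader writes length (short form only) and tag in front of
    // whatever has been emitted since mark.
    func (e *berEncoder) prependHeader(tag byte, mark int) {
        e.prependByte(byte(mark - e.pos)) // real code needs long-form lengths too
        e.prependByte(tag)
    }

    func (e *berEncoder) bytes() []byte { return e.buf[e.pos:] }

    func main() {
        e := newBerEncoder(64)
        person := e.pos // where the Person's content will end

        age := e.pos // age == 10, written last-field-first
        e.prependByte(0x0a)
        e.prependHeader(0x02, age)

        name := e.pos // name == "james"
        e.prependBytes([]byte("james"))
        e.prependHeader(0x01, name)

        // wrap both fields in the Person TLV; its length (10) is already
        // known because the content is already sitting in the buffer
        e.prependHeader(0x03, person)

        fmt.Println(hex.EncodeToString(e.bytes())) // 030a01056a616d657302010a
    }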


I would think implementing SNMP from scratch is the reward in itself, right? Are ulcers not a badge of honor for you like they are me?

And yes, I agree. I think we're to the point now where sysadmin-style "I'll wait for Debian to entmoot on this TLS vulnerability and eventually drop something" server maintenance is on its way out for those who operate cattle. Especially high-visibility cattle, as you imply. The industry is teasing the post-distro world into existence but doesn't yet know what it's dealing with on that point. I don't even consider CoreOS a distribution, for example; I think it's more of what Linux will look like in several years time for cattle herders, while Debian and friends will continue to go in pretty hard on pets.

Dynamic linking creates more problems than solutions in a cattle fleet as opposed to a pet fleet. People who philosophically argue for one or the other are expressing their preference for how to administer a server and do not realize that it is a preference, and not "correctness," per se. The package manager argument is the same way. Execute the code and get it done, or be "correct" and only apply updates through RPM. Cattle, pets. It's all cattle and pets, and the arguments that spawn between the cattle camp and the pet camp will never be resolved, this thread included. People need to realize this, that there is not one way to operate a server, and my way is not more correct than your way.

My way, for example, comes with the baggage of an expected organizational structure to enable its mission. That's not always easy, and I understand that. I can say, however, that the SRE/cattle way makes a hell of a lot of sense at scale.


He gave you a really good reason: knocking out the most severe, low-level errors in high-usage, critical services. You'll put work in for the risky ones. Why not the low-risk alternatives...


Static linking is also a thing in C.

Apparently gcc is the only C compiler lacking this capability, thanks to glibc.

Also, you can use dynamic linking in Go since version 1.5.


gcc has no problem with static linking. glibc can be statically linked (with gcc or clang) except for the pluggable parts, namely NSS. How would you reconcile static linking and plugins?


By making use of plugin selection at compile time, like we used to do when dynamic linking wasn't available in mainstream OSes.


The whole idea of NSS is to be pluggable, to let the local administrator choose what they need.


Back in the day when UNIX only had static linking, that type of pluggability was achieved via configuration files and UNIX IPC.

Dynamic linking just makes it easy to program for the same scenario.


As a result of that good intention, glibc must now be compiled into each and every program that wants to allow that customization, complicating life for anybody who writes system software in anything but C/C++. I would have preferred for NSS to never happen and instead for Linux to define a set of services available over Unix sockets for things like DNS resolution or the user/group databases.


i don't know if it counts, but there's musl-gcc, right?


curious, is there no glibc solution? It's just the libnss* parts that require dynamic linking?



Actually, Rust would support updates for your SSL, ASN.1 and other libraries, because native bindings are widely accepted and used with Rust.


You can always use gccgo, which supports dynamic linking of the Go stdlib.


> Go's deployment infrastructure is a superset of C's...

That is true, but updates to native libs that would propagate to C, Python, et al. will generally not propagate to Go because, like Java, Go eschews native bindings.


So you install the go equivalents from your distro. If your distro doesn't have them, you vote for them to add them (or go through whatever process your distro has).


The repo maintainers are going to be on the hook to rebuild every dependent package every time any package in the dependency chain changes. That sounds like a nightmare versus the current scenario, where only one package gets revved when a library has a bug.


If you have an automated build system (like OBS, the Open Build Service used by openSUSE), dependencies are rebuilt automatically and security fixes can be pushed to maintenance automatically.


Not every time a library changes, only every time one has a security bug.


So either the repo maintainers do it, or they stop being relevant (for this use case). Or someone else comes along to fill the gap.


Having the shared library that can be replaced by itself, instead of deploying N things, is nice. Not sure it's the end of the world, but it's nice.

On the flipside, though, this is probably a case where using something like Rust could be excellent: stronger language support for eliminating entire classes of bugs but also able to be compiled down to something that can be a shared library.

I'm not an ML expert, but reading about the things the Mirage project has done related to TLS, leveraging OCaml to provide compile-time guarantees, sounds very exciting. Rust feels like it could be that bridge. Start rewriting core libraries in it: cleaning up old code, removing dead code, avoiding bugs by virtue of the language/compiler, in one fell swoop.


> Let's say I have replaced bind, unbound, ntpd, postfix, openssh, dovecot, snmpd and asterisk with Go-written equivalents. Three weeks later, there is a bug found in the standard Go TLS library.

> My distro ships all the packages noted above, but not their Go-equivalents, so my work load now includes monitoring security-announce lists for eight different products, where before I monitored the security-announce list for my distro.

But say your distro includes the Go versions, but not the C ones... You're basically complaining that your distro doesn't include everything.


Regarding packaging, I mostly agree. However, I would highly recommend looking into the Nix package manager and our packages (and packaging infrastructure). Updating one of our go packages is sufficient for all other go packages to be built with the new version. So that would mostly solve the problem you're talking about.

https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/...

(Sorry for being light on details; typing from phone)


https://github.com/whyrusleeping/gx is pretty cool.

You could handle it the same way that python handles it: don't rely on a distro; have each person package their own stuff.


>Fortunately, as time goes on, fewer and fewer people need to run these services at all.

I'm not sure what you mean here. Are you suggesting it's a positive that the Internet is becoming centralized? Why is it "good" or "bad" that lots of people run SSH, their own email, or their own phone server?

Obviously one part is that it's hard to run them correctly, but I would argue the internet may be a better, more authority-resistant mechanism if some of these services were run from each person's home or VM rather than on Google's platform.


I would say that the centralisation isn't so much the problem; the biggest worry is that our core protocols are terribly insecure:

https://security.stackexchange.com/questions/56069/what-secu...


RE: the (or Rust) comment-

All of these services would be better suited for a language which has blessed async IO, concurrency, parallelism primitives. Go has these, Rust does not. pthreads are not the answer.

I much prefer Rust, but Go's stdlib and concurrency features far exceed that of Rust at the moment.


> All of these services would be better suited for a language which has blessed async IO, concurrency, parallelism primitives. Go has these, Rust does not. pthreads are not the answer.

Go does not have truly async I/O. It has a userspace (M:N) implementation of threaded, synchronous I/O. There is a distinction, and it's not an academic one.

Rust uses the kernel-level (1:1) implementation of threaded, synchronous I/O, with async I/O provided by mio if you want it.

There is no meaningful distinction between Go's language-level primitive channels and Rust's MPSC channels in the standard library. Not supporting generics is a good reason to put channels directly in the language, but that doesn't apply to Rust.

Additionally, I don't see how Go provides any parallelism primitives that Rust doesn't. In fact, Rust's parallelism (particularly data parallelism) libraries far exceed the capabilities of Go's, mostly because of SIMD and generics which allow you to build highly optimized data parallel abstractions. If I tried to parallelize the project I work on in Rust using Go's built-in primitives, it would be far slower than sequential.


Can you be a bit clearer on the distinction here? It's "synchronous" in the sense that the code is straight-line synchronous, but threads are scheduled in part based on I/O events (IIRC, this used to be the primary way threads were scheduled).

Yes, every connection has a (small) thread stack associated with it. But in a "truly asynchronous" network program, every connection still has memory associated with it; it's just that the memory doesn't take the form of a procedure stack.


> Yes, every connection has a (small) thread stack associated with it. But in a "truly asynchronous" network program, every connection still has memory associated with it; it's just that the memory doesn't take the form of a procedure stack.

That's also true with 1:1 threading. It's just that the context switching is handled by the kernel.

Semantically, there's no difference between what Go does and what NPTL does. The difference is in implementation: Go does a lot of the work itself in userspace, while NPTL does the work in the kernel. (I say NPTL not to be pedantic but because there were pthreads implementations in Linux that used Golang-like schedulers. They were abandoned in favor of NPTL because the extra complexity was judged to not be worth it for small if any performance gains.)

You're right of course that you need per-connection state in any model. But with a state machine you can be much more compact than a call stack. Modern compilers (I doubt this includes Go 6g/8g, but haven't checked) will do stack coloring to reduce stack usage, but the overhead is still significant because compilers essentially always choose runtime performance of straight-line code over stack compactness wherever there's a tradeoff. State machine compilers, like C#'s async/await compiler, make the opposite choice, and as a result they can use less memory. Moreover, with state machines you can go the extra mile and really pack your state into a tiny fixed-size allocation you can allocate with a segregated fit arena. That's pretty much unbeatable for performance.


In general, almost every stdlib will be "better" than Rust's, as we're taking an "anti-batteries included" approach.

And while Go does have great built-in stuff for a certain kind of concurrency, Rust's approach is more flexible and safer. It's a tradeoff, not a "far exceed" in my mind.


Isn't the Rust approach the Ruby, Python, or C++ approach: give pthreads and rely on the community to produce a fragmented, incompatible ecosystem?


The Rust approach is, Rust's safety guarantees work for concurrency, but the details of that concurrency are left up to libraries, not the language. Since the safety is in the language, things are always safe, but you get the flexibility to do what you need.

You have to remember, Rust is a systems language. Which means that you need access to what the system gives you, and that means at least OS threads and both synchronous and asynchronous IO. We can't just decree "the world must use only aio and green threads", or we would be compromising Rust's fundamental design goals.


That's excellent, and it must be this way to allow programming microcontrollers or Linux kernel modules in Rust, but you guys also need a blessed, cross-platform N:M threading / async I/O library for sockets and files, so people can write snmp/mail/web/etc. servers/proxies/etc. in Rust.


Few C implementations of these applications use M:N threading. M:N threading was tried early on in Linux's history and was abandoned for a reason. It is not a requirement for implementing those applications; in fact, it'll always be suboptimal from a performance point of view, especially in low-level languages like Rust. (Note that I'm not saying it's not fast enough for most applications, or that it was the wrong choice for Go, just that it's not optimal.)

I think the right solution is something like async/await to make truly asynchronous programming palatable. But in the meantime, 1:1 threading is really not that bad on Linux, because the kernel is very optimized.


(Off-topic) What do you think about marshalling for continuations from await? I mean the issue where, on one hand, it's confusing/surprising if you don't default to marshalling the continuation back to the thread (etc.) that created it (e.g. the UI thread), but if you do, then mixing blocking and async code can result in deadlocks (contention on the thread/etc.).

.NET defaults to this; you can opt out (which you usually want to do, at least for correctness) with await Foo().ConfigureAwait(continueOnCapturedContext: false)

EDIT: my etc.'s are weaseling around the word thread... more details about .NET here: http://blogs.msdn.com/b/pfxteam/archive/2012/04/12/async-awa...


So, mio is certainly becoming the core library everyone uses for async I/O, but N:M threading is more complex, and doesn't have any libraries that are mature enough yet to start consolidating. I think AIO is more important than the threading model for this kind of thing, personally, but it's also not exactly my area of expertise.


Can mio be pulled into the stdlib, then?

I fear a world of gevent/Twisted, EventMachine/Celluloid/base Ruby, Boost/a litany of other options.


I wouldn't mind pulling it into the standard library if we're sure it's mature enough to be ready. In the meantime, though, it's the de facto standard async I/O framework, and I don't see that changing anytime soon.


Agree 100%.


That hasn't been the case. In practice, mio is the standard for everything asynchronous in Rust.

Adopting Go's approach would force either asynchronous I/O or M:N threading on everyone, which is unacceptable for Rust's goals.


You want ponylang.org

Concurrency & parallelism primitives + safety.


Because some C programs are pretty excellent, and rewrites cause bugs? E.g. replacing OpenNTPd, Postfix, OpenSSH - or djbdns or qmail - by a rewrite probably doesn't reduce the number of problems.

(In particular, note that SSH daemons can fail in many ways other than by remote code execution.)

In the long run, C's role is indeed shrinking - but let's not be too hasty.


For a full-featured SSH running on a machine I was likely to log into and work interactively on, I'd prefer OpenSSH.

But most of what people do with SSH in a devops context isn't interactive; it's a simple control channel for well-defined sequences of file transfers and commands.

I'd prefer a minimal, Go/Rust-based SSH server for my EC2 servers, for instance.

I don't know why I'd prefer OpenNTPD to a Go/Rust NTP. What's the advantage to it? OpenNTPD is carefully built to avoid a class of bugs that its implementation language is very susceptible to. Go/Rust simply don't have those bugs at all. The latter seems like the safer option.

Same goes for DNS.


You can always do it close to original implementation. A straight-up clone. Should preserve at least most of the logical-level countermeasures.


Go is missing a few things for this. There is no good, predictable, event-driven polling library for networking with proper error handling and no GC pressure (no heap allocations), etc. And such a library has to implement its own syscall wrappers, because Go's syscall wrappers on non-blocking FDs call into the scheduler and even produce garbage on some errors. The TLS library needs to be predictable too: produce no garbage and play well with polling.

Doing it the idiomatic way and dealing with the goroutine-per-request model, concurrent memory access and unpredictable GC pauses is simply not worth it. It's going to be safer, but not of decent quality. Better to live with what we have.

For Rust, I imagine, it's going to take even more work.


There's no good predictable event-driven polling library for networking because the whole runtime is a good predictable event-driven polling library for networking. You're not supposed to "event" Go I/O.

Virtually every Go program that anyone has deployed at scale has scaled I/O with goroutines (though not necessarily with "concurrent memory access").

With the exception of NTP, I can't see a single example of a service in the list I provided that is sensitive to "GC pauses" on the scale you'd end up with in an idiomatic Go program.
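
For concreteness, this is roughly what that idiom looks like: no event loop in user code, the runtime's netpoller does the multiplexing behind blocking-looking calls. (A toy line-echo server on an arbitrary port, stdlib only, nothing more than a sketch.)

    package main

    import (
        "bufio"
        "log"
        "net"
        "time"
    )

    func main() {
        ln, err := net.Listen("tcp", ":7000")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Print(err)
                continue
            }
            go handle(conn) // one cheap goroutine per connection
        }
    }

    func handle(conn net.Conn) {
        defer conn.Close()
        r := bufio.NewReader(conn)
        for {
            // looks synchronous; the scheduler parks this goroutine until
            // the socket is readable (or the deadline fires)
            conn.SetReadDeadline(time.Now().Add(30 * time.Second))
            line, err := r.ReadString('\n')
            if err != nil {
                return
            }
            if _, err := conn.Write([]byte(line)); err != nil {
                return
            }
        }
    }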


You are not doing event-driven programming just to scale I/O; you are doing it to avoid mutexes and concurrent memory access, to have easy cancellation, easy management of connections and other resources, etc.


Idiomatic Go code also doesn't use mutexes and concurrent memory access, unless you're trying to argue that channels are implicitly mutexes. Also: you now seem to be subtly backing off your original point. Can we stipulate that the performance of most network programs would not be meaningfully improved by switching from M:N I/O scheduled threads to async I/O?


My original point was about the _predictability_ of performance and the accidental complexity that M:N threads with synchronous APIs introduce. But either way, they use way more memory than necessary and also have to implicitly and explicitly synchronize every little thing, so performance under any meaningful load is going to be very meaningfully worse than that of event loops.

Still, I want to reiterate that event loops in modern languages are more about managing complexity than performance.


Some think it's very appropriate for high performance networking: https://github.com/google/stenographer

Granted, that uses pfring pretty substantially, but still...


You will be doing battle with the GC in many of these applications. Yes, even on 1.5+. 10ms spent doing no work is an eternity.


The only service in this list that I can see the GC mattering for is NTP, but then, you can design a tight NTP server that virtually eliminates the GC's work in Go, taking advantage of the rest of Go's high-level features that C lacks.


From 1.5 to 1.6, @brianhatfield saw pauses in an 8GB heap with 150M allocs/min go from 40ms to ~3ms [0].

[0]: https://twitter.com/brianhatfield/status/692778741567721473


Cloudflare mentions they are heavy users of a golang DNS lib https://blog.cloudflare.com/dns-parser-meet-go-fuzzer/

ntppool.org uses golang for DNS https://news.ntppool.org/2012/10/new-dns-server/
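
For anyone curious what using a Go DNS library looks like: I believe the Cloudflare post is about github.com/miekg/dns, so here's a tiny sketch assuming that library (the query name and the 8.8.8.8 resolver address are just placeholders). It fires an A query at a resolver and prints the answers.

    package main

    import (
        "fmt"
        "log"

        "github.com/miekg/dns"
    )

    func main() {
        m := new(dns.Msg)
        m.SetQuestion(dns.Fqdn("ntppool.org"), dns.TypeA) // query names must be fully qualified

        c := new(dns.Client)
        resp, rtt, err := c.Exchange(m, "8.8.8.8:53") // plain UDP by default
        if err != nil {
            log.Fatal(err)
        }

        fmt.Printf("answered in %v\n", rtt)
        for _, rr := range resp.Answer {
            if a, ok := rr.(*dns.A); ok {
                fmt.Println(a.Hdr.Name, a.A)
            }
        }
    }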


Cloudflare is a heavy user of Go period.


> I'm not sure why you wouldn't seriously consider replacing any of the following with Go

just curious about this: are there folks trying out dpdk with go? implementing these control-plane applications on vanilla sockets (or close-to-zero wrappers on those) doesn't seem fruitful anymore.

fwiw, i have been doing dpdk stuff, but have been mostly using C...


Seconded on the dpdk point. I've been working with kernel bypass networking in C but it seems to me that the asynchronous queue based APIs would be perfect for Golang or even Javascript. The only project I've seen so far to make kernel bypass networking nicer to program is http://www.seastar-project.org/ (which has DPDK support)


Animats suggested the same kind of services for Rust projects as well, given the huge benefit of fewer memory attacks in critical services. I'm in total agreement, while throwing in that they should preferably be compact so vendors can put them in commercial routers and appliances.


replacing openssh with something you've written yourself seems like reinventing the wheel, just because you can.

the sheer number of person-hours at developer salary rates in north america would probably amount to at least a few million dollars.


You'd be surprised. Go actually has a supported SSH library which is a fairly decent protocol-level implementation. To actually get a "shell server" you would need to implement handling of SSH sessions (as described in the RFCs) but all the low level stuff is taken care of.

EDIT: As an example, the gogs project has implemented a small ssh server so people running it don't need to hook into OpenSSH, which relies on specific versions of OpenSSH to be performant. See https://github.com/gogits/gogs/blob/master/modules/ssh/ssh.g...
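
To give a sense of the shape: a minimal session-only server on top of golang.org/x/crypto/ssh might look roughly like the sketch below. Authentication is stubbed out with NoClientAuth, the host key path and port are placeholders, and the "exec" handling is reduced to acknowledging the request; a real server would parse the payload and run the command per RFC 4254.

    package main

    import (
        "io/ioutil"
        "log"
        "net"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := ioutil.ReadFile("host_key") // any PEM private key
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }

        config := &ssh.ServerConfig{NoClientAuth: true} // don't do this outside a sketch
        config.AddHostKey(signer)

        ln, err := net.Listen("tcp", ":2022")
        if err != nil {
            log.Fatal(err)
        }
        for {
            tcpConn, err := ln.Accept()
            if err != nil {
                continue
            }
            go func(c net.Conn) {
                // the library does the version exchange, kex and auth for us
                _, chans, reqs, err := ssh.NewServerConn(c, config)
                if err != nil {
                    return
                }
                go ssh.DiscardRequests(reqs) // global requests we don't handle

                for newChan := range chans {
                    if newChan.ChannelType() != "session" {
                        newChan.Reject(ssh.UnknownChannelType, "sessions only")
                        continue
                    }
                    ch, chReqs, err := newChan.Accept()
                    if err != nil {
                        continue
                    }
                    go func() {
                        defer ch.Close()
                        for req := range chReqs {
                            // accept "exec", refuse everything else
                            if req.WantReply {
                                req.Reply(req.Type == "exec", nil)
                            }
                        }
                    }()
                }
            }(tcpConn)
        }
    }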


Except that OpenSSH is a very large, complicated, and featureful piece of C code that most servers need only a tiny portion of.


I was thinking Go was implemented in C?


It used to be. As of 1.5, go's toolchain (compiler, etc.) is implemented in go.


That used to be the case. But with v1.4, it's mostly (all?) in golang now.

http://dave.cheney.net/2014/09/01/gos-runtime-c-to-go-rewrit...


Not anymore; it's in Go these days.


Translated to Go from C, to be precise, not completely reimplemented.


Precise in terms of what? I have no idea why the parent brought up its being implemented in C. It's usually a person who thinks only C is low-level, fast, portable, or whatever enough to handle the task at hand, due to some shortcoming in the other language being discussed. A compiler done in C implies Go wasn't good enough.

A compiler entirely in Go, and a very fast one, cancels that whole line of thinking. As far as precision goes, the original source could've been ported from COBOL and that wouldn't matter. The point would remain: it's now in Go and gets the job done.


I look forward to the first programmer-friendly SMTP/IMAP implementation. Haraka is the closest friendly SMTP server I have come across.


I can understand why there hasn't been much movement in the SMTP world. SMTP is pretty hard to get right, as is maybe hinted at by the many RFCs. You really don't want to be making any mistakes because it's a somewhat unforgiving protocol... unless you send an error code.


I think the reason SMTP is hard to get right is not because of the many RFCs. The reason is that it's not documented.

Operational experience at scale is needed to know how to write an effective SMTP implementation, and that experience is half-documented by many people in many different information silos.

But... I'd also say it's an extremely forgiving protocol. In fact, it's the fact that it's so forgiving which makes operational experience required to implement it. A "correct" SMTP implementation has a lot of latitude in the choices it makes - and it's that latitude which makes life difficult.


Agreed. But like most things open source, it just gets better over time if someone makes a good start :-)


So at the moment I see no reason why a Go-written BGP would be better than standard Quagga/Zebra. There aren't really concurrency or resource issues with large-scale Quagga in my experience.


Quagga/Zebra is a giant C project. The industry is moving away, as much as it can, from serving critical infrastructure on giant C programs.


I'm not aware of any trend in the area of routing/switching for Linux away from C projects. nftables and Open vSwitch are both new-ish and written in C.


"As much as it can". nftables and openvwitch both forward packets, and thus need to be written in C (or, perhaps, in the long term, Rust).

Really, you're playing on a semantic ambiguity in the word "router". A BGP implementation doesn't forward packets; it maintains a database of forwarding paths that the packet forwarding layer consults. In a large Cisco router, the SOC that runs BGP and maintains the RIB isn't the same electronic component that forwards packets.


I'm not playing on anything; just not aware of a trend away from C for this stuff.


Not really. Neither the BGP layer nor the packet forwarding layer in that big Cisco box of yours is moving away from C code.

Standard network software such as Postfix and OpenSSH took ten years to replace their predecessors, and their eventual replacement will be just as gradual. It's not happening right now, so I think it's a bit of a stretch to call it a trend.


I didn't say it was. But then: I don't trust that Cisco C code at all. Do you?


In the decade I worked at a (smaller, regional) ISP, there were a number of times that I know of that Cisco provided a custom firmware to us to get around a bug we found that prevented regular configurations from working as expected. Considering the scale of Cisco, and that we were small enough at the time to need fewer than five people in network operations, I find that terrifying. They weren't security issues, but it does point towards their code base being too complex for them to adequately manage.


Yes. It currently runs over 70% of the global internet, and considering all the kinds of error conditions that show up on the global internet, the code is extremely stable.


Sendmail used to run on something like 90% of the global Internet. And mail in the 1990s pretty much did work, pretty reliably. Would you have banked your site's security on the quality of Sendmail 8.6.12's code?


Slam dunk on that comment! Such systems, due to lots of debugging, can work reliably in a narrow set of use cases where specific features have massive use. Then there are the uncommon usage scenarios and features that get much less debugging. Then there are all the patches they keep distributing to fix... "things."

And then there's the fact that safe, reliable code is only the first step toward secure code when an intelligent, malicious person is targeting it. Totally different ballpark, and one that neither Sendmail nor Cisco handled so well. Small shops like Sentinel and Secure64 did way better with a tiny fraction of the money. So it has to be intentional, for the extra profit at customers' expense.


Extremely stable, but not extremely secure. This could be said for many companies, and Cisco shouldn't really be singled out here, but I think this is part of tptacek's overall point. The world should move away from huge C code-bases for critical infrastructure and adopt "safer" languages (Personally I love Go, but Rust may be a better option for high-speed packet routing).


I thought the replacement of Telnet with OpenSSH was fairly fast? A few years at most; it seemed to happen almost overnight compared to the move from SSL to TLS, for example.


SSH had been around for some five years or so before OpenSSH arrived. It slowly gained popularity over telnet/SSL and kerberized telnet because of the trivial small-scale deployment, but also because it was a drop-in replacement for rsh. Had it been fundamentally different it wouldn't have gone so easily, and I suspect the familiar language helped there.


I would imagine/hope that it's more about integration with other code than using it solely as a BGP daemon. The repo seems to be related to http://osrg.github.io/ryu/ which is a "software-defined networking framework"

Off-hand, you could use GoBGP to do cheap loadbalancing-ish things without external dependencies.


This is not due to being written in Go, but GoBGP looks like it has a nicer (non-Cisco-clone) configuration language.


Go is easier to profile and test.


Nicer to integrate with other stuff, maybe. E.g. for a simple "just announce these routes" setup or a looking glass, where right now I might use the (Python-based) ExaBGP.


Indeed, this is great for things where you want to do programmatic manipulation of routing. That's something ExaBGP is good at (but it's very slow), and which Quagga/BIRD are really poor at (but they're quite fast).


I remember years ago when every new PHP application would have "PHP" before its name. PHPNuke, PHPMyAdmin, etc., etc.

Seeing the same trend with Go now. Why add the language name to the software name? Real question...


Another real question: What is the better approach? Generic names (eg. "bgpd")? That seems decent if you have an over-arching project to group the generic stuff under (eg. "Apache httpd"). Making up codenames for everything (eg. "Zebra")? It's a pain to think of those, and they're rarely descriptive or meaningful.

I don't really like the language-name-prefix thing either. It makes the language seem like the important thing about the project. Sometimes it is the most important thing, but even then, that is mostly only true at the beginning of a project when attracting contributors is most critical. But I'm not sure the other approaches are much better.


I really dislike projects that assume you know the definition of an acronym and never (1) expand it or (2) explain it. BGP is super important to the GoBGP project. It deserves at least a mention somewhere in the first 4 sentences introducing the project. Gahh!


Would elixir or erlang also be a good potential language for bgp/quagga/zebra?


I feel "BGP in Erlang" would be exactly the kind of thing I could implement well - even if it did feel icky having to implement "MD5 Authentication" in 2016.

The problem with those sorts of projects however, is inertia. The average hobbyist rarely ever uses BGP. Large networks and ISPs aren't going to implement my personal project as a critical component to keeping their entire infrastructure online without a very good reason.

This project looks promising, I'm hoping it doesn't suffer this problem.


I assume BGP == Border Gateway Protocol https://en.wikipedia.org/wiki/Border_Gateway_Protocol

Suggestion: include a quick abstract of what BGP is, with a link for more information.


It's the routing protocol that computes paths between ISPs and their largest customers, and that associates ranges of IP addresses with those networks.

Even if you're not a huge ISP, it's handy to have a BGP implementation available because you can use it to do network analytics and traffic management.


This happens fairly often (projects assuming I know the tech they are built on), and I'm no dummy.

I also clicked through several pages on the repo / site and there was no clue as to what BGP was, except some mention of RPC.


Is that necessary? I'm not sure how many people are unaware of what BGP is.


I'm not aware what BGP is, clicked hoping to find out, was sorely disappointed.


<ctrl-t> bgp <enter>

A project page for a $language implementation of $protocol shouldn't be expected to give a basic description of $protocol. If you care about a new implementation, you already know what the protocol does, at least generally. If you're lucky, the project page links to a protocol description (possibly at wikipedia), or, as above, you can simply google it yourself and then decide whether a $language implementation of it is something you care about.


I don't think it's a lot to ask for a readme to contain the full name of the acronym it's implementing and maybe a link to a wikipedia page.

I mean, it already has a link to the golang website, but no mention of what BGP actually is.


A quick Google solves that problem. Anyone that wouldn't do that much is unlikely to be valuable to the project. It's a nice filter at the least.


Bah, accidentally downvoted you, nick, sorry. Since I downvoted though, I might as well play the game as if I had a reason (because it annoys me when people dv without a reason): I think the request for some basic information, without forcing the reader to google/duckduckgo/wikipedia the most basic info (such as the full name and a basic description), is not too much to ask from a journalistic perspective, and using it as a barrier is not a good thing for encouraging education.

After all, there is a reason it's called the wikipedia rabbit hole; do you know how often I start with a quick search and suddenly it's an hour later and I've learned all about $something-other-than-originally-intended?


If I do it, I just load up that person's comments and upvote any decent comment they have. Cancels it out.

On the other issue, here's what typing BGP into Google gave me at the top: "Border Gateway Protocol (BGP) is a standardized exterior gateway protocol designed to exchange routing and reachability information among autonomous systems (AS) on the Internet. The protocol is often classified as a path vector protocol but is sometimes also classed as a distance-vector routing protocol."

Some things are hard to search for. Others, like the BGP protocol, are so common you'll get them easily. For those, you can just default to Google. Further, what use is a programmer going to be in robustly implementing the protocol if they can't figure that out? Hence the filter part. So, my position is more solid now that I Googled it.


I guess it couldn't hurt then!


I was generally aware of what Border Gateway Protocol was, but it did not immediately spring to mind when I read BGP, and the full name is not mentioned in the repo readme.


This is just classic karma mongering: running Google for other people and posting the result. So no, probably not necessary.

But it can be helpful for topics with ambiguous acronyms or tech names: (Apple) Swift vs. (OpenStack) Swift, for example.


Addressing multiple comments here, but to your point specifically: I don't really care about karma. I would've preferred exactly what I recommended to the author, instead of spending the time to google it and posting a comment.

In general: We can all be better teachers. Acknowledging that not everyone who writes code in Go shares the same background, training or interests is a good step to getting more people to use Go.


I don't think this is necessary. If this is relevant to you, you will know exactly what it is, in the same way that GoDNS would be obvious to people who know what DNS is.


You can't get very far in life if you assume every acronym you encounter is irrelevant to you if you don't already know what it stands for.


[flagged]


Please stop posting unsubstantive comments to Hacker News.

We detached this comment from https://news.ycombinator.com/item?id=11521496 and marked it off-topic.



