This looks like _a lot_ of work was put into the article.
Nevertheless, I think you shouldn't do this. This takes a lot of time and is basically copy/pasting things around. For something as crucial as your own mail server I'd go with a solution that is automated: If your server goes away you fire up a different one in a couple of minutes, restore backups and change DNS records, done.
Reasons for your server going away might be, in no particular order:
- VPS provider goes away
- hardware failure
- user error (think "Oh, I probably shouldn't have issued that command as root on my mailserver")
- maintenance: This guide is (initially) from 2009. How are you making sure that your system is up to date after you've followed the instructions and installed it? Either you'll leave the system to bitrot or you need to actively maintain it and will probably run into issues upgrading the machine. Or... you migrate to a brand new machine every 6-12 months, which gives you a decent excuse to verify that your backup/restore process is working as well.
Backups. Your colo, cloud host, mail provider are equally likely to disappear.
> hardware failure
Unlikely to be a big issue unless you own your own kit and can't afford redundancy or replacement. Otherwise, it's a case of firing up another VM and restoring it as per any other computer.
> User error
In my experience, less likely to happen than on a hosted service. I've seen massive data loss at a couple of big names.
> Maintenance
Seriously, you just leave it. "apt-get upgrade". Perhaps some minor reconfiguration is required between distro versions, but I haven't had to do this in 5 years on Debian.
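For what it's worth, the routine amounts to something like this (Debian/Ubuntu assumed; the dist-upgrade only when moving between releases):
$ sudo apt-get update && sudo apt-get upgrade
$ sudo apt-get dist-upgrade # only between distro versions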
Also don't forget, bar gross misconfiguration, if your mail server goes away even for a couple of days, email usually makes its way to you eventually anyway. It's quite resilient.
As for backups, I always test restore at least once a month.
Mail hasn't changed much in more than a decade. Systems like Ubuntu update cleanly from release to release. Maildirs are very easily migratable.
You can literally set up a new mail host, and then rsync the old Maildir contents -- aka, all your users' e-mail -- onto your new host, whenever time permits, without disrupting mail delivery or reading. Users' historic e-mail will simply gradually fill in.
If your VPS disappears, or hardware failure occurs, or user error strikes -- simply restore your Maildirs on a new system from your backups (tarsnap works great!).
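As a sketch of that rsync step (hosts and paths are placeholders; adjust to wherever your Maildirs actually live):
$ rsync -avz olduser@oldhost:/var/vmail/ /var/vmail/
# re-run until the delta is small, then point DNS/MX at the new host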
This documentation is long and thorough, but once you know what you're doing, setting up a basic mail server, even manually, should only take you a few hours at the most.
I actually wrote a script for Ubuntu server (works on 13.04) that encapsulates most of this tutorial. It doesn't include ClamAV or SpamAssassin, but does everything else. You end up with a server that uses three MySQL tables.
But then I discovered that Virtualmin does all of what this tutorial does, with an understandable UI and all of the same capabilities from Dovecot/Postfix/saslauth. Doh!
Never restore anything but user data. Best implementation is a herd of puppet modules and classes.
So your class mailserver { include shorewall ; include mysql-server } and a whole bunch of other modules. That way you can leverage your efforts to have a class like database-server { include mysql-server } or whatever.
Then you have
node 'somevps.something.com' inherits mailserver {} or whatever other future/load balanced machines.
You can go from bare iron to in production in a couple minutes if done right. This makes troubleshooting easy, just bring up a test box during a coffee break and see what happens under identical conditions as production. And the other way around, never change anything in production until it worked on the test box, which only takes ten minutes...
My experience with postgrey has been utter failure: third parties too dumb to have a mail server that responds correctly to temp fails by retrying, and that instead insta-perma-bounce in response to a temp fail, are invariably also too stupid to understand that the problem is on their side. All they know is that the mail server they installed in 1996, and never touched since, always gets bounces when they email you, therefore the problem must be on your side. And the global population of idiots will exceed any normal human level of patience. So, yeah, greylisting... bad, bad idea...
I've had the exact opposite experience. I've been using greylisting for almost a decade now. Greylisting is so common that people with broken mail servers are a non-issue. It is literally just Exchange 2003 without any updates applied. That is a tiny number of servers, and they will generate bounce messages when trying to deliver to tons of domains, not just yours. Greylisting cuts down on a huge amount of spam with virtually no additional load on your server, unlike content filtering. So, yeah, greylisting... good, good idea...
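On Debian-family systems, hooking postgrey into postfix is roughly this (10023 is Debian's default postgrey port; adjust if yours differs):
$ sudo apt-get install postgrey
# then in /etc/postfix/main.cf, append the policy check to your existing list:
# smtpd_recipient_restrictions = ..., check_policy_service inet:127.0.0.1:10023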
Considering that this can be installed to a VM, an image of which is backed up, what is not automated about this solution?
Maintenance: can't the distro package manager take care of that for me?
Sorry for the ignorance in my stance, but I came across this article because I'm looking to move away from Google infrastructure to hosting my own stuff, and this seemed like a great piece :)
VM images are not very portable and have essentially no built-in documentation. They're also large if you're not working from a snapshot system.
Basically, if the exactly prescribed restore procedure fails, a sysadmin is stuck with a virtual black box of bytes and will have to divine what sort of configuration mismatch there might be between the image and the new host. Different block devices? Different network devices? Wrong CPU architecture? Wrong virtual-disk format? Some other magic of some sort? It can be a while before you even get the machine up and running, to say nothing of running services.
In contrast, a scripted install starts with a running, networked machine, and if any point in the install procedure fails, you'll know what point it failed at and will have a detailed error message explaining why. You can read the script to see what it's trying to do and may be able to either correct the script for the new environment or correct the environment for the script. You can make steady progress towards restoration, while if the VM restoration fails, you just keep slamming your head into the brick wall until it gives way.
It's true in many situations that restoring a virtual machine snapshot is much faster and easier than a rebuild and data-restore. My experience is that it's great when it works, but a nightmare when it doesn't.
I've been running a mix of http://james.apache.org/ and postfix for years. I've migrated 3 times with no issues over the years. (zip up database dump and distribution, done.)
Something as small as "your own mail" is one or two packages on one server; this guy is implementing most of an entire IT infrastructure including database server and virus scanning capable of operating a hosting company.
Sort of like the existence of Oracle's products and IBM's DB2 does not preclude end users being able to use SQLite.
I was able to run my personal mailserver for nearly two years without having to learn what amavisd-new was. After that, I started getting enough spam that it became an annoyance and I added it.
Webmail is not that important to me now that I have a smartphone (I really recommend K-9 Mail for Android, which is FOSS https://play.google.com/store/apps/details?id=com.fsck.k9&hl...). Failing that, I can always log into my machine using ssh and use mutt to read and send mail.
I think the mail-server equivalent of SQLite is Postfix (SMTP), Dovecot (IMAP) and some DNS records.
I haven't looked at OpenSMTPD. This guide does include a lot of configuration, but it's worth mentioning that not only are the defaults pretty reasonable, but that Ubuntu (and Debian) can configure the most important options using dpkg-reconfigure.
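For example, to revisit the packaged questions on Debian/Ubuntu:
$ sudo dpkg-reconfigure postfix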
SQLite doesn't have all the conveniences of Oracle products or DB2 so that is not a good analogy.
I'd define a minimal working system as the distro's MTA set up to BE a smarthost instead of set up to USE a smarthost, which is usually not much, plus an IMAP server so "normal" non-mutt-using end users can access it.
Everything else is bolt on fun once that basic setup works. Client side spam filtering gets you out of being the middleman, which is awesome if you'd be stuck as the middleman.
There's nothing wrong with the article as a display of nearly the largest, most complicated system you can design, which is perfectly cool and fun. Personally I'd add commentary to the article about load balancing and proxying, and note that DNS needs more than "yup gonna have to do DNS", but it is very near a maxed-out design.
Many HN comments are along the lines of "this maxed-out design is large and complicated, which implies ..." However, that doesn't imply anything about non-maxed-out designs. The level of what can be accomplished in (insert programming language) in a maximally complicated situation doesn't necessarily imply much about the difficulty of "hello world".
Exim and a simple dovecot imap are about ten lines of config editing total. It won't do much, but sometimes that's exactly what is needed.
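A sketch of that minimal setup on Debian (package names assumed; your distro may differ):
$ sudo apt-get install exim4 dovecot-imapd
$ sudo dpkg-reconfigure exim4-config # pick "internet site", set your domain
# dovecot mostly just needs to know where mail lives, e.g. in /etc/dovecot/conf.d/10-mail.conf:
# mail_location = maildir:~/Maildir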
I run a mail server with no spam filtering and it's not too bad, maybe 2-3 spam messages a day, and Thunderbird catches those. I just have a somewhat strict postfix configuration with greylisting enabled.
My email is only ~13 years old so I'm not sure how bad it is for people with older domains/emails.
Spam can be hard to predict. I know people who have had a setup like that (working just fine as for you), and then all of a sudden they've been overwhelmed with hundreds to thousands of spam mails per day.
A mail setup does not have to be this complex. Mine, for example, consists of seven lines of configuration for the daemon, one line to have the init system start the server at boot, plus a few aliases & virtual users. This is using the mail server that comes pre-installed with my OS. It took a few minutes to set up and test. And it's been running for a few years now with no changes.
It really isn't. It is a testament to how many people don't want to learn anything, and just want to follow a list of directions. Setting up your own mail server is trivial, it is a 5 minute job.
I moved off Abrahamsen's recipe 18 months ago or so for no good reason other than I like to make work for myself: it had been working just fine for me. You set it up on a cloud server, take an image, and that's that - you have a working base from which you can restore your mail server if anything bad should happen.
Last year I put up the Dovecot / Postfix / Postfix Admin / Horde recipe I used in place of Abrahamsen's, and judging by the feedback it's helped a great many people:
(You can replace Horde with Roundcube or any other webmail package you care to put in place if you don't like it).
The market here is for learning, not necessarily for setting up. If you don't know how to set up a mail server, it is well worth walking through one of these as it will teach the lay of the land. Then you can later move to a packaged solution, as you'll be knowledgeable enough to troubleshoot it when it breaks.
I was intending to put together a Chef cookbook for my recipe, but looking at what's already out there it seems like it would duplicate some good ones that already exist. e.g.:
I think that running your own mail server is certainly worth the initial effort; once you have the thing set up it ticks along with very little upkeep needed.
Can I ask the obvious question: why? Email (IMAP/SMTP) is so standard, and there are lots of companies out there that do it better and cheaper than you yourself can. GMail (Google Apps for Business) and Fastmail.fm are the two names I always recommend.
Another factor to consider is how you will keep your SMTP IP addresses from being blacklisted. Companies like Mailchimp and Sendgrid spend a lot of time and effort on this. With your own very low volume, it takes only a couple of people to mark your email as spam and suddenly all mails sent from your domain end up in spam folders, no matter what DKIM or SPF settings you have configured.
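For reference, the DNS side of SPF/DKIM is just a couple of TXT records along these lines (example.com and the "mail" selector are placeholders):
example.com. IN TXT "v=spf1 mx -all"
mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<your public key>"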
The only reason I would see is when you want to keep your email away from the NSA and therefore out of the USA (Fastmail is Australian-based btw http://thenextweb.com/insider/2013/10/07/are-overseas-based-...). Keeping your server out of the USA does not mean that your email will never travel through the USA, where the NSA can still access it, as SMTP is very cleartext between MTAs.
As much as the NSA concerns me, and as a political issue it's far more important, I am interested in reducing dependency on Google because as far as I am concerned Google has broken its trust with me all by itself. It has nothing to do with the NSA debacle.
I don't have time for a rant but the short is: I'm sick of their identity management, their willingness to put blatantly obvious "search bubble" results based on recent (or even non-recent) activity-- stuff that is prominently visible to anyone who might be sharing a screen with me. Ditto for chrome's "most visited pages" thumbnails when you open a new tab, which I disabled but had to install a plugin to do. There's some other minor issues. But the final straw came a few weeks ago when my search history showed up unexpectedly on my girlfriend's phone-- I suppose because she logged into chrome on my laptop to check her Gmail. Until she logged out, auto-complete on her phone was using MY search history from chrome on my desktop. What the FUCK? Obviously I trust her enough to let her log into my laptop so it didn't cause me problems here, but I had no way to know that would happen. So what's it going to be next time? What's google's next clever/reckless silicon valley echo chamber idea that's going to cause me problems?
I'd rather not wait to find out. So I've begun the process of reducing google dependence. Email is a big one, Gmail is my single biggest google dependency.
At the risk of sounding like another one of the "Google is evil" people, I'd like to voice my concern that Google is better at managing mail than I am.
I just moved from one country to another and I can't change the billing settings on my Google Apps account because... Google Apps doesn't support that. The official stance is that I should just download a backup of my accounts, delete them, and recreate them with different billing options. However, other than preserving my mail and Google Drive, this won't preserve any of my other content, which must be backed up using third-party tools and then manually recovered/recreated.
I'm not sure how this can be interpreted as a sign that Google can do this better than I can.
Does anyone have experience running a mail server on a home connection with a dynamic IP?
I'm imagining a system that will have a homebox (long term storage, privacy) and a cloudbox
(provides availability) and the mail flow will be like this:
sender ---(1)---> cloudbox ---(2)---> homebox
Assuming cloud stays up (1) will happen, and if homebox is reachable, then (2) will also happen. However, if (2) can't be done, then cloudbox will temporarily hold the email until homebox comes online (POP mail style).
Has anyone ever set up something like this before? Any pointers will be appreciated.
This is pretty well trodden ground; between people here and decades of history on Google, you'll have no problem. Don't forget to implement the arrows flipped around: I sent all my outgoing mail through my cloud server after authentication, so everything went in and out the same way. Eventually I gave up, because in my situation forwarding everything to Gmail is not an issue for me; email is just a spam and Amazon receipt delivery service, so it's not worth much effort.
Anyway, one interesting design aspect is that you're going to fetchmail from home to a cloud pop server and you'll want to secure that, so the natural inclination is to take the smallest possible steps while testing all the way: first you try an unusual port number to keep scanners away, maybe you only allow the big netblocks from your provider (not specific /32 addrs, but if your provider gives you an address from a /19, well then permit the whole /19), then all manner of SSL/TLS protocols instead of plain-text passwords and things like that.
But I'd advise saving a lot of time and going big for security before even playing with the mail system. So put OpenVPN on the cloud provider and let your home and your phones/tablets/laptops/friends/family VPN into your big happy pseudo-internal LAN... don't let "the internet" connect to your cloud pop (imap?) server at all; only allow connections from internal (via VPN) addresses. You're probably going to end up doing it sooner or later, and a VPN'd config obsoletes quite a bit of internet-accessible stuff for the pop server. You can have perfectly static internal LAN addrs for homebox and cloudbox over the VPN, even if endpoint addrs change, which makes simple IP address firewall rules easier than if both endpoints were dynamic. So by the magic of OpenVPN, 10.1.0.0/16 is in the cloudbox and 10.2.0.0/16 is at home, all the time forever, regardless of current provider both at home or cloud, which makes it pretty easy to create static iptables rules and such.
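A sketch of those firewall rules, using the VPN subnets from the example above:
# allow IMAP only from the VPN subnets; drop everyone else
iptables -A INPUT -p tcp --dport 143 -s 10.1.0.0/16 -j ACCEPT
iptables -A INPUT -p tcp --dport 143 -s 10.2.0.0/16 -j ACCEPT
iptables -A INPUT -p tcp --dport 143 -j DROP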
I find it handy to have my phone and tablet always on my home network, because I have fun stuff on my home network. So I'm going to be doing this anyway... may as well use the same security infrastructure to secure my fetchmail sessions rather than trying to secure them on the open internet.
I've not tried the above -- but there shouldn't be any problem with it (although if you control both servers, you might as well not use smtp from 1 to 2). If you want to do this, you could also just try to keep your dns ttl reasonably low, and updated, and receive smtp straight at home -- after all:
"The sender MUST delay retrying a particular destination after one
attempt has failed. In general, the retry interval SHOULD be at
least 30 minutes; however, more sophisticated and variable strategies
will be beneficial when the SMTP client can determine the reason for
non-delivery.
Retries continue until the message is transmitted or the sender gives
up; the give-up time generally needs to be at least 4-5 days."
(My emphasis) http://www.ietf.org/rfc/rfc2821.txt
I host quite a few things out of my home connection, not a mail server (yet), because I know my specific ISP blocks inbound port 143 and I didn't want to get into a fight with them over running imap off another port.
But I just use a dynamic dns address from freedns. I leave my computer on 24/7, I run all my web services as nobody users and use fail2ban and ufw as access control, and I prefer secret-key authentication over password-based auth methods when I can (nothing stops me from giving someone my certs if I want them using some service I provide).
Certainly. There are several ways to set it up. For example, you can set cloudbox as your domain's MX with a lower priority than homebox. If mail can't get to homebox, the originating mail server will try cloudbox automatically.
If cloudbox's MTA is running in secondary mode, it will try to deliver to homebox, but hold it in the queue until it succeeds.
Or, you could have all mail sent to cloudbox, and use imapsync or getmail or fetchmail to pull mail off of it whenever you want.
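A minimal sketch of the backup-MX variant with postfix (domain and hostnames are placeholders):
# DNS:
# example.com. IN MX 10 homebox.example.com.
# example.com. IN MX 20 cloudbox.example.com.
# /etc/postfix/main.cf on cloudbox:
relay_domains = example.com
relay_recipient_maps = hash:/etc/postfix/relay_recipients
# (populate relay_recipients with valid addresses, then run postmap on it)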
> you can set cloudbox as your domain's MX with a lower priority than homebox.
This is one way to do this, but if you do, remember that a spamming tactic in the past has been to direct spam to the lower-priority MX records, as they were often somewhat forgotten and not kept up to date with the latest spam guards.
Basically, always make sure your primary and secondary mail servers have the same and latest of whatever you are using to stomp on spam.
You will also want to change the length of time things are held in the queue. The default is something like 7 days. That might not be long enough if you're using the secondary MX as a failsafe.
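If the secondary runs postfix, that's a one-parameter change (the stock default is actually 5 days; 14d here is just an example value):
# /etc/postfix/main.cf on the secondary MX
maximal_queue_lifetime = 14d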
I'm not familiar with cloudbox, but depending on your ISP, be prepared to handle issues with spam filtering and the possibility that port 25 may be blocked.
I have tried this guide and others a couple of times. Unfortunately, no matter what I do, Gmail and Hotmail always classify anything I send from it as spam. DKIM? Check. SPF? Check. Reverse DNS? Check. Spamassassin score? 0.0. Result? "google thinks this message is spam". While it was an interesting experience, the end result is invariably a server that can't send mail to most of the people it needs to, which is quite frustrating.
I run my own e-mail server and it's a lot of work. I have exim with ACLs, spamd and just mutt. Nothing "fancy" like other users, pop3/imap/smtp auth, virus scanning, additional mxs...
Still took me a weekend to get ACLs right, tune the threshold for greylisting (I greylist e-mails which score between two and seven points in SpamAssassin), set up DKIM and SPF... it's not difficult but there's a long list of things to do.
If you're doing it just for yourself, exim, spamd and mutt are enough. But I wouldn't want to be responsible for a larger installation with regular, non-technical users.
It's not just the configuration, your service will always be subpar. At least until there's a good webmail (I have high hopes for Mailpile).
I don't know what you're doing then, because I run postfix+procmail+mutt and nothing else and it is zero maintenance and hardly any config. I don't use DKIM or SPF or SpamAssassin. All I have is a RBL client restriction and recipient/helo restrictions. I get possibly one spam a week and I can deal with that manually and I post to a lot of lists.
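That sort of setup looks roughly like this in /etc/postfix/main.cf (the specific RBL is my example, not necessarily the one used here):
smtpd_helo_required = yes
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_rbl_client zen.spamhaus.org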
I've also built a couple of very large mail clusters (full 42U rack sized, 20+ machines) for ISPs before running courier, sendmail and procmail and it's really not that much management or effort to get it off the ground. The real bugger is getting a management front end on it all (postfixadmin doesn't cut it on that scale so it's LDAP time which isn't much fun).
I've rather horribly dealt with Exchange (2000, 2003) and that's just a whole pile of pain. My noble Exchange battling colleagues inform me that it still stinks.
Yeah here you go on Debian. I just pasted this from my notes and added some formatting...
== set up mutt ==
$ sudo apt-get install mutt
$ echo "export EMAIL=user@domain.com" >> ~/.profile
$ source ~/.profile
== postfix ==
$ sudo apt-get install postfix
.. answer system mail name as your host name
.. add your domain to domains to accept email for
.. Follow instructions here WRT SPAM:
https://wiki.debian.org/Postfix#anti-spam:_smtp_restrictions
.. basically add two lines to /etc/postfix/main.cf
$ sudo service postfix restart
$ sudo ufw allow 25 # allow smtp in firewall. I use ufw.
.. add your hostname as the MX for your domain (I use 123-reg)
.. Visit mxtoolbox.com and check the machine isn't an open relay and is functioning correctly
== procmail ==
$ sudo apt-get install procmail
.. add following to /etc/postfix/main.cf
mailbox_command = /usr/bin/procmail -f- -a "$USER"
$ sudo service postfix reload
== root alias ==
$ sudo sh -c 'echo "root: youruseraccount" >> /etc/aliases' # the redirect needs root
$ sudo newaliases
Done.
I genuinely get virtually no SPAM. RBLs and postfix sender validation above seems to work pretty well on its own.
I've done the same on OpenBSD with OpenSMTPd and spamd with even less effort.
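From memory, the old-style smtpd.conf grammar made that look something like this (example.com is a placeholder, and this is my sketch rather than the commenter's actual config):
listen on egress
accept from any for domain "example.com" deliver to maildir
accept for any relay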
I run my own mail server and it's essentially zero work. I have postfix, dovecot, SpamAssassin, and RoundCube, all authenticated via LDAP.
It required some upfront configuration, but it has run itself with minimal configuration changes and 'apt-get' updates for almost 6 years now. I never touch it, and it works.
> It's not just the configuration, your service will always be subpar. At least until there's a good webmail (I have high hopes for Mailpile).
That assumes that users want to primarily use webmail. In this renaissance of Mac OS X mail clients and mobile phones, that hasn't been the case here.
dsync seems to stop working properly after a while for some reason; the usual solution is to upgrade it, which implies a compile from source. After a while it's a pain to keep all these packages up to date.
I am using it in production (debian wheezy package, version 2.1.7-7 now) and it works pretty reliably. Do you have a link to a bug report? I kind of depend on it atm, so I'd like to know its bugs.
I remember actually understanding sendmail.cf syntax. Those were the days!
The problem is not the daemons; those are easy.
The problem is spam on the inbound and deliverability on the outbound. And I do /not/ want to fight that fight on my own!
I've personally not found this difficult, unless perhaps you're dealing with very large volumes of mail and the overhead of SpamAssassin is an issue. Mail volume in the 10s of thousands/day on a low-end VPS has been no problem for me.
I would definitely recommend using Virtualmin; it's a hosting control panel that can manage mail/websites/dns in one place.
It all works like Plesk, but without breaking everything.
They've got a script that sets up dovecot, postfix, apache, bind, clamav, spamassassin, MySQL, and loads of other hosting software, and then you configure everything via a web interface.
I am looking at running my own mail server out of a Linode instance. It would service a number of our domains and serve to migrate away from having web and email live on the same machine at other VPSes. I want to have email serviced from a dedicated instance that does nothing but email for all of our sites.
This seems like a better overall strategy but it's been hard to get motivated to get it done due to the amount of work this represents. I have not studied this guide in detail yet but it seems to really take you down the path step by step very nicely. Thanks!
Good tutorial with detailed setup instructions. For some configuration values I would say the default would suffice as well, but mentioning what can be configured might also be helpful.
However, from experience, setting up a database is overkill for a small mail server instance hosting only a few mailboxes. It's way easier to use system user accounts (just disable remote login for them). That avoids the hassles of setting up virtual mail delivery. Only for a large mail server would a database backend make sense, to gain more performance.
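For example, on Debian-ish systems a mail-only account with no shell login is one command (the username is a placeholder):
$ sudo adduser --shell /usr/sbin/nologin alice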
What about the reliability of your home Internet connection? Isn't that a major issue?
You can have a power outage, ISP problems, or your server might simply die. What happens to the important emails you might miss?
No big deal, it's pretty reliable, but also I got an MX at a friend's place. Never lost a mail, even if my server goes down for a couple of days.
Mail actually retries for a certain amount of time by default, so short downtime generally causes no problem at all. Longer ones do require the 2nd MX, but yeah, I do that.
In about 15 years I think the backup MX has helped twice so far (which is probably totally worth it).
Mail servers are made to attempt redelivery if a mail is undeliverable. Most mail servers will attempt redelivery for multiple days before disposing of it so it shouldn't be an issue.
Furthermore, to address the 'server on fire' issue: if you build the mail server and its ancillaries up in a VM (or a number of VMs), then you can sync the VM to another host on a schedule, or even live.
Set up a heartbeat ping and the alternative host can bring up its copy of the mail server VM in a few seconds.
That's nice. I have for a long time been running just Postfix + Dovecot IMAP and relying on OSX Mail's spam detection, which is not that great (lots of false positives, learned data not synched between computers, etc). This inspires me to try using a SpamAssassin + ClamAV combo on the server.
> never put postfix's spool and dovecot's storage on the same physical device and I/O controller. /var/log must live on the separate device too.
This probably depends on the mail volume, no? I host all mail related services for myself and some family on a $9/month VPS (1 VCPU, 512 MB RAM, and pedestrian I/O capabilities) without any issues. My mail volume is typically under 20K messages/day.
The problem with things like this is, since I'm not super confident with any of the technologies involved, it leaves me with an uneasy feeling of having messed something up somewhere.
It is extremely easy to mis-set or forget one extra parameter or something, and everything falls apart.
I've been doing web/mail hosting since 2002 and it amazes me that it is STILL this hard to do a mail server. I gave up and started using iRedMail 2 years ago; so far it's been great.
I would replace Courier or Dovecot with a Cyrus IMAP implementation... it really rocks in terms of stability (and works smoothly), is fast, and has replication :)
Really? My experience has been the complete opposite.
Almost every time I've ever been involved with or run a medium/large mail infrastructure (200K active mailboxes), I've found Dovecot to be the better choice.
Good to hear that Dovecot scales. I personally went with Cyrus because at the time I was picking an IMAP server, Cyrus was already a well tested solution known to scale. As I recall, Dovecot back then was still young and developed by one person. I've heard a lot of good things about it since though. I'm sticking with Cyrus since it works fine for me, but it's good to have options.