I was a very happy FastMail customer until a hacker asked them to reset my password. After the hacker _incorrectly_ answered a handful of questions from FastMail support, the recovery email address was changed and a password reset link was sent. From there, the hacker attempted password resets on other services.
Initially, FastMail was dismissive, calling this a simple "mix-up", and didn't disable the hacker's access until 7.5 hours after my report.
To their credit, FastMail gave me a list of the emails accessed and the headers of the messages the hacker sent from my account (and then deleted -- unrecoverable).
Until and unless FastMail addresses the human factor of security, their technical security mindset is of secondary importance.
Again, ghouse, I'm really really sorry about what happened to your account. It was wrong and we screwed up. As other comments have already noted, it was during the transition to a new security system which was designed precisely to remove the human factor from decision making.
I'm an Australian, and I'm a great fan of our "100 points of ID" system, which is designed to remove the human factor from identifying people.
While I wasn't aware of your account's issue at the time (and we'll be having some discussions internally about why not!), FastMail management were all aware that we needed to get the human factor out of decision making about account access, particularly since somebody tried to pull a similar swindle on our domains!
We spent a lot of 2016 and 2017 working on an automated account recovery system which allows recovery of locked accounts via a carefully audited set of automated steps, which includes a 24 hour lockout to allow the owner to notice an attempt on their account.
If this had existed in 2016 then we would have sent you there rather than having a human make a (poor in this case) judgement call!
I use Fastmail and I like it but this is extremely disturbing. I appreciate you being relatively candid with us, but it doesn’t change the fact that you allowed a customer’s account to be compromised by the most basic attack out there.
Complexity and vulnerability go hand-in-hand. A product providing a critical service like email should be opt-in to any form of recovery not requiring pure secrets provided directly by the user (or provided to an already logged-in user for this specific purpose). Failing that, there should at least be an opt-out for such dangerous recovery methods.
I’m going to watch this thread and your blog for a while, and I hope you can provide some real assurances for security-minded technical folks like me for whom email really is the “keys to the kingdom”. Failing that, I may have to look for another email provider.
I don't understand this response. I'm glad you're working to minimize human factors. Can you explain how exactly you're doing that? I asked some specific questions here:
Please consider adding an option to never, ever allow recovery of the account without the password, similar to how Gandi does it.
My email account / domain is my central hub for all my accounts. All of them (with the exception of my domain and a few other extremely crucial services) can be taken over through FastMail if an attacker happens to obtain access to it. I want the assurance that this attack cannot happen to me.
It sounds from what you're saying like what you (and at least a few other Hacker News posters) want is an even stricter "no seriously, I promise I won't ever screw up" mode.
We try not to have those kinds of modes, because (for example):
It turns out, black and white security models lead to massive losses of availability when people screw them up - and people do. Though I have to confess to being amazed to see Tony Finch amongst the recent "oops". NASA is maybe not so much of a surprise.
Having said that - if there's enough demand, that would be a worthwhile feature. Accounts that aren't being used are cheap for us to run, and that flag would make the security team's job really easy - just say "no, go find your own way in" without having to review anything!
Bron, I think your concerns are justified and understandable. Thanks for entertaining the idea.
I am one of those advocates and would enable such an option if given. That said, I did have an instance when I had to call AWS support because of their own screw-up. I had closed the AWS portion of my account but not the Amazon.com shopping portion. I later found out that I could no longer remove 2FA on the AWS portion because I no longer had it -- I had already closed the account and thought it was safe to remove. Because of their faulty system design, a closed account was enforcing 2FA on my Amazon.com portion, preventing me from accessing it. In this case, the support agent helped me to regain access.
That support agent's ability to fix their faulty system design is both good and a potential liability. I wouldn't want a "I won't ever screw up" mode there.
In the case of email though, when certain conditions are met, disabling recovery becomes safer than risking getting screwed over by support staff.
The pre-conditions are:
1) The user is using custom domains only
2) The user has past emails backed up on his/her own devices
When these conditions are met, the user has complete control of their email destiny. In the case of losing FastMail account access, they can continue to receive email because they control the domain. They also have complete email history because they back it up.
That said, I believe your clearer response elsewhere in this thread is good enough for me personally. I was concerned before because of the vague responses. I think for FastMail, the risk perhaps outweighs the better security for me personally even if I would welcome it.
There is no demand for password-only protection without recovery because it is not available on the mass market. Just like there was zero demand for cryptokitties a few months ago and now there is significant demand. You can only see demand if there is an option for something and people use it or don't, or after conducting a poll.
Though I'm not sure that cryptokitties are a great way to sell your idea here. They're the kind of tulip/fidget-spinner craze that we'd invest a ton of effort into, sell a few for a while, have to support for the next 10 years, and still face a noisy backlash from a few annoyed users when we finally retired it. Overall, a net loss.
In the case of "no recovery allowed" accounts, the development effort is minimal, but the number of people who would turn it on "it says higher security and someone on hacker news told me to, it must be good" and then proceed to lose their account... I bet they'd be noisy when the realised they'd not only lost all their email, they'd lost their payment to us, because they'd have no authority to request a refund.
Oh wait, they would - chargebacks. Notoriously hard to fight with an online service, particularly when you're not providing said service any more. And it's always the full amount charged back too, not just the unused portion of the service.
I'll float the idea of allowing people to push right hard up to the "do not resuscitate" tattoo on their account, but I'm not going to pretend it doesn't come with some risks to us.
As a paying fastmail customer, I would appreciate such an option.
However, you do have to make sure that if I lose access to that account, I should be able to create a new FastMail account and have new traffic to my domain be directed to the new account. i.e. you do need some way to migrate a custom domain to a new account, if the new user can prove ownership and/or control of the domain name.
Yes, that has to be an option anyway, if somebody sells a domain and doesn't release it from FastMail. I have a blog post for this advent series (already written and everything) about why we don't allow split billing on a single domain.
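For illustration only, proving control of a domain for a migration like that could be done with a DNS TXT challenge along these lines (hypothetical record name and tooling, not our actual process):

```python
# Purely illustrative sketch: ask the requester to publish a one-time token in
# a TXT record, then check public DNS for it. Record name and flow are hypothetical.
import secrets
import dns.resolver  # requires the dnspython package


def issue_challenge() -> str:
    """Generate a one-time token the requester must publish in DNS."""
    return secrets.token_urlsafe(32)


def domain_control_proven(domain: str, token: str) -> bool:
    """Return True if the expected TXT record is visible in public DNS."""
    try:
        answers = dns.resolver.resolve(f"_ownership-challenge.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    published = {b"".join(rdata.strings).decode() for rdata in answers}
    return token in published
```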
> a 24 hour lockout to allow the owner to notice an attempt on their account.
I've been pretty careful to ensure that I don't lock myself out of my account (multiple U2F keys, strong password saved in password manager with backups)
But if a determined attacker kicks this off just as I'm stepping on a flight from Sydney to London, 24 hours isn't going to be enough.
(I should add also - I'm a mostly happy Fastmail customer)
You can't even get to the 24 hour lockout unless you've successfully passed the security checks.
We add the 24 hour lockout as an additional level of protection for 2fa accounts (even though they've given two factors of recovery by then) or if we can't confirm that you are resetting from a computer which has successfully logged in to that account before.
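As a purely illustrative sketch (not our actual code, and the real rules aren't published), the delayed reset works roughly like this:

```python
# Illustrative only: a reset that has passed the recovery checks still waits
# out a notice window, during which the real owner can cancel it from an
# existing session or app.
from dataclasses import dataclass
from datetime import datetime, timedelta

NOTICE_WINDOW = timedelta(hours=24)


def should_delay(has_2fa: bool, known_device: bool) -> bool:
    # Delay for 2FA accounts, or when the request doesn't come from a device
    # that has successfully logged in to the account before.
    return has_2fa or not known_device


@dataclass
class PendingReset:
    account_id: str
    requested_at: datetime
    cancelled: bool = False

    def effective(self, now: datetime) -> bool:
        """The reset only completes once the notice window has elapsed."""
        return not self.cancelled and now >= self.requested_at + NOTICE_WINDOW
```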
It sounds like if I use Fastmail, and I go on vacation (and thus go a day without checking my email), someone can max out the automated system and then get a human being at Fastmail to potentially reset my recovery email. Is this the case?
Our procedures have to balance the concerns of very different groups of people.
Some people have explicitly directed us to enforce stringent account security requirements by enabling multi factor authentication. For those people, we assume that they have their own security practices and are diligent in maintaining them. Those people are aware of the risk of losing access to their mail if they lose their credentials.
The other, much larger, group of our customers come to us because they want email that has support. Many of these customers forget their passwords and still need to get to their email (which is more common than you might imagine if you are surrounded by a hacker-news demographic!)
Our procedures have to balance between those two sets of needs, and they evolve over time. This incident came up in a period of transition. It should never have happened, and it's a great object lesson to us about how to do better in future transitions.
Having said that, based on this conversation today we are reviewing all our processes around re-establishing access for regular people who haven't requested additional security by enabling second factors. We absolutely can and will do better than we did in 2016.
For an attacker to exploit this, they would have to know that you are going on such a trip. This means that attackers who don't know much about you already are less likely to bother, and it also raises the bar even for focused attackers.
Nothing is foolproof, but many things can be useful.
If a well-resourced attacker was targeting me specifically, it wouldn't be too difficult for them to find out about my short-to-medium term travel plans. A bit of social engineering with the airlines could tell them exactly which flight I'm on.
They could also compromise other people who need to know my plans and don't have the same security practices as me.
I think about this stuff and minimise as best I can, but my account security shouldn't be dependent on it.
This is like the employer I used to work with who said "well google has been hacked so therefore us storing passwords in plaintext is okay". Maybe I'm toast if the NSA takes an interest, but there are an awful lot of bad actors out there without the level of funding or resources of the NSA. Of course I'll never be "100% secure", but making it as impossible, or at least as difficult as possible, for someone in Russia to socially engineer their way into my email is worth spending time and money on.
What I'm hearing is that the human aspect remains and there is absolutely no prevention of this happening in the future.
The other response contains weasel words like "For instance, _some cases_ take 24 hours before the reset password goes into effect". Why "some cases"? Why isn't it all cases?
I think we as customers deserve complete transparency on this and know what prevention will be in place.
I've had conflicting advice about complete transparency - if we give the entire algorithm, then that helps attackers find the exact surface that will get them in, so we don't publish the full ruleset we use.
Here's an example of some inputs that go into it though: we store a cryptographic token in a cookie which tells us the first time your account successfully authenticated from a computer. If we have a history of you using the same computer over multiple years, that's different than a new computer. But cookies can be cloned, so it's only a signal, not proof in itself.
If it's from the same IP address as multiple successful logins in the past, that's a signal.
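As a toy example only (the real weights and ruleset stay private), such signals might be combined roughly like this:

```python
# Illustrative sketch: an HMAC-signed "first seen" device cookie plus a simple
# signal score. Cookies can be cloned, so this is a hint, never proof.
import hashlib
import hmac
import time

SERVER_SECRET = b"replace-with-a-real-secret"  # hypothetical


def mint_device_cookie(account_id: str) -> str:
    first_seen = str(int(time.time()))
    payload = f"{account_id}|{first_seen}"
    sig = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"


def cookie_age_years(cookie: str, account_id: str) -> float:
    """How long ago this device first authenticated; 0 if the cookie is invalid."""
    try:
        acct, first_seen, sig = cookie.rsplit("|", 2)
    except ValueError:
        return 0.0
    expected = hmac.new(SERVER_SECRET, f"{acct}|{first_seen}".encode(),
                        hashlib.sha256).hexdigest()
    if acct != account_id or not hmac.compare_digest(sig, expected):
        return 0.0
    return (time.time() - int(first_seen)) / (365 * 24 * 3600)


def recovery_signal_score(cookie_years: float, ip_seen_before: bool) -> int:
    score = 0
    if cookie_years > 1:
        score += 2  # long-lived device cookie: strong but clonable signal
    if ip_seen_before:
        score += 1  # same IP address as multiple past successful logins
    return score
```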
We're not the only site that uses methods like this to help identify people when they've lost their password. People make mistakes. Taking a hard line "you lose your password, you lose your entire email account with all its history, and you don't get your money back either" might sound attractive to a certain demographic. They are not the bulk of our userbase. Even locking people out for 24 hours is a pretty big imposition that you want to avoid if you're really confident (algorithmically) that it's the same person.
If people in the "security is more important than easy recovery" demographic haven't turned on 2FA yet, then they certainly haven't signaled that they want things locked down in case of doubt. Even of those who HAVE turned on 2FA, you'd be surprised at how many lose one or both of their factors.
It's easy to say "I won't mess up", but people do. Which is why our post today says in bold "If that happens, you will lose access to your account permanently."
The blog post was in 2014. This security bypass happened in 2016.
I think what we're witnessing here is that despite best intentions and past experience, humans are going to be humans. I actually felt good after reading that blog post in 2014 thinking that you guys are going to be better than most companies here.
Nope.
But I think a lesson can be learned here. The lesson is simply that humans are the weakest link. As much as you might try to add process and try to minimize, the best is having zero human capability at all. So when tptacek asks _who_ has the ability to change things about an account, we really do want to know. Because those people are the weakest links. (I don't mean naming names, but understanding who in general has those powers.)
I mentioned elsewhere. I own my domain. I backup my emails. It's way more likely for a FastMail human loophole to screw me over than for me to need human assistance on login (which is never).
https://blog.fastmail.com/2017/12/06/security-account-recove... - it's currently 3 people who have that ability. It had to be more before we had the automated tooling, those three people couldn't handle 3 figures per day (I'm not kidding) of regular password losses by regular users.
'I'm an Australian, and I'm a great fan of our "100 points of ID" system, which is designed to remove the human factor from identifying people.'
According to the page you linked, a birth certificate and bank statement (or even a 'Document issued by <SNIP> or registered corporations.') would be enough. So if I get your birth certificate and have an Australian corporation, I can issue a letter saying you're a customer for a year. So I have 100 points and can pretend to be you?
That doesn't seem secure at all. The birth certificate has no photo (or, if it does, it won't be useful except to determine ethnicity), and the document from a registered corporation can be trivially faked.
We don't use the 100 points of ID system of course, because we're an online service. The 100 points of ID is something that's used in person to decide whether you can open a new bank account using that name.
The concept behind the 100 points of ID is that there's a fixed standard and it's not a per-time decision made by a human, it's a consistent set of rules applied without fear or favour.
"We don't use the 100 points of ID system of course, because we're an online service.
Right, but you said you're a 'great fan' of it. My reading of the wikipedia description that you linked suggests that the system is wholly inadequate (mainly as you can satisfy the 100 points without photo ID).
So I'd like to know: are you really a fan of that system (the particulars of its rules), or just a fan of the idea behind it (that there exists a consistent set of rules)?
Tell me how you bootstrap photo ID in your country, and I'll tell you whether I think photo ID means anything.
In our case, we don't care at all what you look like, just that you're the same person we were talking to earlier - and ideally that you're the owner of the method being used to pay, though that's not always true or necessary. So the photo is meaningless to us.
Besides: who is looking at the photo and confirming that it's the same person as the one in front of them? Yep, a human. The whole point of this discussion is stopping the human making human-factor judgement calls.
I think we're talking past each other. I'm saying the Australian 100 points system isn't adequate for bank account opening, because it doesn't have photo ID, so I'm not sure why you admire it.
You're saying that you don't need photo ID for your use case. I agree with that for your purpose, but it's not relevant to my criticism of the 100 points system for its purpose.
I'm wondering how exactly you GET a photo ID in the first place. You need to identify yourself to whoever is taking that photo.
I lived in Norway for a couple of years. There I just opened a bank account online, giving them my person number - and they posted something to my address as registered with the government. But in Australia our privacy advocates killed the "Australia Card" idea, so instead we have a tax file number with all the disadvantages of a national ID number and none of the advantages...
Anyway, back to the main point. To be totally frank with you, I think photo IDs are largely bullshit security theatre. You're asking a human factor[tm] to look at a fuzzy photo taken 10 years ago and confirm that it looks similar enough to the person in front of them.
"I'm wondering how exactly you GET a photo ID in the first place."
In the UK, for a passport, there's a chain of trust. You need a professional or some other trusted community member (vicar, doctor, lawyer etc.) to sign the back of the photo saying it's you, and to provide their contact info for further verification.
Not perfect, but I don't think many people are skilled enough to successfully procure a passport where the photo isn't of the named person.
"To be totally frank with you I think photo IDs are largely bullshit security theatre."
They're not 100% reliable, sure, but they're extremely useful in establishing whether the person in front of you matches a particular identity.
One excellent use case for photo ID: consumer lending. If you lend someone money, you need to establish that the person you give the money to is actually agreeing to pay you back, and to pay you interest.
This is death. Your email provider absolutely cannot under any circumstances have this vulnerability. Wow.
Just the idea that there's a human in the process making subjective decisions about security questions and answers that can, on their own recognizance, change a recovery email address. Forget the immediate mistake that one rep made, and go down a couple levels deeper into the company policy design mistakes at play here.
Unless you have a set of objectives that are very different from what I consider "as secure as e-mail gets", please consider GSuite and not Protonmail. (I don't speak for 'tptacek, but I'm pretty sure he'd agree.)
As a corollary: if you really care, use Signal for stuff you can't say over e-mail. Whatsapp's fine too. But they solve a very different security problem than the one you need e-mail to solve, which is mostly "don't leak my emails" and also "don't reset my password for attackers who ask nicely".
Just gonna drive by mention https://landing.google.com/advancedprotection/, which is a physical-2fa-security-key-only version of gmail. To my knowledge it also disallows mail forwarding, and the account recovery procedure in the event of losing both second factors is intended to be a long process that involves proof of identity and multiple attempts to notify the account owner.
(I work on gmail, but I'm not intimately familiar with this option, other than knowing that it exists and is intended for high value targets like celebrities and politicians).
Yep. I don’t recommend it by default (most people I work with use GSuite in a work context, so recovery is normally administrator-mediated), but the fact that this exists is pretty awesome.
I know that GSuite would like to differentiate its enterprise products by features, but allowing basic/business plans to force U2F would be great. Since it's also available in GCP's Cloud Identity product (which is free), I hope this is coming down the road.
Sure! Signal and WhatsApp are good at having private conversations. Email is very tough to add private conversation capability to, for a variety of reasons. What you do need your mail provider (and by extension your DNS provider) to do is to not give up access to an attacker who asks nicely, because for most services, email access is account takeover.
This makes discussions about email security confusing, because most security professionals I know are thinking about a very different threat model (pop all of your services) than what a lot of people think about (confidentiality). Google is pretty good at not letting random people auth to GSuite as you. (Still turn off SMS recovery, though.)
I get the impression that when non-security people talk about "security" these days it's almost always in the context of preventing government surveillance.
So even though Google has a great track record of keeping hackers from taking over your accounts, the news stories about them cooperating with governments makes them seem less "secure" to some people.
What's weird is when it leads to a fallacy where people trust services that are less verified and tested in terms of security just because there isn't the association with government cooperation.
This is irrational. It might be a complicated question if the foreign-jurisdiction alternatives were more secure, rather than drastically less secure. But since that's not the case, switching from Google Mail actually gets you the worst of both worlds: a mail service that is materially less secure, operating in a jurisdiction where there are literally no rules preventing USG-level adversaries from exploiting it.
Even allowing for some hyperbole, I think this is an overreaction. I agree that they made a mistake, but we only know of one user it affected. They didn't leak an entire database of user data or expose a vulnerability for which the attack can be automated.
You've commented many times that email is inherently insecure and that (IIRC the precise conclusion) there is little point in focusing on securing it. Instead, use a secure messaging system such as Signal. Email just isn't going to be secure. Fastmail seems to put more effort into their security than most mail providers. For example, the proxied images [0] sound fantastic and they address a threat that affects almost every email user daily.
The inherent insecurity of e-mail doesn't change the fact that popping someone's email account means popping most of their services. Therefore it makes sense to hold e-mail providers to a higher standard than a median company.
Furthermore, while we only know 1 user affected, in the rest of the thread, Fastmail has been cagey at best about answering what they feel the process is now, let alone what it was back when this incident occurred.
You're aware this is how the vast majority of legacy email providers operated, right? (And sadly a few still do.)
E.g. in this case, likely a two-point auth system (a security question plus, e.g., payment details such as the last four digits of the latest payment method).
Seems you're shocked that a lower tier support agent can auth this kind of request when the reality for most email hosts is that they can.
They (likely a new employee) got socially engineered.
Yes, they should have systems in place to prevent this from being possible in the first place; no, I do not find your incredulity genuine, albeit rational.
FastMail isn't some random legacy email provider. It's a premium one that bills itself as secure. It's not some free mailbox you got with your budget domain registrar. Hence, it's reasonable to hold them to a higher standard rather than fatalistically observing that the median email provider sucks.
Good morning. I'm going to be here to answer specific questions, and I owe you a personal response to this as well, which I'm about to start working on!
There is no doubt that in this specific case our human factor screwed up, and I'm really sorry about that.
First I'm going to post the standard response that our team has written for any new support tickets that come in about this today, then write my own personal apology and response here.
---
Thanks for getting in touch with us about the report on Hacker News about our security procedures.
As we say in our recent post about security at https://blog.fastmail.com/2017/12/05/the-fastmail-security-m..., security is a process, not a checkbox. We do our best to be continually improving and upgrading our security procedures, and offering our security-minded customers the most robust, industry-standard options possible.
However, we have been less diligent about forcing older accounts to upgrade their security settings. With a range of possible security option states, customer support is occasionally placed in a position to make a judgement call. As the post indicates, the incident in question happened immediately after a major round of security changes. There’s no way around it; someone made an exception they shouldn't have.
Social engineering is always one of our biggest concerns. As any number of well-known break-ins have demonstrated, the "best" security hack is often to sidestep it. Since that incident, we have taken substantially more aggressive steps to close off avenues of attack and human review. We are constantly trying to narrow the number of accounts that even can go to a human for review, and for those that must go to a human to provide as much notice as possible to the account owner before possibly allowing the attacker to have access. For instance, some cases take 24 hours before the reset password goes into effect. If you are a legitimate account owner, this has the often frustrating side effect of locking you out of your account for 24 hours. But, if you have been attacked, this gives you the opportunity to keep the attacker out.
Thank you for sharing your concern with us, and I hope we’ve addressed yours.
Exactly which employees in your organization have the ability to alter recovery email settings?
How many of those employees are there?
In what fashion do you audit and track the activities of those employees?
What training are these employees given to avoid social engineering? What firm provides the courseware?
What's the escalation process for complicated, non-no-brainer reset situations? If a support person isn't absolutely sure whether they should reset something, how do they get a second opinion?
Are the support people who are entitled and able to make these changes incentivized to close tickets as quickly as possible?
Do you monitor "out-of-process" changes to recovery email and password settings, so that you can see trends over time and by particular staff members?
Has any third party security firm assessed your service recently specifically for this attack vector, for instance by conducting social engineering testing against your support staff? What's the firm?
How are you MINIMIZING, rather than just improving, the "human factors" involved in assessing whether accounts can be altered based on anonymous incoming callers and requesters?
This is a HUGE, TERRIFYING vulnerability. Email providers are the single most important security service people use; if your email is compromised, many (most!) of your other services are compromised as well.
Wow, that's a lot of questions, and I can't answer all of them without creating security risks!
Our absolute focus is on minimizing the human factors.
In the past year and a bit since that incident, we have improved our escalation policies and support training, as well as let some support staff go.
But more importantly, we now have an automated account recovery system which can be used to verify ownership of the account using a number of different factors (not all of which I'd like to talk about in public - again, if an attacker knows the full algorithm it helps them game it).
> Wow, that's a lot of questions, and I can't answer all of them without creating security risks
Questions like these are not unreasonable for a customer to ask a service provider with respect to identity management and protection of that customer’s proprietary and confidential information.
With respect to the first question “Exactly which employees have the ability to alter recovery email settings.” Not being able to have a prepared answer for this question suggests that you don’t have a formal policy or standard procedure around role based capabilities in your operation.
The second question is an extension of the first.
“In what fashion do you audit and track the activities of these employees?” Not being able to answer that question suggests that you don’t have an auditing process around employee actions with respect to account changes.
“What training are these employees given to avoid social engineering?” Not being able to answer this question suggests that you don’t have such training in place.
“What’s the escalation process for non-no-brainer reset situations?” If your processes are written down and staff are trained in them, a very simple description here would not create a security risk of any kind. Not doing so suggests that the process is not formally specified or is quite ad hoc.
“Are the support people who are enabled and entitled to make these changes incentivized to close the tickets as soon as possible?” It seems that your internal security posture would make that clear, and it is unclear how stating that correctness is more important than speed in user account modification poses a security risk.
I’ll pause here and summarize. Answering any of these questions is not going to pose a security risk unless such answers expose to your users reasonable measures that you are not taking or haven’t thought of.
Which employees? At the time of this compromise, that list was all support staff as well as the technical staff in Melbourne. It is a specific role that's granted to specific people, to answer your question about having a procedure or policy.
Today, that role is granted to a much more limited set of senior security staff (currently 3 people). Regular support staff can not alter security-sensitive details about accounts. If your account is owned by someone else (e.g. family or business, or part of a resold package) then they can still alter recovery options, as they own the account.
In 2016 before we had automated account recovery, lost password was in the top 3 categories of ticket every single week! Every member of the support team dealt with multiple account-loss tickets per day, both forgotten password or stolen account.
Stolen account losses are way down now that we have app passwords; we often only have to block a single app password and notify the user rather than locking the entire account. Forgotten passwords have not reduced, but most people are able to recover using the automated tooling.
---
In what fashion do we audit and track? A few ways - we log every API call at the lowest level. We log each override when the support person accesses user accounts against the ticket that they come in through, so we can see why they were accessing that user.
We could always do with better tooling to introspect logs, but the data is all captured and can be followed through after the fact. Support staff have no way to wipe their audit trail.
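For a rough idea of the shape (illustrative only, not our actual schema), each override ends up as an append-only record like this:

```python
# Illustrative append-only audit record tying every support override to the
# ticket it came in through; there is deliberately no API to delete entries.
import json
from datetime import datetime, timezone


def append_audit_entry(log_path: str, staff_id: str, ticket_id: str,
                       account_id: str, action: str) -> None:
    entry = {
        "at": datetime.now(timezone.utc).isoformat(),
        "staff": staff_id,
        "ticket": ticket_id,   # why this account was accessed
        "account": account_id,
        "action": action,      # e.g. "view_mailbox" or "admin_override"
    }
    # Append mode only: nothing here rewrites or truncates history.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```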
---
Training - in 2016 we didn't have much formal training for our support staff - they learned on the job from each other. We are very aware that this was a failing at that time.
We have more training now. We did a lot of work at unifying our support teams across the FastMail and Pobox/Listbox family throughout 2017, and that led to better training and induction materials, as well as better internal reference material for support staff to use.
Early in the induction process for all new support staff is a description of how social engineering works and a warning that urgency is often used in social engineering attempts, so when in doubt, slow down and get a second opinion (which leads to complaints about slow support, but that's the tradeoff here.)
---
Escalation process - as mentioned earlier, if you had 2fa enabled then it has always gone straight to the senior security team, which is based in Melbourne and consists of our most experienced and trusted people. Neil (author of the blog post this HN refers to) is of course one of these people.
With lost passwords no longer a highly common support request, all support tickets requesting manual account recovery are escalated to our senior team for review.
---
Support people have no incentive to close tickets quickly -- absolutely none. That is a bad metric, and it's not a metric we have ever used.
Time to first response and time to followup responses are tracked, but there's no incentive to close tickets.
This answer is a no brainer and I should have answered it in the first response - sorry. I was still rushing through initial responses at that time, and there were too many points in that post to think about them all at once and still respond quickly. The real-time nature of this hacker-news medium encourages fast answers above complete answers. I hope this longer response helps clear up remaining questions, at least to those who see it!
---
There's another blog post coming soon about the account recovery system in particular, which addresses exactly how we're minimising human involvement in recovery decisions while not excessively punishing real human frailty amongst our customers.
Hi Bron, thank you for this response. Much clearer, and I think this is what everyone wanted to see.
Can I just clarify some things for peace of mind?
1) When you say regular support staff cannot alter security-sensitive details. How is that done? Do they only perform changes through a limited set of UI?
2) When you say if 2fa is enabled it goes to senior security team, is that an automated process such that support staff don't see that ticket at all? The support ticket interface doesn't seem to have anything that helps to automatically route password reset requests.
3) Was the security incident involving ghouse through support tickets?
4) Do the senior security team have direct data access? i.e. do they also change things through a UI or do they have capability to directly change data?
1) yes, support staff have a limited UI. There is always a balance between limiting support access and having them able to provide meaningful help. I have the same level of access as a support staffer, and I still get tagged in to work on some issues (particularly calendaring issues, a lot of people have died on the hill of calendaring and I'm currently still our primary expert on some parts of it), and often I need to view people's calendars and the emails related to scheduling in order to debug their issue. The nature of the job is that many issues can only be understood and resolved "in situ". Have I mentioned yet how horrible calendaring is? Thanks for reminding me :(
The UI given to support staff doesn't have the ability to update security credentials for users because they no longer have the "can update security credentials" role like they did in 2016. I don't even have it any more.
2) front line support still see all the tickets first, and they route them as appropriate. Sure, this takes longer; we don't have 24 hour coverage from senior security staff (not entirely true - we have 24 hour coverage for emergencies, but somebody forgetting their password is not an emergency in this context).
3) the security incident involving ghouse was entirely via support tickets. His description was accurate: front line support sent the pro-forma "we need a bunch of these details", got back some pretty half-arsed details that didn't meet the bar of what was supposed to be provided, and helpfully made the change despite our policy. The helpfulness of humans is a major bug with any security system, and this particular human tried to be too helpful.
4) The senior security team also use a UI. Operationally, they all have the ability to write code that directly changes things under the hood, but that code also has an audit trail and goes through review. It's always quicker and easier to use the UI, so that's what they do.
The UI is not just available to those three people, it's also available to anybody who has a multi-user account and needs to administer their own users. It's still a standard part of our system, just restricted in who can use it at an "any arbitrary Fastmail customer" level.
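Conceptually, the role gate works something like this (names are invented for illustration; it is not our actual code):

```python
# Illustrative role check: only holders of a specific role, or the account's
# own administrator (family/business owner), can touch security credentials.
SECURITY_SENSITIVE_ACTIONS = {"change_recovery_email", "reset_password", "disable_2fa"}


def can_perform(action: str, roles: set[str], administers_account: bool) -> bool:
    if action not in SECURITY_SENSITIVE_ACTIONS:
        return "support" in roles
    return "update_security_credentials" in roles or administers_account
```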
I'm a FastMail customer. Your response is troubling to me in that it didn't answer most of tptacek's questions. It's troubling enough for me to start looking at other email providers. :-(
I would like FM to provide something akin to Google's advanced protection program. Those of us who are careful not to lose our login credentials should not have to suffer a weak recovery process for the convenience of those who do. I personally would rather opt my account into a stronger recovery process even if I can't access my account for several days or a week or more.
I have now responded in more detail - at the time I was busy trying to spread the love around, and also support my team as they dealt with the support requests and digesting the response on here.
Which of these questions can you not answer without creating security risks? I didn't ask you anything about your automated system.
Is it possible under any set of circumstances for your human employees to alter accounts? If the automated system fails, are accountholders out of luck?
If the automated system fails and you have 2fa, then it gets escalated to the two most senior members of the security team.
In some cases we haven't had sufficient information on the account to ever verify that account's owner, and they never got their account back. Some users refuse to give us enough information to allow us to later positively identify them - so yes, those people will be out of luck if they lose their credentials.
I agree with hitekker, I'm feeling pretty nervous about being a FastMail customer right now and will start looking for a more secure alternative now. The main reason I moved to FastMail is because I stopped trusting Google to keep my mail secure.
I've just switched away from Chrome (because I'd like to support Firefox) and am a FastMail customer.
But I've started to think about moving back to Chrome for "high security mail".
My private mail is pretty bland and uninteresting, so I don't care too much about not using GMail there, but for my Apple account, Google account, Microsoft account etc. it might be a good idea to compartmentalize those "high value" things from everyday mail and go to GMail with the Advanced Protection Program (so no access from smartphone or iPad, I guess).
And looking at their web site I've learned that GSuite Business is affordable and allows adding domains hosted elsewhere. Good.
What do people think about this?
But then the next step: what about losing my domain? My registrar is a reputable German domain hoster, but certainly no Google. On the other hand, Google doesn't register domains itself, but has "domain partners" like "domaindiscount24" (which I've never heard of before), so I guess there's not much to be gained there.
They have one of the largest information security teams in the world, that team includes what is probably the best corporate vulnerability research team in the world. They're one of a small number of companies that is actively defining modern TLS and thus modern transport encryption; their operations and security teams are almost certainly the world's most sophisticated users of TLS. They ship the most secure browser in the world (if it's not, it's a dead-even tie with Edge --- but, since Google outclasses every other major vendor in vulnerability research, I doubt it's really a tie) and thus have a far better understanding of browser security and the interaction between serverside applications and clientside JS/HTTP applications than any other company. They spend more per year on external vulnerability assessment than most startups do... for everything. They're a constant state-level adversary target and have, over the last decade, evolved a secops and monitoring team to match those adversaries.
How many engineering employees does Fastmail even have? How much better would each of them have to be than one of the best-paying security teams in the entire industry for them to match up?
I could go on, but to me, you don't really even have to think hard about this.
So it's because Google has deep/best skills in security? Does that automatically apply and make all their products more secure than everyone else's, even if their design is weakened as a result of their business model? E.g. does Google's first-class security team + unencrypted emails + tracking make it more secure than a company like ProtonMail that's focused on providing secure mail?
${All the things I said previously}. And, Google Mail is one of their flagship products.
Most of what is on that ProtonMail page is nonsensical. The claim that is relevant to the discussion here --- that ProtonMail has a "smaller attack surface" and is thus structurally more secure than Google Mail --- assumes significant facts not in evidence.
See downthread for my response to the claim that using a mail services outside the US somehow insulates you from NSA snooping.
They have every incentive to ensure the highest security possible. Their entire business model and most of their revenue is predicated on consumers and businesses moving not just some, but all of their data, straight over to Google's custody and control. Indeed, it damn well had better be secure.
But I think they're compromised by those same business models. Google wants to provide intelligence, and probably more important to them, marketing data. This requires that the consumer is an open book to them, and their business decisions incorporate that. Up until recently, they were actively scanning email for marketing insights. In addition, Google's operating complexity, both business and technical, increases the opportunity for failure. And their other business objectives compromise their security work. That's glaringly apparent for their Android platform. There's more surface. And in a Google world, the email account grants direct access to everything — location data, purchasing history, passwords, documents... everything.
For another dedicated email provider, what they have to protect is also simpler. There are fewer moving parts. There's less to protect, which means that there don't need to be as many engineers. That means a careful and well thought out email provider /can/ be as secure, by carefully limiting their exposure, doing one thing, and doing it well.
There's something to be said for careful application of open standards and open source software, a smaller and more responsive team, and not building a massive single point of failure. I am a current Fastmail customer, and hope to remain, depending on the outcome of this review.
I'm not looking to discredit the claim, I'm genuinely curious to learn about what they've done to earn the Gold Standard from @tptacek
Google were previously reading our emails for ad purposes and some of their employees are still able to read our emails; their privacy policy also indicates they will hand over our emails if requested by law enforcement, which suggests it's weaker than protonmail.com end-to-end encryption:
> All emails are secured automatically with end-to-end encryption. This means even we cannot decrypt and read your emails. As a result, your encrypted emails cannot be shared with third parties.
If this is the case, how is Google being held as the Gold Standard?
Elsewhere in the thread I mentioned advanced protection[0]. Gmail/Google is also the only company to my knowledge that gives you a warning like this one[1], and it was certainly the first to do so.
A lot of this comes down to your threat model. If you are most worried about
Unless your threat model is "the NSA gives my hosting provider a court order" or "an employee of my hosting provider goes rogue", it's pretty clear that GMail is categorically the best option. And in those two cases, it's not clear that there are significantly better options.
I get and am not questioning that. It's just that your curiosity doesn't seem to have motivated you to do a first pass of, I don't want to call it 'research', but just basic poking around on the topic. You want links and info from some dude on the internet because what he says contradicts stuff you know from... something a vendor said about their product.
It's a totally sensible question but it's not some particularly arcane mystery to dig into. In tptacek's case, in a jiffy, you can bring up the 60-odd comments of his that mention 'Gmail' and get a reasonable idea of what he thinks of it and why. And if you think he's got it wrong, you can say, hey, tptacek, I think you're full of poop when you said [...]. And then maybe you can hash it out and one or both of you will learn something. But 'Citation, please', especially on trivially searchable topics mostly says 'I'm kind of curious, but I don't really care'. The person you're asking probably isn't going to care either.
I was hoping there was a quick resource of someone having done a deep analysis dive into advanced techniques Gmail does that makes it more secure than everyone else but judging by tptacek's response it sounds like it's because they have the best security team and by extension all products they make are naturally more secure.
If all we have is the same claim being repeated, and the only way to learn what makes Gmail the most secure email provider is to trawl through 1000s of comments, then Gmail is always going to be perceived as more secure even when it may not be, because relatively no one is going to trawl through 1000s of comments to make an informed assessment otherwise.
> trawl through 1000s of comments, then Gmail is always going to be perceived as more secure even when it may not be, because relatively no one is going to trawl through 1000s of comments to make an informed assessment otherwise.
60ish is not 1000s. 69ish if you add the 9 about Protonmail. The guy posts on HN so much you can fairly safely go to https://hn.algolia.com and type author:tptacek [topic of interest] and find out what he thinks about it. If there was, inexplicably, a comic universe about HN mutants, he'd be The Citation.
I think you're conflating several different things here. Their vulnerability to hackers is not at all related to the extent to which they are willing to cooperate with the US Government or to exactly how their GMail ads work. You have to define exactly what your threat model is, and no service can really be the best at all of them. It's perfectly consistent with the worst interpretation of your other assertions that Google is still the gold standard for making sure that no hacker can ever compromise your GMail account, reset your passwords to your services, and hold your data and accounts on other services hostage.
This can be enough for me to consider leaving depending on how it's fixed.
This response says absolutely nothing about how the vulnerability is prevented in the future. It's just a bunch of vague promises and mumbo jumbo. What specific procedures are in place to prevent it? At a minimum, I expect to see something specific like when you guys almost lost your domain because of Gandi [1].
And even then, can I have an option to select absolutely no human intervention possible? Having any human intervention is simply not acceptable.
I already have multiple ways of recovering my account, and I never, ever want human assistance on this. I use a password manager, and I will never, ever need FastMail assistance on login.
At a minimum, normally when speaking of the other big mail providers, you wouldn’t get an explanation on a public forum at all.
Especially because technical folks like us don't always communicate well, words can be misinterpreted, etc. It's actually not a good strategy to respond to such concerns in public.
Also in my opinion, people that make threats of leaving in public unless certain demands are met usually have their mind set already and can’t be swayed.
As for never needing human assistance, never say never — if relying solely on your password manager, I hope you have a digital last will for your spouse or children.
Oh, and also, I use my own domain on top of having a backup of my emails.
What this means is that if all recovery options are not working and I'm actually locked out, I can fix it.
I own the domain, I own my past emails, I can still get emails. Maybe I'll lose some emails for a day, but that's it. If I want to prove my identity to FastMail, I can also prove that I own the domain.
But the point is, getting locked out of something as important as email is not gonna happen due to my screw-up. It's more likely for a support loophole to screw me over.
I understand what you're saying, but I think perhaps we can agree that the current response is insufficient?
Have you taken a look at the link I supplied above where FastMail wrote about how 2fa protection could be bypassed at Gandi? They were very specific and clear about the recommendations being implemented.
Now, compare that to their current response. I think definitely the difference can be seen.
This is serious stuff.
And, I'm absolutely serious about never needing human assistance. I already have a mechanism set up for my family to retrieve my digital assets should I disappear tomorrow. I worked on this together with my wife. I know most people have not thought about this and you're right in your skepticism, but I'm serious.
For what is worth, I’d also like a toggle in the settings to never involve human assistance.
But we technical folks are very odd and I’m assuming they have users that really need human assistance.
I do agree the response is insufficient; my point is that such discussions in public are dangerous for the company, and it's not the norm for company reps to give detailed explanations without prior preparation.
> And even then, can I have an option to select absolutely no human intervention possible?
> I already have multiple ways of recovering my account, and I never, ever want human assistance on this.
Yep, I'll second this feature request. Put as many disclaimers and confirmation mechanisms on it as you need to in order to keep people from accidentally enabling it. I will happily assume responsibility for it.
That's a possibility. We've talked about using something like Twilio to read the message out. We haven't had feedback that this is in high demand.
It also brings issues of its own. It's hard to block a number on a home phone, and it could be used to troll people in the middle of their night. We need to consider those risks too - it's not a simple and obvious win.
Hi Bron. I'm a customer of both Fastmail and GSuite, and I have enjoyed your service for a few years now. I still use Fastmail for some things, like sieve, and very much will continue paying just for the ongoing development of open-standard email like JMAP. But there are definitely a few things that I can't shake when I learned about them that very much pertains to the security mindset that prevents me from moving my primary emails onto Fastmail.
Security paradigms have been steadily moving beyond a hard-boundary-soft-center, to a defense-in-depth, distrust-your-own-services model. I was alarmed to learn last year, for example, that you use OpenVPN with fixed symmetric keys (--secret) rather than TLS with any forward secrecy (--tls-auth) for VPN between your NYI and AMS datacenters. https://blog.fastmail.com/2016/12/19/secure-datacentre-inter...
Presumably, running datalinks like this means you would have to have perfect trust in your long term key management and rotation. Is that something you plan on improving in the future?
Similarly -- I stumbled on this entirely by accident after your blog post about moving datacenters -- your head of security ops & infrastructure tweeted "I will probably root my phone soon because Samsung's emoji set is worse than not having convenient OTA updates" https://twitter.com/robn/status/919194089920311296
I don't want to conflate anything -- a tweet on an engineer's own time about their personal devices isn't by itself a security problem. But it does reflect on the security mindset. If you had a BYOD policy, and this phone did end up being flashed to Lineage and sitting 3 patch levels behind (especially with Android's track record of RCE-via-media CVEs), this could definitely become a weakness in your entire infrastructure, and thereby for all of us as customers.
This is the type of thing I couldn't shake after learning about it. Of course, trust has to be placed somewhere. You have to be able to place trust on your ops and your infrastructure, but that's also a process, not a checkbox. People and devices can be trusted a little less in the overall security system, to provide redundant security. Could you clarify your position on how your staff is trained about the human weak points, security as a lifestyle if you're security and ops, and how your security mindset incorporates defense in depth?
If an Android phone connecting to the company’s WiFi or the user’s email and whatnot is enough to compromise the infrastructure, then the company has bigger problems.
I’ve worked in companies with liberal BYOD policies for portable devices, but I’ve also experienced really restricted environments, and such environments are basically highly regulated security theatre.
Users do stupid things of course and in corporations it’s worth it to restrict their devices, but restricting developers on what they can install and do on their own devices has a negative ROI and doesn’t go well. If you can’t trust a dev to manage his own phone, you can’t trust him to build your infrastructure either.
And yes, we make mistakes as we are only human, which is why a phone should not be enough to compromise that infrastructure anyway.
PS: your mention of that Twitter account is creepy.
Absolutely! Our wifi network in the office is treated like an untrusted network. All authentication is done directly from our work laptop or desktop machines and requires a second factor (TOTP, not SMS!)
> PS: your mention of that Twitter account is creepy.
With no context, I agree. But I'm not exactly stalking engineers here - there was literally a direct link to that twitter from the Fastmail updates mailing list that went out, when customers were notified of the NYI datacenter move. Made me do a double take.
We don't consider looking at our staff public twitter accounts to be creepy FYI. We mention that we're at FastMail, and we do indeed link to our own twitter accounts occasionally.
Flippant comments on twitter definitely don't reflect security policies! That phone doesn't have production access for obvious reasons.
You're right that security is a process. We're always working to harden and segment our internal services, as is best practice these days.
Ongoing professional development and training is important for our security staff (indeed, all our staff, because everyone matters for security). The security landscape is always changing, and it's not something that's ever "solved" - it's a situation to stay on top of.
No, but if you visit your Threads page (link at the top of every page) you can see any replies to any of your comments. There's nothing special that marks a new reply, though.
I have a habit of upvoting nearly every reply anyone makes to any comment of mine, as a way of thanking them for the comment. This also happens to help when I skim my Threads page, since it's easy to spot comments that still have the voting button(s).
Likewise. Social engineering is a big concern. I understand the risks of getting locked out of my account, but would much prefer a stricter system -- along with published guidelines on Fastmail's process for handling these cases.
Being able to persuade a customer service rep to provide access to an account (even if indirectly by changing a recovery email) should never be possible.
Just curious...was this incident before or after they re-architected their authentication system? I believe that was done last July[1]. The new system is really nice, now implementing separate app-specific passwords as well as new emergency recovery mechanisms. I wonder if they updated their internal support policies with respect to assisted account recovery when they implemented the new system...seems like they should have made the bar higher...
I don't actually recall when the new system went live for all users, but the old system was live simultaneously for at least a couple months to ease the transition. I wonder if customer support actually lowered the bar during the transition because of a perceived or actual increase in customers locking themselves out.
That is a very weird thing to do, and easily fixed: compute the user's average login frequency per day/week, and don't accept any password resets (from customer support) until that average interval (plus some margin of uncertainty) has elapsed since the account was last accessed.
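For illustration, a minimal sketch of that heuristic, assuming a sorted login history and a made-up safety margin; none of this reflects how FastMail actually stores or checks logins.

    from datetime import datetime, timedelta

    # Hypothetical check: only allow a support-assisted reset once the account
    # has been idle for longer than its usual login gap, plus a safety margin.
    def support_reset_allowed(login_times, now, margin=timedelta(hours=24)):
        # login_times: list of datetimes, sorted ascending
        if len(login_times) < 2:
            # Not enough history: fall back to a conservative fixed delay.
            return now - max(login_times, default=now) > timedelta(days=7)
        gaps = [b - a for a, b in zip(login_times, login_times[1:])]
        avg_gap = sum(gaps, timedelta()) / len(gaps)
        return now - login_times[-1] > avg_gap + margin

A daily user would then only become eligible for a support-assisted reset after roughly a day of inactivity, while an attacker hitting an actively used account would be refused.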
How come they accepted the reset? Were you not logging in to your account?
I've been a long-term customer of theirs, but I'm continually underwhelmed by them.
They admit, if you push them, that they economize on front-line support. I think what you relate is a consequence of that.
In a previous thread, I went on a massive whinge-fest about how they had "sun-setted" the one-time $15 payment member account that I set up for my father and that they had previously advertised using the words "never expires". I stand by that because they were in breach of contract.
On the other hand, I don't think they are charging enough really. I would probably be prepared to pay more than I do if I had confidence that they weren't using low-skilled labor for front-line support.
That's completely unacceptable, and enough to make me consider leaving Fastmail - I've used them happily for over 7 years and have recommended them to many people, but their support having the ability to do that is giving me pause.
> I was a very happy FastMail customer until a hacker asked them to reset my password. After _incorrectly_ answering a handful of questions asked by the FastMail support, the recovery email address was changed and a password reset link sent. From there, the hacker attempted password resets on other services.
However, 2FA would not have prevented the problem. The problem is twofold: 1) account recovery (using email, SMS, or anything other than a secret key) is an effective attack vector, especially SMS; 2) a human who is willing to change the account recovery settings (in my case, FM changing the account recovery email address).
Hmm you think they would have bypassed your 2fa as well? I wonder if FM can comment on that - it would be concerning. The "sms backdoor" is the same with gmail, etc. unless you explicitly disable it.
Our account recovery process won't allow you through at all: if you lose your password, and your 2FA, and your recovery key, then you're not getting that account back.
"FastMail has always been an engineering-focused company, from the top down. As such there is a strong culture of no-bullshit, and an intense dislike of security theatre."
After the OP's comment, that's hilarious. And yes, I am a fastmail user as well. How long will it take an "engineering-focused company" to understand that humans are humans?
I see the usual comments about Fastmail (comparison to Gmail, ProtonMail, web interface, spam filtering performance, servers in the US, ...) but still nothing about the TOS, which seems more important to me.
So here it is again:
- Fastmail can immediately cancel your account for any reason: "The Service Provider may terminate your access to any part or all of the Service and any related service(s) at any time, with or without cause, with or without notice, effective immediately, for any reason whatsoever, with or without providing any refund of any payments."
- Fastmail can disclose your info/data if it thinks it's in the interest of the company: "The Service Provider will not monitor, edit, or disclose any personal information about you [...] unless required or allowed by law, or where the Service Provider has a good faith belief that such action is necessary to: [...] (2) protect and defend the rights or property of the Service Provider; [...] (4) act to protect the interests of its members or others [...]"
By comparison, mailbox.org TOS are much better.
Also mailbox.org offers GPG encryption, which Fastmail doesn't (AFAIK).
No, that TOS was amended several months ago, to among other things, get rid of that clause about termination for any reason. "We used to be able to terminate your account at any time and for any reason. Now, we can only do so if you: fail to comply with the Terms and Conditions; if we are required to by law; or if your account is inactive for an extended period of time." [0] [1]
> - Fastmail can immediately cancel your account for any reason: "The Service Provider may terminate your access to any part or all of the Service and any related service(s) at any time, with or without cause, with or without notice, effective immediately, for any reason whatsoever, with or without providing any refund of any payments."
Other than the last clause about "without providing any refund", I would expect this from any service provider, and I'd certainly never want to run a service that didn't have this in its terms.
I do agree that the disclosure terms are more permissive than they should be.
>and I'd certainly never want to run a service that didn't have this in its terms.
That may make sense for a free trial, or for a cat-sharing app, but I certainly would not want my business to be dependent on a service that can be yanked away from me at any time.
>> - Fastmail can immediately cancel your account for any reason: "The Service Provider may terminate your access to any part or all of the Service and any related service(s) at any time, with or without cause, with or without notice, effective immediately, for any reason whatsoever, with or without providing any refund of any payments."
> Other than the last clause about "without providing any refund", I would expect this from any service provider, and I'd certainly never want to run a service that didn't have this in its terms.
You expect from any service that they can cancel your account for any reason?!? We must not have the same set of requirements.
Anyway the point is not relevant anymore as they have changed the TOS (it's much better now).
> You expect from any service that they can cancel your account for any reason
Yes, absolutely. "We reserve the right to refuse service to anyone." I expect to be able to do that for any service I run, and I expect others to be able to do the same.
I also expect that doing so lightly, without a very well-justified reason, would get reported on and lead to a massive backlash. So, in practice, I expect such a clause to be used as, effectively, "if you try to find a 'creative' way to weasel your way out of our specific terms like 'don't be disruptive, don't spam, etc.', such that your activity meets the letter of the ToS but not the spirit, we'll kick you off anyway". Personally, if I were writing a ToS, I'd write the relevant term along those lines instead.
In almost all cases, we're very happy to provide refunds - particularly early in a subscription period. We also automatically refund if we believe accounts were opened with stolen credit details (happens more often than we would like despite all the checks in place at payment time).
Wow what a coincidence — I switched from Gmail to Fastmail exactly 1 year ago today.
I couldn't be happier. I mostly use native clients, but the Web client is a joy to use, and everything I've observed about Fastmail gives me confidence in their service.
I never used the Gmail-exclusive features like labels, so switching was pretty easy. I highly recommend it to anyone considering it.
Interesting. I switched a little over a year ago too. I like not being the product but find the web client painful. Specifically:
1. No Send and Archive
2. Sending is slooow. E.g. compose email, hit Send, wait several seconds, go back to Inbox. Gmail is instantaneous.
3. Hitting Reply is SLOOOOW to bring up the Reply pane. Fastmail does a POST that takes from 500ms to 5000ms (usually on the lower end but even that is noticeable. On the rare occasion it's longer it's incredibly frustrating).
4. Replying to a message in a thread requires the mouse. In Gmail, the keyboard shortcuts will act on the highlighted message in-thread. In Fastmail the keyboard shortcuts only act on the most recent message in the thread.
5. When viewing an email that is a response to another email, Gmail collapses the initial email and lets you expand it with an ellipsis. Fastmail does no such thing, which means you need to scroll (and scroll and scroll...) when looking up through long threads
6. Search is just OK vs. Gmail, but that's not a huge surprise.
In general Fastmail hasn't impressed me with the pace of development on the web app at all - the issues I had with it a year ago are still exactly the same issues I have today. I don't know how anybody who works on it and dogfoods it doesn't notice the speed issues every time they send an email.
Hmm, that’s odd. In my experience, FastMail’s web app is much faster than Gmail. I also tend to like that the FastMail web app is simple in comparison to Gmail - fewer things to slow it down or break, and few things for me to ignore :)
Interesting, I've only ever run into these kinds of speed issues on the iOS client. I've found the web client (and usually the mobile client) loads a large inbox (hundreds to thousands of messages) much, much faster than Gmail; in fact, that was one of the first things in testing that told me I'd like using it.
I will say that Gmail handles threading better, as you said. Fastmail goes for a more traditional native-client-like approach; it didn't take long for me to switch back to that paradigm, but someone who has only ever known Gmail would definitely see it as a pain point.
The FM web client loads messages faster, but is slower to use in my experience. I’d rather pay a one-time loading cost up front for an app experience that feels native-ish and snappy like Gmail vs having the app feel laggy when I do mainline use cases like send email or compose a reply.
Re threading I’ve had email for 25 years now and def. feel Gmail’s approach is superior, if only because you don’t have to scroll so freaking much!
> 2. Sending is slooow. E.g. compose email, hit Send, wait several seconds, go back to Inbox. Gmail is instantaneous.
Gmail is optimistic about it, while we’re actually sending the message before confirming to you that it’s been sent. (There are sound technical/historical reasons why it’s done the way it is; it’s not trivial to change.)
Once the JMAP spec stabilises (hopefully by the next IETF meeting in March), our web UI will switch to using JMAP, and then I think that sending messages will be done in the background. Not certain, I’m not the one that’s been doing the JMAPification of the FastMail web UI.
> 3. Hitting Reply is SLOOOOW to bring up the Reply pane. Fastmail does a POST that takes from 500ms to 5000ms (usually on the lower end but even that is noticeable. On the rare occasion it's longer it's incredibly frustrating).
This is definitely fixed with JMAP. In a JMAP world the client takes care of creating drafts, parsing MIME messages, defanging potentially malicious HTML, &c. rather than the server as our current implementation does. (There's a rough sketch of what such a request looks like at the end of this comment.)
> 4. […] In Fastmail the keyboard shortcuts only act on the most recent message in the thread.
Not true: use n/p to focus the appropriate message (same as in Gmail), then r et al. will apply to that particular message.
> 5. When viewing an email that is a response to another email, Gmail collapses the initial email and lets you expand it with an ellipsis. Fastmail does no such thing, which means you need to scroll (and scroll and scroll...) when looking up through long threads
FastMail does collapse the messages that have been read. You can expand individual messages by clicking on them or pressing e (provided you’re using n/p to switch between them), or Shift+e to expand all (Alt+Shift+e collapses all). In consequence of these things, I’m not sure what the issue you’re pointing out is; if you can provide more info we can look into improving it.
For web UI work, we’ve been focusing on Topicbox and JMAP this year rather than FastMail; Topicbox has been a simpler staging ground for various improvements that we intend to bring to FastMail (mostly internal tooling stuff—I’ll be writing a bit about it later in our Advent series), and JMAP will enable various long-desired features (e.g. snooze, delayed/undo send). Next year will see more effort put into the FastMail web UI; my favourite item that we have planned (and I called dibs on implementing most of it!) is service workers for offline support and substantially improved performance (building on top of JMAP’s improvements).
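To make the JMAP part of that concrete, here is a rough sketch of what a client-side "create a draft and submit it" request can look like. The method and capability names follow the JMAP specs as later published, and the ids are invented, so treat it as illustrative rather than what FastMail actually ships.

    # Hypothetical JMAP request: create a draft and hand it to the submission
    # endpoint in a single round trip; sending then happens server-side in the
    # background while the UI moves on.
    send_request = {
        "using": ["urn:ietf:params:jmap:core",
                  "urn:ietf:params:jmap:mail",
                  "urn:ietf:params:jmap:submission"],
        "methodCalls": [
            ["Email/set", {
                "accountId": "a1",                      # made-up account id
                "create": {"draft1": {
                    "mailboxIds": {"mb-drafts": True},  # made-up mailbox id
                    "to": [{"email": "alice@example.com"}],
                    "subject": "Hello",
                    "bodyValues": {"b1": {"value": "Hi Alice"}},
                    "textBody": [{"partId": "b1", "type": "text/plain"}],
                }},
            }, "c1"],
            ["EmailSubmission/set", {
                "accountId": "a1",
                "create": {"send1": {"emailId": "#draft1"}},
            }, "c2"],
        ],
    }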
Any plans for a PWA for the mobile version? I use the app on Android, but I recently tried just loading the site in Firefox (Beta/58/Quantum), and I think it runs faster, plus it doesn't have keyboard/autocorrect issues.
We’re still planning all the details around service workers and the likes, but we intend to continue our tradition of having as much as possible in the normal web app, instead of in the app wrappers (the app being mostly just a wrapper around the web interface).
(BTW, the Android app is in the process of being revamped to use the now-sufficiently-capable WebView, which will fix certain issues like the keyboard problems you mention. Not sure what progress is on that.)
FastMail’s web interface is already practically a PWA (from a few years before that term was invented), lacking only the ability to start offline (it copes with transient network connections pretty well), and use of the Web Notifications API (which we’ll probably support at some point, but not use in the app because it provides only a subset of the native functionality we currently use).
> "Wow what a coincidence — I switched from Gmail to Fastmail exactly 1 year ago today."
Same, in fact I just got my renewal notice over the weekend, which means either yesterday or today was the day I turned Gmail off for good.
> "I couldn't be happier. I mostly use native clients, but the Web client is a joy to use, and everything I've observed about Fastmail gives me confidence in their service."
I vastly prefer Fastmail's web client to any modern native client, though Claws mail comes close if only for its abundantly configurable interface. I really enjoy the extras with Fastmail too; the Notes and Files apps are perfect for quick access from any device. The fact that their iPhone app is a near perfect mirror of the desktop web client helps too.
> "I never used the Gmail-exclusive features like labels, so switching was pretty easy. I highly recommend it to anyone considering it."
Coming from over a decade of IMAP folders, I hated Gmail labels and found them awkward. Fastmail uses folders and I'm in my happy place with them.
I never thought I'd pay for email service beyond hosting my own, but Fastmail is definitely worth it to me.
> the Notes and Files apps are perfect for quick access from any device
Thanks for this heads-up! I've been looking for a super-simple notes taking method that would span work, home, and mobile. I never realized FastMail has one just under my nose!
They're generally great, except for the fact that the iOS client does not allow sending plain text emails. The settings are there, but broken (and have been for about two years). I've reported the bug to them, but they literally told me "it's too much work to fix".
When I was moving away from Fastmail, I noticed that Rainloop[1] is a pretty slick, Gmail-inspired webclient, and it's FOSS, but I couldn't find any provider who was actually using it. Can anyone comment?
I'm using Rainloop for my personal domains and so far love it. There's very good integration of GPG and (e.g.) Google Drive. Didn't take me long to move away from SquirrelMail or Roundcube after that. Highly recommended.
I'm considering switching (in fact I just registered for the FastMail trial). I'm especially interested in the ability to use catchall addresses with a custom domain, which would allow me to give out an address like <hackernews@mydomain.tld>, and thus determine who shared my email address if I start receiving spam at that address.
This is partly possible with Gmail, as you can use addresses like <myname+servicename@gmail.com>, but not all sites accept email addresses with a + in them.
What differences have you noticed in your year since switching? I'm especially interested in any downsides. My biggest worry about Gmail is the lack of privacy from Google.
I do precisely this with Fastmail and it works a treat. Setting up Fastmail with a custom domain was a joy, I can easily filter based on the To address, and you can setup a wildcard identity so you can trivially send email from whatever name you like at your domain.
To me the only downside is the mobile app isn't quite as polished as Gmail. It doesn't work offline, and I notice occasional bugs or awkwardnesses. But it's still very usable, and I much prefer the Fastmail web interface to Gmail.
Edit: Also, the Fastmail importer didn't work very well on my large number of Gmail messages. It failed a few times and restarting it resulted in duplicate messages.
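A toy sketch of the "who leaked my address" bookkeeping this enables, assuming a hypothetical catch-all domain; real filtering would of course live in your mail rules rather than a script.

    from email import message_from_bytes
    from email.policy import default
    from email.utils import getaddresses

    MY_DOMAIN = "mydomain.tld"   # hypothetical personal domain with a catch-all

    def leaked_by(raw_message_bytes):
        """Return the tag a message was addressed to, e.g. 'hackernews'."""
        msg = message_from_bytes(raw_message_bytes, policy=default)
        addrs = msg.get_all("to", []) + msg.get_all("cc", [])
        for _, addr in getaddresses(addrs):
            local, _, domain = addr.partition("@")
            if domain.lower() == MY_DOMAIN:
                return local   # spam to hackernews@mydomain.tld -> "hackernews"
        return None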
> To me the only downside is the mobile app isn't quite as polished as Gmail. It doesn't work offline, and I notice occasional bugs or awkwardnesses. But it's still very usable, and I much prefer the Fastmail web interface to Gmail.
But FastMail supports other clients, right? I don't want to be forced to use their web interface or their app; I'm happy with the Apple-written Mail apps.
This is the best part of a standards-compliant service like FastMail. Gmail is extremely challenging to use with third party clients because its custom features aren't supported. I've long had Gmail randomly archive (in no folders) email I told it to delete via an IMAP client, or duplicate items in multiple folders (i.e. labels), or other weird behaviors.
> But FastMail supports other clients, right? I don't want to be forced to use their web interface or their app; I'm happy with the Apple-written Mail apps.
It does, but at least for me the problem is using my separate forwarding-only e-mail address instead of fastmail.com as the sender. As far as I see, I need the native app for this.
FastMail.com can send via a forwarding email address. You'll need to set up authentication at FastMail so it can authenticate to the forwarding service of course.
I don't use the forwarding service to send. I use it to forward incoming mail to FastMail, and in FastMail I just fake the sender address when sending e-mail.
Sadly your use-case is dying. Asymmetric mail flows are becoming harder and harder to support, as DKIM/DMARC alignment and SPF become stronger anti-spam signals, we're going to have to lock down on egress for domains which we don't have proof of control for.
I use aliases (FastMail's term for custom addresses that go to the same inbox) for that reason. They work very well, and I've never had any problems setting up a new one on the fly.
I should have known they offer this sort of thing. I tried spinning up an alias in the past but was unable to find where to do so. Maybe the UI has improved since I tried, ~a year ago.
I've been a FastMail customer for about four years and overall I'm really pleased with their service. Like you I have my own domain, previously used Gmail, and provide custom email addresses to every site I register with. Interesting to see who has sold my email or may have been hacked when a rogue email ends up in my inbox (hi, Sunspel!).
I have one issue with FastMail that I didn't have with Gmail. Every few days/weeks I'll get a wave of backscatter from someone spoofing my domain to spam. I've researched and it doesn't look like there's anything I can do about this. Most of the backscatter ends up in my spam folder, but not all of it does. I don't know if Gmail automatically removed backscatter -- not even sending it to my spam folder -- or if this only started after I transferred my domain.
Backscatter is (I think) when the target server of a spam mail bounces the email back to me, the rightful owner of the domain, typically because the address is not valid (though I sometimes also get out-of-office messages or mailbox full errors).
It works like this: the spammer forges their headers to make it look like the from address is under my domain. My domain has DKIM/SPF set up, so a good recipient will compare the email to the authentication records, see they don't match, and then trash the email. But there are still a lot of mail servers out there that don't have that set up, so they accept the email as valid, process it, then return it to me when the account doesn't exist on their server. Like I said, annoying, but not a lot I can do other than set up rules to trash messages with a subject of "Undelivered Mail Returned to Sender."
Did you also setup DMARC? I've had good luck with that to reduce the amount of backscatter. Although some recipients don't check it, those that do will fast fail anything coming in that doesn't pass SPF or DKIM.
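For anyone setting that up, a sketch of what such a record looks like; the domain, reporting address, and policy choices are placeholders, and it is published as a TXT record at _dmarc.<yourdomain>.

    # Hypothetical DMARC record for mydomain.tld. "p=reject" tells receivers
    # that mail failing both SPF and DKIM alignment should be rejected
    # outright, which is what cuts down on backscatter from forged senders.
    DMARC_RECORD = (
        "v=DMARC1; "
        "p=reject; "                       # reject mail that fails alignment
        "rua=mailto:dmarc@mydomain.tld; "  # aggregate reports (optional)
        "aspf=s; adkim=s"                  # strict alignment (optional)
    )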
> which would allow me to give out an address like <hackernews@mydomain.tld>, and thus determine who shared my email address if I start receiving spam at that address.
I've been doing this for over 15 years, but with a much simpler setup: I just forward it to another account, which for the last 10-ish years has been an @gmail address. The mails show up in my Gmail inbox as From: the original sender and To: the custom domain.
As a caution: don't set up the catch-all on your bare domain. You'll get all kinds of dictionary-style spam attacks and it becomes flooded with noise. Instead, use a sub-domain, so you get for example <*@something.mydomain.tld>.
Nice tip, I'll implement that for my setup. I had to stop a catchall on my bare domain for the reason you mention, and it'll be trivial to switch to a sub-domain. Thanks.
The problem with this is sender domains with strict SPF reject policies: your legitimately forwarded email can simply disappear into the void. Hosting your domain at something like fastmail will not have this problem.
Hmm... I suppose, but I've never not received anything I was expecting. It's possible I've just not run into anything with SPF reject rules, or that Gmail is allowing them anyway.
I don't disagree it's better if fastmail (or whatever) can receive directly as it saves another MX server in the middle, but it's still doable without the end provider explicitly supporting it.
I have the same experience. I'm using a mail-forwarding service, so I can't host it with FastMail. FastMail is my third mail provider I'm forwarding to, and I'm not aware of any missed e-mail with any of them this far. Then again I guess I wouldn't be :)
I used Fastmail for a few years. I switched last spring due to two interactions through support channels that left me really despising the apparent company culture, attitudes about intellectual honesty, and general jerkishness.
You're probably considering switching based on your mail needs, so this might not apply to you, however:
In addition to mail, Fastmail also advertises[1] their plans come with their FastMail Files feature. It can be used for one-off file sharing (akin to Google Drive) and even static site hosting (explained in their support docs[2]). However, I learned from the CEO's comments in my support ticket that they're apparently overprovisioned and don't expect everyone to actually use the storage included with their plans.
I just took a cursory look at their landing and pricing pages and see they aren't touting Files loudly right now. (My initial thought is a hope that they took my remarks to heart from the thread that led to my decision to leave, when I said they should "go tone down the hosted storage aspect of your marketing". However, given that the promotional blog post from [1] occurred within the month that followed my comment, I suspect not.) The blog posts and Fastmail documentation[3] about the Files feature are still available, of course, whether or not it's still on the landing page.
This itself would have been only slightly disappointing, but the bad taste came from the passive-aggressive responses in the support thread, from the fact that this was the same sort of attitude adopted in a previous (unrelated) support thread a couple of years ago, and from the fact that in this most recent incident they decided to go on the offensive and compare using the 25GB of storage space for making backups to abuse. (Their choice example being someone trying to abuse an ISP's lax limits on DNS to tunnel IP over it.) I half expect someone to show up here to try and apply a layer of spin in exactly the same way. Bonus points if they ignore my actual use case, try to make it out as if I was trying to do something I wasn't, and then call me the "disingenuous" one again.
> However, I learned from the CEO's comments in my support ticket that they're apparently overprovisioned and don't expect everyone to actually use the storage included with their plans.
Isn't that true for every file hosting service everywhere? Most people only use a fraction of their available storage, so there's no need to actually provision enough space for everyone to use 100% of their quota. Doing so would be a waste of space.
I switched about 6 months ago. I love having the catchall address although I had that when google managed my custom email domain as well. The web interface is really fast as it does operations in the background so it is really responsive, even at times when your internet connection is spotty. The only real downside is that the phone app (at least the android one) wasn't that good, I currently just use IMAP and k9 mail instead which works well enough.
I did use inbox before (and I still get some email to my gmail account) and it does have nice features like snooze that I miss. But on the whole I am happy with my choice to use fast mail.
I should also add that I seem to get slightly more junk mail than I did before, but it is close and hard to tell. I have not had a real email marked as spam yet, which did happen from time to time with gmail, so this isn't a complaint, just an observation.
Indeed - but replying from an arbitrary address is one place where G Suite's implementation falls short. In FastMail you can add <star>[0]@example.com as an identity. Selecting this when composing an email allows you to edit the localpart entirely. Similarly, when replying to an email sent to your catch-all, if you have a <star>@ identity for the hostname, FastMail will automatically set your from address for you.
In Gmail (with G Suite) one has to manually add every identity they may wish to reply from - there’s no <star>@ option. Additionally, it’s impossible to hide your real email address/Google Account address. It’ll always be disclosed in the email headers unlike FastMail.
I’d love if G Suite would make such a workflow easier..
[0] escape characters don't seem to work here; <star> refers to what you'd think it does
> Selecting this when composing an email allows you to edit the localpart entirely. Similarly, when replying to an email sent to your catch-all and you have an <star>@ identity for the hostname FastMail will automatically set your from address for you.
I didn't know that, but it's fantastic news! I was doing things the awkward "gmail" way.
Speaking of which, do you know of any mail client (on iOS or Mac) that supports this? I am using the web interface for replying to most emails because of exactly this feature.
Looks interesting. It seems (from a quick look) like it tries to be a replacement for a lot of Google products, including Docs. It's nice to see that Google and Microsoft have smaller competitors in this area.
Personally, I'm only interested in email. But thanks for letting me know about other options!
I tried out ProtonMail for just a little bit around the time I switched, and personally I found the focus on security and encryption to be at the expense of user experience.
If I recall correctly, ProtonMail was using RoundCube as the webmail interface when I was looking for a service. RoundCube was the reason I left my previous e-mail provider, so I had to give it a pass. Though now their website is showing a rather nice web UI, perhaps they've switched to a new one since then?
I'm gonna piggyback on your comment to ask a question I'm not clear on. Am I correct in that if I don't set a recovery email there is absolutely no way for anyone to get access to my account unless they get my password? If that were to happen I could recover my account by proving to you I am who I say I am (perhaps gov ID plus DNS records), but my previous email will be forever encrypted?
I ask because that is what I want. I want my account to autodestruct if compromised and when recovered I want to be assured that all my personal data has been safeguarded by effectively throwing away the key into the depths of Mordor.
I had a quick look -- I was thinking of Posteo. I think they aren't aiming for the exact same customer segment, but I think of both as a small Central European service that values privacy.
I haven't had any downsides at all, for my use cases. I've heard from a few folks that lack of "labels" is a bummer (if you rely on them in Gmail), and their mobile clients aren't quite as good (though I've never used them).
Out of curiosity, why do you consider folders to be sane, and labels not?
To me, I always found labels to be exactly the same as a folder, but not bound to a single instance. I.e., it is everything a folder is, and more. What am I missing?
Labels are one of the very few things I miss when I switched from Google Apps to Fastmail, but labels sometimes do not work well with IMAP. Google Mail exposes labels as IMAP folders, but in an IMAP client, you will have identical messages in different folders. And since traditional IMAP clients are not really equipped to deal with them, label management gets annoying.
Are you happy with the FastMail native mobile apps? I've tried to use Android FastMail but it's quite clumsy compared to Google mail or apps like AquaMail.
I did switch to AquaMail and configured it to use FastMail through IMAP but there is no fast search. A good and fast search for mail is super critical.
Because of the lacking mobile app I'm unfortunately considering the move back to Google Apps.
I've thought about switching many times, and would see myself using the web client, however I wonder if they support the "undo send" feature as GMail does. I cannot see it listed in their features page [1] at least. Can someone shed some light, please? TIA.
CalDAV works great! I use their calendar on my Windows Mobile device without a problem[0]. When FastMail made a change last month that might have negatively affected some calendar clients, they even bothered to test a large number of clients and notify users who might be adversely affected. I got an email telling me how to resolve the issue with my Windows Mobile device, for example.
[0] Actually Windows 10 has a strange quirk of CalDAV/CardDAV support: It's only supported as part of iCloud account support. The solution is to create a fake iCloud account, and then change the advanced server settings to your CalDAV and CardDAV URLs instead of Apple's URLs. And then put in the correct credentials. This is true of desktop and phone Windows 10 alike.
Fastmail can both send from and receive at your gmail address (even proxying through Google's SMTP server). So it's easy to switch over gradually, and quietly keep your existing address indefinitely.
Why not POP forward? You can also set a custom From...I use a custom domain instead of my old gmail because I was tired of hopping from juno to yahoo to gmail to fastmail.
Fastmail does this thing where it uses a global set of filter rules for everyone, and then after you've spammed/unspammed something like 150 messages, it switches to a more personalized ruleset based on the spam you actually get.
I wager I get one, maybe two spam messages in my inbox per month (out of hundreds captured), and maybe one or two false positives in my entire couple years of having mail there.
In any case, that's a better record than Gmail - they've got a huge issue with directing legitimate mail into the spam can.
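A toy sketch of that switch-over, with the threshold, word scores, and ruleset as stand-ins; FastMail's actual pipeline is certainly more involved.

    from dataclasses import dataclass, field

    TRAINING_THRESHOLD = 150   # roughly the figure mentioned above

    @dataclass
    class PersonalModel:
        trained_count: int = 0
        spam_words: dict = field(default_factory=dict)   # word -> spamminess 0..1

        def score(self, text):
            words = text.lower().split()
            hits = [self.spam_words[w] for w in words if w in self.spam_words]
            return sum(hits) / len(hits) if hits else 0.0

    def global_score(text):
        # Stand-in for the shared ruleset everyone starts with.
        return 0.9 if "viagra" in text.lower() else 0.1

    def spam_score(text, model):
        # Shared rules until the user has trained enough mail, then their own model.
        if model.trained_count < TRAINING_THRESHOLD:
            return global_score(text)
        return model.score(text)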
It is worse than GMail. It tends to come in waves: there can be weeks where I barely get any spam in my inbox, and then sometimes there are a couple of messages per day. I guess marking spam (Fastmail also uses a naive-Bayes-based filter) and/or servers being added to a blacklist stops those waves.
I don't recall the opposite (false positives) ever happening.
I would say the numbers are generally small enough that it does not bother me.
When I was using fastmail, I got lots of spam and I couldn't figure out how to get less. The spam was the reason I stopped using it and went back to gmail primarily. (This was a year or a year and a half ago.) I liked everything else about their service, though.
> Just as important as what we do do is what we don’t. For example, we don’t do full message encryption (e.g. PGP) in the browser. In theory it means you “don’t have to trust us”. However in reality, every time you open your email you would be trusting the code delivered to your browser. If the server were compromised, it could easily be made to return code that intercepted and sent back your password next time you logged in; it could even just do this for specific users. It is very unlikely that a user would notice.
I don't agree.
I don't want full message encryption because I'm afraid that my email provider is reading my messages, but because I'm storing years' worth of emails in my mailbox. With a provider such as ProtonMail, which encrypts incoming messages with my personal key, I know that if someone manages to get unauthorized access to my mailbox, that person would only be able to read new emails, but none of my already archived mail. Of course it's possible that the intruder also manages to change the JS code returned to the client, but that's not the case for all of the possible scenarios where someone gets access to my mailbox. Full message encryption does not provide perfect security, but it can significantly raise the level of security provided.
It would be fine if apps with in-browser crypto only made this sort of claim, but many/most of them are either stating or implying that users don't need to trust the service.
This is a dangerous mismanagement of expectations, and it can be argued that the risk of creating a false sense of security far outweighs any of the benefits you mention.
For most providers, like Protonmail, the decryption password is the same as your login password. I'm curious what scenario you see allowing someone other than the provider to get access to your mailbox but not also your decryption key.
The decryption password is not the same as the login password for ProtonMail. Logging in at minimum requires entering your username, your login password, and your mailbox password.
The result is security at rest, which fastmail does not have. ProtonMail's web app is open-source, and can be deployed locally if you wish to remove the chance of an evil app deployment.
If you use the official deployment, an evil update can obtain your mailbox password, in which case the adversary (that is, the one capable of pushing the update) observes a security level equivalent to one where security at rest was not implemented. However, even in this case, the data on the mail servers is still protected from everyone else, so while a single adversary has observed a security level identical to that of fastmail (i.e. no security at rest), everyone else still observes a secured mailbox.
Not having security at rest is, in my opinion, dangerous.
That's wrong. You no longer need a third password in protonmail. All you need to have, in order to log in, is the username and a password. If you've 2FA enabled, you need the 2FA code of course.
I think you mean second password rather than third, but as a user of ProtonMail, I need one username, two passwords and one 2FA token to get in, with only login username/password being kept in a password manager (and all password managers get confused by multiple passwords, so I couldn't keep them all even if I changed my mind and wanted to).
ProtonMail may have the option (I am not aware of this) to have the login password and mailbox password set the same (and not prompt you twice if this is the case), but they are still separate passwords. You, as the user, control whether you want them to be the same or not. If you choose this, the application then has an option, for convenience, to use the same input for both tasks. This is opposed to a service where they are always the same, so that the password sent to the backend is the same one used to decrypt your data.
The "client" is a webpage that exists as one of many assets delivered to your browser by the ProtonMail webmail server. The server has access to the password at any time if it wants it.
One password is now the default for new ProtonMail accounts. For accounts that were created before this authentication method was released, you will remain on two passwords until you update it in your settings.
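To illustrate how a "one password" mode can still keep the decryption key away from the server: derive two independent secrets from the same password with a KDF, send only one of them as the login credential, and keep the other on the client for decrypting mail. This is a generic sketch, not ProtonMail's actual construction.

    import hashlib, os

    def derive_secrets(password: str, salt: bytes):
        # Two PBKDF2 runs with distinct context strings give unrelated outputs.
        auth_verifier = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), b"auth|" + salt, 200_000)
        mailbox_key = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), b"mailbox|" + salt, 200_000)
        return auth_verifier, mailbox_key   # verifier -> server, key -> client only

    salt = os.urandom(16)   # stored alongside the account
    verifier, key = derive_secrets("correct horse battery staple", salt)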
That could happen via a breach of the provider's servers, or through dumpster diving for a discarded drive that the provider didn't properly wipe (in the case the data wasn't also encrypted at rest).
Email at FastMail is encrypted at rest in this sense (full drive encryption).
It's not encrypted with a separate password per-user. We don't see any security benefits there, given that every user logs in almost every day, and if they have linked a device (many of our users use IMAP from mobile clients) they will connect and sync every time there's an update.
Which changes the vector to "hack server, passively monitor for a couple of hours, gain access". The logical backflips and single-minded security outlook required to consider that significantly different from "hack server, gain access" are the kind of security theater we studiously avoid.
Full disk encryption is a clear win with no significant downsides (slightly higher CPU consumption). Per-user encryption while still providing a full email service is not a clear win, and it has significantly higher downsides.
You can use PGP then. However, using PGP well turns out to be hard. You have to have the client local (and built by a trusted source), not a web client. You have to ensure you don't lose your private key. You have to understand how it works and what the limits are, to ensure that you don't accidentally break something.
For what fastmail is doing, providing PGP is the wrong answer: there is no way they can provide it safely. In particular, a government can force them to replace their web PGP with a hacked version (and some hacks can be very subtle, such that you are unlikely to notice them in a code review -- remember, we're assuming a government's resources created it).
That isn't to say PGP is bad. PGP is better than what they offer when you use it correctly. However, there are many ways to use PGP wrong which make it seem like your messages are secure when they are in fact not. This is probably worse than not using PGP at all; at least if you know your messages are not secure, you won't do anything that requires security.
All of my email is encrypted using PGP on the way in. I can read it on my laptop, desktop and phone because I use Evolution, Mutt and K-9 Mail, all three of which support PGP and all three of which I can use with my Yubikey.
If you compromise my mailbox, you can't read any old or new email, and you can trigger as many password reset emails as you want; you won't be able to read them.
Deploying a browser Javascript application locally does not automatically protect you from serverside malicious Javascript; you have to know a lot more about how the application is structured to know whether it's even helpful.
Deploying any application locally puts you entirely at mercy of whoever wrote it, and those that know how to abuse it. That holds true for any type of application.
However, in this context, deploying this particular self-contained application locally protects against the hypothetical attack where a genuine application is later modified to turn malicious. It is relatively easy to look for and identify any execution of server-side content.
To prove that an application is not intentionally malicious, you would have to inspect the source. To prove that an application cannot be malicious, directly or indirectly, intentionally or not, you would need full formal verification of the application. And that verification only holds if you have formal verification of what it runs on.
No, it does not. You've missed my point. Deploying a browser JS application locally would help you if you could be sure that the application never loaded any additional Javascript from the server during execution. But browser JS applications can in fact do that, and so local deployment does not help as much as you think it does.
You missed my point. In multiple ways. I already accounted for remotely fetched JS in my previous comment.
It is relatively easy to find JavaScript execution points (there are only so many ways to parse and execute a text string from a server in JavaScript—eval, new Function, script elements, on... DOM attributes, ...), and thus it is quite easy to guarantee that an application does not intentionally execute remote code (intentional being what is inside the control of the source code—an image decoding bug in the browser causing code execution is outside the scope here).
A local deployment (with proper protection of the deployment) guarantees that an application remains static, and cannot be changed arbitrarily for malicious intent. Combined with a relatively easy inspection for intentional remote code execution, you can conclude that there is no direct way for the application to turn malicious.
If you do not inspect the source, a local deployment still reduces the chance of a malicious modification being possible from "100% guaranteed" to "maybe, if the application is written in a specific way, or if there is an unintentional code execution bug somewhere in the browser".
You will of course never be able to reduce the chance of any application turning malicious to 0 without full verification of the application and platform it runs on (including all other applications running with permissions to interfere).
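In the spirit of that "look for the execution points" claim, a crude sketch of the first grep pass over a local copy of an app; the sink list is illustrative, not exhaustive, and a real audit needs more than regexes.

    import re, pathlib

    SINKS = [
        r"\beval\s*\(",
        r"new\s+Function\s*\(",
        r"\bdocument\.write\s*\(",
        r"\.innerHTML\s*=",
        r"\.outerHTML\s*=",
        r"setTimeout\s*\(\s*['\"]",          # string (not function) argument
        r"setInterval\s*\(\s*['\"]",
        r"createElement\s*\(\s*['\"]script",
        r"\bon\w+\s*=",                      # inline event-handler attributes
    ]

    def scan(root="."):
        # Print every line of every .js file that matches a known sink pattern.
        pattern = re.compile("|".join(SINKS))
        for path in pathlib.Path(root).rglob("*.js"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if pattern.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")

    if __name__ == "__main__":
        scan()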
Not to endorse the argument that a local app written in JS is inherently any more worrisome than one written for, say, .NET or GTK+ (because it's not), but this statement really sticks out:
> It is relatively easy to find JavaScript execution points
Have you ever been tasked with actually trying to guard against these things in security-critical situations? Because even with a deep understanding of the ECMA/W3C/WHATWG specs, years of experience with the arcane internals of a specific browser and JS engine, those engines' quirks wrt the way those specs are implemented, and the way they extend the specs, this was really tough even 5 years ago. The fact that JS is now an even faster moving target with yearly updates to the spec means that it's harder now. I don't think anyone who's worked on browsers would get behind the statement you made there.
Yes, I have. The list is not very long, and it is comprehensive.
At a previous place of employment, we implemented a full in-JS sandbox. The project was nasty, but as it hooked all JS execution, I do in fact have the fairly short list—anything not hooked would fail, so we were 100% sure that the list was comprehensive. Some of the entries were slightly surprising, mainly due to arcane APIs accepting both strings and function objects. New ECMAScript revisions didn't result in new execution points, although they did complicate the project in other ways.
The project also means that I could whine for ages about browser engine quirks, terrible APIs, awful specs and the likes, and will absolutely never look positively on browsers ever again.
I don't follow, since you said none of those things --- eval, script elements, DOM attributes --- in any previous post.
I also don't understand why you think it's straightforward for people ("relatively easy", in fact) to verify that a browser JS app is server-proof. As table stakes, you'd need a comprehensive understanding of every way in which the server gets to update the DOM of the client.
They were implied as "execution of server-side content" in the sentence: "It is relatively easy to look for and identify any execution of server-side content."
It is correct that it would be complicated to do for an arbitrary hosted app that may inject server-side rendered content with scripts in them, but this is not the case here.
There are very few places where "external" content is inserted into the DOM (decrypted mailbox content that may be HTML, account info), and those should all employ proper script stripping techniques. Finding DOM append or assignment (including attribute assignment) points is relatively easy.
With the DOM out of the way, you only have places where the application intentionally executes JavaScript through eval and new Function (potentially wrapped in whatever frameworks they use).
It really isn't very hard. And yes, I have done it before—and no, I couldn't have missed anything when I did it, as only the interactions I had found would succeed in my sandbox.
That will break the app if it wasn't written for it.
It doesn't change the argument, though. I'm arguing that it is relatively easy to deal with within the limits of what can be dealt with. If a strict content security policy can be applied, it just gets even easier.
I'd say that a CSP is the only reasonable way to verify a non-trivial app. A deep audit of every line of the app plus the whole dependency graph (which must be repeated on the diffs for every update) is not how I would define 'relatively easy'.
And yes, I know it will break if not written for it--I'm saying that coding to fit a strict csp is the only way to have a verifiably secure js app. Without it, you're in the jungle.
I don't mind the CSP approach, I was just stating that with it, the app either works, or everything breaks.
However, two things:
1. Nothing here requires a deep audit. Finding every execution point can be done with a bit of patience and "git grep". Some terrible frameworks, such as jQuery, can obscure things a bit, in which case knowledge of them speeds things up.
2. This may be a simple slip of words, but CSP does not in any way make your app verifiable secure. Coding towards a strict CSP is more about responsible coding and risk reduction/threat containment, by making foreign JavaScript execution non-trivial. There is nothing in a browser that lets you make something verifiably secure.
1 - Theoretically yes, but this is highly error prone and needs to be repeated on every update. Taking human factors into account, it's not a reliable strategy imho.
2 - My point isn't that a strict CSP makes the app magically secure, just that without it there's realistically no hope, and there's no point in wasting energy on trying to run a nontrivial app that doesn't have a strict CSP locally with the hope of getting around the security concerns of remotely hosted html/js.
If you load a local html file with a strict CSP into Chrome with no extensions installed, that's actually a pretty darn secure execution environment. It's much easier to verify than a native app, for example, and it can do a lot less damage. Based on a single line at the top of the file, you can be certain that the app doesn't load remote js or css, doesn't use eval, only communicates with specific domains, etc. It isn't magic, but it can get you a long way and it does offer numerous clear, concrete benefits vs. a remotely hosted app.
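A sketch of what that single line can contain; directive names are standard CSP, the allowed API origin is a placeholder, and any real app needs a policy tailored to it.

    # Hypothetical strict policy for a locally saved, self-contained web app.
    # It would be embedded in the page as:
    #   <meta http-equiv="Content-Security-Policy" content="...">
    STRICT_CSP = "; ".join([
        "default-src 'none'",                   # deny anything not listed below
        "script-src 'self'",                    # no remote/inline JS, no eval()
        "style-src 'self'",                     # no remote/inline CSS
        "img-src 'self' data:",                 # local and data: images only
        "connect-src https://api.example.com",  # fetch/XHR to one origin (placeholder)
    ])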
You are correct, but from a lawyer's perspective (IANAL btw) it is important that they use the word "believe". As it was explained to me by a friend who is an attorney, if you have never had experience with a certain law -- in this case, perhaps never been subpoenaed for records by the US -- then you have a right to say you "believe" something is not the case. That is still not a deceptive statement. But the moment you have been proven wrong, by the US government for example, your claim would have to be removed.
On FastMail alone: I did not like how slow it is. I was testing them and Protonmail at the same time and was very impressed by how simple it was to set up my multiple domains/users on Proton and how fast encryption/decryption works. And Protonmail "[...] is outside of US and EU jurisdiction, only a court order from the Cantonal Court of Geneva or the Swiss Federal Supreme Court can compel us to release the extremely limited user information we have. [...]" [1]
Full disclosure: I don't work for Proton; I'm just their happy mailer :)
EDIT: By "slow" I mean their GUI, compared to Proton's. It might be due to the number of extensions I use to block or limit various shenanigans, such as AdBlock etc., but needless to say, Proton does not have that problem.
> We do not participate in, or co-operate with, any kind of blanket surveillance or monitoring. (We also point out that Australia does not have any equivalent to the US National Security Letter, so we cannot be forced to do something without being allowed to disclose it.)
So while they cannot harvest data and then share it in bulk, they can access data in individual cases and share it with law enforcement.
I don't know if it will answer their specific claim, but you can read Protonmail's explanation for being based in Switzerland here if you're curious: https://protonmail.com/blog/switzerland/
They can get the data, but the data is encrypted so there isn't much that can be done. (unless they can guess your password, which is possible but hard).
At the very least, anything that the US would try would be noticed by someone who is not subject to US gag orders. Potentially they can get Australia to provide those orders, but now it is an international thing, which is more difficult than the US going alone.
This is an entry in FastMail's series of Advent Calendar blog posts that they do every year. I'm glad to see them continue the tradition this year, and it's valuable to get this level of insight into a company that I trust with my mail. If you're interested in seeing more, check this year's first Advent Calendar post which has links to their calendars from 2014, 2015, and 2016, which are all worth reading if you're a FastMail customer or just interested in how running a mail hosting company works: https://blog.fastmail.com/2017/12/01/fastmail-advent-2017/
I've had to dump Fastmail. I was getting 10-15 very (sexually) explicit spam emails slipping through the filter daily, even after hundreds of training emails had been identified. Cue weird looks at work if I left my mail client visible.
Moving back to G Suite was painful. I had to do it manually after the G Suite 'Migration' tool missed thousands of messages. But so happy to have decent search back!
That's how RFC2822 defines the email headers. That's not a leak, that's just how email works. When you send from the web app it uses that client as the originator.
This would be one of the cases the same text talks about, where "users misunderstand the security characteristics". The journalist example applies, except that instead of loading an image, it's sending a reply to the e-mail.
I'm unsure how many journalists know that their replies sent from an e-mail client will reveal their IP address, nor whether they understand why there is a difference between the mail client and the web interface.
I only know of one organisation using fastmail's services, and I asked them today if they knew about this, which of course they didn't. There surely are reasons not to break the RFC as gmail and others did, but users' expectations need to be addressed somehow.
They should do PGP on the way in, for people who want it. It's trivial to set up. All they need to do is let people paste in a public PGP key and encrypt all incoming email with that key. Here's how I've been doing it for the last 7 years:
Well, it is "trivial to set up" as per my link, and they don't offer it as an option, so apparently not.
Perhaps they don't think enough people use PGP to make this a worthwhile option to add. But given the service they are offering, it seems like an obvious feature and quick win to me.
"trivial to set up" on your own mail server with you as the only user is slightly easier than rolling it out to thousands of customers, testing it, make sure backups are all working, write documentation for users, collect public keys of users, ...
You can't just add a single line with your perl script to production and hope it works for everyone...
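For reference, a minimal hypothetical sketch of the "encrypt on the way in" idea under discussion: pipe each incoming message through gpg with the recipient's public key before it reaches the mail store. This is not the commenter's actual script, the key id and the delivery hook are placeholders, and it glosses over exactly the per-user key handling, testing, and failure cases the parent comment lists.

    import subprocess, sys

    RECIPIENT_KEY = "0xDEADBEEF"   # placeholder: the user's public key id

    def encrypt_incoming(raw_message: bytes) -> bytes:
        # Encrypt the whole RFC822 message to the user's key; a real hook would
        # keep routing headers outside the encrypted blob and skip mail that is
        # already encrypted.
        return subprocess.run(
            ["gpg", "--batch", "--armor", "--encrypt",
             "--trust-model", "always", "--recipient", RECIPIENT_KEY],
            input=raw_message, stdout=subprocess.PIPE, check=True,
        ).stdout

    if __name__ == "__main__":
        sys.stdout.buffer.write(encrypt_incoming(sys.stdin.buffer.read()))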
Another data point: I have had a FastMail account since before Gmail, before Opera and Kaggle. Why pay for email when everything was free ... Yahoo, Hotmail, etc.? Word of mouth. Reputation. Though times were less sophisticated back then, along with security; Fastmail kept up. I used my YubiKey with them way before gmail's U2F/FIDO support, and they fostered my trust over the years by keeping it clean and simple. Nothing is foolproof, but at least I know their track record and commitment to their users, despite dropping the ball in some cases. That said, I'm glad to read about the horror stories, provider alternatives, and fastmail responses; hopefully we are all the better for it.
I don't know if any Fastmail employees read over this, but thanks for finally adding TOTP to the list of 2FA methods! I had been (uneasily) using SMS and wishing you guys would up your game, and I'm glad to see that you did.
I only see FastMail and ProtonMail mentioned on Hacker News, never in real life.
To those who made the switch away from free, conventional mail services like Gmail and Outlook, what was the appeal? What's your case for making the switch?
I switched after yet another chilling story about a person losing his gmail account because of some machine-learning security system's false positive. There is essentially zero user support from google in such cases.
And a paid custom domain in Google Suite costs exactly the same as fastmail.
Plus, email is fastmail's primary business, and I am their real customer.
I had to send more than 500 emails / day, so I switched over to FastMail.
It works well, and I like having unlimited aliases that I can kill at any moment. But there's no way of disabling message deletion. I wanted to be extra sure I wouldn't lose any messages, and what support said was basically "just don't delete them and you are all set".
What is worse, they accept the default message-deletion behaviour of some email clients. Gmail won't allow deleting from a POP email client, which is much saner in my view.
I use ProtonMail for the simple reason that there is less of a chance they are selling my data and building up a user profile of me for advertisers to target.
And, secondarily, because I set up a GMail account for both of my kids when they were born, and I would occasionally email things to those accounts that I wanted them to have a record of. Nothing earth-shattering, just stuff I thought they might like to read when they were older. Then, one day without warning, Google shut my daughter's account down, for being under-age. I had no ability to retrieve all of those emails, and there was literally no one I could contact. Google has NO customer support, because you aren't the customer. So I decided I would be willing to pay $30 a year just to know that I could get an actual person on the phone.
FastMail is tempting. I'm currently moving over to hosting my own E-mail, since Gmail is failing to deliver a significant number of important inbound E-mails to my account, rejecting them as spam (and fails to deliver almost all of my wife's E-mail). I could be convinced to pay for E-mail, but I'm concerned with customer support and the "black box" nature of online services. For something as important as E-mail, I grudgingly feel I finally need to bite the bullet and do it myself.
The simple reason I haven't switched email providers: all my online accounts, as well as many offline ones, are tied to my gmail account.
Yes, I can set up forwarding, but that defeats the purpose of switching providers IMO (for me, the purpose would be to move away from Google completely). I don't want Google to read any of my emails period, so forwarding is not a sufficient solution.
I think the only way is to start switching gradually. If you don't want Google to read your email, keeping it as the main accounts isn't really going to help.
If you do change, get a forwarding service or your own domain so that the same mistake doesn't happen again.
Isn't vendor lock in great? :) This is why I started using my own domain for emails a long time ago. Still have some stuff on my gmail, but I've mostly broken free.
I did the switch a few years ago. I set up forwarding after importing all my emails from gmail, then I spent months changing the email address on services whenever I used them. Eventually you finish and can stop the forwarding or close the gmail account completely.