Show HN: Easy Email Encryption inside Gmail (Alpha, Open-Source) (streak.com)
170 points by alooPotato on July 2, 2013 | 98 comments



We built this extension to see if it was possible to prevent webmail providers from handing over your email data to other parties. We encrypt the email message on the local machine and also prevent Gmail from saving a draft of the email to its servers, so Google never has a record of the raw email text.

We think the user experience is pretty great: it works inside Gmail, users need to know nothing about security or keys, and you can send to any other Gmail user (you don't have to worry about whether they have the correct software installed).

With any attempt at security and encryption, you usually trade off ease of use against how secure it is. We decided to use symmetric encryption because it made for a better user experience. The only downside, of course, is that the security is only as good as the password the user chooses to encrypt the message with.
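
(For the technically curious, a password-based scheme like this boils down to roughly the following - a minimal sketch using SJCL's convenience API, not necessarily the extension's exact calls.)

  // Minimal sketch of password-based symmetric encryption with SJCL;
  // not the extension's literal code.
  var sjcl = require('sjcl'); // or load sjcl.js in the page

  var password = 'shared out of band between sender and recipient';
  var ciphertext = sjcl.encrypt(password, 'the raw email body'); // PBKDF2 + AES-CCM
  var plaintext = sjcl.decrypt(password, ciphertext);            // recipient side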

  - we're using the Stanford javascript crypto library here: http://crypto.stanford.edu/sjcl/
  - we open sourced our extension here: https://github.com/StreakYC/StreakSecureGmail
There have been a few other attempts at this, but they usually require users to understand how encryption works and require that all the users you send emails to have signed up for something (or have a public key somewhere). We think our attempt is ridiculously easy to use.

This is a super early experiment/alpha/whatever so we see this as a starting point. Would love to get feedback!


What's the suggested method to securely exchange the key?

It feels a bit like this is a step backwards in many ways. Being a non-webmail user, how can I benefit from this? If this were an implementation of S/MIME or GPG it would be more widely adoptable, since anyone with a compatible Gmail plugin, desktop, or mobile client could use it.

Edit: Good read about JS encryption and its downsides: http://www.matasano.com/articles/javascript-cryptography/


The intention of this isn't to be the most secure solution on the planet - you're not going to be using Gmail if that's what you need.

Instead we wanted to build a starting point that was A) easy to use, and B) easy to extend.

Our initial plan was to use a GPG solution, but that introduced a lot of complexity to the UX and also had other security holes. But the main thing is that if something is too complicated, it's not going to be used by the broader masses - kind of like GPG right now.

With this solution only the sender has to have the extension installed to send an encrypted email.


"With this solution only the sender has to have the extension installed to send an encrypted email." Hmm... but to read the mail, even the receiver needs to have the extension. I don't see much point in advertising "only sender needs the extension" bit.


I think the distinction is that the receiver need not have signed up or installed anything at the time the message is sent. That is, the sender need not worry if the receiver is currently using SecureGmail, they can send the message now and the receiver can install the extension later.


I also couldn't find a link to the source to sort-of audit it, so, caveat user.


You mean this? https://github.com/StreakYC/StreakSecureGmail

It's right there under "Open Source"


Contextual blindness, thanks.

EDIT: Audit results: It just uses the SJCL, looks good to me.


This is a good idea, and it looks promising. Getting it running on browsers other than Chrome would be nice.

Is there any way of making it compatible with the Syncdocs program, which does local-machine encryption of Google Drive files? It looks like Syncdocs uses AES-256:

http://www.syncdocs.com/encryption-settings-help/


Looks slick.

Would it be a good idea to link this with a service like https://onetimesecret.com/ for transferring passwords?


Seems like it would be better to go with S/MIME or OpenPGP for this. S/MIME would potentially be able to re-use some of the browser's own code for handling certs.

Symmetric crypto (like this) is relatively easy to implement, but a usability nightmare.

A browser extension is still a lot better than a general webapp for security.

I wish someone issued "mail-from" auth personal certs for ~free to end users, integrated into apps for cert request/signing/etc. StartCom's StartSSL sort of does (for "level 1" personal certs), but their cert issuance UI is a huge pain. It really needs to support the flow of "I have no cert, I'm going to use an app for a specific purpose, I get a cert incidental to signing up for the service, and then I can use the cert for other purposes", which none of the X.509 crowd really supports.

WoT on the certs would be cool too (which StartCom is trying, but since it's a pain to use, there aren't enough people, so it doesn't work).


Actually we went down the gpg public/private key path first, and realized that was way worse in terms of usability. Even though with symmetric you have to enter the password, at least average users know what that means.

Try to get an average user to do a key exchange, back up their private key, or even know what a key is, as some other solutions require...


We're trying an approach with http://parley.co that involves storing a symmetrically encrypted OpenPGP keyring on the server for our users. That, combined with a UI that automatically takes care of key sharing and discovery, results in a system that's easy for the average user, integrates with the existing PGP ecosystem, and is at least as secure as a symmetric scheme (like symmetric systems, the strength is limited by the passphrase, but at least users can each have their own).
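
In rough outline, the "symmetrically encrypted keyring on the server" idea looks something like the sketch below (function and field names are hypothetical, not our actual code):

  // Hedged sketch: upload only a passphrase-encrypted blob of the keyring.
  var sjcl = require('sjcl');

  function wrapKeyringForUpload(passphrase, armoredPrivateKey) {
    // sjcl.encrypt runs PBKDF2 on the passphrase and AES-CCM on the data,
    // so the server only ever stores the resulting ciphertext blob.
    return sjcl.encrypt(passphrase, armoredPrivateKey, { iter: 10000 });
  }

  function unwrapKeyringOnClient(passphrase, blob) {
    return sjcl.decrypt(passphrase, blob); // throws on a wrong passphrase
  }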


I'm all for anything that makes moving one's private key between devices easier, and I understand that you've made a conscious compromise between security and usability, but wouldn't it be possible to achieve both goals, for example by enabling private key synchronisation between devices without the key having to be uploaded to a third-party server?

This would also eliminate the need for the additional passphrase, as the point-to-point key transfer could be programmatically encrypted / decrypted without user intervention.


That's a great idea, and I see no reason it couldn't be done. Removing the centralized target would obviously be beneficial, but I'm not sure it would increase security for individual users that much: if the only common information between the two devices is what's in the user's head (email, password), then you'd still need a server to negotiate the exchange (by telling the new device where to find the first one), which would leave the whole system vulnerable to many of the same targeted attacks as the centralized design (MITM, or a malicious actor at the server trying to brute-force individual keys).


I agree, but it should still be possible to mitigate those sorts of threats without the user needing anything else in their head (at least more than temporarily :-), even if the password for the negotiation was successfully brute-forced / MITM'd so that a foreign device could initiate a private key transfer, and even if the private key exchange itself was MITM'd.

Perhaps, for example, using a one-time shared secret, either displayed on one device and entered on the other, or entered by the user on both devices as a temporary passphrase. This could then form part of a D-H-style key exchange, or even be the encryption passphrase as in your proposed model, but used only temporarily while the private key is transferred, so it wouldn't have to be remembered by the user for a centrally stored copy of the private key.
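
As a sketch of the one-time-code idea (hypothetical names, and SJCL stands in for whatever crypto library a real implementation would use):

  // Old device: encrypt the private key under a freshly generated one-time
  // code that is shown on screen and typed into the new device.
  var sjcl = require('sjcl');

  function packageKeyForTransfer(armoredPrivateKey) {
    var oneTimeCode = sjcl.codec.hex.fromBits(sjcl.random.randomWords(2)); // display this
    var blob = sjcl.encrypt(oneTimeCode, armoredPrivateKey, { iter: 100000 });
    return { oneTimeCode: oneTimeCode, blob: blob }; // blob can travel over an untrusted channel
  }

  // New device: the user types the code, which unlocks the key locally.
  function unpackKeyOnNewDevice(enteredCode, blob) {
    return sjcl.decrypt(enteredCode, blob); // fails if the code is wrong
  }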

I appreciate you're probably well down the track with your current model, but it would be really nice to see something like this developed, and ultimately standardised, since, as you mention in your manifesto, private key management (and especially transfer and backup) is one of the larger usability barriers to wider adoption of public-key cryptography.


now THAT is the way to do it! Yes, the average user has trouble grasping the CRITICAL IMPORTANCE that the user, AND ONLY THE USER, must physically hold the private key. There's no problem with having the public keyring in the cloud, and I don't even see any reason to keep it encrypted. It's called the PUBLIC key for a reason, to paraphrase Whit Diffie. You guys are on the right track... tools that make REAL 4096-bit PKE EASY are what's needed. (Certainly it is an understatement to say that a 4096-bit elliptic-curve DH cryptogram is "at least as secure as symmetric keys" - indeed, symmetric schemes are several orders of magnitude weaker, even with the same size passphrase.) But users will have the "inconvenience" of having to store their PRIVATE KEY on all their devices; but hey, that's not much work to be virtually impregnable. I applaud Parley's efforts!


Thank you for the kind words, but just to be crystal clear: we are storing the users' private keys in the cloud, hence the symmetric encryption. It's certainly a trade-off, and not the right option for everyone, but it lowers the barrier to entry for OpenPGP enormously.


Really admire what you're doing here, but the proposed pricing seems off (just my opinion). If you think we'd be better off moving to an "encrypted by default" world, then limiting paying users to sending 150 messages a month isn't a good way to do it! Especially when I know it doesn't cost that much to do so.


I appreciate the feedback! Pricing is tricky, and we're open to suggestions--our challenge is basically to earn most of our profit from business users while keeping it inexpensive for consumers. Maybe the number of messages is the wrong way to separate the two camps, though. Any ideas?

(Is 150 messages/month really limiting for average users? I don't think I've sent anywhere near that much personal mail this month, but I may be an anomaly.)

EDIT: I forgot to mention that whichever way we end up doing it, the personal plan will be free for anyone who signs up for the mailing list before August 1.


I would try to differentiate based on features, vs. messages. Businesses care about:

* third-party administrators being able to create/update/etc. accounts (either consultants for small companies, or IT staff for larger companies) -- this is the #1 way to differentiate a consumer service from a business service.

* document retention policy (which is really a document destruction capability)

* escrow (corp escrow keys)

* escrow for in-line checking of messages (either DLP, or AV, or archiving for compliance with SEC or other regulators)

* certifications of compliance

* support

* maybe custom domain names and other trickery (including stuff like splitting domain names to work with both existing Exchange servers or Google Apps and your service)

* service level agreements, indemnity, etc.

* integration into stuff like Active Directory (LDAP) for account provisioning and credentials

* Maybe 2FA (although I'd argue 2FA should be a base feature, but for mail it isn't very meaningful except if you do cert auth or cookie/IP blessing of systems + a stored IMAP or web browser password to log in + your passphrase stuff for the crypto, and then a non-cert 2FA thing is only meaningful to management)

I don't think you need to worry about marginal 5-10 user businesses using the free service instead of a paid service; I'd focus on making sure you can make real money on 100-500+ user businesses, or small businesses which will grow into those, and then on getting as many free users as possible to try to drive conversions. I'd rather sell 10 companies with 500 users each than 50 companies with 5 users each.


Awesome feedback as usual. Thanks Ryan! That gives us a lot to think about.


My first thought is number of recipients. If you allowed low-tier users to send messages to e.g. 5 people at a time, and any more required the business plan, that seems like a good way to separate the two.


Number of recipients seems like a good idea for separating personal and business to me too. Personally, I only use PGP for emails with my partner, but we definitely send way in excess of 150 messages a month.

I'm all for a world where people start using encryption as a matter of course, so I really hope the OP's product works out. It'll finally get rid of that "you must have something to hide" mentality, when all you really want to do is protect yourself from the odd security vulnerability or a grumpy sysadmin with an axe to grind!


That's a great idea, thanks for the suggestion! I'll be making some changes to reflect that.


The open-source webmail application Roundcube, combined with the "openpgp_rc" plugin (which uses the openpgp.js library), works as follows:

You can generate your keys in the browser, or paste in an existing key. The key is then stored in HTML localStorage in the browser. If you ever clear your localStorage, you have to paste it in again. But this way, the key never goes near the server.
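
A rough sketch of that flow (hypothetical helper, not the plugin's actual code):

  // Keep the armored private key only in the browser's localStorage;
  // it never travels to the server, but clearing storage loses it.
  function loadOrImportKey(promptForKey) {
    var armored = window.localStorage.getItem('pgp_private_key');
    if (!armored) {
      armored = promptForKey(); // user pastes an existing key or a freshly generated one
      window.localStorage.setItem('pgp_private_key', armored);
    }
    return armored;
  }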

Your method is suitable for most people who can't be bothered backing up their keys. Choice would be a good thing though.


That's a cool way of doing it! Of course, the usual caveats about in-browser crypto still apply, but still neat.

I agree that choice is great, and we might expose manual key management in Parley at some point, but realistically people who know how to do that (and have the desire to) already have many options.


Minor nitpick - it sounds like Parley is storing the encrypted private keys on their server, encrypted with the user's passphrase.

This method is the best tradeoff with cloud encryption, IMO - the convenience of having the key file accessible anywhere, even if it's not as secure as copying it yourself to all your devices. Also, much, much better than using symmetric keys.


Our thoughts exactly :)


>> storing a symmetrically encrypted OpenPGP keyring on the server

> now THAT is the way to do it!

Um, no. No, it absolutely is not.

See also: Hushmail.


One big difference with Hushmail is that they are a web app, so they can be (and have been) compelled by law enforcement to serve malicious code to certain users. Parley is a standalone desktop/mobile app, so it would be difficult for that scenario to occur. The passphrase is also PBKDF2'd on the client, so the server never sees it.
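
Roughly, "PBKDF2'd on the client" means something like this sketch (parameter choices here are illustrative, not our exact scheme):

  // Derive everything from the passphrase locally, so the server only ever
  // sees a derived verifier, never the passphrase itself.
  var sjcl = require('sjcl');

  function deriveClientKeys(passphrase, saltHex) {
    var salt = sjcl.codec.hex.toBits(saltHex);
    var bits = sjcl.misc.pbkdf2(passphrase, salt, 10000, 512); // 512 derived bits
    return {
      serverVerifier: sjcl.codec.hex.fromBits(bits.slice(0, 8)), // first 256 bits, sent for login
      keyringKey: bits.slice(8, 16)                              // last 256 bits, stays on the client
    };
  }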


This is so true. At work, I never cease to be astounded at the number of people (supposedly computer literate, e.g. CS graduate students) who are just flummoxed by ssh keys. And ssh keys are relatively simple compared to GPG.


As a supposedly computer literate software developer, the only place I ever hear ssh keys mentioned is on HN. I haven't encountered them anywhere else. I doubt I'm alone.


Do you develop exclusively on Windows?


Correct! And there are many like me.


Users do not need to back up their secret keys for your system?

Update: I wrote my question hastily; originally this read "private keys" instead of "secret keys."


I don't think they do any key management.


It's symmetric. There are no private keys.


If everyone teaches one person, it will not be too long before everyone knows.


I looked into S/MIME when researching email crypto options. Someone please correct me if I'm wrong, but with S/MIME, you have to get a cert issued from a "trusted" provider--i.e. your employer, or Verisign, or Comodo, etc. If you don't, then a lot of apps either issue scary warnings or even just refuse to work with your self-signed cert. But if someone like Verisign or Comodo issues your cert, how can you be sure the cert they issued you hasn't been stored and shared with (for example) the NSA?

S/MIME seems to have the same problem as SSL, which is that to be really usable you have to trust a big company to provide you with encryption, and that company can be hacked, coerced, etc. But whereas SSL traffic is typically transitory in nature which makes it tougher to meaningfully capture and store, your emails are not transitory and can be accessed much more easily.


That’s not how public key cryptography works.

You don’t go and buy an SSL certificate from a CA. You pay for them to /sign/ your public key, presumably after verifying your identity. You generate the public/private key pair, and then you keep the private key private.

The CA could in theory sign Eve's public key along with metadata saying that key belongs to you, but that just gives someone the ability to impersonate you. It doesn't give Eve the ability to read emails that Bob sent to you with a key wrapped with your public key.


At least with Comodo's s/mime service, they provide you with the private key [1]. IIRC, Symantec/Verisign did the same thing.

- [1] https://secure.comodo.com/products/frontpage?area=SecureEmai...


No, they don't. The keys are generated using in-browser controls (XEnroll/CertEnroll and HTML 'keygen' tag).


That's the one I used, and if I recall correctly they sent me an email with a link for me to download the cert. This suggests to me that it was generated server-side and that therefore they could have kept a copy for themselves. But I might be totally wrong on how it works.


I think the free personal certs are server-generated, the business ones are generated client-side.


> Symmetric crypto (like this) is relatively easy to implement, but a usability nightmare.

Conversely, it's the only type of crypto the general public understands. "Enter your password." It bothers me that this is the case, but there you go.

The biggest problem OpenPGP has right now is that you need to explain what public and private keys are to someone before they even have a hope of using it.


StartSSL is also not a widely accepted CA which means Apple Mail (and others) will likely complain the certificate is not issued by a trusted party. It's an easy fix, but one that shouldn't have to be done.

@rdl who is your go-to CA?


StartSSL's roots are in a lot of trust stores; they just don't have the Class 1 intermediate cert in places like Apple's, and as a result it doesn't work by default. Which is pretty annoying, but easily fixed (provide the intermediate cert along with the end-user cert; since the root is already trusted, it doesn't bring up a warning).


Not rdl but I use Comodo[1] which is free for personal use.

- [1] http://www.comodo.com/home/email-security/free-email-certifi...


Apple mail and Exchange Server already support S/MIME, so that might be a good option, though as you point out certificates are the issue.


Browsers just need to actually get on the security bandwagon and build support for public-key cryptography into an API. Then you could generate keys, export them to back them up / use them on different machines, and these solutions would actually be usable. As in, I could type in someone's email address, and it would find their public key and encrypt the message to them. People have known since the '70s that symmetric encryption is just not easy enough for normal people to use (key exchange is hard). We have local storage - why can't we have local keys?
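
Something along these lines is what the (still-draft) WebCrypto API promises - a hedged sketch, not a drop-in solution for mail:

  // Generate a keypair in the browser, export the public half to publish,
  // keep the private half local (extractable here only so it could be
  // backed up or moved to another machine).
  window.crypto.subtle.generateKey(
    {
      name: 'RSA-OAEP',
      modulusLength: 2048,
      publicExponent: new Uint8Array([0x01, 0x00, 0x01]),
      hash: 'SHA-256'
    },
    true,                    // extractable, for backup/sync
    ['encrypt', 'decrypt']
  ).then(function (keyPair) {
    return window.crypto.subtle.exportKey('spki', keyPair.publicKey);
  }).then(function (publicKeyDer) {
    // publicKeyDer is what you'd publish next to your email address so
    // others can encrypt messages to you.
    console.log('public key is ' + publicKeyDer.byteLength + ' bytes');
  });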

Realistically, I bet the way this will mostly be used is that people will exchange keys through an insecure medium (like a non-encrypted Gmail message, or, if people are being clever, GTalk...), and the whole thing will feel good but be nothing that a little bit of work on Google's end couldn't undo.


It doesn't matter if public keys are exchanged through an insecure medium as long as there is a way to verify them. Someone else seeing the key is not a problem, so Google being able to find them after the fact would be fine. The problem comes from being able to MITM them and swap it out with a fake key controlled by the attacker. Verifying the key fingerprint (using GPG terminology here, don't know how it applies elsewhere) after via another means (ideally voice or video) would confirm it's a legitimate key.
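
For illustration, the verification step can be as simple as both sides computing and comparing a short digest of the key material they received (a sketch; real OpenPGP fingerprints are computed over the key packet, not the armored text):

  // Both parties run this on the key they received and read the hex string
  // to each other over a voice or video call.
  var sjcl = require('sjcl');

  function shortDigest(receivedKeyText) {
    var digest = sjcl.hash.sha256.hash(receivedKeyText);
    return sjcl.codec.hex.fromBits(digest).slice(0, 16); // first 8 bytes as hex
  }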


A cool idea I saw a while back, http://robohash.org

Generating an easy-to-recognize image from the key fingerprint would be a lot more approachable to the casual user than trying to verify a long hexadecimal number.

I guess you're then trusting whatever 3rd party handles the image generation.... but at some point you have to trust software that you didn't write yourself.


Collisions and subtle differences that make the two robots look close enough would make this a vector of attack.


From the Stanford JS Crypto library's page:

> (Unfortunately, this is not as great as in desktop applications because it is not feasible to completely protect against code injection, malicious servers and side-channel attacks.)

The browser is not a secure environment and cannot be. You cannot guarantee security in the browser even if you control the server-side of the application, let alone when someone else does. In other words, Google can still read every bit of email you send/receive in cleartext.

The only way I see how one could make something like this work is by creating a browser plugin where the cleartext runs outside the context of the browser process (perhaps in Java?). If cleartext ever exists inside a JavaScript interface, it's game over.

I upvoted this post because we do need more discussion of security and especially usability of security. However, unfortunately, this provides no real security.


While the security granted by this may be imperfect, "no real security" seems like a bit of a stretch. This eliminates the main pathways by which governments obtain access to email (subpoena of the service provider and straightforward interception of the mail as it is transmitted). Combine that with a webmail service like Gmail where the company providing it has a very real incentive to prevent code injection, and with browser vendors being exceptionally fast at finding and patching exploits, and it's hard to say that this isn't far better than nothing at all.


GMail has a very real incentive to read all your mail. As soon as this plugin gets any real traction Google will trivially circumvent it.


Seems unnecessarily pessimistic. AdBlock has been on the chrome store without much issue despite being directly opposed to Google's business model.


The US government creating a secret law that authorizes a secret court to order Google to siphon all of its cleartext data to the NSA is also a bit pessimistic. The value proposition of the featured plugin is that it protects you from Google snooping on what you do through the means of technology. While I am very glad there is a discussion of the subject, I am saying that the technology the plugin uses is fundamentally flawed and can never be fixed. Because of that, it provides no real security. It is in the same league as setting the root password on your server to "root" and changing the sshd port: obscure but not secure.


Yeah, really the way to provide security in Gmail is to compose your message in an external (local) editor, GPG-encrypt and ASCII-armor the message, paste that into Gmail, and send it. The recipient does the reverse.

Using an editor like emacs could make this pretty streamlined, but at that point you might as well just use an IMAP client that can do GPG or S/MIME.


FYI - we thought about this extension for a while, but it only took a few hours to actually implement, mostly because we had a lot of the core platform already built for streak.com. Anyone here have any interest in us opening up a Gmail platform to outside devs? i.e. you could write apps that live inside Gmail using a clean API?


I think this is the major thing that Google is lacking: a true application development platform where anyone can build an application, host it inside of Google, and use internal data sources.

App engine feels like it's close to that, but it doesn't confer any special properties onto applications developed on that platform. Why can't I register a widget inside of Gmail or Google Calendar? Why can't I integrate a workflow form or something else into apps without developing a standalone application that only integrates through an API?

I'm hoping this is where they're going with Dart, creating a language that can be compiled into Javascript and share the backend connection with official Google apps. SPDY and WebSocket will enable these types of applications, as they won't need independent connections for each API request channel.

I can see how Google is cautious about this type of solution, given that it opens a huge XSS footprint, but I think it's the future of the "cloud" applications. For these platforms to become substitutes to native platforms they need to support applications that integrate as deeply as the first-party offerings.


Bigger than XSS, I think that would create API rot (though I think Google is going in that direction already.)

The only thing worse than having a proprietary API to interact with your application is to have a proprietary API that can only be run from a proprietary sandbox. On the other hand, Google could make APIs publicly available so someone running on whatever platform could easily call them. This doesn't solve the security problems, but if Google can't trust external people with the API a sandbox isn't going to do a lot of good.


Google Apps Script might be what you're looking for; it's an app development platform that can use internal resources (like Gmail):

https://developers.google.com/apps-script/


Yes! That would be great!


Just curious, but how can we be sure that Gmail doesn't continue to do voodoo like uploading a copy of the draft without complete transparency?

i.e. will this also disable any scripts that would send data to Gmail while drafting is in progress? Because if not, I could see that as a potential hole for a future breach. Something like "while typing a draft, block all upstream data triggered by this page" seems more appropriate than targeted draft-save blocking.


Yeah, was thinking the same. At the end of the day any type of client-side javascript encryption is insecure since you'd need to prove the browser + runtime + js + algo are all kosher.

It's also sort of ironic that this extension is trying to protect your privacy against the same company making both the browser and the email service:)


We do a combination. We add a marker to the secure compose body (only compatible with the new Gmail compose) and intercept the draft saves to prevent drafts of the secure compose from going through to Gmail's servers.

You'll see that in the secured compose when it tries to save a draft it says "Draft saving..." "Save failed".

And un-secure composes and replies don't get blocked.
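
In outline, the interception looks something like the sketch below (the marker string and matching logic are placeholders - see the repo for the real implementation):

  // Block Gmail's draft-save XHRs when the body carries the secure-compose
  // marker, so the plaintext draft never reaches the server.
  var SECURE_MARKER = 'securegmail-compose'; // placeholder marker

  function looksLikeSecureDraftSave(body) {
    return typeof body === 'string' &&
           body.indexOf('draft') !== -1 &&   // placeholder for Gmail's draft action
           body.indexOf(SECURE_MARKER) !== -1;
  }

  var originalSend = XMLHttpRequest.prototype.send;
  XMLHttpRequest.prototype.send = function (body) {
    if (looksLikeSecureDraftSave(body)) {
      return; // dropped; Gmail shows "Draft saving..." then "Save failed"
    }
    return originalSend.call(this, body);
  };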


My thanks to you and your team for having developed this. The cost of crypto is, at present, far too high for mainstream use - or so it seems. Undoubtedly, the challenges inherent with its correct implementation are a part of the reason for that high cost, but the media's depiction of the field is certainly no help either.

How technical is your audience? It's been posed elsewhere in these comments, but one area that I'm particularly interested in is your proposal for key exchange. For example, your site warns the user never to email or IM their password: that's great advice, but I don't see any suggestion of a suitable alternative. Granted, I would consider your primary audience to be reached by proxy: that is, the recipients of encrypted messages compelled to install your extension on their machine. Your take may be different, and of course these individuals may simply use-it-and-forget-it, but I'm sure that some of them would want to become your users as well. In those cases, the copy for this section may be confusing or seem incomplete.

What happens to my plaintext once its recipient has decrypted it? From the source, after passing the encrypted data to sjcl, it looks as though the plaintext is simply appended to the DOM in place of the encrypted message. Are we certain that Google is not performing any kind of DOM analysis? I don't know how Google behaves in this regard; what assurances do I have that my decrypted messages are still secure?

With a lack of deeper technical details regarding your implementation (understandable considering your proposed audience), should I assume that you're using the sjcl default of 128 bit AES in CCM mode? Would you consider publishing these details on your website? My apologies if I've overlooked them.

I'd like to echo IgorPartola's sentiments and upvote, however little that may be worth. Usability is crypto's silver bullet and any reasoned discussion which furthers our attention to this problem certainly has value in my opinion.


What is the legality of this? I'm concerned that in general cryptography web products are potentially illegal http://stackoverflow.com/questions/5308601/export-compliance...

Also, how secure is AES-256 really? My understanding is that something encrypted with AES-256 could be decrypted in my lifetime with a few million dollars' worth of hardware and dedicated decryption workloads. It is a bit paranoid, but to me it's somewhat unsatisfactory that in theory someone with a large enough botnet could decrypt 'securely' encrypted data.


If someone modifies the server side of gmail they can still snoop the email before it gets encrypted.

That, and being non-standard, it basically forces users to use the same service and Gmail.


We block the draft from going up to Gmail's servers. Gmail only sees the encrypted version of the body.


Gmail can just edit your code if they want.

In a comment on encrypting gtalk, I explained how this can be done: https://news.ycombinator.com/item?id=5858375


If Google decides to change how they save drafts, you're compromised. They also have complete control of the page, so they can capture any key press or mouse movement you make.


Yup, you're right. I tried to inspect and intercept every XHR call that gmail was making to ensure we weren't missing any holes. But it's entirely possible that there's analytics/tracking data that got passed.

Like we said, this is a starting point. It's open source. As with any type of security product that has a hope of being good, try and break it and let's fix it together :)


You could hash the code delivered by Google and throw up a big warning if it has changed.

I believe that Google does releases of the entire application at once, so this method should detect when a roll-out has occurred.
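
As a sketch of that idea (script URL and expected hash are placeholders; this assumes sjcl.js is available on the page):

  // Fetch the script Gmail serves, hash it, and warn if it differs from the
  // hash recorded at install time.
  function checkServedScript(scriptUrl, expectedHashHex, onMismatch) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', scriptUrl);
    xhr.onload = function () {
      var hashHex = sjcl.codec.hex.fromBits(sjcl.hash.sha256.hash(xhr.responseText));
      if (hashHex !== expectedHashHex) {
        onMismatch(hashHex); // e.g. show a big warning in the extension UI
      }
    };
    xhr.send();
  }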


Why can't Google implement something like this that is very easy to use, themselves - since they "care" about their users' privacy so much?


A whiff of those advertising dollars and the "care" flies out of the window. Also, search would not be possible.


It breaks email search and context sensitive ads.


How is key reuse avoided? It's even harder for users to maintain a separate random key for every message. All the classical OTP key issues are here: different messages with the same key, or the same message with different keys (multi-recipient). Using passwords is also guaranteed to produce weak keys. For AES-128 you'd need at least a 20-character completely random key, and for AES-256 you can double that.
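
(The length figures come from rough entropy arithmetic, assuming each character is drawn uniformly from the ~95 printable ASCII characters:)

  // Each random printable-ASCII character carries about log2(95) ≈ 6.57 bits.
  var bitsPerChar = Math.log(95) / Math.log(2); // ≈ 6.57
  console.log(Math.ceil(128 / bitsPerChar));    // 20 characters to match AES-128
  console.log(Math.ceil(256 / bitsPerChar));    // 39 characters to match AES-256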


Agreed - the quality of the password is key. We wanted to start with something that regular people would actually use. It's currently not the most secure solution, but we were hoping the community could help us with that while preserving a good user experience.


NB: This is Chrome-only.


Mailvelope http://www.mailvelope.com/ uses OpenPGP and is nicely integrated with gmail.


Mailvelope is excellent. It was the first time that I was able to convince my non-geek friends to use PGP-encrypted email.


Can't wait for it to support signing of mail.


Can't wait for a Firefox release!


Yup, totally secure. Unless Google's secretly forced to add something along the lines of

$(document).on("change", "input.streak__password", function () { $.post(...); });

Somewhere inside a ton of JavaScript code. Or some three-letter agency gets the private part of the certificate for mail.google.com and MITMs it any way they want.


It's too bad that FireGPG couldn't keep up with the Gmail API changes and was eventually abandoned. The few times I used it, it felt pretty good.

This chrome-only alpha software obviously is useless for me. Hopefully something more cross-platform comes along at some point.


http://www.mailvelope.com/ is chrome-release, firefox-alpha – slightly less useless?


Same here. I'd pay money for something like FireGPG that worked with gmail.


Heads up - I can't click any of the links in the header to learn about what Streak does.



whoops. fixed.


Doesn't look fixed for me?


Not fixed for me. user-agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.116 Safari/537.36


how would you do attachments and search? Big stopper for me for using something like this.


Well, you can't do search without at least having the metadata and the subject line. If you use other PGP methods, you can have that - but then the NSA can still get your metadata and email subject lines.


How about adding an "Encrypted reply" option along with the built-in "Reply" and "Reply to all"?



