Shameless plug: SJCL is a great library and easy to work with.
I've used it to build a non-profit decentralized encryption tool that can be used to send and receive files that will self-decrypt using SJCL, JavaScript FileReader, and HTML5 download attributes.
User A creates a password to encrypt a file using this client-side mechanism - which produces a self-decrypting HTML file. User B opens this HTML file in their browser which will ask them for the password to decrypt the file allowing them to download the original file all without a server. The homepage can be downloaded and self-hosted at will.
So, decryption prior to authentication of the executable/ciphertext? So it's more likely to be user A encrypts M with K producing C, uploads to server, where E(ve) swaps out C for C' that records the password B types in and sends it to E - C' then claims "an error occurred, please reload the page/app" and then submits C'', a message M' chosen by Eve encrypted with K?
And since C' and C'' are delivered as (js) executables - Eve can choose what to do - try for a compromise when B runs C', show the original message but run additional code for C'' ... ?
As with anything - the result is only as secure as transmission and execution - so isolation is the only guarantee. If you're truly paranoid - run without network connectivity and transmit through physical media.
That being said, this was a fun project that demonstrates encryption at the lowest common denominator - versus the complexities of setting up PGP for email. This is something my grandmother still using AOL could use.
Right, please don't take the above as critique of writing up a project like this - it's a fun thing to do.
But even if your grandmother could use this, she wouldn't be able to use it in any meaningful way to increase the security of secret messages.
The only thing such a project guarantees (or an equivalent one that generated a self-decrypting executable that asked for a password, or something like [1]) is that, if the file received isn't a trojan, the data in the file remains secure until it is decrypted for the first time. But as there's no way to know the file isn't a trojan unless it's been transmitted via a secure channel, it doesn't really add any security.
[1] I could see this working for a shell-script and a here-document using gpg:
#!/usr/bin/env bash
# this is makemsg.sh
cat > msg.sh <<eof
#!/usr/bin/env bash
# this is msg.sh
gpg --decrypt <<eof
eof
echo "You will now be prompted for a passphrase."
echo "After entering the passphrase, type your"
echo "message. End with end-of-file (ctrl-d)"
gpg --symmetric --armor >> msg.sh
echo eof >> msg.sh
echo "Encrypted message msg.sh created"
echo " - run msg.sh to read message"
Note that msg.sh is rather easy to audit:
#!/usr/bin/env bash
# this is msg.sh
gpg --decrypt <<eof
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1
Obviously, "msg.sh" is only "easy" to use on something like a Ubuntu/Linux desktop that comes with (ba)sh and gpg -- and it's only "easy to audit" for technical users.
It's still easy enough to swap out the contents of "msg.sh" with something nefarious that sends the contents of ~/Maildir to a remote server or whatever.
The point being, if the user can't trust the framework (in this case, operating system, shell and gpg executable) there can be no trust in the handling of data either.
But blindly running code (be that a shell-script or javascript code) is generally a bad idea...
I think C is the code and C'(') is some evil attack code. Several blog posts about JS crypto mention that given all the past attacks on TLS, it's not unlikely that someone can perform a Man-in-the-middle attack to swap out code.
C is "executable" ciphertext that is (presumably) made by prepending a js decryptor (executable code) to encrypt(M,K) - ie: C=make_executable_js(encrypt(M,K)).
C' could be anything (B only sees a minified blob of js) - but at the very least is likely to be runnable js code. In the simplest case, it's simply a modified message M', encrypted with K and prepended with the (presumably) benign decryption code -- the change now being that where A wrote ("HI MOM, PLZ SEND MONEY TO ACCOUNT #121"), M' reads similar except for the account number (etc, etc).
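To make the construction concrete, here is a hypothetical sketch of what make_executable_js might look like (the names make_executable_js, decrypt, and render are assumptions for illustration, not the actual tool's code):

```javascript
// Hypothetical sketch: a self-decrypting blob is just a decryptor stub
// glued onto the ciphertext, i.e. C = make_executable_js(encrypt(M, K)).
// Eve's C' is the same construction with her own stub or her own ciphertext.
function make_executable_js(ciphertext) {
  const decryptorStub = [
    "const password = prompt('Enter password:');",
    "render(decrypt(CIPHERTEXT, password));",
  ].join('\n');
  // Embed the ciphertext as a literal, followed by the stub that uses it.
  return 'const CIPHERTEXT = ' + JSON.stringify(ciphertext) + ';\n' + decryptorStub;
}

const blob = make_executable_js('base64-ciphertext-from-encrypt(M,K)');
```

The point is that B has no way to tell whether the stub in front of the ciphertext is A's benign decryptor or Eve's password-recording replacement.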
The main difference (apart from implementation details of encryption and so forth under a js runtime) from Gnu Privacy Guard - is that one can/should/would get a trusted gpg binary that should be safe(ish) to run on potentially untrusted ciphertext.
Note that in the example above, Eve doesn't start out knowing the key, K -- but as B runs any random js code found at the "drop" point, Eve might be able to use said code to obtain K (B types it in and allows untrusted code to handle the secret).
Contrast with getting a cd-rom from a trusted source with an OS and gpg on it - feeding gpg some random ciphertext C' that Eve created, barring gpg bugs, a) doesn't give Eve access to K, and b) doesn't trick B into reading some bogus message M', different from the message A wanted to send in the first place.
In short - it would make more sense to distribute the decryptor js-script in a public but trusted manner - and then run the decryptor on the cipher text that could then be obtained in a different manner.
What an executable self-decrypting file (whether .exe or .js file) might be good for, is if A hands it to B on a USB key or transfers the encrypted archive in some trusted manner - the data would then be "safe" until B sits down and types in K. How useful this actually is, is up for debate.
All that said - I think it's great fun to make little tools like this, but we should be careful about teaching people (even for fun) to run random code - be that js or other code.
There's no "non-plaintext password", encryption keys are encryption keys. I don't know if SJCL includes asymmetric crypto, but there's nothing wrong with AES.
Ah, thanks. Last I used it, I only needed the symmetric part, so I couldn't remember whether it had an asymmetric part as well. It is a very good library indeed, and was (at least partly) written by Dan Boneh, whom I regard highly.
I was just wondering that! Half the contributors on the Github repo have the default avatars and barely any info on their profiles, so I couldn't figure out who was who.
It is important to separate three security concerns:
1. Crypto delivered to the browser over HTTPS depends on the integrity of HTTPS.
2. A browser is a very hostile environment (injected JS, other browser extensions, etc.)
3. JavaScript may not be the best language for coding certain things (e.g., it is hard to remove strings from memory)
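Concern (3) can be illustrated directly: JS strings are immutable, so every copy of a password held as a string lingers in memory until garbage collection, whereas a typed array can at least be overwritten in place. A minimal sketch (best-effort only; engines may still have copied the bytes elsewhere):

```javascript
// Strings are immutable: a password string cannot be erased, only
// abandoned to the GC. A Uint8Array, by contrast, can be zeroed in place.
const encoder = new TextEncoder();

function withKeyMaterial(password, use) {
  const keyBytes = encoder.encode(password); // Uint8Array copy of the secret
  try {
    return use(keyBytes);
  } finally {
    keyBytes.fill(0); // best-effort erase; earlier string copies still linger
  }
}

const result = withKeyMaterial('hunter2', (bytes) => bytes.length);
```

Even this only narrows the window; it does not give the guarantees a language with explicit memory control can.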
Depending on your use, some of these might be larger concerns than others. For us, 1Password.com, (1) is the biggest concern for those using the web-app. Our approach is to be very strict about TLS (TLS 1.2 only, HSTS, etc) and to encourage use of the native clients over the web-app.
The place that stores and manages your passwords for things like financial institutions has the right to decide if they would prefer a higher level of security.
The banking industry doesn't have much of a choice; they have to provide an online portal to serve their customers. Customers don't want or need a native banking application just to check their balance or pay a bill.
When will we have end-to-end encryption in the browser, is it even possible?
Can this be achieved with extensions?
What makes this less safe than, let's say, SSH?
I mean, it is software I download from somewhere, just like a browser, so if I trust SSH to encrypt stuff I want, why can't I trust the browser to do the same?
With service worker, you can have a similar setup to a traditional application (install once, and have the first installed version verify all other versions are signed by a correct key).
It's abstracted away from you, but with some work I feel a browser UI could be made to help with this process if people wanted it.
Again, the problem lies in the initial setup. How do you authenticate the very first load, in-browser? You can verify the loaded script, but how can you verify the first page? (Spoiler: You can't, unlike installing a binary through a modern install system)
You are relying on HTTPS for the initial "install", but with subresource integrity you could check that the hash of the initial script matches a known hash that you verify out-of-band (and then of course the browser verifies that the SRI hash matches what is executed)
It does take some knowhow, and it's not a good ux, but it's possible.
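As a concrete illustration of the SRI mechanism being discussed, the page pins the script to a digest the author computed ahead of time (the digest below is a placeholder, not a real hash):

```html
<!-- The browser refuses to execute crypto.js if its SHA-384 digest
     does not match the integrity attribute. -->
<script src="https://example.com/crypto.js"
        integrity="sha384-BASE64_DIGEST_PLACEHOLDER"
        crossorigin="anonymous"></script>
```

Note that this only shifts the trust question to the page carrying the integrity attribute, which is exactly the out-of-band verification problem raised above.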
That's where my point of browsers making this easier comes into play. It's possible, and could actually have a pretty good UX, but they would need to build it into the platform.
I have specifically mentioned SRI - but matching the initial script out-of-band borders on the impossible (or at best "highly improbable"): you need two things, one is easy and one is hard. The easy part is verifying the hash of the initial page - a browser extension could do this (running JS again, oops); and the hard part is a trusted way of obtaining the hash OOB. "Takes some knowhow" doesn't even begin to describe the issues. Where are you obtaining this hash from, and how are you verifying that it's actually a legit and not a malicious one? (It's signed by the author's pubkey...which is verified how?) That doesn't "take some knowhow" - that takes a whole framework, half of which is currently imaginary.
You're handwaving that away as "oh, it's a simple matter of building it into the platform," where "it" is amongst other things a public key infrastructure and a secure software distribution system built with it. Easy peasy, right? (Spoiler: no) Contrast to a binary that's distributed through the platform's install/update system - all this is already built, and there's pretty good assurance that you're not getting a malicious result (signed packages).
You are preaching to the choir on that second part.
It's actually why I like services like keybase so much, they are actually trying to tackle that problem (with their own set of issues, but at least trying).
I was more trying to point out that we can get to where we are now in the browser.
Solving the problem of key distribution and management is way outside the scope of what I was talking about, and it's far from solved by platform install tools.
I mean they force me to directly use user-events to switch to full-screen, so why can't they do such things for crypto APIs, so that no one could mess with this?
Because you're downloading the program at every launch. A malicious program doesn't have to attack the APIs, it can just send the data to the bad guy's server after it's been legitimately decrypted.
If one end is the browser, and the other end is the server, then we already have end-to-end encryption: https (as long as you don't use a TLS terminating CDN).
In every one of these threads, there are inevitable comments along the lines of "in-browser crypto is inherently unsecure", which often follows to a more general "javascript crypto is inherently unsecure".
This question might be slightly off-topic here, given this seems to be an in-browser library, but can anyone who knows a bit more about this topic than I comment on the state of out-of-browser JS crypto (e.g. NodeJS)? The browser as environment does seem to introduce a stigma around security to the entire JS ecosystem, and I wonder if it's warranted.
Node, V8, and Javascript in general aren't the most hospitable platform on which to do crypto work (Rust, Go, and Java are better for it), but if it's worse than Ruby or Python, it's only marginally worse.
One challenge is that SJCL is a very well-designed library that assumes it has no low-level primitives to work with, but in a Node environment you have native code to lean on, and you're better off doing that than using pure-Javascript crypto. But Node's native crypto is just bindings to OpenSSL --- not great.
> but can anyone who knows a bit more about this topic than I comment on the state of out-of-browser JS crypto (e.g. NodeJS).
What I can say is that NodeJS has the largest and most mature collection of cryptography libraries. This doesn't obviously imply that the crypto libs used are the best ones or that they are secure. For example, OpenPGP.js [1] was audited by Cure53, and the changes to support elliptic curve cryptography (RFC 6637) [2] were done and audited by my company (two different teams).
Browser crypto is really problematic. For one, the client is downloading the crypto JS implementation (almost) every time they are using the app. Server compromised? crypto.js is rendered useless. HTTPS certificate compromised? crypto.js is useless.
And also:
XSS vuln on your website? crypto.js is useless since attacker will just exfiltrate your private key / password through XSS.
The browser is wonderful for UI but mashing together code and markup and then trying to permissively parse and execute it never goes well for security.
As much as the facts you've listed are generally accepted, there's grey areas between the "Browser crypto is good" and "browser crypto is bad" that are worth considering.
I'm working on a form submission that I know is going to be used by a lot of enterprise clients, in many cases with passive MiTM SSL devices.
Browser based crypto can prevent the contents of that form leaking to a passive firewall admin, and the very real issues you've raised don't apply to the threat I'm interested in resolving.
Now if I had a marketing team go off and call it "military grade end to end encryption" we'd have a problem, but I won't be letting that happen.
> For one, the client is downloading the crypto JS implementation (almost) every time they are using the app
Not in the case of browsers' extensions or Electron/NW.js apps.
> XSS vuln on your website? crypto.js is useless since attacker will just exfiltrate your private key / password through XSS.
It's not like buffer overread and other vulns don't exist in the non-browser world. Also, a huge amount of XSS vulns can be avoided by having a good CSP config.
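To illustrate the kind of CSP config meant here, a restrictive policy that disallows inline scripts and limits script sources removes most of the common XSS injection vectors (the directive values below are illustrative, not a recommendation for any specific site):

```
Content-Security-Policy: default-src 'none';
    script-src 'self';
    style-src 'self';
    connect-src 'self';
    base-uri 'none';
    frame-ancestors 'none'
```

This does not stop an attacker who can modify the scripts served from 'self', but it does block injected inline script, which is the classic XSS exfiltration path.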
Now I'm not supporting javascript crypto. Just responding to some of your points. Inb4 some crypto SJW quotes me on that.
I think it's important to decouple Javascript the language, Javascript the runtime, and the browser security model from each other. Javascript --- the language and the runtime --- aren't ideal environments in which to do crypto, but they're not untenable. It's the browser --- not the browser shell, running as a standalone application as in Electron, but the actual Chrome browser that fetches things from URLs --- that makes crypto untenable.
Tangentially related (and this conversation might have moved on already) but I'd be curious to hear your view on 1password's sync service. I know you like and use 1password standalone. Do you have a recommendation for how to sync across devices? Thanks!
Do you even trust the site to actually do what it says? After all, a web application is just [someone else's] code running on your system.
Even if it's secure against XSS, HTTPS is intact, etc, you are ultimately putting presumably sensitive data into code someone else wrote and then allowing it to run on your machine with explicit network access.
The crypto implementation may be flawed, or the developer may even send your private key up, or other data that you don't expect it to be sending (whether out of lack of understanding, accident, or malice). It would be nearly impossible for you to spot if it's just sent as part of the big blob of encrypted data you expect it to be sending. The blob being bigger than you expect might be a clue, but the only way to know for sure is to audit the code, and if the code is obfuscated/minimized that could be quite the task in itself.
This is really not a problem of browser crypto at all; this is a problem of trust of the code you're executing and using.
This is true. The value that in-browser JavaScript crypto does provide, however, is that defeating it requires an active MITM attack (which is definitely possible - not just for state-level actors, but for anyone with enough control of a trusted CA). It is useful against dragnet recording of all communications - as long as the key/password itself is safe.
Somewhat similarly. Its default mode is authenticated. It handles nonces and IVs for users, unlike NaCl, which demands random nonces from the user. It has and exposes a lot more primitives than libsodium, which is probably a bad thing for security. Its authors (and not just Dan Boneh) have impeccable credentials.
On the other hand, libsodium will outperform SJCL on Node.js, which is really the only safe place to use a library like this.
For someone not really familiar with javascript could someone explain how sensitive information is removed from memory when no longer needed? Does the runtime expose semantics to ensure memory is zeroed?
Also given javascript is garbage collected are there concerns about timing attacks? Or can you prevent the GC from running in critical sections like Go (I assume Java offers this too)?
This is my question as well. I've seen plenty of advice about how Python is a bad choice for any crypto application because it's almost impossible to secure Python's memory. Is JS different somehow?
WeakMap just allows you to tell the GC "don't hold on to this memory just because I'm using it as a key in this Map" -- it doesn't let you tell the GC what to actually do with the memory.
I feel WebCrypto (strange architectural choices made by people whose priorities were availability of crypto, not consistency of security) is even more questionable than running SJCL (good crypto done by good people in a questionable environment) in the browser.
I agree. Not making this up: the primary goal of WebCrypto was the elimination of Flash and plugins to enable streaming media players. It's not designed for security. It eliminates some of the least worrisome flaws in browser crypto (side-channel attacks against the lowest-level primitives) but leaves all the rest of the problems intact.
It has a built-in cryptographically sound random source, uses modern JS primitives (Uint8Array and Promise), and easily outperforms SJCL at hashing and HMACing.
Additionally, the WebCrypto operations are built into the browser platform and cannot be overwritten by userland javascript, though the interface can be spoofed in browsers that do not support it natively.
SJCL provides a secure random interface (it implements Fortuna). It's not great, and SJCL is not especially performant. But its interface, cryptographically, is (as I said) superior to that of WebCrypto.
I'm not sure it matters that WebCrypto is native and "can't be overwritten", given that all the glue connecting the crypto is pure Javascript and can easily be rewritten.
I meant that the WebCrypto API does not rely on network transmission and that window.crypto and window.crypto.subtle are read-only properties in compliant implementations. Those two characteristics alone would seem to solve many of the problems enumerated on https://www.nccgroup.trust/us/about-us/newsroom-and-events/b..., namely the chicken-egg problem of secure javascript transmission and the malleability of the JS runtime.
I'd be interested in reading about how SJCL's interface is cryptographically superior. Superior/inferior seem to have a particular definition in this context, and I'm not sure I understand exactly what you mean. I know you're an expert in the field and would love some more context on how I should be cautious with WebCrypto.
It doesn't matter that you've securely transmitted the AES implementation if the code that drives AES, sets up its constructions, ensures that its parameters are set properly, manages its keys, and handles the plaintext is itself delivered insecurely.
(I wrote the document you're citing, for what it's worth).
I know you wrote the document; that's why I was surprised to see you agree with a claim that using WebCrypto is "more questionable" than serving and using SJCL based on the differences in the libraries' interfaces. I personally find SJCL's interface difficult to work with, since all bytes must be converted to an idiosyncratic format (sjcl.bitArray), whereas WebCrypto uses the unsigned int array primitives introduced to the language after SJCL was released. I'm pretty sure that's not what you mean when you say the WebCrypto interface is inferior, and I'll probably need to do some basic reading to figure out what exactly that means in the context of cryptography.
There's a discussion going on of people who claim doing Crypto in the browser is super insecure. (Crockford is one of them) I wonder to what degree this is true or what needs to be done to do at least some basic crypto in the browser like AES for personal user data.
I've written about this, here: http://stackoverflow.com/a/24677597/19212, in response to the question "Can local storage ever be considered secure?", and in particular using encryption from the SJCL.
Here is the gist, copied for convenience:
> There are libraries that do implement the desired functionality, e.g. Stanford Javascript Crypto Library. There are inherent weaknesses, though (as referred to in the link from @ircmaxewell's answer):
- Lack of entropy / random number generation;
- Lack of a secure keystore i.e. the private key must be password-protected if stored locally, or stored on the server (which bars offline access);
- Lack of secure-erase;
- Lack of timing characteristics.
> Each of these weaknesses corresponds with a category of cryptographic compromise. In other words, while you may have "crypto" by name, it will be well below the rigour one aspires to in practice.
> These concerns will likely be addressed in the WebCrypto API, but that is not here yet.
Thanks a lot for your answer, I also read the one on SO. I'm actually thinking of using the WebCrypto API for a current (kind of side-)project. I guess if you wrote a blog post about that, it could be really interesting. Until now I had only heard a "strong no" towards crypto in JS. Even if it's not perfect, it can be designed towards "better than nothing". Also given that the Web is likely becoming the major platform for everything at some point in time...
WebCrypto API was released as a W3C Recommendation January 26, 2017. Indeed, the working group has now shut down (they decided not to do anything about any other APIs than low-level crypto stuff...).
JavaScript's built-in random generator (Math.random) is not cryptographically secure because it's not random enough. You can, however, build your own random generator by using keyboard/mouse input, microphone noise, etc. for entropy.
Then encryption should be a standard API provided by the browser. We already have a level of trust in the browser for crypto implementation, we should not be relying on independent javascript code.
Imagine a diary Web app, for me that would be a perfect use case. It's platform independent and you have automated backups in the Cloud - encrypted though.
One of my first jobs involved encrypting medical data in the client-side browser. The plan was to generate a new RSA private/public key pair whenever a new user joined. The keys themselves were encrypted using the user's password (w/AES) on the client side. We stored the encrypted RSA keys and the user's password hash on a server. When a user logged in we would validate the user's password hash, and return the user's encrypted keys. The user would have to know their password to decrypt the RSA key. The theory was that no medical data was visible on our server, even to us. We also used standard SSL for all connections.
The challenge was implementation. At the time there were no AES block-cipher JavaScript modules. My solution was to use GWT to compile BouncyCastle's Java library into JavaScript. I had to strip the library of all low-level I/O calls and replace the BigInteger class with a GWT-JSNI-wrapped version of the BigInteger class I found online. That BigInteger class was a side-project of a Stanford graduate student: http://www-cs-students.stanford.edu/~tjw/jsbn/
He also included a JavaScript implementation of RSA. Long story short, within a year there were several cryptography libraries, but I like to think we were ahead of our time.
How likely do you think it is that your implementation was safe? It's hard enough to use RSA safely in the best possible circumstances --- and BouncyCastle isn't that! (I hope you're saying you compiled down BouncyCastle's RSA along with their OAEP support, and not that you did what a lot of front-end crypto people do, which is to implement RSA in pure Javascript).
If the goal was just end to end encryption, why was there a need for asymmetric keys? Couldn't the medical data have been directly encrypted via the password-derived AES key and stored on the server?
We've been using SJCL for 4 years in our browser extension to encrypt incognito mode bookmarks [1] and one of the things we've really appreciated is its backward compatibility -- version updates have been complete drop-in replacements which has made developing on this very pain-free.
I use in-browser crypto to get "better than plaintext" encryption for my login pages. I am dealing with a server infrastructure that won't let me easily or cheaply add HTTPS to client sites. I use a 512-bit RSA key pair regenerated every 5-10 seconds. I know it could be MITM'ed or brute-forced in about a day or two. It's not real security, but it is better than nothing. I wanted to stop sniffing of HTTP logins (this would have mitigated that Cloudflare issue last week).
So, hopefully without shooting my mouth off (this time), here are some lessons I have been learning (the hard way) about SJCL:
* The SJCL library appears to be really great. It is super simple to use and has great documentation. Dan Boneh has a reputation that precedes him. I've been pointed in the direction of his work multiple times, independently, by different sources. [0]
* The SJCL library is only going to be as good as its implementation in the browser. For instance, how is it getting into the browser? Is it stored in an extension, or is it being loaded dynamically? If it is loaded dynamically, you are going to be vulnerable to a whole host of attacks, including cross-site and MITM. That aside, even if it is loaded dynamically and it gets there in one piece, is it loaded into the runtime securely? This last one is the real kicker for me. If you are implementing it as injected code onto a third-party page, you are leaving it open to trivial manipulation by third parties. I was savaged by Nikcub for this, and rightly so. For the embarrassing exchange, you can see my history.
* What is the threat model? Browser-based encryption is only ever going to deal with certain threat models - as we are seeing in recent days, there is good reason to assume that end-point security on a host of devices may be compromised by state actors. Please note I am not declaring this to be the case based on my authority - but it seems like in a world where we are getting leaks off of handsets prior to encryption by Signal and Whatsapp, assuming end-points are secure is a big assumption. Additionally, while I know the wikileaks Vault7 release only lists compromises up to Chrome ~35 and for Android below Kitkat [1], I've read this list is likely dated, and I don't think it is fair to assume that the development of zero days has stopped there. Accordingly, if you are trying to prevent intrusion from state actors, then there is reason to suspect that browser-based implementation will never get you there.
* Key generation and exchange remains an issue. Lots of people more qualified than I am state that javascript RNGs are just not that great, which can significantly reduce entropy on keys. [2] On top of this, I want to talk about entropy at more length, with particular reference to SJCL: SJCL goes to some length to create 'random' (I personally cannot verify this) salts and initialization vectors. However, as far as I can tell, you have two choices: you either store and transmit those separately from the message, which creates a whole host of issues in the transmission and storage of those salts, or you send them with the message. As far as I can tell, if you send them with the message, the extra entropy they introduce is not relevant for warding off brute-force attacks or attacks on the password (e.g. dictionary attacks); it is only useful against crib-based attacks or other cryptanalytic attacks - and, again, as far as I can tell, if you are going up against the sort of entity that has the resources to actually try to crack AES128 or AES256 by attacking the cipher rather than the key, I suspect you are dealing with some very nasty people and javascript crypto is not your best bet.
* Importantly, and critically, security is a conclusion, not a feature. Adding SJCL onto a communications protocol is not going to make it secure. In fact, it has been expressed by people better than me that the author of software cannot self-authenticate that it is secure.[3] It needs to be subjected to third-party (and, ideally, public) scrutiny. So, in the end, if you are going to be using a library like SJCL, it is important to have the particular implementation tested by disinterested third parties. Though the math and code behind SJCL may be secure, actually getting it into a piece of software that people want to use introduces a gigantic raft of issues.
On background, the reason I know this as a (software) lawyer is because I have been working with SJCL on a node-based application for quite some time. I do not represent that it is secure - if I have in the past, that was in error and an oversight on my part (hat-tip to all the people on HN who have very rightly pointed this out). However, working on it has been extremely instructive and has confirmed what I always suspected to be true - if you want to be able to say something is secure, you need to be working with people who work on security as a primary occupation, not a hobby or a side-interest. It is too enormous, complex and ever-changing a field for anyone to be an 'expert' at it unless it is their primary concern.
As always, interested in any feedback or counterpoints. Especially on the math.
Yes, kind of. I use SJCL in Turtl (https://turtlapp.com) which is packaged as standalone browser app (nwjs on desktop, xwalk on android). I'd never publish the app as a web app (at least not without significant warnings to the user/an opt-in) but technically yes, we use client-side crypto in a browser.
SJCL is kind of a pain in the ass though, to be honest. It was built before Uint8Array was prevalent, meaning all your crypto data has to be encoded as a string, and you have to be extra cautious of UTF8 data (you have to decode/encode your data as ASCII strings). Someone please correct me if this is no longer the case.
Recently I've been playing with the emscripten port of libsodium (https://github.com/jedisct1/libsodium.js), which seems to be working quite well. I have yet to benchmark, but the nice thing is that eventually I can replace parts of the app with Rust (WIP) and use the exact same library for crypto as used in the js app.
So, I'd agree in general that javascript is a shitty language for crypto, or at least it was before Uint8Array et al. I'm looking forward to seeing what happens with Wasm...being able to replace a JS app with low-level code compiled from Rust or something is definitely a nice idea. My ultimate goal was to provide a comm layer between JS <--> Rust and embed the Rust portion of the app as a library (.dll/.so/etc) but perhaps it just makes sense to compile everything to Wasm and embed it that way.
We are using WebCrypto in 1Password.com implementation and polyfill with SJCL when a browser does not support the algorithms we need (PBKDF2, AES-GCM in Safari, etc).
Yes there certainly is a market for that. I work at Virtru, which is essentially doing encryption in the browser for your content and then decrypting in the browser.
https://zipit.io/
edit: I have uploaded it to Github today for easy self-hosting: https://github.com/colepatrickturner/zipit