https://getmemex.com/ might be what you're looking for. I've tried to use it, but it somehow managed to destroy its database 3 or 4 times. After that I gave up and uninstalled the extension again.
I'd like to point to another SMTP server, which I'm using myself and which, in my opinion, is much easier to set up than Postfix, especially for small servers like mine: https://www.opensmtpd.org/
I've been running a personal mail server (OpenSMTPD, Dovecot, DKIM, SPF, DMARC, SpamAssassin) for some years now, and although I initially had problems with deliverability to Google, there haven't been any (apparent) issues since.
Also, I don't understand why people keep emphasizing that Google's spam filter is so much better than anything else. For my personal server, SpamAssassin has proven to be more than sufficient; its spam-filtering performance is on par with Gmail's (I have a Gmail account as well), at least for me.
Of course, Gmail's spam filter works better for the billions of accounts they manage, but when it comes to handling the spam of a tiny mail server, I'm probably not the only person who is satisfied with SpamAssassin.
The login form usually sends the password in cleartext, and it's then hashed on the server side before being compared to the hash stored in the database.
So they can simply determine the password's strength at the moment the user logs in.
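For what it's worth, that's exactly why it works: since the server sees the plaintext at login anyway, it can verify the hash and score the password in the same request. A minimal sketch in Python (the function names and the strength heuristic are just illustrative, not how any particular site does it):

    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None):
        # Derive a storable hash from the plaintext password (PBKDF2-SHA256).
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password, salt, stored):
        # Re-derive the hash from the submitted plaintext and compare safely.
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, stored)

    def estimate_strength(password):
        # Crude stand-in for a real strength estimator such as zxcvbn.
        classes = sum([
            any(c.islower() for c in password),
            any(c.isupper() for c in password),
            any(c.isdigit() for c in password),
            any(not c.isalnum() for c in password),
        ])
        return "strong" if len(password) >= 12 and classes >= 3 else "weak"

    # At login the plaintext is available, so both checks happen in one request.
    salt, stored = hash_password("correct horse battery staple")
    if verify_password("correct horse battery staple", salt, stored):
        print("login ok, strength:", estimate_strength("correct horse battery staple"))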
> The Planck time is by many physicists considered to be the shortest possible measurable time interval; however, this is still a matter of debate. [0]
So, if this is true, then does it mean that time does not 'flow', but rather 'chunks'?
In a very real sense, we may never know. The problem with natural language is that it doesn't really work all that well for scientific purposes: talking about chunks and flows comes with a lifetime's worth of assumptions about what those words imply, so we don't use them except in popular science articles. Time could be discrete, or it could be continuous, but while in everyday life the difference between those two is easily determined, this is not the case in physics at all: in order to say which of the two it is, you first need to have an idea of where the boundary between them would have to be.
(Even at the macro level, a river that moves a specified number of molecules per unit of time doesn't "flow"; it's technically moving discrete "chunks" of water. We just don't say it's moving chunks of water because no one cares about the "well, technically..." in normal conversation. We know what's meant and what to ignore. Physics doesn't have that luxury.)
We have no idea where that boundary could even be found; it would certainly have to be below the Planck length, but we have nothing that allows for that kind of precision. The only thing physics can say right now, and possibly ever can, is that nothing we have at our disposal allows us to conclude that time "actually is" discrete. We can come up with mathematical frameworks that assume time is a discrete dimension, but even if those yield super accurate predictions that are then verified through experiment, all that does is confirm that a continuous dimension can be reduced to a discrete one without loss of precision.
This is just wrong. The Planck time is simply the time scale at which quantum effects and gravitational effects become equally important. The Wikipedia article even says so. It's definitely not some sort of quantum of time.
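(For reference, the Planck time is just a combination of fundamental constants, t_P = sqrt(ħG/c^5) ≈ 5.4 × 10^-44 s; nothing in that definition implies that time comes in discrete ticks of that size.)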
The question I've always had about Planck time is about the "clock". If you and I observe something move and we were able to do so at Planck resolution, would our ticks be synchronized?
Can you phrase that question in a way that takes relativity into account? In particular, what do you mean by ticks being synchronized between two observers, when there's no such thing as objective simultaneity?
> Combined low concentrations of copper (5 μg/l) and chlorine (50 μg/l) have been effective in preventing both micro- and macro-fouling in over 120 seawater installations since 1987.
It's also done in freshwater (the Great Lakes) to prevent zebra mussels from clogging the intake pipes. Things like to grow where other things bring food by for free and where they're sheltered from predators.
Not the OP, but I have a similar setup; however, instead of Postfix I am using OpenSMTPD, which I found much easier to configure.
Checking for false positives is a manual process. I've never had legitimate mail marked as spam though, only the other way round.
My experience with Gmail's filter is not as nice, by the way. At work, I had a couple of situations where important mails from a customer were flagged as spam, which went unnoticed for two weeks.
Once it's open sourced, we could finally implement a proper matrix.org bridge. Currently, all messages are relayed through a special Gitter user called matrixbot.
matrix-appservice-gitter actually supports 'puppeting' already - letting you log in as your own user. But native Matrix support for Gitter would be incredible!
It's not clear from the article whether they train the networks with the same shared key in every iteration, or whether they randomize it. Any info on that?
Since it seems that both adversaries in the network are training in parallel, is it possible that the encryption is only exploiting a weakness in that particular Eve? Would it change anything to have more Eves challenging Alice and Bob?
Also, being able to generate crypto algorithms on the fly seems like it would be ideal for small cells of people who want to keep their communications secret from something like the NSA, which might be looking for something like RSA or GPG, but not for some algorithm generated by a neural network that nobody else in the world is using.
Oh, and how susceptible is the generated ciphertext to standard cryptanalytic techniques like letter frequency analysis and so on?
Yes. Some of this is in the paper, but I didn't try training with multiple Eves at a time (yet). It's a very reasonable thing to try. We did test the robustness by doing a final evaluation pass where we trained a fresh Eve a few dozen times without modifying A&B. That Eve was generally 0.5-2 bits more successful than the one being trained iteratively, suggesting we could do better.
The last question you asked is, well, a good question. There's no reason to think that the current algorithm is very good in that regard. It's probably vulnerable, since we know it mixes multiple key bits & plaintext bits together.
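In case anyone wants to see what the most basic version of that check looks like, here's a toy byte-frequency count (purely illustrative; it's not how the paper evaluates things, and the paper's messages are bit vectors rather than text):

    import os
    from collections import Counter

    def byte_frequencies(ciphertext):
        # Share of the ciphertext taken up by each byte value.
        counts = Counter(ciphertext)
        total = len(ciphertext)
        return [(byte, n / total) for byte, n in counts.most_common()]

    # A well-mixed ciphertext should look roughly uniform; strong skew
    # (as in a simple substitution cipher) is what frequency analysis exploits.
    sample = os.urandom(4096)  # stand-in for real ciphertext
    print(byte_frequencies(sample)[:5])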
So this is kind of neat, but from skimming the paper I didn't notice anything that goes information-theoretically beyond a one-time pad (even though it's clearly stated and plausible that the concrete algorithm found by A and B is not a XOR one-time pad).
Have you run experiments where (a) the messages are longer than the key, e.g. twice as long and (b) Eve is more powerful than Alice and Bob?
(b) is actually the most interesting thing, because cryptography is supposed to protect against computationally more powerful adversaries, but testing it is only really meaningful in combination with (a), because as long as messages and keys have the same length, you can always find an information-theoretically secure algorithm.
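To make that last point concrete, the equal-length-key construction is just the XOR one-time pad. A quick sketch (illustrative only; per the replies below, the trained networks do not end up computing exactly this):

    import os

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    plaintext = b"attack at dawn!!"
    key = os.urandom(len(plaintext))        # key as long as the message

    ciphertext = xor_bytes(plaintext, key)  # Alice
    recovered = xor_bytes(ciphertext, key)  # Bob
    assert recovered == plaintext

    # Without the key, every plaintext of this length is equally likely,
    # which is what "information-theoretically secure" means here.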
Not yet. For (b), we gave some advantage to Eve by (1) running two steps of training Eve for every step of training A&B; and (2) running multiple independent retrains of Eve from scratch after freezing the Alice & Bob networks. Not quite the same as increasing the capacity of the network, but similar. As you noted - we mostly stuck to the regime in which a solution could be found in theory (or trivially by a human), to explore whether or not the adversarial formulation of the NNs could get anywhere near it.
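In case the schedule described above is hard to picture, here's its rough shape as a loop; train_ab_step, train_eve_step, and fresh_eve_eval are hypothetical stand-ins for the actual training ops, not the paper's code:

    def adversarial_training(train_ab_step, train_eve_step, fresh_eve_eval,
                             steps=10000, eve_steps_per_ab=2, retrains=5):
        # Alternate: one Alice/Bob update, then two Eve updates.
        for _ in range(steps):
            train_ab_step()
            for _ in range(eve_steps_per_ab):
                train_eve_step()
        # Robustness check: retrain fresh Eves against the frozen Alice & Bob.
        return [fresh_eve_eval() for _ in range(retrains)]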
See answer below re: OTP. Yes. We hoped the DNN would learn something close to an OTP. (The network we used for it is capable of learning an OTP, but the likelihood of doing so by gradient descent is vanishingly small.)
Nothing was shared between Alice & Bob except the secret key. The architecture of the three neural networks was the same (for Alice, Bob, and Eve), but they were all initialized independently and nothing was done to tie their weights together.
Kind of. Except that there's no restriction that there has to be a 1:1 correspondence between the key and plaintext bits (or characters) that get mixed, as there would be in a conventional OTP. And, indeed, the DNN doesn't learn that - it mixes multiple key and plaintext bits together. Probably in a way that's worse than a true OTP -- the adversary is more successful than it should be were the encryption scheme a "correct" OTP with XOR.
I haven't. Interesting - that'd be a nice way to try to probe how strong the encryption is (i.e., "bits recovered vs. key bits supplied to adversary"). I'll have to think about that more - thanks for the idea!
Sort of. The key was only shared once, but over 20,000 messages were sent. In the real world, that would allow you to crack the OTP, since you're not supposed to reuse the pad.
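A tiny illustration of why reusing the pad is fatal for a XOR construction (illustrative; the learned scheme isn't exactly XOR, but the same intuition applies):

    import os

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    key = os.urandom(16)
    p1, p2 = b"first message 01", b"second message 2"
    c1, c2 = xor_bytes(p1, key), xor_bytes(p2, key)

    # The key cancels out: XORing the two ciphertexts yields the XOR of the
    # two plaintexts, which leaks structure an attacker can exploit.
    assert xor_bytes(c1, c2) == xor_bytes(p1, p2)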
I have to admit I don't really see the point of this (though I admit I haven't read the paper):
> It's a random key paired with a random plaintext for each input. In the experiments, the key is the same length as the plaintext.
This practically means the networks only have to implement XOR for perfect security (a one-time pad).
Maybe you're studying something different that I don't understand, but why wouldn't it be more sensible to limit the key size?
I.e., why didn't you train the network to create a keystream? I'm not a cryptographer, but in that case you'd only have to train two networks (the keystream generator, Bob, and the attacker, Carol).
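For context, a keystream generator in the conventional (non-NN) sense stretches a short key into an arbitrarily long pseudorandom stream that gets XORed with the plaintext. A rough hash-in-counter-mode sketch (illustrative only, not a vetted cipher, and not what the paper does):

    import hashlib

    def keystream(key, length):
        # Expand a short key into `length` pseudorandom bytes.
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def xor_with_keystream(key, data):
        ks = keystream(key, len(data))
        return bytes(d ^ k for d, k in zip(data, ks))

    key = b"short secret key"
    msg = b"a message much longer than the key could cover as a one-time pad"
    ciphertext = xor_with_keystream(key, msg)
    assert xor_with_keystream(key, ciphertext) == msg  # same operation decrypts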
Would it be possible/easy to add the speed of encrypting/decrypting the data as a separate loss function? Potentially this could lead to cryptography being a less expensive computation.
It could, but within a given neural network structure, the speed is going to pretty much be constant. (Barring optimizations such as eliminating zero weights/activations). There's a meta-level above this of trying to search or automatically determine a "good" NN structure that can accomplish the encryption & decryption. That too (determining an optimal NN structure for a problem) is a fascinating research question in its own right! :) In fact, it's one that Jeff Dean called out a while ago as one of the leading-edge questions for DNNs, IIRC.
Moreover, Viber's adoption is highly concentrated in a few specific countries, so depending on where your friends are based, it might not be a viable solution.