
Are you interested in particular types of scientists? E.g., physicists, chemists, biologists? Computer scientists? :)


We think we can be most helpful for people working on life science companies, so we're mostly focused on that.


Two quick questions: 1) does your comment above mean that it makes little to no sense to apply if a startup is in a different (albeit somewhat adjacent) domain? Mine targets materials science and engineering. 2) How do you feel about solo founders?


No, please still feel free to apply; we're happy to consider your application and see if we can be helpful.

Solo founders are fine with us.


That sounds great, will apply soon then. Your prompt reply is much appreciated.


Is Cirrus mentioned in the blog post? I can't find it.

Right now, the way key compromise might be detected in RPKI is that a human network operator notices a signed object which is obviously suspicious and posts it on a mailing list. This is the same way that CA compromise was detected in the Web PKI before CT.

CT was useful well before Chrome used their weight to make it required for all new certificates, because it placed no particular burden on the few parties (CAs or otherwise) who saw a lot of certificates to add them to CT. That gave people who might not see a lot of certificates, but who want to find something weird, a corpus to dig into.
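If you want to dig into that corpus yourself, every log exposes the RFC 6962 HTTP API. A minimal Rust sketch (assuming reqwest with the blocking and json features plus serde_json; the log URL is a placeholder, not a real log):

    // Fetch the signed tree head, then a batch of entries, from a CT
    // log's RFC 6962 HTTP API.
    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let log = "https://ct.example.com"; // placeholder log URL
        let sth: serde_json::Value =
            reqwest::blocking::get(format!("{log}/ct/v1/get-sth"))?.json()?;
        println!("tree size: {}", sth["tree_size"]);
        let batch: serde_json::Value =
            reqwest::blocking::get(format!("{log}/ct/v1/get-entries?start=0&end=31"))?
                .json()?;
        let n = batch["entries"].as_array().map_or(0, |v| v.len());
        println!("{} entries fetched", n);
        Ok(())
    }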

Same idea applies to RPKI. But yes, setting up Cirrus took very little of my time.


You're correct, it's mentioned in the next/accompanying post: https://blog.cloudflare.com/rpki-details/

Some small corrections if I may

1. I don't know of any issuer key compromises detected with CT. Such compromises are rare. DigiNotar is an obvious example, but can you think of any recent ones? Mostly CA incompetence is more subtle than "Oops, we left the root keys on a train," and I'd hope the same is true for RIRs.

2. Technically CT is still not required. You can choose to obtain certificates which aren't logged from a CA that will issue them without logging. Neither Google nor Mozilla require you to log all certificates as a condition of their root programmes and both recently confirmed they do not intend to make that a requirement. Chrome will mark your cert as untrustworthy if it hasn't got an SCT and so it would always make sense to log certificates for a site Chrome users will access on the public Web, but not every TLS certificate is for a web site accessed with Chrome.

[ Also, Google actually makes use of the fact that certs can exist without being logged for some of their systems. Suppose they're about to launch amazing.example.com, but even the Amazing name would give away what it is. They can order the cert for amazing.example.com with no logging and stockpile it; when the brass signs off on actually announcing Amazing, they log this cert, almost instantly get back SCTs, and configure a suitably modern web server to send the SCTs separately alongside the certificate. Where ordering a certificate internally might take hours or days, these final steps can be done in a few minutes and need no special permission. Any subscriber whose CA is willing to issue unlogged certificates can do this, but make sure you have all the steps correct or you'll look really dumb; even Google got this wrong at least once since they began doing it ]

[edited to fix typo that reversed my meaning]


> I don't know of any issuer key compromises detected with CT. Such compromises are rare. DigiNotar is an obvious example, but can you think of any recent ones?

Not key compromise, but general mis-issuance:

Facebook detected overreach by a vendor with CT: https://www.facebook.com/notes/protect-the-graph/early-impac...

AGL detected certs that were malformed in various ways: https://www.imperialviolet.org/2013/08/01/ctpilot.html

> Technically CT is still not required.

It produces an interstitial, see: https://invalid-expected-sct.badssl.com/


IPFS offers content integrity, which you can use to bootstrap confidentiality.
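To make that concrete, a rough sketch in Rust using the RustCrypto chacha20poly1305 crate (my choice of AEAD here is an assumption, nothing the IPFS stack mandates): encrypt before adding to IPFS, so the CID authenticates the ciphertext and the key provides confidentiality.

    use chacha20poly1305::{
        aead::{Aead, AeadCore, KeyInit, OsRng},
        ChaCha20Poly1305,
    };

    fn main() {
        // Encrypt locally before publishing; the IPFS CID of the
        // ciphertext gives integrity, the key gives confidentiality.
        let key = ChaCha20Poly1305::generate_key(&mut OsRng);
        let cipher = ChaCha20Poly1305::new(&key);
        let nonce = ChaCha20Poly1305::generate_nonce(&mut OsRng);
        let ciphertext = cipher
            .encrypt(&nonce, b"secret document".as_ref())
            .expect("encryption failure");
        // Publish `ciphertext` to IPFS; share (CID, key, nonce) out of
        // band with whoever should be able to read it.
        println!("{} ciphertext bytes ready to add to IPFS", ciphertext.len());
    }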


Confidentiality without central gateways. The topic was "with stripped down SSL", and every user should be aware that Cloudflare can read and store the content of every connection made to a domain behind Cloudflare's gateways, even if those connections appear to be end-to-end encrypted.


I believe the IPFS team has already built a browser extension that runs a js-ipfs node. Running an IPFS node is not ideal for power- and bandwidth-constrained devices like a mobile phone. This is where I see gateways fitting in, long-term. Not having to trust a gateway unconditionally is necessary to make this viable.
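And that distrust can be mechanical: anything fetched through a gateway can be checked against the digest in the CID before it's used. A toy sketch in Rust with the sha2 crate, assuming a CIDv0-style sha2-256 multihash:

    use sha2::{Digest, Sha256};

    /// Accept bytes from an untrusted gateway only if they hash to the
    /// sha2-256 digest carried inside the CID we asked for.
    fn gateway_response_ok(body: &[u8], expected_digest: &[u8; 32]) -> bool {
        Sha256::digest(body).as_slice() == expected_digest
    }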


No plans currently. I'm not really familiar with Dat. Can it do something that IPFS can't?


Yes, IPFS is purely hash based, which is great for static images/movies.

DAT is also hash based, but at least it has support for top-level asymmetric keys that you can put files into, and add files to, without the root directory changing its hash. IPNS works around this, but isn't ideal.

Neither system can handle high-frequency P2P data that changes, as on Reddit-like sites. Sites like that ( http://notabug.io ) and others like the Internet Archive (which has decentralized IPFS, WebTorrent, and GUN versions) are built on our system (https://github.com/amark/gun) and are already pushing terabytes of P2P traffic.

And don't forget about WebTorrent and Secure Scuttlebutt!!!


Not sure if this fully addresses your use-case, but I like the idea of serving a static bootloader from IPFS. The bootloader would contain all of a website's assets, and code for getting dynamic content from a backend. The backend could be:

- A central API, where the bootloader can do arbitrary validation on the API responses (sketched after this list).

- WebTorrent, Scuttlebutt, IPFS PubSub, etc.
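One way that validation could look (a sketch, not a spec): the bootloader pins the site operator's public key and refuses API responses that don't carry a valid signature. Shown in Rust with ed25519-dalek purely for illustration; in a real bootloader this would be JS or WASM in the browser.

    use ed25519_dalek::{Signature, Verifier, VerifyingKey};

    /// Only render an API response if it was signed by the pinned key,
    /// so the central server can deliver data without being trusted to
    /// author it.
    fn response_is_valid(pinned: &VerifyingKey, body: &[u8], sig: &Signature) -> bool {
        pinned.verify(body, sig).is_ok()
    }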


Yes, that is already what the P2P Reddit does, but without IPFS (although this is a good idea!): it uses GUN as the "backend" (fully P2P/decentralized though), SEA for validation/security (no need for a central authority), and DAM for pubsub (no flooding problems like in libp2p), which can do WebRTC.

I'm sure people would love to see an IPFS version of a bootloader, instead of HTTP, that is a cool idea. Have a repo for it?


It's both similar and different in many ways :) But I think adding support for it alongside IPFS wouldn't be too much of a pain, as superficially they work in somewhat similar ways. Primarily Dat is targeted towards sharing/storage of research data, which I think would be a cool thing to spread support for.

It has a browser that allows people to easily make and navigate sites, called Beaker. It has some really interesting projects built around it, and as marknadal says, it supports changing the contents of locations etc.

I believe Dat works over Tor currently, which is interesting. However, finding info on successes/problems with the various P2P stacks and Tor is a bit hit-and-miss.


Kiwix has an archive of StackOverflow, but it's gigantic. :( We barely got the Math StackExchange uploaded in time.


Great Scott! 122GB. A weekly update of the local rpi would seem quite irresponsible - how would I use rsync with IPFS? (I could not find Wayne Davison's user id on HN; he seems to maintain rsync.) Are zim files optimized for rsync?


> And ultimately, how can I know for sure that CloudFlare won't play the game where as acting as a proxy they will modify some of the files served?

https://blog.cloudflare.com/e2e-integrity/


There are lots of scenarios where you would want to let an untrusted intermediary handle critical operations for you without actually having to build trust with them.

Typical example: certificate authorities distribute CRLs primarily through untrusted channels and third parties to lower their operating cost. The CRLs are signed, so it's a win-win for everybody.
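To illustrate why the channel doesn't need to be trusted: the relying party checks the CRL's signature against the issuer key it already has, so a hostile mirror can at worst withhold the CRL, not forge it. A sketch using the Rust openssl crate's bindings (the crate choice is my assumption; any X.509 library works):

    use openssl::x509::{X509, X509Crl};

    /// Verify that a CRL fetched from an untrusted mirror was really
    /// signed by the CA before acting on its revocations.
    fn crl_is_authentic(crl_pem: &[u8], issuer_cert_pem: &[u8]) -> bool {
        let crl = match X509Crl::from_pem(crl_pem) {
            Ok(c) => c,
            Err(_) => return false,
        };
        let issuer = match X509::from_pem(issuer_cert_pem) {
            Ok(c) => c,
            Err(_) => return false,
        };
        match issuer.public_key() {
            Ok(key) => crl.verify(&key).unwrap_or(false),
            Err(_) => false,
        }
    }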


Not for a while. DJB's curves have entirely too much structure, and I'd prefer to get something general-purpose working first.

I don't even like having to implement NIST's speedup for A = -3, and it's fairly trivial.
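(For context, that speedup is the identity 3*X^2 + a*Z^4 = 3*(X - Z^2)*(X + Z^2) when a = -3, which lets Jacobian-coordinate doubling trade two squarings for one multiplication. Trivial algebra, but one more special case in the doubling code.)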


About ElGamal: Agreed. I only planned to implement it for my own satisfaction, more than anything else.

The addition law only leaks special cases: doubling, negatives, one argument being zero. Is that not generally considered okay? Intuitively, it doesn't seem like terribly valuable information; in the case of the Montgomery ladder, the attacker already knows we do one doubling and one addition per bit.

Earlier, I was using Euler's theorem for inversions, which should have been closer to constant time than Euclid's algorithm. The only problem is, it took slightly over a minute to do one scalar multiplication. Could you describe an attack that would come from knowing information about the z-coordinate? Is there a good way to do constant-time inversion?
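(One standard approach, sketched over u64 rather than a real field size: inversion as a^(p-2) mod p by Fermat's little theorem. The exponent p-2 is public, so branching on its bits leaks nothing secret; the only remaining requirement is that the field multiplication itself be constant-time.)

    /// Invert a mod p (p prime) as a^(p-2). Fixed 64 iterations, no
    /// early exit; the branch tests only public bits of p-2.
    fn fermat_invert(a: u64, p: u64) -> u64 {
        let e = p - 2;
        let mut result = 1u64;
        let mut base = a % p;
        for i in 0..64 {
            if (e >> i) & 1 == 1 {
                result = mulmod(result, base, p);
            }
            base = mulmod(base, base, p);
        }
        result
    }

    /// Widening multiply then reduce; a real implementation would use a
    /// constant-time reduction for the chosen prime.
    fn mulmod(a: u64, b: u64, p: u64) -> u64 {
        ((a as u128 * b as u128) % p as u128) as u64
    }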

Edit: I just added a second benchmark testing scalar multiplication for a random value vs one with lots of zeros, and they produce different distributions with or without the #[const_time] (assuming I'm using it right). Thank you for bringing this up! I'll look into what needs tweaking.


I missed my edit window...

I actually take the above back. Scalar multiplication on a point IS constant-time (the two distributions are indistinguishable).

Field exponentiation isn't constant-time and I will work on that.


The way your scalar multiplication is performed leaves you open to two attacks:

- Scalar multiplication is variable-time, with the variation being correlated with the position of the most significant bit of the exponent (see https://github.com/Bren2010/ecc/blob/bd75261b6fe7839ddc751d6...). An attack like [1] on ECDSA seems plausible.

- The Montgomery ladder uses different code paths depending on whether the exponent bit is 0 or 1; this makes FLUSH+RELOAD attacks possible, as in [2]. (A branchless sketch follows the references below.)

[1] https://eprint.iacr.org/2011/232

[2] https://eprint.iacr.org/2014/140
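Both issues have the same standard fix: iterate over a fixed number of bits (so the top-bit position can't show up in the timing) and drive the ladder with a constant-time conditional swap instead of a branch (so FLUSH+RELOAD sees the same code path either way). A toy sketch, not your repo's code, shown for modular exponentiation since the ladder structure is identical for point operations:

    /// Branchless conditional swap: same instructions and memory
    /// accesses whether `bit` is 0 or 1.
    fn cswap(bit: u64, a: &mut u64, b: &mut u64) {
        let mask = bit.wrapping_neg(); // 0x0000... or 0xFFFF...
        let t = mask & (*a ^ *b);
        *a ^= t;
        *b ^= t;
    }

    /// Montgomery ladder with a fixed 64-bit trip count: one multiply
    /// and one square every iteration; the secret bit only drives
    /// cswap, never a code path.
    fn ladder_pow(g: u64, scalar: u64, p: u64) -> u64 {
        let (mut r0, mut r1) = (1u64, g % p);
        for i in (0..64).rev() {
            let bit = (scalar >> i) & 1;
            cswap(bit, &mut r0, &mut r1);
            r1 = mulmod(r0, r1, p);
            r0 = mulmod(r0, r0, p);
            cswap(bit, &mut r0, &mut r1);
        }
        r0
    }

    fn mulmod(a: u64, b: u64, p: u64) -> u64 {
        ((a as u128 * b as u128) % p as u128) as u64
    }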


Issue #1: Yes, it's supposed to be like that. The point is that any n-bit scalar takes the same amount of time as any other n-bit scalar.

Issue #2: Rust has explicitly taken all memory management away from me. There's nothing I can do about that.


Regardless of what seems to happen in practice, none of the BigUint operations (including comparison) are guaranteed to work in constant time. Since the implementation uses a Vec whose size depends on the size of the integer, this could easily have significant timing differences in other situations.
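The usual remedy, sketched here under the assumption of a 256-bit field (four u64 limbs); this is not what the repo currently does:

    /// A field element is always exactly four u64 limbs, regardless of
    /// its numeric value, so no Vec length or loop count can leak it.
    #[derive(Clone, Copy)]
    struct FieldElement([u64; 4]);

    impl FieldElement {
        /// Constant-time equality: accumulate limb differences and test
        /// once at the end instead of returning at the first mismatch.
        fn ct_eq(&self, other: &FieldElement) -> bool {
            let mut diff = 0u64;
            for i in 0..4 {
                diff |= self.0[i] ^ other.0[i];
            }
            diff == 0
        }
    }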


I've tested a scalar of about 50% 1s against a scalar with almost no 1s, measuring at nanosecond resolution, with 100 samples each.

The p-value of the 2-sample t-test was greater than 1%, so there's no evidence that one takes less time than the other. You can replicate that by playing with the benchmarks yourself. If you have any evidence to the contrary, I'd be happy to look at it.
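For anyone who wants to reproduce this without a stats package, the Welch's t-statistic over two timing samples is a few lines (this computes only the statistic; degrees of freedom and the p-value lookup are separate):

    /// Welch's t-statistic for two independent timing samples.
    fn welch_t(a: &[f64], b: &[f64]) -> f64 {
        let mean = |v: &[f64]| v.iter().sum::<f64>() / v.len() as f64;
        let var = |v: &[f64], m: f64| {
            v.iter().map(|x| (x - m) * (x - m)).sum::<f64>() / (v.len() as f64 - 1.0)
        };
        let (ma, mb) = (mean(a), mean(b));
        let (va, vb) = (var(a, ma), var(b, mb));
        (ma - mb) / (va / a.len() as f64 + vb / b.len() as f64).sqrt()
    }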

