Aircraft Owners and Pilots Association | Engineering Leader

Location: Frederick, MD or REMOTE (US), ET timezone

AOPA is a nonprofit that supports and advocates for general aviation pilots through safety programs, training, advocacy, and digital platforms used by hundreds of thousands of members.

You would lead a team of 6–10 engineers and various consultants working on AOPA’s core digital platforms.

Stack: TypeScript, Swift, iOS, Astro, Cloudflare Workers, .NET, Azure

Looking for: former senior engineer turned engineering leader, strong people leadership with solid technical judgment, clear communicator aligned with business outcomes.

If you love general aviation and supporting the GA community, this role is a strong fit. Perks include free flight training.

Email alex.polvi@aopa.org if interested. I am a former YC founder (Cloudkick W09, CoreOS S13) and an active pilot.


All YC companies build technology; however, many (most?) are building technology for markets that traditionally have not used technology in a modern way: things like cleaning (HomeJoy), flower delivery (BloomThat), t-shirts (Teespring), etc. These technology-enabled businesses are the ones that primarily use web/mobile, and thus account for all the jobs.


Really impressed with their openness about the terms of the deal. He disclosed the valuation ($865mm post-money), dilution (7.5%, which implies roughly $65mm raised), and their balance sheet ($22mm). Very impressive terms, for that matter. Congrats on the round, and continued success with the business.


As far as I can tell, the current implementation of signed images hardcodes the Docker, Inc cert, effectively locking users into trusting only Docker, Inc's images.

https://github.com/docker/docker/blob/master/trust/trusts.go...

Ideally, Docker users could sign their own images and provide their own keys for signature validation. What is the timeline on this work? Without this, "digitally signed images" means "locked into Docker, Inc, otherwise no security", which is very misleading.

A very basic implementation would be to read certs out of a directory on the filesystem, which is how most other package managers handle this.
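
As a rough sketch of what that could look like (hypothetical: the directory path and the wiring into verification are mine, not Docker's), in Go:

  package main

  import (
      "crypto/x509"
      "fmt"
      "log"
      "os"
      "path/filepath"
  )

  // loadTrustDir builds a cert pool from every *.pem file in dir,
  // so users can supply their own trusted signing certs.
  func loadTrustDir(dir string) (*x509.CertPool, error) {
      pool := x509.NewCertPool()
      files, err := filepath.Glob(filepath.Join(dir, "*.pem"))
      if err != nil {
          return nil, err
      }
      for _, f := range files {
          pem, err := os.ReadFile(f)
          if err != nil {
              return nil, err
          }
          if !pool.AppendCertsFromPEM(pem) {
              return nil, fmt.Errorf("no valid certs in %s", f)
          }
      }
      return pool, nil
  }

  func main() {
      // /etc/docker/trust.d is a made-up path for illustration.
      pool, err := loadTrustDir("/etc/docker/trust.d")
      if err != nil {
          log.Fatal(err)
      }
      _ = pool // hand the pool to whatever does the signature verification
  }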

Edit: I missed the part in the post saying that even if the signature check fails, the container still runs, so for now the signatures do nothing. Got it: this is a preview.


Absolutely, the full implementation will allow each user to sign with their own keys, and provide user-configurable trust rules ("allow images only signed by this key"). The Docker CA will be used as a default convenience to provide a common namespace if you want it, but users who want to use their own custom PKI will have all the flags to do that, and there will be an "escape hatch" to opt out of the entire trust infrastructure altogether.

The only reason we're starting with verification-only, and only for images produced by the official library maintainers, is that the other side of the tooling (signing) is not yet ready to be merged into Docker. By releasing a subset now, we can start getting feedback and ironing out the quirks while the contributors finish their work on the signing tools, using the library maintainers as guinea pigs. Hope this helps.

PS: to state the obvious, all of this is taking place in the open in #docker-dev on Freenode. It is being designed by key contributors from multiple companies, and you are welcome to join the fun.



Sounds like a good time to use the equity equation:

  http://paulgraham.com/equity.html
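
In short (my paraphrase of the essay): trading away a fraction n of the company is only a win if it makes the company worth more than 1/(1 - n) times as much. As a worked check:

  % Equity equation, paraphrased from the linked essay.
  % V = company value before the trade, V' = value after;
  % giving up a fraction n is worth it iff you end up with more:
  (1 - n)\,V' > V \quad\Longleftrightarrow\quad V' > \frac{V}{1 - n}
  % e.g. n = 0.05: the deal must make the company more than
  % 1/0.95 \approx 1.053 times as valuable (a ~5.3% lift).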


Neat article. I think OP is trying to discern _how_ to map the friend's contributions to the outcome of the company.


MiniLock looks like a great option for introducing encryption to my non-technical friends. The alternative to MiniLock right now, for these users, is to do nothing.

Even if we do not like it, the current state of the art in file sharing (for most of the non-technical world) is an unencrypted email attachment. MiniLock looks like something I could install on my mother's (non-technical) computer so that I can send her a sensitive doc (a copy of my tax return, for example). This crypto system is sufficient for that use case, and the alternative is to do nothing at all. The alternatives are not GPG, or RSA, or whatever, because outside of the technical community people have no idea how to use these things.


Exactly! When it comes to crypto apps, I have noticed two kinds of criticism: "This software is not built for the threat models that interest me" and "This software fails to properly address the threat model it claims to". Too often, commenters will act as though their critique belongs in the second category when it really belongs in the first.

(It's great to question the design goals of a project! But that's very different from saying that a project fails to do what it says. In this case, MiniLock has very clearly accepted a threat model where, if the passphrase is compromised, that's the game. If you don't like that, don't use it!)


(CoreOS eng here) We no longer have a mandatory Chaos Monkey: since the beta release, the "Chaos Monkey" can be disabled via configuration. See "reboot-strategy:" in this post, which is the recommended way to do that:

http://coreos.com/blog/coreos-beta-release/
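
For reference, the knob is set in cloud-config; a minimal sketch from memory (the exact keys and values are documented in the linked post, and "off" here is the no-reboot setting as I recall it):

  #cloud-config
  coreos:
    update:
      reboot-strategy: off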


Thanks! I actually wrote most of this before the beta and missed that; I will update it.


Lots of people are playing with it... it is in a similar state to Docker, in that it is "pre-production" but people ignore that...


You'd actually set it all up as you would if everything were on your localhost or dev environment, meaning you'd need your master on 3306, slave1 on 3307, slave2 on 3308, etc. You'd still need to set up a configuration for each of these services, pointing them to one another, but the configuration would be fixed.


That seems sloppy when you could just stick to standard port assignments and use special-use addresses. If you'd like help getting special-use addresses, I'd be happy to help where I can.

At a minimum, hijack 255.255.255.255 instead.


OK. We'd love to get your help getting a special use address: alex.polvi@coreos.com


This is true if the application needs to talk to the slaves directly. If the application doesn't care, then you could put the smarts in the proxy layer underneath the application. I see three scenarios (a rough sketch follows the list):

* Stateless or transparent master/master backend

  Example: Memcached cluster

  Use load balancing in the proxy layer.

* Failover backend with failover on the server side

  Example: MySQL master/slave

  Use failover logic in the proxy layer.

* Failover backend with failover on the client side

  Example: HA RabbitMQ cluster

  The above suggestion from polvi is needed.
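
A minimal sketch of the proxy-layer failover idea from the first two cases (the addresses, ports, and retry policy are all made up for illustration), in Go:

  package main

  import (
      "io"
      "log"
      "net"
  )

  // Backends in preference order: master first, then slave.
  var backends = []string{"10.0.0.1:3306", "10.0.0.2:3306"}

  // dialFirstHealthy tries each backend in order and returns
  // the first connection that succeeds.
  func dialFirstHealthy() (net.Conn, error) {
      var lastErr error
      for _, addr := range backends {
          c, err := net.Dial("tcp", addr)
          if err == nil {
              return c, nil
          }
          lastErr = err
      }
      return nil, lastErr
  }

  func main() {
      // The application always talks to this fixed local address.
      ln, err := net.Listen("tcp", "127.0.0.1:3306")
      if err != nil {
          log.Fatal(err)
      }
      for {
          client, err := ln.Accept()
          if err != nil {
              continue
          }
          go func(c net.Conn) {
              defer c.Close()
              backend, err := dialFirstHealthy()
              if err != nil {
                  log.Println("no backend available:", err)
                  return
              }
              defer backend.Close()
              go io.Copy(backend, c) // client -> backend
              io.Copy(c, backend)    // backend -> client
          }(client)
      }
  }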


I don't know how much it helps, but you can just listen on other IPs on the loopback network, with no need to plumb anything, e.g.

  nc -l 127.0.0.2 9999

Then "all" you need is something to map each 127.0.0.n address to the corresponding Docker instance. This even works on Windows, by the way. Just start up that second Tomcat or whatever with the next 127.0.0.n address and enjoy.

