
Signal Foundation is a non-profit. It is my understanding this is a 0% interest loan, not an investment. I'm no expert, but after examining their 2019 balance sheet, it seems to me like they are well on their way to paying it back.

I guess the salaries are competitive in San Francisco. Spicy!

https://projects.propublica.org/nonprofits/organizations/824...


MATTHEW CHEN (SOFTWARE DEVELOPER) $585,320

MOXIE MARLINSPIKE (DIRECTOR/CEO OF SIGNAL MESSENGER) $575,275

SCOTT NONNENBERG (SOFTWARE DEVELOPER) $523,376

JOSH LUND (SOFTWARE DEVELOPER) $522,412

TREVOR PERRIN (SOFTWARE DEVELOPER) $514,986

MICHAEL KIRK (SOFTWARE DEVELOPER) $504,466

ARUNA HARDER (CHIEF OPERATING OFFICER) $315,733

-----------

Holy moly, more than half a million bucks each except for the COO.


It would be interesting if the networking model for the end targets could also be inverted, so that an agent (or something) on the end target makes an outbound connection to establish a reverse tunnel to the proxy, and user connections are then sent over that tunnel.

The use case I'm thinking of is for IoT or robotics, where you have devices you want to manage being deployed into remote networks that you don't have much control over. It's really helpful in this situation if devices make outbound connections only, so that network operators don't have to configure their firewalls to port forward or set up a VPN.
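
To sketch the shape of this (purely illustrative, not how Boundary works today; it uses hashicorp/yamux to multiplex user connections over the single outbound connection, and the addresses are made up):

    // agent.go -- runs on the end target. It dials OUT to the proxy,
    // then serves streams the proxy opens back over that connection.
    package main

    import (
        "io"
        "log"
        "net"

        "github.com/hashicorp/yamux"
    )

    func main() {
        // Outbound-only: the device initiates, so network operators never
        // have to port forward or set up a VPN.
        conn, err := net.Dial("tcp", "proxy.example.com:9000")
        if err != nil {
            log.Fatal(err)
        }

        // The agent takes the yamux "server" role: it accepts a new stream
        // each time the proxy has a user connection to forward.
        session, err := yamux.Server(conn, nil)
        if err != nil {
            log.Fatal(err)
        }

        for {
            stream, err := session.Accept()
            if err != nil {
                log.Fatal(err)
            }
            go func(s net.Conn) {
                defer s.Close()
                // Bridge the stream to the local service being managed.
                local, err := net.Dial("tcp", "127.0.0.1:22")
                if err != nil {
                    return
                }
                defer local.Close()
                go io.Copy(local, s)
                io.Copy(s, local)
            }(stream)
        }
    }

The proxy side would accept the agent's dial, wrap it with yamux.Client, and Open() a fresh stream for each user connection it wants to forward.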

Edit: clearer language


It seems like using WireGuard on the "end target" to automatically connect to (WireGuard on) the proxy would be an easy workaround.

I did basically the same thing years ago for remote console devices deployed inside various customer networks where I had little or no control over the network. At that time, I used OpenVPN to automatically connect back to our "VPN servers" -- providing access to the device even if it was behind two or three layers of NAT (which, unfortunately, wasn't uncommon!).
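
For anyone trying the WireGuard version of this, the piece that makes it work from behind NAT is PersistentKeepalive on the end target's peer config. A minimal sketch of the device side (keys, addresses, and hostname are all placeholders):

    # /etc/wireguard/wg0.conf on the end target
    [Interface]
    PrivateKey = <device-private-key>
    Address = 10.10.0.2/32

    [Peer]
    PublicKey = <proxy-public-key>
    Endpoint = proxy.example.com:51820
    AllowedIPs = 10.10.0.1/32
    # keep the NAT mapping open so the proxy can always reach back in
    PersistentKeepalive = 25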


Second this!

CloudFlare Access allows this, using the cloudflared daemon, which acts as a reverse proxy. It essentially means the endpoint can be closed off to incoming connections from the internet, and you don't need to maintain various firewall whitelists (and hope they don't go out of sync).
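
For anyone who hasn't tried it, the quick-tunnel flavor is a one-liner (named tunnels need a bit more setup; the local URL here is just an example):

    cloudflared tunnel --url http://localhost:8080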

Is something like this on the roadmap for Boundary?


Without committing to any specifics, I'll say that we are very aware of use-cases where a daemon on the end host can provide enhanced benefits.

As you can imagine we did quite a bit of research with our existing users/customers while working on the design of Boundary. One thing we heard almost universally was "please don't require us to install another agent on our boxes". So we decided to focus initially on transparent use cases that only require running additional nodes (Boundary controller/worker) without requiring additional software to be installed/secured/maintained on your end hosts.

> the endpoint can be closed off to incoming connections from the internet, and you don't need to maintain various firewall whitelists

If you think about this a bit differently, a Boundary worker is also acting as a reverse proxy gating access to your non-public network resources. You can definitely use Boundary right now to take resources you have on the public Internet, put them in a private-only subnet or security group, and then use a Boundary worker to gate access. It's simply a reverse proxy running on a different host rather than on the end host itself. You wouldn't _need_ to add a firewall to ensure that only Boundary workers can make incoming calls to the end hosts; that's just defense in depth.
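
For anyone who wants to try that flow, connecting through a worker in dev mode looks roughly like this (ttcp_1234567890 is the default dev-mode target ID from the docs; see the getting-started tutorial for the authenticate step):

    boundary connect ssh -target-id ttcp_1234567890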


That's really great introspection. I have the same tendency which has led to creating some really deeply unpleasant situations for myself. I hadn't thought about it quite in the same way until I read your comment, thank you for sharing this. I think I've just had a light blink on in my head.


In cases like the example mentioned, this is clearly a malicious entity. I don't agree that "get a subpoena" is the right response; some judgement should be applied to cases where someone is clearly using your service to do harm.


I'm pretty happy with "judgement" being a thing that courts do and not a thing unqualified call center drones do.


> some judgement should be applied to cases where someone is clearly using your service to do harm

This is Facebook's argument essentially. But who decides that it is "clearly" doing harm? Should Facebook have the power to just tear domains away from their owners at their sole discretion? Should Namecheap be deciding if they break their privacy contract (the entire WHOISGUARD product that they offer) because a domain sounds too close to another company's product? Why should Facebook (or Namecheap) have the power to soley make decisions on this manner? Why do they get to "play god"?

These types of copyright and trademark issues have a proper and appropriate channel for handling disputes. Facebook should be using the APPROPRIATE channels (i.e. the judicial system). The courts could issue a subpoena to Namecheap, and Namecheap can take the domain down, or hand over the information, or whatever a judge decides should be done. But a sworn judge is the one who should be making these decisions, not a private company. This is where Namecheap is right in its stance and Facebook is wrong. Facebook is big and has lots of money, but that doesn't allow it to circumvent the justice system.

We swear in judges to handle things like this. The judge can decide whether this is "clearly" a violation or not, and can also help decide the gray cases. The judge will look at the facts of each case individually, protecting Facebook's copyrights and trademarks while also protecting the rights of the citizen who owns the domain in question. The judge is the impartial authority who is trained and authorized to make these decisions.

Namecheap is doing it right, and this makes me very happy to be registering domains through them. I am happy that they don't buckle to the pressure of a big scary corporation. Facebook is once again proving that they are not a good internet citizen. Another reason the world would be better if they disappeared. Facebook isn't above the rest of us, or our governmental processes. The fact that they think they are is reason enough to never trust them with your data.


And in cases that aren't so clear? One of the three cited in the press release was instagrambusinesshelp.com - that doesn't sound remotely malicious to me.


Still a likely trademark violation. But yes, not something Namecheap should decide.


I recently read Billion Dollar Whale (https://www.amazon.com/Billion-Dollar-Whale-Fooled-Hollywood...), which I enjoyed. It seemed to me to be surprisingly easy to launder money through shell companies like the ones listed here. I would also speculate that a company with many name changes would be involved in some weird stuff.


The documentation for Kata seems fairly straightforward for a single-host install: install the Kata packages, modify the Docker daemon config to change the runtime, then use Docker the usual way.

https://github.com/kata-containers/documentation/blob/master...
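
Roughly, it's something like this (paths and names as in the linked docs; double-check against your distro's packaging):

    # /etc/docker/daemon.json -- register Kata as an additional runtime
    {
      "runtimes": {
        "kata-runtime": { "path": "/usr/bin/kata-runtime" }
      }
    }

    # restart dockerd, then run a container as a lightweight VM:
    docker run --runtime kata-runtime -it busybox sh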


The thing that stuck out to me about the presser was the quote from the VP of the Cloud and Enterprise Group, and that someone high up on the Azure team is joining the Linux Foundation board of directors. This indicates to me that it's really a desire to compete effectively with AWS and Google Cloud that drives this co-operation, rather than a desire to expand Linux. Nothing wrong with this, of course. Azure is a major cloud platform; personally, I think it's better for everyone if Linux runs well there.


I actually was thinking about that too, but automated analysis of video at that scale seems prohibitively expensive unless you are Google or Netflix or something. Serving tons of video through a CDN is one thing, but the compute requirements needed to analyze it all are another.

Not that I have thought too terribly deeply about this, but I have a suspicion that this feature is actually powered by additional metadata sent along with the video by the content provider. It seems logical that if you were to control a vast archive of rapidly growing, extremely similar looking content, you would want to tag just about everything you could about it so you could build product.

I predict there will shortly be some dark future for all of us where we'll be able to don our Facebook nightmare helmets, say a single word, and have a super focused stream of filth blasted directly into our brains like that one scene from Demolition Man. That's where this is all going, right? Gotta be.


This program can classify 1 hour of video in 36 seconds on my low end GTX 960 4GB.


And for PornHub's back catalog, this would still take longer than the heat death of the universe.


Not really. 1 hour of video in 36 seconds is 1,000 hours of video / hour of computation. Assuming you go with a cluster of higher-end graphics cards, you could pretty easily perform 100x better. That's 100,000 hours of video processed / hour of computation. I don't know the size of the pornhub back catalog, and I'm scared to search since I'm at work right now, but even if it's hundreds of millions of hours you could go through the whole thing in like 2 months tops.


Isn't 1 hour video in 36 seconds a 100x speedup instead of 1000? Agreed that it's definitely doable if they want.
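
To put rough numbers on it (the catalog and cluster sizes here are pure guesses): one card does 3,600 s / 36 s = 100 video-hours per wall-clock hour, so a 100-card cluster does ~10,000 video-hours/hour, and an assumed 100-million-hour catalog would then take ~10,000 hours, a bit over a year.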


It's been quite a few years since I was last an Exchange administrator, but I seem to recall that you could secure a distribution list such that only authorized members could send mail to it. Perhaps (hopefully) only a relatively small group at NHS are allowed to mail all 1.2M mailboxes, and within that group there are 120 very silly people.
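
If memory serves, it's a delivery restriction on the group, something like this in the Exchange Management Shell (the group and sender names here are invented):

    Set-DistributionGroup -Identity "All NHS Staff" -AcceptMessagesOnlyFromSendersOrMembers "corporate-comms"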


I don't have an iPhone, but with the Android client at least, you can disable image auto downloading in the settings page. If you hate fun and are dead inside.

