
Now add syntax highlighting. In Markdown it's trivial (```language-name); in LaTeX you're going to have to define a style. LaTeX is more flexible, but that flexibility comes at a cost.


Maybe you should take a look here and disabuse yourself of that idea: https://en.wikibooks.org/wiki/LaTeX/Source_Code_Listings

Relevant bit:

    It supports the following programming languages:
    
    ABAP2,4, ACSL, Ada4, Algol4, Ant, Assembler2,4, Awk4, bash, Basic2,4, C#5, C++4, C4, Caml4, Clean, Cobol4, Comal, csh, Delphi, Eiffel, Elan, erlang, Euphoria, Fortran4, GCL, Gnuplot, Haskell, HTML, IDL4, inform, Java4, JVMIS, ksh, Lisp4, Logo, Lua2, make4, Mathematica1,4, Matlab, Mercury, MetaPost, Miranda, Mizar, ML, Modelica3, Modula-2, MuPAD, NASTRAN, Oberon-2, Objective C5 , OCL4, Octave, Oz, Pascal4, Perl, PHP, PL/I, Plasm, POV, Prolog, Promela, Python, R, Reduce, Rexx, RSL, Ruby, S4, SAS, Scilab, sh, SHELXL, Simula4, SQL, tcl4, TeX4, VBScript, Verilog, VHDL4, VRML4, XML, XSLT.
    
    For some of them, several dialects are supported. For more information, refer to the documentation that comes with the package, it should be within your distribution under the name listings-*.dvi.
    
    Notes
    
        1 It supports Mathematica code only if you are typing in plain text format. You can't include *.NB files \lstinputlisting{...} as you could with any other programming language, but Mathematica can export in a pretty-formatted LaTeX source.
        2 Specification of the dialect is mandatory for these languages (e.g. language={[x86masm]Assembler}).
        3 Modelica is supported via the dtsyntax package available here.
        4 For these languages, multiple dialects are supported. C, for example, has ANSI, Handel, Objective and Sharp. See p. 12 of the listings manual for an overview.
        5 Defined as a dialect of another language
If you want to get fancier and have Pygments on your system, you can use the minted package instead.
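
For concreteness, the plain listings route needs only a few lines of preamble to get colors. A minimal sketch using standard listings/xcolor options (the Python snippet is just an illustration, not tied to any real document):

    % Colored syntax highlighting with the listings package.
    \documentclass{article}
    \usepackage{xcolor}
    \usepackage{listings}
    \lstset{
      language=Python,                     % any language from the list above
      basicstyle=\ttfamily\small,
      keywordstyle=\color{blue}\bfseries,
      commentstyle=\color{gray}\itshape,
      stringstyle=\color{red},
      numbers=left,
      numberstyle=\tiny\color{gray}
    }
    \begin{document}
    \begin{lstlisting}
    def greet(name):
        # keywords, comments and strings each get their own color
        return "Hello, " + name
    \end{lstlisting}
    \end{document}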


Ah, it seems the situation is better than I remembered. I confused colored syntax highlighting with the default settings, which highlight using font weight, italics, etc.

I did try minted once, but never used it in practice. The dependency on Pygments makes documents more system-dependent, which is less than ideal.


You can of course redefine the default style if you don't like it, but that is no more work than doing the equivalent in CSS (I guess?) for whatever MD renderer one uses.

I agree that pygments adds a bunch of dependencies, which may be too much just to get some colored syntax highlighting.
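
For comparison, a minimal sketch of what the minted route looks like (the file name in the compile command is just a placeholder): the markup is shorter, but every machine that builds the document needs Pygments installed and the compiler run with shell-escape enabled.

    % Colored syntax highlighting via minted (delegates to Pygments).
    \documentclass{article}
    \usepackage{minted}
    \begin{document}
    \begin{minted}{python}
    def greet(name):
        # Pygments chooses the colors from its own style definitions
        return "Hello, " + name
    \end{minted}
    \end{document}
    % compile with: pdflatex -shell-escape document.tex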


The article seems to argue for a GDPR[1]-like equivalent in the US. It'll be interesting to see how it is enforced in the EU. If applied as intended, it could offer a more realistic alternative to the only other privacy-preserving option at the moment: not using Google/Facebook/etc. 'noyb'[2] is planning to help that along. I just hope we don't get another cookie-law like debacle.

[1]: https://en.wikipedia.org/wiki/General_Data_Protection_Regula... [2]: https://noyb.eu/


I'm curious if developments like this will mean future cars will not require lights at all. After all, the same communication required for an intelligent traffic intersection like this could just as well be used to exchange braking, lane-changing and position information.


I agree it would be great if other browsers implemented this as well, but you lose me at:

> The best thing Mozilla could do is convince Apple and Microsoft to give up their independent lackluster browser implementations and ship Firefox as the default instead.

Multiple independent implementations are of vital importance to the web. We've seen it with IE6 before, and having two browsers is cutting it too close.


The "independent implementations" aren't worth much if nobody uses them. Chrome already has a majority of the market share at ~60%, with Safari, its nearest competitor, trailing far behind at ~15%. Firefox is sitting at 9.3%. [0]

Combining IE/Edge, Safari, and Firefox would still leave Firefox at half of Chrome's market share, which is still a distinct disadvantage, but it's a much stronger fighting position than single-digit market share. Google and Facebook are not naive upstarts and it won't be easy to quash them, especially not when little old Mozilla is running on fumes compared to a couple of the best-capitalized companies on the planet.

Google and Facebook's interests are aligned as both base their business model on profiling and reselling data derived from user behavior, so it's unlikely that Google will feel inclined to implement desired user protections against that business model.

It's very important that we have strong competition representing a diversity of interests, especially when the dominant player's business model is just "slick spyware". Without a competitor in the same league, consumers don't really have an option.

[0] https://www.w3counter.com/globalstats.php?year=2017&month=11


From the article:

> Scripts are delayed only when added dynamically or as async. Tracking images are always delayed. This is legal according all HTML specifications and it’s assumed that well built sites will not be affected regarding functionality.

(emphasis mine)


I don't know about you, but I don't browse the World Wide Implements-The-Specification-Perfectly Web.


If you aren't complaining because it violates the spec, what are you complaining about? Is any change to any detail about how a browser works necessarily bad?


I understand the complaint. Let me put it this way: how would you feel if your ISP was delaying your connections to a subset of websites for a few seconds? It wouldn't violate any specs, as far as I know. But a lot of people have expressed the sentiment that they don't want middlemen messing with websites. It's not clear to me that Firefox qualifies as an exception to the rule, especially if this becomes something other browsers adopt.


It's not even a NN-type middleman issue for me, though that is exactly what's going on here. The bigger problem for me is causing regressions for users. On top of that, they're causing regressions just because they don't like X traffic, and they're not even exclusively affecting X traffic - they're affecting unrelated traffic too. It's just an incredibly arrogant, annoying, bad thing to do to users who never requested this to begin with.


Should only affect pre-broken code. It's like complaining that a compiler does something other than what you wanted with undefined behavior: I get that it's annoying, but maybe fix your code so it's not a problem?


This isn't even a broken code issue. This is a totally unnecessary functionality regression issue. Instead of just loading a page, they're waiting four seconds to load the page, because the page uses an asset on a domain they flag as a tracking domain.

This is like if the compiler generated loops with 4000ms sleeps because the app links a library the compiler thinks is annoying.

Technically the compiler never said it wouldn't add random sleeps into loops. It's totally in spec! What's the big deal?

Meanwhile, my app is slow now. Or in the case of some apps, actually broken for active use cases where it used to work fine. Which, again, is totally a regression by any QA standard.


> they're waiting four seconds to load the page

You make it sound like Firefox is just adding a wait for no reason.

The reality is that the page is asking Firefox to download dozens or hundreds of scripts [1]. Firefox needs to prioritize those loads somehow, because it generally doesn't want to open that many connections to the server in parallel. So it prioritizes the non-tracking bits over the tracking ones. If all the non-tracking bits are done loading, the trackers start loading at that point.

> This is like if the compiler generated loops with 4000ms sleeps

No, it's more like if your OS scheduler decided to prioritize some applications over others based on how much it thinks you care about them (e.g. based on whether they're showing any UI, or based on whether they're being detected as viruses by the virus scanner).

[1] For example, http://www.cnn.com/ shows 93 requests for scripts in the network panel in Firefox. If I enable tracking protection, that drops to 37 requests.

Or for another example, http://www.bbc.co.uk/news has 67 script requests and only 20 with tracking protection enabled.

Or for another example, https://www.nytimes.com/ has 150 script requests and only 40 with tracking protection enabled.


Much of this discussion is missing that the point is to _speed up_ page load/display. It is NOT like a compiler generating sleeps.

The bazillion tracking scripts loaded by pages is slowing down time to view/interaction on the page. Firefox is taking scripts that are _already_ being marked as loadable asynchronously/delayed, and delaying them until the page is otherwise loaded. That's it. It's not an arbitrary 'sleep', it's an attempt to prioritize UI responsiveness over tracking scripts.

To the extent it breaks or _slows down_ pages, that's an undesired side effect, not the goal. If it does that to a lot of pages, the feature won't be successful and will be rolled back, I bet.


Your app is only slow now if you are blocking its content and/or most basic usability on the loading of external trackers - a lame yet increasingly common practice that needs to stop.

According to the article, they're only delaying these resources when loaded dynamically or async - so developers should be able to "fix" this by loading tracking scripts synchronously, which is what they are effectively doing already if this new FF behavior causes any noticeable impact.

It's hard to feel much sympathy for devs who have _explicitly_ prioritized the sending of their users' info to external parties, over their sites being baseline usable.


I would go to my package manager and install a new ISP, of course /s


You should expect it, though.


I have both written software for standards-based protocols, and used software written to standards-based protocols, so no, I would not expect it.

Not breaking backwards compatibility for existing users is the golden rule of software support. When an unfortunate pull-requester attempts to break backwards compatibility with Linus Torvalds' software, he has some very choice language to complain about the practice. If they were attempting to break backwards compatibility just because they disliked some particular app or service or use case, he might even use foul language.

Fortunately, I am a very well behaved and good little HN user, so I will not repeat that language here. But imagine what Linus would have said, really loudly, with all capital letters. There. That's better.


Linux breaks backwards compatibility all the time, just not for userland programs. But if you are expecting your kernel module to be low-maintenance, you are in for a surprise...


The kernel has a clear definition of what will be backwards compatible, and what never will be. In-kernel interfaces are never stable, and kernel-to-userspace interfaces are very stable, with an ABI docs directory breaking out what is and isn't.

https://github.com/torvalds/linux/blob/e7aa8c2eb11ba69b1b690...

If the kernel's user interface just started blocking on open for 4000 milliseconds for no apparent reason, people would not be happy. Firefox expects users to demand that app writers edit, recompile, test, and ship them a new app to prevent the block. This is <insert lots of not very nice adjectives>.


This is an interesting example, because the kernel's open() interface does block for 4,000 ms all the time: under heavy swapping, or when the ext3 journal is full and some other app just called fsync().

Apps have to handle it. If they don't, because for example, they access the disk while also trying to be a display compositor, then they are simply broken. It does not matter if the kernel is usually fast enough. Because sometimes it isn't.


Do you know what the reason for this distinction is? On Windows, kernel-mode APIs seem to stay quite stable as well... there are exceptions, mostly on the device driver side because hardware tends to evolve (e.g. display/graphics drivers), but generic drivers (= kernel modules) generally seem to be able to rely on backwards compatibility too.


My understanding is that it's to intentionally discourage trying to keep things out of tree, where they will inevitably break in worse ways. It also makes the GPL enthusiasts happy, but I doubt that was Linus's big goal.


I see, thanks. I'd be curious as to why he feels it would "inevitably break in worse ways", seeing as how that's not really the case on other platforms.


Basically, if you're going to have drivers out of tree, the driver ABI has to be perfectly stable, which restricts internal refactoring. Otherwise things break - and this does happen elsewhere; lots of drivers for Windows XP don't work on 10.


Interesting approach. I hope more browser vendors will adopt it. It comes with an additional advantage: if a website still needs to function when tracking domains are delayed a second or so, they likely will function too with said domain completely disabled (e.g. using extensions). Sounds like a win from a user perspective.


Do you mean that browsers implementing this will force website owners to make their sites work with tracking domains delayed? This does seem like a nice side benefit.


Yes, that's my hope. For the income of website owners it is better if they all collectively block their site until the trackers are fully loaded. But if only a few do so, users might avoid their site, so individuals are still encouraged to support delayed loading.

I imagine Mozilla is in a similar situation: if too many websites block, they will have to disable the delay or start whitelisting trackers if they do not want to lose users. That is, unless other browsers follow their lead here.

Prisoner's dilemmas all around. It's going to be fun to see what we end up with. I'm optimistic, as I haven't heard complaints about this yet, though I wouldn't notice any difference myself since I'm already using NoScript.


I think it's not quite a normal prisoner's dilemma, because it's not boolean. The browser can be less aggressive initially, pushing against only the worst cases, then become more aggressive over time to make sites behave better.


With tracking domains disabled!? Website operators will be annoyed by the delays, so they have an incentive to make the display of their website independent from tracking, which then also should make the website work just fine if tracking is blocked completely.


He meant that no browser with a significant market share will implement this; the sites will remain broken, which will only be evident in Firefox. The side effect will be that people visiting those sites will switch to a different browser.


Given the OP's reply, I don't think that's what he meant so I would be careful about making statements on behalf of others.

Apparently Firefox is sitting at 6.1% market share right now[0]. Far behind Chrome and Safari but still quite significant.

[0] https://amosbbatto.wordpress.com/2017/11/21/mozilla-market-s...


Trouble is the number of popular websites that simply don't care how long the page takes to load; 30 seconds would be acceptable to many as long as they get their advertising dollars.

Users won't know who to blame. They might blame the site but they're just as likely to blame Firefox, Microsoft, their ISP or a virus. There needs to be a display saying "hey, this site is slow because it's tracking you in 15 different ways, would you like to disable this permanently?".


You're not thinking about this from a game theory perspective.

Slow down pages -> more users bounce -> fewer ad impressions -> less ad revenue.


> You're not thinking about this from a game theory perspective.

> Slow down pages -> more users bounce -> fewer ad impressions -> less ad revenue.

Game theory is the theory of how agents deal with each other. You left out the most crucial part of agents affecting each other!


Yet, many big, profitable websites take 10-30 seconds to load on even the fastest connections. We can reconcile the theory later, but the fact is, much of today’s internet doesn’t optimize for load time.


Another possible scenario is:

Slow down pages -> more users frustrated with Firefox -> smaller user share -> Mozilla closes and donates all its code to the Apache Foundation.


That would be great! Where do I sign up for this? I cannot wait for it to happen.


Or Eclipse, which gets the unloved projects.


how is this game theory?


And lower organic rankings; for PPC you get a low quality score on your ad, and that can really hurt your AdWords account.


> Trouble is the amount of popular websites that simply don't care how long the page takes to load...

They absolutely do care, though. The longer a page takes to load, the less time users will spend on the site, and the more likely users will give up and close the window.


I think "load" here is subjective as well eg a spectrum from time to first paint and a page being visible and/or interactive, up until completely loading all async and deferred content, some of which might only happen after the user crosses the fold.

I think there's a lot of interest in optimizing that initial time but less so for the full load.


I block ads, analytics and 3rd party fonts. Almost every site works just fine.


I do the same (uBlock Origin) and almost every site works, but there are still a few I need to whitelist because they do weird stuff, like triggering their own scripts to run from analytics callbacks. Airline and banking websites are a couple of common categories of sites I usually need to whitelist to work properly - that's not really an issue though, as they don't usually have 3rd party ads.


Yeah I have a clean backup browser for the few sites that don't work and I still want to use. I'm always reminded how CRAP the internet has become.


I block everything too and sites break so rarely that it takes me a while to link the breakage with blocking of trackers.


I do the same thing too, blocking all 3rd-party junk using uBlock Origin. As one specific example, I've had to let motortrendondemand.com fire off Omniture, because apparently their video player (through Kaltura) depends on it.

It's just lazy design, or possibly malicious, I'm not sure.


As I'm sure you're aware, many content creators that you are reading the work of fund their efforts via ads. I think this is a happy medium to attempt to improve performance of these pages without denying them compensation.


I used to feel bad about it, but it has become evident that any content creator still primarily funding themselves through third-party ads in 2017 is the digital equivalent of a streetwalker.

It's a low-class business model that is rife with disease. There's no barrier to entry-- literally anybody can do it, there is no filtering of who they'll do business with, and it's a race to the bottom to earn pennies at the expense of their integrity. Despite having actual content of value, they share a business model with a blog farm full of Markov-generated malware-laden shitposts.

Given the now-well-known threats to health and safety that third-party ads present, one does not have the right to get upset when clients insist on using protection when dealing with them.

Set up a Patreon for donations, set up affiliate links, or offer subscriber perks if ad blocking is cutting into revenue. Any of those options is far more lucrative, ethical and honest than whoring oneself out for the benefit of some shady ad firm that leaves the user with a nasty case of Bonzi.


Fine. Except figuring out a business model is the content creator's responsibility, not the user's. Business models that involve hostile 3rd-party ads/tracking are bad, and if your users are trying to evade those, you need to figure out something better. Blaming the user reaction won't help.


> As I'm sure you're aware, many content creators that you are reading the work of fund their efforts via ads.

Not our problem if they bet on a brittle business model and don't want to adapt. Even more: the web ecosystem was fun and had a lot more freedom before the invasion of ad-based companies. I don't see any problem with some culling of the politically correct parasites.


> As I'm sure you're aware, many content creators that you are reading the work of fund their efforts via ads.

I cannot make myself care about that, at all.

I am not the person responsible for finding a viable revenue model, that is the responsibility of the content creator. I refuse all ads and tracking, because I value my own attention span, my time, my bandwidth and my privacy.

I am not obligated to watch ads online, just as I am not obligated to stay on a TV channel and stay in the room while ads are running, just as I am not obligated to pay any attention at all to ad billboards.

As an aside, I support several high-quality content creators through Patreon, and several open source projects through donations.


Their failed business model is not our problem.

I think you are aware that ads are just the visible tip of the iceberg; the big part is the profiling, tracking and data collection going hand in hand with ads.

If ads were unintrusive and did not gobble up privacy, then I would not block them, as I would not mind them.

Excuse me but I have to go now, local girls want to give me a free ipad that I won by clicking the fart button after subscribing to the newsletter that was blocking the content of the page I finally chose not to read because fk it.


If the ads didn't feel abusive then I might feel bad. But they do.


I have an even better approach:

uBlock origin + noscript


uMatrix is also great. It does a lot of the same things NoScript does, but at a greater granularity. I still use NoScript for its XSS protection features but have the main JS-blocking feature disabled because it's redundant.


It is. It does. uMatrix is probably not for the average user out there, but personally, I won't go anywhere on the web without it these days.

Keep the blacklists up to date, and have scripting completely off by default. You get granular control on a wholly different level than offered by NoScript (which I really no longer see a need for).


>have scripting completely off by default

Does this mean you have the "scripts" column red and turn them on for individual sites?


That is what it means, yes. And click them on as needed: first local scripts, and then various external ones if that's not enough. It can be a bit of a hassle for a while, but once you get permanent rulesets established for sites you frequent and more or less trust, it works like a dream. I hardly ever see an ad, and various sniffer services (or "analytics") must somehow learn to live without a lot of my traffic data.


Not parent but probably yes. That's how I roll as well.


Interesting. I never got around to playing with uMatrix. Is it worth it, considering I have a boatload of configs for sites I use with NoScript?


You can convert NoScript configs to uMatrix with [1]uMatrix-Converter.

[1]: https://pro-domo.ddns.net/umatrix-converter


An even better approach is to block all ad-serving domains at the DNS layer. If you associate with my home access point and use my DNS cache, you don't get ads period, and you don't have to install an ad blocker or lift a finger.


Can you explain how you do that?


> Can you explain how you do that?

Probably Pi-Hole running on a Raspberry Pi. https://pi-hole.net


Yeah, not exactly Pi-Hole, but the same concept: dnsmasq running on the router, with a big list of ad-serving domains and hosts resolving to 0.0.0.0


Wow, nice and simple. Is that list available in some repository? I would like to implement that, since I run a dnsmasq-bearing router at my home border as well.


I do the same - my approach was to take some of the popular /etc/hosts files (like http://someonewhocares.org/hosts/zero) and sed them into dnsmasq format:

    # before:  0.0.0.0  evil.biz
    # after:   address=/evil.biz/0.0.0.0
    sed 's#^0\.0\.0\.0[[:space:]]*\([^:]*\)$#address=/\1/0.0.0.0#'
which goes in a file in /etc/dnsmasq.d/, with this line in /etc/dnsmasq.conf:

    conf-dir=/etc/dnsmasq.d
The results can be trimmed a lot, e.g. if you have rules for a.evil.biz and b.evil.biz, you can usually reduce those to a rule for just evil.biz. I wrote some scripts to help with this, which are now at https://petedeas.co.uk/dnsmasq/. I might write something up about the process later.


Here's a nice repo with a "starting point" for hostnames and domains to block: https://github.com/notracking/hosts-blocklists


I loathe ads and tracking. I run uBlock Origin, HTTPS Everywhere and Privacy Badger (the latter two are from the EFF).

I also run a dedicated pfSense machine (an old OptiPlex 755 with an old SSD) that I added a NIC to. All network traffic must physically flow through it (one NIC goes to the LAN, the other goes directly to the cable modem). It's running pfBlockerNG and DNSBL with a bunch of sources. It's amazing: I can watch YouTube videos streaming on my smart TV in the living room with 0 ads.


Most recently: November Workshop: Running the Pi-hole Network-wide Ad-blocker, and more | https://news.ycombinator.com/item?id=15608052

Related discussions (click the 'comments' links): https://hn.algolia.com/?query=netguard&sort=byDate&type=comm...


Is there much difference between uBlock Origin and Adblock Plus?


Yes, Adblock Plus lets website operators pay them money to be whitelisted, if the ads comply with the "Acceptable Ads" policy.


Adblock Plus is commercially operated and is paid by advertisers to get whitelisted, akin to an extortion/mafia business. It's an ad blocker that comes with tainted default settings. It used to significantly impact performance (not sure if this is still the case).

uBlock Origin is a one-man personal project to improve its author's own browsing experience, which he shared with the public and which has become the go-to reference. It has no whitelist or acceptable-ads policy, is a general-purpose blocker (it blocks trackers, remote fonts, etc.), and comes with sane default settings. It does not have the performance issues Adblock Plus has.

In short: ditch Adblock Plus and switch to uBlock Origin.

You'll find more details here: https://github.com/gorhill/uBlock


"Unexpected results: Adblock plus does not help with system resource utilisation"

https://twitter.com/adildean/status/936183316134416384


[flagged]


There's an opt-in "tracking protection" checkbox in Firefox you can click that doesn't load them at all.


Hmm, doesn't it just send a do-not-track signal? The tracker JS is still loaded, and honoring it is discretionary.


No, it's basically a content blocker like Adblock or noscript.


I'm convinced it's not. NoScript blocks scripts and XSS and does all kinds of things, while Mozilla intentionally removed the option to disable scripts in Firefox.

I do not know about Adblock, as I stopped using it a long time ago and now use uBlock Origin, a general-purpose blocker, but I certainly remember how Mozilla was vocal about never adding a blocker to Firefox, as this would be contrary to <insert BS PR> (actually their business model).


Well, yeah, "Tracking Protection", as it's called, is far from what NoScript does (which does a boatload more than just blocking JavaScript). But it is quite a lot like AdBlock Plus or uBlock Origin, except not focused on blocking ads and rather focused on blocking tracking scripts (but with how many ads contain tracking scripts, it is pretty much an ad blocker, too). If you know the extension Disconnect, Mozilla uses the blocking list from that.

Tracking Protection is default-enabled in Private Browsing, can be manually enabled in normal browsing.

The "BS PR" reason for them not necessarily wanting to block trackers/ads, is that webpage owners want to make money. If they don't have a way of making money off of Firefox, then they likely won't bother testing, fixing or even optimizing their webpage for Firefox. It would make it a lot more likely for webpages to be broken in Firefox.

Default-enabling Tracking Protection in Private Browsing already caused a huge outcry, and by now an entire new business seems to have developed around privacy-respecting porn ads, so that's always nice.


That is an available option as well. In preferences, under "Privacy & Security", set "Tracking Protection" to "Always".


This shouldn't be up to the browser. He even notes edge cases where this performs unexpectedly -- unacceptable.

If this is such an issue and can actually be a performance increase, then someone should release a script with the same functionality, or make it an opt-in setting.


They do not violate the relevant specification. They just implement it in a way that has not been done before with the user's convenience in mind.

That aside, your position is unrealistic: browsers regularly break non-spec conforming websites. They actually monitor such cases (telemetry) and try to work with popular websites to fix the issue before they ship the breaking update, but it's a tradeoff that is regularly made nonetheless.


While the actions were beneficial in this case, that argument would be more convincing if the spec itself weren't a constantly moving "Living standard" maintained by the exact same organisations that also develop the browsers. And even that spec is sometimes consciously broken in an "intervention".


It shouldn't be up to the browser to implement web standards? What should browsers be doing then?


Which web standard covers delaying scripts that were deliberately encoded into the page?

Why the downvotes and no explanation? I want to know if I missed something.


Per the standards, scheduling and prioritization of downloads is up to browsers.

The standard also defines that scripts that are not async have to be executed before later content can be parsed (because they can document.write()). They can still be loaded with any priority the browser wants; they just need to block observable DOM construction.


Thank you.


Full title: "I am Max Schrems, a privacy activist and founder of noyb.eu - European Center for Digital Rights. I successfully campaigned to stop Facebook's violations of EU privacy laws and had the EU Court of Justice invalidate the Safe Harbor agreement between the EU and the US. AMA!"

The AMA is currently ongoing.


Exactly. It's there to (very mildly) confuse you.


No, they're not in it for money. In the end (the foundation owns the corporation), Mozilla is a non-profit.

They need money to achieve their mission though: maintaining a browser (in a landscape of evolving security challenges, performance & web standards) and research (e.g. projects like Rust, Servo & pdf.js originated that way) is not cheap. And currently it mostly comes from search engine deals. If they cannot get a similar one, it all collapses.

I can see why they try to diversify their income. That said, I don't agree with the way they do it here.


"Non-profit" just means that shareholders don't profit. Nonprofits offer many opportunities for executives to personally profit.


If they want to diversify they can work on other projects making money instead of fucking up their current userbase.

IMO coding missile software to pay for those projects would be a lot more ethical than what they're doing. Yeah, I consider the ad industry and its privacy-crushing consequences worse than weapons that are almost never fired.


As using webhooks for git repositories is probably the most common use case, it would be really nice if the different APIs converged here. Right now everyone is inventing their own (and I'm in fact guilty of this myself for a side project).

