
> Transactional Updates : This is the main benefit. Updating multiple files can be done in one transaction. Isolation ensures that there are no broken webapps during the update.

Your server on its own, whether it uses SQLite or the filesystem, cannot save you from having broken webapps during an update. Each page in the browser is a _tree_ of resources, which are fetched with separate HTTP requests and therefore not subject to your server-side transaction/atomic update system. You can change all resources transactionally on the server side but the browser can still observe a combination of old and new resources.

The common solution here is to ensure that all sub-resources of a page (javascript bundle(s), stylesheet(s), media, etc) are named (ie, in the URL) using a content hash or version. All resources (from the root HTML document down) need to refer to the specific content-hash or versioned name, so that if the browser loads version X of the root HTML document then it will load the _corresponding version_ of all sub-resources. Not only that, but if you're updating a page from version X to version Y, all the version-X sub-resources need to remain available after you start serving page version Y, until you're confident that no browser could reasonably be still loading version X of the page, otherwise you can break page version X while someone is still loading it.

This means you actually specifically don't want to put sub-resources for a page into one atomically-switched-out bundle along with the root HTML document, because if you do that you'll be removing the previous versions of sub-resources while they may still be referenced.

Also of course in some cases there might be some sub-resources (e.g., perhaps some media files) that you want to be versioned "separately" from the HTML document that contains them, so that you can update them without busting caches for all of the page/app structural elements like your javascript blobs and stylesheets and so on, and you _might_ need to take that into account in your page build system as well.
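
For concreteness, here's a rough sketch of what a content-hashing build step can look like (the "src"/"dist" layout, file names and hash length are all made up for illustration; real bundlers do this for you):

    # Toy sketch of a content-hashing build step (no particular bundler); the
    # "src"/"dist" layout and file names are invented for illustration.
    import hashlib
    import pathlib
    import shutil

    SRC = pathlib.Path("src")    # hypothetical source directory
    OUT = pathlib.Path("dist")   # hypothetical output directory

    def hashed_name(path: pathlib.Path) -> str:
        digest = hashlib.sha256(path.read_bytes()).hexdigest()[:12]
        return f"{path.stem}.{digest}{path.suffix}"   # e.g. app.3f2a9c1b7d4e.js

    OUT.mkdir(exist_ok=True)
    renames = {}
    for asset in SRC.glob("*"):
        if asset.suffix in {".js", ".css", ".png"}:
            new = hashed_name(asset)
            shutil.copy(asset, OUT / new)   # old hashed versions are never overwritten
            renames[asset.name] = new

    # Rewrite the root HTML to reference the hashed names; the HTML itself can
    # then be swapped atomically while old hashed assets stay available.
    html = (SRC / "index.html").read_text()
    for old, new in renames.items():
        html = html.replace(old, new)
    (OUT / "index.html").write_text(html)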


Must it be a war? Must there be a winner?

I think it's good that the status quo got mixed up a bit. The introverts don't have to defend themselves or feel like outsiders quite so much.

This article aligns stay-at-home-ness with "fear," "a fettered life," "hardly worth living" and says "retreating ... is an ultimately selfish choice." I believe that's a bit of a poor take. Plenty of people live rich, productive, fulfilling and engaged lives that don't especially involve a lot of interactions with other people.

This author is clearly someone whose habits were impinged by the changes brought on by the pandemic ("... naturally outgoing people – this writer included – have found it that bit harder to get their friends out of the house."), but is that the end of the world?

It almost feels like the author is eager to get back to what they are comfortable with, at the expense of (by their numbers) 1/3 of other people's lifestyles. It's almost like they are the ones afraid of this change — like they are the selfish ones.

But I don't really go for the whole us-vs-them approach at all. It has been a great (if forced) learning experience. Some people got to discover happiness they didn't know before. Other people felt the loss of something they took for granted. Perhaps we should share these lessons with each other and bring some balance and increased awareness, rather than pointing fingers and taking sides.


Evolution is a satisficer, not an optimizer.

All major trophic level breakthroughs are powered by evolving a reserve of efficiency in which multi-step searching can occur.

Multicellular life, collaboration between species, mutualism, social behavior, communication, society, civilization, language and cognition are all breakthroughs that opened new feature spaces of exploration, but they required non-locally-optimal transitions by the involved systems FIRST to enable them.

Trust is expensive and can only be bought in the presence of a surplus of utility vs requirements.


The first approach (the 'It’s "obviously" the only way to go' one) is called an adjacency list.

The second (the 'vastly simpler method') I don't recall seeing before. It has some fairly obvious deficiencies, but it is clearly sufficient in some cases.

The third ('namespacing') is called a materialized path.

And there is at least another way to represent trees - nested sets: https://www.ibase.ru/files/articles/programming/dbmstrees/sq...

All of these were well-trodden back in the days when people took relational databases seriously. For example, see: http://www.dbazine.com/oracle/or-articles/tropashko4/

It seems this is lost knowledge now.
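
For concreteness, here's a rough sketch of the first and third representations side by side (table and column names are invented; sqlite3 is just for illustration):

    # Rough sketch of the adjacency list and materialized path representations
    # using sqlite3; table and column names are invented.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE node (
            id        INTEGER PRIMARY KEY,
            parent_id INTEGER REFERENCES node(id),  -- adjacency list
            path      TEXT,                         -- materialized path, e.g. '1/4/9/'
            name      TEXT
        );
        INSERT INTO node VALUES
            (1, NULL, '1/',     'root'),
            (4, 1,    '1/4/',   'child'),
            (9, 4,    '1/4/9/', 'grandchild');
    """)

    # Adjacency list: one level at a time (recursive CTEs handle deeper queries).
    print(db.execute("SELECT name FROM node WHERE parent_id = 1").fetchall())

    # Materialized path: all descendants of node 1 in a single LIKE query.
    print(db.execute(
        "SELECT name FROM node WHERE path LIKE '1/%' AND id != 1").fetchall())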


>To query the index, we are going to apply the same tokenizer and filters we used for indexing

From developing a Lucene-based search engine years ago, I learned a very useful trick for things like stemming and synonyms: don't use the same tokenizer/filters that you used for indexing. It's much more flexible if you instead expand the query terms, replacing a given term with a list of OR clauses with all the synonyms.

So during indexing, if the document has "fishing", then you just add that to the index.

Later, during querying, the user can search for "fish" with optional stemming enabled, and their query will get re-written into

"fish" OR "fishing" OR "fishy"

This way, the user has control at query-time, and you can manage your synonym/stemming database without having to rebuild your index.

Of course, I had to write a custom parser for this but it was well worth it imo.
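
A minimal sketch of that query-time expansion (the synonym/stemming table here is a toy placeholder, not a real linguistic resource):

    # Toy version of query-time expansion; EXPANSIONS is a placeholder, not a
    # real synonym/stemming database.
    EXPANSIONS = {
        "fish": ["fish", "fishing", "fishy"],
        "boat": ["boat", "boats", "vessel"],
    }

    def expand_query(terms, expand=True):
        """Rewrite each term into an OR-group of its variants."""
        clauses = []
        for term in terms:
            variants = EXPANSIONS.get(term, [term]) if expand else [term]
            clauses.append("(" + " OR ".join(f'"{v}"' for v in variants) + ")")
        return " AND ".join(clauses)

    print(expand_query(["fish"]))                # ("fish" OR "fishing" OR "fishy")
    print(expand_query(["fish"], expand=False))  # ("fish")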


Geoff Manaugh, author of this blog, has a fun book called "A Burglar's Guide to the City", discussing how city layouts influence the types of crimes committed there.

One example that I remember talks about the differences between Los Angeles and New York City: at one point in the 1990s, LA was the bank robbery capital of the world, averaging over 1 per day for a while, but bank robberies rarely happen in NYC. When you compare their layouts, it makes total sense: LA was built around the car. The pattern of "highway offramp, bank next to the road, highway onramp" was everywhere throughout LA. Robbing a bank in NYC would be just so much harder: parking is more difficult, traffic is slower, way more people around to identify you, etc.

It's a good read.


The article has some helpful points. But as a programmer-SaaS-founder who took over our ads operation, here are some insights we gleaned doing paid ads (and getting them to be profitable for us):

1. Most important tip: is your product ready for ads?

  - Do not do paid ads too early.

  - Do it once you know that your product is compelling to your target audience.

  - Ads are likely an expensive way of putting your product in front of an audience.

    - No matter how good the ad operation, unless your product can convince a user to stay and explore it further, you've just gifted money to Google/X/Meta whoever.

  - If you haven't already, consider that sometimes when you think you want ads, what you more likely and more urgently need is better SEO.
2. The quality of your ad is important, but your on-boarding flows are way more important still.

  - Most of the time, when we debugged why an ad wasn't showing conversions, rather than anything inherent to the ad, we found that it was the flows the user encountered _AFTER_ landing on the platform that made the performance suffer.

  - In some cases, it's quite trivial: e.g. one of our ads was performing poorly because the conversion criterion was a user login. And the login button ended up _slightly_ below the first 'fold' or view that a user saw. That tiny scroll we took for granted killed performance.
3. As a founder, learn the basics

  - This is not rocket science, no matter how complex an agency/ad expert may make it look.

  - There is some basic jargon that will be thrown around ('Target CPA', 'CPC', 'CTR', 'Impression share'); don't be intimidated

    - Take the time to dig into the details

    - They are not complicated and are worth your time especially as an early stage startup

  - Don't assume that your 'Ad expert' or 'Ad agency' has 'got this'.

    - At least early on, monitor the vital stats closely on weekly reviews

  - Ad agencies especially struggle with understanding nuances of your business. So make sure to help them in early days.
4. Targeting Awareness/Consideration/Conversion

  - Here I have to politely disagree with the article

  - Focus on conversion keywords exclusively to begin with!

  - These will give you low volume traffic, but the quality will likely be much higher

  - Conversion keywords are also a great way to lock down the basics of your ad operation before blowing money on broad match 'awareness' keywords

  - Most importantly, unless your competition is playing dirty and advertising on your branded keywords, don't do it.

    - Do NOT advertise on your own branded keywords, at least to begin with.

    - Most of the audience that used your brand keywords to get to your site are essentially just repeat users using your ad as the quickest navigation link. Yikes!
5. Plug the leaks, set tight spend limits

  - You'll find that while you're running ads, you are in a somewhat adversarial dance with the ads platform

  - Some caveats (also mentioned in the article)

    - Ad reps (mostly) give poor advice, sometimes in borderline bad faith. We quickly learnt to disregard most of what they say. (But be polite, they're trying to make a living and they don't work for you.)

    - (Also mentioned in the article) Do not accept any 'auto optimization' options from the ads platform. They mostly don't work.

  - Set tight limits on spends for EVERYTHING in the beginning. I cannot emphasize this enough. Start small and slowly and incrementally crank up numbers, whether it be spend limits per ad group, target CPA values, CPC values - whatever. Patience is a big virtue here

    - If you're running display ads, there are many more leaks to be plugged: disallow apps if you can (article mentions why), and disallow scammy sites that place ads strategically to get stray clicks.

    - For display ads, controlling 'placement' also helps a lot
6. Read up `r/PPC` on Reddit

  - Especially the old, well rated posts here. 

  - They're a gold mine of war stories from other people who got burnt doing PPC, whose mistakes you can avoid.

Know that although ExifTool is written in Perl, you can run it in "batch mode", which makes it quite fast: only a couple of ms to parse a file. I've written an open source library to manage the subprocesses for you if you're using node.js (and I also wrote the Ruby variant ages ago):

https://github.com/photostructure/exiftool-vendored.js
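
If it helps, here's roughly what driving that batch mode (ExifTool's -stay_open protocol) looks like from Python, so the Perl interpreter only starts once; "photo.jpg" is a hypothetical file name and error handling is omitted:

    # Rough sketch of ExifTool's -stay_open batch mode driven from Python.
    import subprocess

    proc = subprocess.Popen(
        ["exiftool", "-stay_open", "True", "-@", "-"],   # read argument lines from stdin
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )

    def read_metadata(path: str) -> str:
        # One command = argument lines followed by -execute; exiftool replies and
        # then prints a line containing {ready}.
        proc.stdin.write(f"-json\n{path}\n-execute\n")
        proc.stdin.flush()
        lines = []
        while True:
            line = proc.stdout.readline()
            if not line or line.startswith("{ready"):
                break
            lines.append(line)
        return "".join(lines)

    print(read_metadata("photo.jpg"))
    proc.stdin.write("-stay_open\nFalse\n")   # shut the background process down cleanly
    proc.stdin.flush()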


One of the very best development teams I worked with had an interesting take: they always did database migrations first. Any new state to be added to the system could only be introduced by first adding the new database fields or tables. This ensured that version 1 of the code would work with both version 1 and version 2 of the database. They would then roll out version 2 of the code, but with the new features hidden behind feature flags (in the database), ensuring that version 2 could run without using the new database schema. Once they were confident that everything was still running on version 2 of the code and database, they'd enable the new feature. Later the feature flag could be migrated from the database to a properties file, saving the database lookup.

I wouldn't necessarily call this approach simple, but it was incredibly safe and rollbacks were always a non-event.
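
A toy illustration of that ordering (schema first, flagged code second); the table, column and flag names are invented, and sqlite3 stands in for the real database:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("CREATE TABLE feature_flags (name TEXT PRIMARY KEY, enabled INTEGER)")
    db.execute("INSERT INTO users (id, name) VALUES (1, 'Ada')")

    # Step 1: migrate the schema. Version 1 of the code never touches the new
    # column, so it keeps working against database version 2.
    db.execute("ALTER TABLE users ADD COLUMN nickname TEXT")

    # Step 2: deploy version 2 of the code, with the feature off by default.
    db.execute("INSERT INTO feature_flags VALUES ('nicknames', 0)")

    def display_name(db, user_id):
        enabled = db.execute(
            "SELECT enabled FROM feature_flags WHERE name = 'nicknames'").fetchone()[0]
        if enabled:
            row = db.execute(
                "SELECT COALESCE(nickname, name) FROM users WHERE id = ?", (user_id,)).fetchone()
        else:
            row = db.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0]

    # Step 3: once both versions have proven stable, flip the flag.
    db.execute("UPDATE feature_flags SET enabled = 1 WHERE name = 'nicknames'")
    print(display_name(db, 1))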


I'm glad to see this here, for two reasons: (1) In general it's nice when people return to the primary sources rather than second-hand accounts, and (2) this particular topic is of interest to me; here are a couple of previous comments that were somewhat well-received on HN:

https://news.ycombinator.com/item?id=22221592

https://news.ycombinator.com/item?id=18699718

For further context you can look at past and future issues of Bentley's column (and its spinoff); a list of them I collected here: https://shreevatsa.net/post/programming-pearls/

I guess it's a long-standing tradition in literary reviews for reviewers to push their own ideas, rather than confining themselves solely to reviewing the work in question. That is what happened here. Knuth had written a program that he had been asked to write, to demonstrate the programming discipline. But McIlroy, as the inventor of Unix pipes and a representative of the Unix philosophy (at that time not well-known outside the few Unix strongholds: Bell Labs, Berkeley, etc), decided to point out (in addition to a good review of the program itself) the Unix idea that such special-purpose programs shouldn't be written in the first place; instead one must first accumulate a bunch of useful programs (such as those provided by Unix), with ways of composing them (such as Unix pipes). A while later, John Gilbert described this episode this way:

> Architecture may be a better metaphor than writing for an endeavor that closely mixes art, science, craft, and engineering. “Put up a house on a creek near a waterfall,” we say, and look at what each artisan does: The artist, Frank Lloyd Wright (or Don Knuth), designs Fallingwater, a building beautiful within its setting, comfortable as a dwelling, audacious in technique, and masterful in execution. Doug McIlroy, consummate engineer, disdains to practice architecture at all on such a pedestrian task; he hauls in the pieces of a prefabricated house and has the roof up that afternoon. (After all, his firm makes the best prefabs in the business.)

There are other points (not mentioned in this article), e.g. the fact that someone had to have written those Unix programs in the first place and writing them with literate programming can lead to better results, and the fact that Knuth's idea of using a trie (though not a packed/hash trie; that's no longer needed) still seems fastest: https://codegolf.stackexchange.com/questions/188133/bentleys... (please someone prove me wrong; I'd love to learn!)

Knuth gladly included McIlroy's review verbatim when he reprinted this paper in his collection Literate Programming. BTW here's a 1989 interview of McIlroy https://www.princeton.edu/~hos/mike/transcripts/mcilroy.htm where he looks back and calls Knuth's WEB “a beautiful idea” and “Really elegant”, and his review “a little unfair”, though of course he reiterates his main point.


Hi! I’m her great-great granddaughter. I actually met the thru-hiker mentioned in the comment above, Dixie, in October 21 and we still keep in touch!

I’m a hiker myself and plan on hiking the AT some day.


It's subjective, but I think a good metric to use is resistance. Whenever you feel resistance towards at task, that's work. Stuff you know you should do, but don't feel like doing.

A common example is textbook exercises in math or physics books. Most people don't mind passively watching lectures or YouTube videos on these topics, but they'll never do exercises that force them to think and produce something from scratch. They feel like work. Doing them is scary because they expose your weaknesses. It's easy to get the illusion of having understood something from watching a lecture when in reality you haven't. But when you talk to actual mathematicians and physicists, they will tell you the single most important thing you must be doing is these exercises.

Writing is another common example. Sitting down and writing a book, or a blog, is scary and feels like work. It exposes gaps in your own understanding and knowledge. People have an inherent resistance to this. They'll start, but then drop it and give up early. There are probably 100x more people who have "prepared and organized" a blog or started an outline of a book than people who have kept up the practice or finished a book.


I'm thinking a little bit of empathy doesn't hurt. Reason from Hollie's point of view. She didn't ask for this and was working on cool stuff:

https://holliemengert.com/

Next, somebody grabs her work (copyrighted by the clients she works for), without permission. Then goes on to try and create an AI version of her style. When confronted, the guy's like: "meh, ah well".

Doesn't matter if it's legal or not, it's careless and plain rude. Meanwhile, Hollie is quite cool-headed and reasonable about it. Not aggressive, not threatening to sue, just expressing civilized dislike, which is as reasonable as it gets.

Next, she gets to see her name on the orange site, reading things like "style is bad and too generic", a wide series of cold-hearted legal arguments and "get out of the way of progress".

How wonderful. Maybe consider that there's a human being on the other end? Here she is:

https://www.youtube.com/watch?v=XWiwZLJVwi4

A kind and creative soul, who apparently is now worth 2 hours of GPU time.

I too believe AI art is inevitable and cannot be stopped at this point. Doesn't mean we have to be so ruthless about it.


Firefox has a neat trick to search only in links. Press ' (single-quote) and at the bottom a text input field appears with the hint "Quick find (links only)". Typing two or three characters will focus the link in question; just press enter to navigate.

My email address is temporal at gmail.com. "Temporal" was my teenage gamer tag. It also turns out to mean "temporary" in Spanish. Ever since the Spanish-speaking world started using gmail, people have been signing up for stuff with my e-mail address every single day. Any new service I want to register an account with, I first have to hijack the existing account holding my address and delete it or change the email address.

But it gets worse!

Someone working at AT&T Mexico apparently decided to start entering my address as a placeholder when signing up customers that didn't have one. So I started getting phone bills -- with complete call histories -- for people all over Mexico. After several unsuccessful attempts to contact AT&T, I set up a filter to delete them.

Once a Spanish telecom did even worse, and populated seemingly their entire database with my address, so I'd get hundreds of phone bills all at once on the first of the month. I think they fixed it after two billing cycles.

Once a school in Chile made me an admin of their paid Zoom organization. I was actually unable to remove myself from their org or change the account's address, meaning I basically couldn't use Zoom until they removed me. (I'm unsure whether the school fixed it or Zoom fixed it after I made an angry tweet that went viral; whoever fixed it never bothered to follow up with me.)

The list goes on and on...

Wired even wrote an article about me. https://www.wired.com/story/misplaced-emails-took-over-inbox...

If you run a web service, PLEASE VERIFY ALL EMAIL ADDRESSES.

PS. Just now as I write this, someone in Spain scheduled an appointment for car service using my address. The e-mail contained a link to cancel the service, which I clicked. Oops.
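
For anyone wondering what the minimum looks like: a bare-bones double-opt-in sketch, where an address only counts once its owner clicks a link containing a random token (storage and the mail-sending call are stubbed out here, and the URL is made up):

    # Never trust an address until its owner clicks a link containing a random token.
    import secrets

    pending = {}      # token -> email; in reality a database table with an expiry
    verified = set()

    def start_signup(email: str) -> str:
        token = secrets.token_urlsafe(32)
        pending[token] = email
        # send_email(email, f"https://example.com/verify?token={token}")  # stub
        return token

    def verify(token: str) -> bool:
        email = pending.pop(token, None)
        if email is None:
            return False
        verified.add(email)
        return True

    t = start_signup("someone@example.com")
    assert verify(t) and not verify("wrong-token")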


One question I have is what does “correct” mean? If a DCI-P3 image is displayed on an sRGB monitor, is there a “correct” way to do it? I would think that there might be multiple valid ways to convert wide gamut colors into sRGB, and that it really depends on what constraints the viewer wants satisfied.

For example, if I want to preserve relative perceived differences between colors, maybe I’d map the boundary of DCI-P3 to the boundary of sRGB, and then interpolate all the interior colors (possibly with some non-linearities). This would allow people with sRGB monitors to see the WebKit logo in the article’s example, and it’s a perfectly reasonable and valid way to convert colors. However, this would make it so that converting from sRGB red to DCI-P3 and back returns a different color (less bright and/or saturated) than what you started with.

So a different and perfectly reasonable conversion would be to match the colors within the sRGB gamut perfectly at all times, and clamp colors outside the sRGB gamut to the sRGB boundary. Even when clamping, there are multiple valid choices: clamp to the nearest point (prioritize brightness), or move toward the center of the gamut and stop on the boundary intersection (prioritize hue). A third option would be to preserve some overlapping interior region of the two gamuts exactly, and start to smoothly blend between them near the boundary.
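
To make those two clamping strategies concrete, here's a rough sketch operating on linear RGB triples; the actual DCI-P3-to-linear-sRGB conversion is assumed to come from a color library, and the out-of-gamut sample values are made up:

    def clamp_per_channel(rgb):
        """Clamp each channel independently: nearest point, may shift hue."""
        return tuple(min(1.0, max(0.0, c)) for c in rgb)

    def clamp_toward_gray(rgb):
        """Move toward mid-gray and stop at the gamut boundary: preserves hue better."""
        gray = (0.5, 0.5, 0.5)
        # Largest t in [0, 1] such that gray + t*(rgb - gray) stays inside [0, 1]^3.
        t = 1.0
        for c, g in zip(rgb, gray):
            if c > 1.0:
                t = min(t, (1.0 - g) / (c - g))
            elif c < 0.0:
                t = min(t, (0.0 - g) / (c - g))
        return tuple(g + t * (c - g) for c, g in zip(rgb, gray))

    out_of_gamut = (1.09, -0.23, -0.15)   # hypothetical linear-sRGB values for a P3 red
    print(clamp_per_channel(out_of_gamut))
    print(clamp_toward_gray(out_of_gamut))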

What I think this means is that I don’t know whether I can expect to see the WebKit logo on an sRGB display. Since the browser doesn’t ask me what kind of conversion I’d like, it’s reasonable to assume it’s clamping when converting (in which case the logo is lost), and it’s also (perhaps) reasonable to expect color differences to get preserved (in which case I’d see the logo).

Probably the most correct thing to do would be to let either the developer and/or the viewer have a say in how the conversion should happen?

> you can’t just have the renderer do color management from a single source - sRGB - to a single target

I’m not sure I understand what you mean here, could you elaborate? What else needs to be involved? I would think knowing the source color space, and the target monitor’s nearest supported color space is exactly what you need, so if the source is DCI-P3 and the monitor only has sRGB, a sophisticated but well defined conversion is used, while if the monitor is DCI-P3 then the conversion is the identity function.


I've wondered about the exact time of the collapse.

Apparently there is a seismic observatory only tens of meters away from the telescope dish. The station code is AOPR [0].

After retrieving the mseed data from the web API using curl, and installing obspy from pip, I could plot the events. This is the result: [1]. You can see two "events": one at Nov 30, 23:12 UTC, and a second, larger one at Dec 1st, 11:52 UTC. Upon closer inspection, the first event is actually multiple smaller ones; I'm not sure if they are related to the collapse at all. The second event is clearly one discrete thing though. The image was uploaded to twitter at 11:56 UTC, so within minutes of the collapse being recorded on the seismograph. You can still see the dust in the air.

Pretty cool that all this data is out there, in the open, just a curl request away.

[0]: https://www.fdsn.org/networks/detail/PR/

[1]: Note on some of the screenshots I've preprocessed the data with a lowpass filter st.filter("lowpass", freq=0.1, corners=2) to make it look nicer, while in others I haven't. I'm just an interested person, not doing this on a scientific level. https://imgur.com/a/FjrbWWa
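
For anyone who wants to try this, here's roughly the same thing using obspy's FDSN client instead of raw curl; the HHZ channel code and the IRIS endpoint are assumptions on my part:

    from obspy import UTCDateTime
    from obspy.clients.fdsn import Client

    client = Client("IRIS")   # assuming the PR network is reachable through IRIS
    st = client.get_waveforms(
        network="PR", station="AOPR", location="*", channel="HHZ",
        starttime=UTCDateTime("2020-11-30T20:00:00"),
        endtime=UTCDateTime("2020-12-01T14:00:00"),
    )
    st.filter("lowpass", freq=0.1, corners=2)  # optional smoothing, as in [1]
    st.plot()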


I think a consistent multilingual Wikipedia is a fantastic goal.

But I'm not sure this is the right way to do it.

Given that most of the information on Wikipedia is "narrative", and doesn't consist of facts contained in Wikidata (e.g. a history article recounting a battle, or a movie article explaining the plot), the scope for this will be extremely limited. The creators are attempting to address this by actually encoding every single aspect of a movie's plot as facts, with sentences as functions that express those facts... but this seems entirely unwieldy and just too much work.

What I've wished for instead, for years, is actually an underlying "metalanguage" that expresses the vocabulary and grammatical concepts in all languages. Very loosely, think of an "intermediate" linguistic representation layer in Google Translate.

Obviously nobody can write in that directly in a user-friendly way. But what you could do is take English (or any language) text, do an automated translation into that intermediate representation, then ask the author or volunteers to identify all ambiguous language cases -- e.g. it would ask if "he signed" means made his signature, or communicated in sign language. It would also ask for things that would need clarification perhaps not in your own language but in other languages -- e.g. what noun does "it" refer to, so another language will know to use the masculine or feminine version. All of this can be done within your own language to produce an accurate language-agnostic "text".

Then, out of this intermediate canonical interpretation, every article on Wikipedia would be generated back out of it, in all languages, and perfectly accurately, because the output program isn't even ML, it's just a straight-up rule engine.

Interestingly, an English-language original might be output just a little bit differently, but in ways that don't change the meaning. Almost like a language "linter".

Anyways -- I think it would actually be doable. The key part is a "Google Translate"-type tool that does 99% of the work. It would need manual curation of the intermediate layer with a professional linguist from each language, as well as manually curated output rules (although those could be generated by ML as a first pass).

But something like that could fundamentally change communication. Imagine if any article you wanted to make available perfectly translated to anyone, you could do, just with the extra work of resolving all the ambiguities a translating program finds.


It is true that the LaTeX ecosystem as a whole is a mess of packages and macros. But most of its mathematical typesetting comes from the underlying TeX (and a set of macros maintained by the AMS), and it's fairly small and consistent. The Detexify you mention is only for looking up specific symbols provided by various fonts (packages), and has nothing to do with mathematical typesetting or LaTeX macros in general: TeX/LaTeX engines support Opentype fonts now and if you want to use one of them, you can just type ∞ instead of \infty or ℝ instead of \mathbb{R} (actually you can do this regardless with unicode-math), bypassing the need for looking up symbols-a4 or Detexify.
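For example, a minimal unicode-math setup looks like this (compile with LuaLaTeX or XeLaTeX; the font is just one example of an OpenType math font):

    \documentclass{article}
    \usepackage{unicode-math}
    \setmathfont{Latin Modern Math}
    \begin{document}
    Typed directly: $∞$, $ℝ$ and $α + β$ work without \verb|\infty| or \verb|\mathbb{R}|.
    \end{document}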

Encoding the aesthetics of good mathematical typesetting is not trivial, and Knuth and others have spent decades on it based on studying and absorbing all the tricks that hot-metal typesetters had come up with over centuries. It would be foolish to throw away all that hard-won knowledge and implement half-baked solutions from scratch: those working in the field understand this (though the original MathML proponents perhaps did not), which is why the linked post mentions “math rendering based on TeXbook’s appendix G”.

More generally, in this conversation (and in any discussion about MathML), several things get conflated:

1. What syntax the user types to get their mathematics. I think it's beyond dispute here that no one wants to type MathML by hand (and even the MathML advocates do not propose it AFAIK). Also, so many people are familiar with TeX/LaTeX syntax that it must be supported by any complete solution, though alternatives like AsciiMath or some interactive input are still worth exploring.

2. How the mathematics is actually encoded in the HTML file, or delivered to the browser. Frankly I don't think this one matters much because it's invisible to the user; any of raw TeX syntax, or MathML syntax, or fully-positioned HTML+CSS, or SVG, will probably do.

3. Who does the rendering and typesetting / layout. The promise/dream of MathML is that a standard will be specified and all browsers will implement it; though this is yet to become reality. Meanwhile, typesetting can already be done server-side (someone runs TeX/MathJax/KaTeX/etc before sending it to the browser) or client-side (MathJax/KaTeX running in the user's browser) instead of being done in the browser's native code.

4. The quality of the typesetting/the algorithms used. I already mentioned this in the second paragraph above so I won't reiterate it, but this has been mostly underestimated/ignored by those advocating MathML. The decisions made by TeX reflected the best journals of the early 20th century and have in turn become the shared aesthetics of the mathematical community; “so-so” typesetting will not do.

5. What the result/output of all this rendering/typesetting/layout will be, in the web page's DOM. This affects things like usability (being able to copy-paste), scaling/zooming, styling individual parts of formulas, etc. Again, already (La)TeX+dvisvgm supports SVG for this, and MathJax supports HTML+CSS, MathML or whatever. Anything other than raster (PNG etc) images is probably fine here.

The main new/useful thing I can see with MathML is with (3); the browser doing the typesetting. But that's hard, and it has a lot of other challenges to overcome too. And as MathJax/KaTeX/dvisvgm demonstrate, the facilities provided by the browser for layout (HTML+CSS for example) are already sufficient for print-quality typesetting.


Time is only a symptom of what's missing: causation.

ML operates with associative models of billions of parameters: trying to learn thermodynamics by parameterizing for every molecule in a billion images of them.

Animals operate with causal models of a very small number of parameters: these models richly describe how an intervention on one variable causes another to change. These models cannot be inferred from association (hence the last 500 years of science).

They require direct causal intervention in the environment to see how it changes (i.e., real learning), and a rich background of historical learning to interpret new observations. You need to have lived a human life to guess what a pedestrian is going to do.

If you overcome the relevant computational infinities to learn "strategy" you will still only do so in the narrow horizon of a highly regulated game where causation has been eliminated by construction (i.e., the space of all possible moves over the total horizon of the game can be known in an instant).

The state of all possible (past, current, future) configurations of a physical system cannot be computed -- it's an infinity computational statistics will never bridge.

The solution to self-driving cars will be to try and gamify the roads: robotize people so that machines can understand them. This is already happening on the internet: our behaviour is made more machine-like so it can be predicted. I'm sceptical real-world behaviour can be so constrained.


Yeah, I always make an example with coin flips to show how this is true... let's say heads is success and tails is failure.

Flip 100 coins. Take the ones that 'failed' (landed tails) and scold them. Flip them again. Half improved! Praise the ones that got heads the first time. Flip them again. Half got worse :(

Clearly, scolding is more effective than praising.
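
If you want to see it numerically, a quick simulation of exactly this story:

    import random

    random.seed(0)
    N = 100_000   # lots of coins so the fractions are stable

    first = [random.random() < 0.5 for _ in range(N)]    # True = heads = "success"
    second = [random.random() < 0.5 for _ in range(N)]

    scolded = [s for f, s in zip(first, second) if not f]   # failed, then "scolded"
    praised = [s for f, s in zip(first, second) if f]       # succeeded, then "praised"

    print("improved after scolding:", sum(scolded) / len(scolded))      # ~0.5
    print("got worse after praise: ", 1 - sum(praised) / len(praised))  # ~0.5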


Despite the advice to add enterprise pricing — advice which I’ve given in the past too — I think you’ve done exactly the right thing with your pricing.

Pricing is a form of bike-shedding. You can spend an infinite amount of time worrying about it, and there is no one right answer.

But one thing that a fixed price of $49/mo regardless of team size will do for you is put a hard cap on how much you will work on closing a single account.

Even assuming a very sticky product, by fixing the price you're guaranteeing a whale isn't going to show up in your inbox, promise the world if only features X, Y, Z existed, get you running in circles chasing that ghost, and then disappear without paying a dime, leaving you with a pile of code that adds no value to your true customer base.

This pattern is extremely common for small companies that dream of being the next Enterprise SaaS, and the big companies with high probability will never buy from you, but will have no problem dangling carrots that waste your extremely valuable cycles.

Fixing the price is a very overt way to maintain focus and declare that you are going to own your own feature set and build the product you want to build, take it or leave it.

As a sole proprietor this is the safest way for you to grow your company at this stage.

Ignore all pricing discussions and just focus like hell on features that increase your SAM (the number of customers who could actually use your product), your reach (the percentage of your SAM that know you exist) and your conversion (the percentage of people who know you exist that ever actually pay you money).

Also remember that when you start out, the customers you have are not always the customers you want. The low monthly price is a great way to say, tell me what you like and don’t like, but don’t expect me to move mountains to build something special for you.


> Can we please try to stop talking about this specific language ecosystem as an awful deplorable hell hole or whatever?

Back in the second century BC, Cato the Elder ended his speeches with the phrase 'Carthago delenda est,' which is to say, 'Carthage must be destroyed.' It didn't matter what the ostensible topic of the speech was: above all, Carthage must be destroyed.

My opinion towards JavaScript is much like Cato's towards Carthage: it must be rooted out, eliminated and destroyed entirely. I don't know if I'd go quite so far as to say that the fundamental challenge of mass computing is the final destruction of JavaScript — but I want to say it, even though it's false.

JavaScript is a pox, a disaster, a shame. It is the most embarrassingly bad thing to become popular in computing since Windows 3.1. Its one virtue (that it's on every client device) is outshone by its plethora of flaws in much the same way that a matchstick is outshone by the sun, the stars and the primordial energy of the Big Bang added together.

JavaScript is the XML, the Yugo, the Therac-25 of programming languages. The sheer amount of human effort which has been expended working around its fundamental flaws instead of advancing the development of mankind is astounding. The fact that people would take this paragon of wasted opportunity and use it on the server side, where there are so many better alternatives (to a first approximation, every other programming language ever used), is utterly appalling.

JavaScript delenda est.


An idea not often discussed around this quote is that tolerance isn't actually virtuous, and, is perhaps itself a racist/classist/otherwise-exclusionary idea.

The Reverend Doctor Martin Luther King Junior (in whose honor I had a day off of work today) never once called for 'tolerance'; the boycotts and sit-ins were not a demand for tolerance - they were a demand for integration. To tolerate is to "otherise" - you are _allowed_ to continue being as you are, but on the outside. To integrate, you do not require permission, but you do not continue as you are - both "sides" are transformed by the process.

I think that we should abandon the idea of tolerance as an inherent virtue and look for pathways for mutually acceptable integration, rather than building rigid islands of tolerance.


Dependencies (coupling) is an important concern to address, but it's only 1 of 4 criteria that I consider and it's not the most important one. I try to optimize my code around reducing state, coupling, complexity and code, in that order. I'm willing to add increased coupling if it makes my code more stateless. I'm willing to make it more complex if it reduces coupling. And I'm willing to duplicate code if it makes the code less complex. Only if it doesn't increase state, coupling or complexity do I dedup code.

The reason I put stateless code as the highest priority is it's the easiest to reason about. Stateless logic functions the same whether run normally, in parallel or distributed. It's the easiest to test, since it requires very little setup code. And it's the easiest to scale up, since you just run another copy of it. Once you introduce state, your life gets significantly harder.

I think the reason that novice programmers optimize around code reduction is that it's the easiest of the 4 to spot. The other 3 are much more subtle and subjective and so will require greater experience to spot. But learning those priorities, in that order, has made me a significantly better developer.
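
A toy contrast of what I mean by that first priority (the same logic in a stateful and a stateless shape; the stateless one needs no setup to test and is safe to run in parallel):

    class Tally:                       # stateful: result depends on call history
        def __init__(self):
            self.total = 0
        def add(self, x):
            self.total += x
            return self.total

    def tally(values):                 # stateless: output depends only on input
        return sum(values)

    t = Tally()
    t.add(2); t.add(3)
    assert t.total == 5                # have to replay the history to test it
    assert tally([2, 3]) == 5          # one call, no setup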


It's interesting, why do you think individual states in the US "giving up their sovereignty" by way of control on migration was successful, but doing so at any broader scale wouldn't be? That's 325M people -- they're not all in California and New York, are they?

How about within Europe? The Schengen area alone is 400M+, the EU is 500M+ and Europe is 750M+. The amount of complaining about freedom of movement is really quite limited in reality given the scope of the free movement area. They've even developed a program to smooth out bringing in new countries; read up on the accession process.

Why is it that going from 5M (free movement within, say, New Zealand) to 35M (free movement within Canada) to 325M (free movement within the US) to 500M (free movement within the EU) all work -- over 2 whole orders of magnitude -- but expanding it any further would turn the world into a post-apocalyptic hellscape? Why would 1 more order of magnitude cause apocalypse when that hasn't happened yet?

Moreover, why doesn't everyone in the free movement area in Europe pile into Switzerland? Plenty of people move from eastern Europe, but there are still, last I counted, 38M people in Poland. They're not all in the UK, Germany or France. Local language, local culture, local jobs, economy, opportunities, family -- these all tie you to a place.

If offered a permanent life in Switzerland, would you drop everything and go? Why not? Probably all those things I listed above. I'm an EU citizen. Through EFTA, I absolutely could, forever, without restriction live in Switzerland, and yet here I am, somehow not in Switzerland. Fascinating.

This sounds like tribalism to me.


The web was genuinely revolutionary and it still had a bubble. Just because something is a big deal doesn't mean people can't make an even bigger deal out of it.

You asked about why Python but it has a lot to do with JS too. But first lets compare against Python:

Ruby is a delightful language, but it's dominated by one niche (web development) and one framework (Rails).

Over the last 10-20 years Python has become the main teaching language in academia. This has greatly contributed to its ecosystem for math and science, which in turn has contributed to its rise in Data Science and Machine Learning. It also gets a lot of support from Google.

All of these things combined mean that the broad rise of Python is lifting up Python for web development, but if anything I'd say web dev in Python is lagging behind those other uses.

There's one more thing that I think is the root cause of why Rails - the framework - has faded somewhat in the minds of web devs. Rails has always been an opinionated framework, and early on they came up with "The Rails Way" to do front-end assets. The combination of Sass + CoffeeScript + Asset Pipeline with built-in concatenation, minification, and asset hashing was pretty neat compared to the old world of a bunch of es5 files loaded via script tags.

But this hit before the rise of NPM and being able to `require('foo')`.

The JS packaging wars went on for a while and none of those asset solutions are as simple and "just work" as the Rails Asset Pipeline -- but the asset pipeline doesn't hook you into NPM. Rails dragged its feet for a long time on that before adding webpack support.

So I think during that time you had a TON of the trendy hipster open source people moving into Node first on the front end, and then eventually losing interest in Rails because the framework wasn't making it easy for them to play in the very latest JS ecosystem.

And I think there quickly became some saltiness among Ruby devs about JavaScript, that people were overdoing it and that really you only needed "sprinkles" of JavaScript to make a great app.

So (1) Python has risen a ton overall, lifting Django, and (2) Rails was slow to embrace the JS craze and that held Rails back.

Those two trends pretty much tell the story as far as I can tell.


About 10 years ago I got the same urge to find or make maps to visually detail humanity's conditions. An ostentatious write-up of the idea survives here [1].

I imagined a wiki-like effort to crowdsource data that scores regions for 5 "Domains of Human Circumstance":

-------

Labour : opportunities, conditions, demands and rewards of productive endeavour

Food : calories, nutrition, contaminants, variety, taste

Material Environment : air & water quality, land quality (toxicity, civil engineering, architecture, ecology)

Social Environment : access to education, entertainment, arts, media, therapists, medicine

Private Environment (shelter, clothing) : allocation, facilities, privacy, preference, private architecture/furnishings, quantity

Physical Security : statistical hazard from criminal predation, military trauma, political/economic upheaval

-------

The category scores would be rendered in colored combinations on maps in a manner which importantly would not average out distributions, so that a region with many people living with low scores and others with high would not deceptively appear as equivalent to everyone with middle scores.

I've regretfully never made any progress on it.

[1] https://web.archive.org/web/20131211161318/http://pericosm.c...


Posted over a dozen times but with surprisingly few comments. The best I found:

https://news.ycombinator.com/item?id=6463466 (2013)

https://news.ycombinator.com/item?id=8648541 (2014)

https://news.ycombinator.com/item?id=15236856 (2017)

Don't miss this great link from that last thread: https://www.flickr.com/photos/ajstarks/sets/7215763147079887...

