You're awesome dang! Thanks for all you do. That things are up as much as they are is pretty great.
I, and probably a bunch of others, would be pretty interested in a write up of the challenges and how you solved them if you could find time after things settle.
I don't think it's "run by" Y Combinator. If I recall correctly, Y Combinator pays HN's costs in exchange for three things: recruitment posts, launch posts for new YC startups, and alumni seeing each other's usernames in orange.
The tech behind the scenes is pretty advanced though, from what I've read (at least compared to Reddit), along with apparently pretty good bot detection and such.
lol, it's not DNS (FB DNS is back), it's not coincidence, and it's not people coming to HN specifically to read about FB being down. It's the digital equivalent of the consumer-goods shortages resulting from changed behavior during the 2020 lockdowns: the whole consumer/attention-economy internet is creaking under the load, and it's kinda hilarious. https://downdetector.com/
I think it really is people coming to HN specifically to read about FB being down.
What kills me is that we'd be humming just fine if the software I've been working on for so-long-I'm-embarrassed-to-tell-you had been rolled out by now. At this point the only solutions are non-technical. A la the Serenity Prayer, and waiting till tomorrow.
Edit: it does seem to be better for community if things break occasionally, though. People can relate to sheepish devs under excessive load. Nobody relates to a perfectly oiled machine.
I have to say, I find it surprising that ycombinator hasn't hired another developer to work on HN. It's great that you're actually well-versed in the codebase, but you shouldn't be the lead moderator, lead customer support contact, and lead developer; that's nuts!
Heck, I can't tell—are you merely the "lead" on those things, or are you the sole person? You often say "we" when talking about HN, but I can't tell if that's a royal we or not.
I know that HN is meant to be low-resource, but if ycombinator thinks it has any value at all, they ought to be able to spare another developer. This is just basic organization management. Of course you haven't finished your software yet, you can't do everything!
How much do you think the long term value gained from HN has been impacted by the slow down today?
My bet would be not very much at all; I doubt many regular users would be deterred from coming back.
Plus there is no income stream to protect.
Thus justifying a non-lean approach seems a stretch. Generally things appear to work pretty well here and the odd day where things aren't perfect isn't the end of the world.
I agree it's not a significant problem if Hacker News goes down for a day. However, paginated threads really suck, and that's also a stop-gap pending performance improvements. And, it would generally be good if dang was freed up for e.g. moderating.
I can't seem to find a better comment to ask about this:
What do you think about the SAFE stack? (I know that the HN backend is written on a completely different stack.) Do you think it will gain some popularity and/or is a competitive advantage over something like a plain lamp/lemp stack? Or do you expect it to languish and die?
Really just curious what you think about it and would appreciate a reply if it isn't much trouble to you.
Just to clarify, HN is written in Arc, a Lisp variant, and (purely incidentally) runs on FreeBSD. As the sibling replier noted, SAFE appears to be hosting-provider-specific.
Also, I'm definitely not OP, but since I'm here, and just in case...
My own knee-jerk reaction goes straight to the value proposition of the cloud vendor it's tied to, which is a very odd/unusual dimension to bring to a programming stack (eg, LAMP isn't tied to Windows, or to GCP, it's entirely neutral). Cloud vs bare-metal vs VPS vs virtual hosting vs etc all have different cost dimensions and optimalities. Given that I can run PHP on App Engine, GCP, and (with a small amount of existential screaming) random Web hosts... PHP is canonically a bit more flexible than SAFE. (Then you've got the argument of "but you can technically run SAFE elsewhere" - to which I respond "okay make the A mean something else then!")
As for my generic opinion on the rest, it's all about what it brings to the table. If it introduces something genuinely new that adds a bit of "essence of 10x", if you will, if that new thing is tied to the rest of the stack, and the stack gets out of people's way to enable that 10x thing, then yeah, the stack'll follow after the new thing and catch on in the process.

Thing is, "getting out of the way" is the critical part that is so easily gotten wrong: if you're trying to show people new ways of doing things, and it just feels like how they were doing things before, it doesn't matter if they look different, the magic just falls apart, and the new thing won't catch on. IOW, in much the same way that successful existing products are a million tiny details gotten right, successful visions for new things are those same million tiny details from the perspective of the new thing that doesn't "exist" yet. Does this do that?
Secondly, regarding your curiosity about whether this thing will languish and die... for what reasons might that be especially important? For the sake of debate (I'm not presuming you're actually coming from that perspective, just asking "what if"), my answer is that the success or failure of a tool ultimately doesn't matter, because tools aren't ways of doing things, in much the same way that considering one operating system better than another mushes too much unrelated context together.

It's very good to build designs that excel because they are well thought out, but I've learned (without becoming cynical :D) that success and failure are ultimately memetic and arbitrary, and that bad things can succeed because they're in the right place at the right time. That's not because the fundamentals of the universe are broken or biased towards bad things succeeding, but because the technical discourse of "A is better than B" confuses the semantic priority of problems vs solutions: it typically doesn't consider (or provide room to consider) the almost-always-limited scope of the situations A and B might be used in (the problems), which may give no consideration to the majority of the ways A might be better than B (the solutions).

The fitness function isn't about All The Things™ the solution brings to the table; it's about whether the solution fits the problem at hand. As for "but what about the future?", I think that ultimately boils down to understanding the problem space well enough that all the things a current problem might need to tackle in the future become apparent in the present, so the solution can integrate those considerations from the get-go. To reiterate, it's about understanding the problem space correctly, not the solution space.
The above was written with more than a healthy dose of assumption, and I apologize if I've extrapolated nonexistent signal from your question.
> As the sibling replier noted, SAFE appears to be hosting-provider-specific.
> "okay make the A mean something else then!")
Yeah, the website really makes it seem like that. Personally, I fully agree that it shouldn't be vendor-bound at all (not even a mention). Along with that, I actually am somewhat using it on bare metal.
> As for my generic opinion on the rest, ...
For me, the big draw to it is that it is F# from end to end (for the most part), which in turn enables FP and type safety. This is, IMO, its 10x value proposition, similar to how Lisp and its dialects allow for much faster dev cycles. And IMO it does "get out of the way".
> ... Does this do that?
Maybe? I lack the experience to give a qualified opinion on that.
> Secondly, regarding your curiosity about whether this thing will languish and die... for what reasons might that be especially important?
I know that technical merit isn't really the biggest or even a big factor in adoption. I think of dang as almost uniquely qualified in regards to web stacks, as he also doesn't use one of the common ones. And so he may have a perspective on whether people would be likely to adopt it based on usability.
Because I know dang is busy, I added the part about replying if it isn't trouble for him.
Sorry for this badly worded/thought-out comment; I suck at English.
> Along with that, I actually am somewhat using it on bare metal.
Cool.
> For me, the big draw to it is that it is F# from end to end (for the most part), which in turn enables FP and type safety. This is, IMO, its 10x value proposition, similar to how Lisp and its dialects allow for much faster dev cycles. And IMO it does "get out of the way".
I've only played around with F# a very tiny amount, and the majority of my experience consists of noting that its VM warmup time is a tad slow. (I "grew up" on PHP at the CLI, so I'm used to 20-50µs of overhead... woops.) Once I solve the I Have Really Slow Hardware™ problem I expect I'll give F# a proper look (since then I'll have no excuses lol). I've been interested in functional programming for a while.
I was referring to the whole SAFE stack getting out of the way, being that I was criticizing SAFE's vision (albeit without being able to substantiate my critique with any domain-specific knowledge, haha). But if it's just a thin wrapper and a set of conventions, then maybe (I don't have the domain knowledge to discern myself) it actually does that well.
> I know that technical merit isn't really the biggest or even a big factor in adoption.
Sadly :(
> I think of dang as almost uniquely qualified in regards to web stacks, as he also doesn't use one of the common ones. And so he may have a perspective on whether people would be likely to adopt it based on usability.
Well now I'm curious too :P
> Sorry for this badly worded/thought-out comment; I suck at English.
FWIW, I've had both zero issues understanding your points AND zero instances of "...that {word/sentence/thing} doesn't belong there", so... :)
No one said anything about remaking the site! Just finishing up some server-side performance improvements in order to e.g. get rid of the More links in long comment threads.
I have no idea how much extra traffic you had to deal with today, but I'm always impressed with how gracefully HN seems to degrade under high load. There are increased page load times and frequent "Oops, we couldn't serve your request" messages, but the pages do load, and retrying page loads and comment posts actually works.
Reminds me back in the day of irc.netgoth.org.uk getting flooded into unusability with people asking the obvious question anytime livejournal went down.
Yes, we put it in /topic of the main channel. Yes, naturally, that reduced the flood by maybe 25%.
All of this has happened before. All of this will happen again.
Running in anonymous mode (clear browser cache) makes it responsive, but read-only.
Edit: I suggest using a private tab or two browsers or two profiles, one where you are logged out to read comments for speed, and if you wish to comment or vote then paste a comment url over to the other. Side effect: you improve responsiveness for everyone!
When logged in, the site shows you, for each entry (post/comment), whether you voted on it, so it has to hit a store with per-user data (also custom colors, your points, ...). When logged out, everyone gets the exact same data.
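This isn't HN's actual code (which is Arc), but the pattern being described can be sketched in a few lines of Python; the TTL, cache keys, and `render_fn` here are all made up for illustration:

```python
import time

CACHE_TTL = 30  # seconds; logged-out pages can be this stale

_cache = {}  # page_key -> (rendered_html, timestamp)

def render_page(page_key, user, render_fn):
    """Serve anonymous visitors from a short-lived cache; render
    per-user pages (votes shown, karma, colors, ...) on every hit."""
    if user is None:
        entry = _cache.get(page_key)
        if entry is not None and time.time() - entry[1] < CACHE_TTL:
            return entry[0]                   # cache hit: no rendering work
        html = render_fn(page_key, None)
        _cache[page_key] = (html, time.time())
        return html
    return render_fn(page_key, user)          # logged in: always personalized
```

That's why logging out helps during a spike: every anonymous request for the front page costs one dictionary lookup instead of a full render.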
It was my perception 10 years ago that HN was powered by a single threaded process with the entire post history loaded into memory (not sure how accurate this perception was). I am curious how much it has changed.
The original comment in the thread has that information, and it's nothing to write home about. Maybe it has improved since, but it's not like they're running a network of supercomputers.
It's just people flocking to HN for reliable news on the insane outage affecting global Facebook services, and refreshing far more than they usually would. Data point of one
"People look for alternatives and want to know more or discuss what’s going on. When Facebook became unreachable, we started seeing increased DNS queries to Twitter, Signal and other messaging and social media platforms." from Cloudflare via https://news.ycombinator.com/item?id=28752131
It's facebook outage rubber necking. Everyone who has been involved with keeping services online is like "oh heads are going to fucking roll today, let's go see what folks are saying happened". haha
> General tip: If HN is being laggy and you're determined you want to waste some time here, open it in a private window. HN works extremely quickly if it doesn't know who you are.
Saw this from a user by the name of bentcorner in the "Facebook-owned sites are down" thread[0]. Figured I'd repost it here. It works because HN can cache pages more efficiently when it doesn't have to include user data.
Can't tell, but logging out to get the cached pages did seem to help... so maybe it's a lot of people all logged in hitting the same database each click?
My guess is that with Facebook properties down, people are procrastinating here. Also, news.yc tends to be a good place to find timely updates by smart people on outages like this.
I rather suspect that it's because of the Facebook outage, but indirectly. HN is always slow when major tech services (Facebook, Google, AWS, GitHub, Slack, Fastly, Cloudflare, etc.) are down. The HN community seems to congregate here for updates, causing higher-than-usual traffic.
Apparently DNS services are getting hammered with much higher traffic than usual for Facebook domains (as FB DNS is down so millions of devices are continually retrying the lookup).
For twitter specifically, I also wouldn't be surprised if a non-trivial number of people have increased their use of twitter while facebook is down.
Either there is a massive number of HN users F5ing the homepage for the last few hours, HN is woefully unprepared for surges in traffic, or there is some kind of outage with their infrastructure.
hnstatus on twitter and hund.io show no outage news.
It'll be this, I'd bet money on it. Lots like me who don't visit every day are coming to HN for more info. This site probably only ever sees a tiny percentage of its registered users accessing at once; right now, it'll be a much larger percent.
IIRC HN runs on a single server. So they don't really put much effort into scaling to higher traffic levels. They don't really need to. Even now the site is available, if a little slow.
Telegram is buckling under this too. The only major comms for me not slowing down at the moment are Fastmail (email) and Matrix (we have the homeserver on our own ASN).
The underlying problem is that HN is near the limit of its hardware all the time, which could happen on one server or on 200 servers. And the reason it's near the limit is because it's single-threaded, I believe. Fix it to use multiple threads, and one centralized server will handle load spikes like this just fine.
Nginx is single-threaded. At least was when I last looked.
Node.js similarly is single-threaded. Both boast pretty impressive performance.
Being single-threaded is not a detriment to performance; rather, it most often results in faster software per core, due to the lack of synchronization overhead and the avoidance of the per-thread memory allocation (which can become a problem at scale; see the C10K problem).
Being single-threaded does not mean you can't do things concurrently. As long as you have a set of async primitives, starting with the venerable select(2), and a bit of syntactic sugar from the language (or just setjmp(3)), you can interleave handling of incoming and outgoing data in what is called "cooperative multitasking". There are many higher-level frameworks built on this principle, which is that context switches, hidden or visible, are voluntary only.

As HN seems to be written in Arc, which is implemented on top of Racket, the feature most likely used for this is either call/cc (or one of the higher-level abstractions built upon it, like generators), or, going past the "cooperative" territory, it could simply use threads: https://docs.racket-lang.org/reference/threads.html These are not "real" OS threads; only one instruction among all threads ever executes at once, but the context-switch points are not specified in the code being executed (giving preemptive multitasking, just without parallelism). If HN uses those, it can be called both "single-threaded" (the VM) and "multi-threaded" (the HN code) at the same time.
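To make the "voluntary context switches" idea concrete, here's a tiny illustrative sketch in Python, using generators as a stand-in for call/cc-style coroutines (HN's actual stack is Arc/Racket, so this is purely an analogy):

```python
def task(name, steps, log):
    """A cooperative task: each `yield` is a voluntary context switch."""
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield  # hand control back to the scheduler, at a point *we* chose

def run(tasks):
    """A tiny round-robin scheduler. Only one task ever runs at a time,
    on one OS thread, yet the tasks' work still interleaves."""
    queue = list(tasks)
    while queue:
        t = queue.pop(0)
        try:
            next(t)
            queue.append(t)   # not finished: requeue for another turn
        except StopIteration:
            pass              # task done

log = []
run([task("a", 2, log), task("b", 2, log)])
# log is now ["a:0", "b:0", "a:1", "b:1"]: interleaved, single-threaded
```

The key property is that no lock is ever needed: because switches happen only at `yield`, each task's critical sections are atomic for free.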
Being single-threaded also doesn't prevent you from using multiple cores to max out your processing power. In age-old tradition, you can just spawn off a few worker processes and call it a day. You might need some sort of inter-process communication, but there are many mechanisms to choose from. Nowadays on Linux you can use the listening socket as a simple round-robin scheduler for processes, free of charge, with no need for a separate proxy. The processes might be managed by an external supervisor, like systemd, or by the language runtime itself; Racket provides Places for this IIRC: https://docs.racket-lang.org/reference/places.html
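As a concrete sketch of the "listening socket as a scheduler" idea, here's minimal Python using SO_REUSEPORT (Linux 3.9+); each pre-forked worker would open its own socket like this and accept() independently, with the kernel distributing connections among them:

```python
import socket

def make_worker_socket(port):
    """Open a listening socket that shares `port` with sibling workers.
    With SO_REUSEPORT, the kernel distributes incoming connections
    across every socket bound to the same address/port, so each
    pre-forked worker process can simply accept() on its own,
    with no proxy or IPC in front."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen(16)
    return s
```

(The older alternative, binding once before fork() and letting children accept() on the inherited descriptor, works too; SO_REUSEPORT just avoids the thundering-herd behavior.)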
Note: I'm just guessing based on what I've heard about HN implementation and my knowledge of Racket which may, or may not, be relevant at this point.
Anyway: rewriting the HN backend to "use multiple OS-level, preemptively scheduled threads" (which you wanted to write but contracted to "multiple threads"), would not result in per-core performance increase, it's actually more probable to cause the exact opposite. I'd guess that the backend is already optimized in terms of algorithms and data storage, so the only recourse now would be to rewrite the whole thing in (or compile down to) a language that tends to perform better than what Racket VM can offer. Given that HN is already a weekend project of a hacker (I mean, written in a Lisp? A Lisp implemented on (former) Scheme...), I'd probably give Nginx + LuaJIT a whirl, or maybe one of the BEAM languages (there's a Lisp over there, too, so another fun weekend for porting Arc to LFE) if manual control over distribution over multiple nodes is desirable (but it would be a total time-sink, while things like Kubernetes (I'm told) mostly work well enough... though K8s is a time-sink in its own right...)
Dang mentioned in another comment that the GC is too slow. In that case, replacing the GC implementation could help, if possible. It's also possible to write code that avoids as many allocations as possible, but that's hard, and the GC can still kick in at bad moments. In some cases, simply disabling the GC altogether and restarting the process periodically can give better, or at least more predictable, performance. This is indeed a troubling problem to have, but it's completely orthogonal to single- or multi-threadedness. (Other than that multi-threading tends to allocate more memory for the same workload.)
So yeah, that was a tangent, but your statement was very wrong, yet very common, so I thought I'd do my duty as a good techie and shed some light on the matter.
That's a lot of words to complain that I didn't explicitly say "and single-process".
My statement isn't "very wrong" at all. If you actually have one single thread, total, that's a harsh limit on how much you can do. And I never implied it wasn't concurrent within that thread.
That's still wrong: scheduling and synchronization take time, especially synchronization, which both has an overhead and introduces uncertainty about when a lock will be released. As I said, multi-threading results in worse per-core performance, and depending on your data it might actually perform worse even on multiple cores (sometimes; most often it offers a moderate speed-up, not what you'd expect).
You also presented the threadedness as a cure-all solution to performance problems, which it is not. That's wrong.
You also never implied that you meant parallelism in general as opposed to just threads. But yeah, in the case that you can't use any kind of parallelism, you're kind of screwed. The problem is that threads are one of the worst ways of introducing parallelism.
> That's still wrong: scheduling and synchronisation take time, especially synchronization, which has both an overhead and introduces uncertainty in when the lock will be released.
No matter your method, you need some kind of scheduling and synchronization if you want to go beyond 5 billion CPU cycles per second on a multi-user site.
> You also presented the threadedness as a cure-all solution to performance problems, which it is not. That's wrong.
No I didn't. But having more than one core in use is the only way to get over a certain threshold, no matter how good your code is.
> The problem is that threads are one of the worst ways of introducing parallelism.
Like I said, it would have been better if I was clearer that I mean completely non-parallel code. But parallelism was my point. Everything you're saying about threads in a single process vs. multiple processes is not me being wrong, because it wasn't my argument.
Or to put it another way: Yes threads in a single process are on the bad end of parallelism. But they're also the easiest. If you see a statement that code can't even do that, then that should already imply that you can't run multiple processes on the same website instance.
Ok, let's chalk it up as a miscommunication for the most part, I'm sorry for calling your comment "very wrong". We can agree that it was incomplete/imprecise, right? It got explained, so let's move on :) But, I take issue with one more bit here:
> But they're also the easiest.
I don't believe that's true. Threads are the easiest to work with if you have a lot of shared, mutable state kept within the address space of a single process and, crucially, cannot do anything about it. I don't know if that's the case for the HN backend, but it rarely is in practice. Then there's the problem of multiple different threading implementations out there, and also the issue of how the threading mechanisms interact with the runtime (Python is multithreaded, but has a GIL). Frankly, it's a minefield on all sides, and implementing a system that is multithreaded, performant, and correct at the same time is the exact opposite of what I'd call "easy". That's also the reason why I disagree here:
> If you see a statement that code can't even do that, then that should already imply that you can't run multiple processes
To me, threads are not the first thing that comes to mind when I think about parallelism. Being unable to leverage threads, to me, is a good thing: just let this sleeping can of worms lie. As you know, there are multiple (many) ways of achieving parallelism, and they differ very much: I don't think being unable to leverage one of them says anything about the ability to use any of the others. That may be just me, though, so let's just say I have a different impression here and call it a day :)
Sure, very little to disagree with, and moving on is fine, but I do want to make one small note: the reason I say threads are easier is that in almost every case you can take a multi-process model and turn it into an efficient multi-thread model with minimal effort. Threads give you more tools, and a lot of those tools are slow. And if a type of parallelism won't work with threads, it very likely won't work with anything else.
Seems like the opposite lesson to me - Telegram is kind of iffy, not sure about the others. The only thing that's been really solid today is Google. So I guess depend on multiple enormous centralized services?
This is definitely the correct take. I fall into the 18-24 demographic and I don't think I've used Facebook at all in the past 3-4 years. Neither does anyone else I know who is my age.
This is essentially a reverse network effect: because so few young people are on the platform, few other young people see a reason to use it.
DNS lookups are certainly slower for random sites. I just opened an essentially zero-traffic site that I hadn't gone to in a while (so DNS not cached). It took a while (10 seconds?) to load the first time; insta-load the second time.
This seems to happen for .com but not other TLDs so maybe .com is getting hammered?
Most Facebook users don't know what HN is, but most HN users have Facebook accounts and will migrate here for the dopamine-inducing refresh clicks (don't let them try to claim otherwise ;) ).
If most Facebook users knew what HN was and congregated here when FB was down, the site would be utterly crushed.
Yes, that's normal. If you're not logged in, HN serves you a cached version that is a couple dozen seconds old. That's super fast. If you're logged in, HN has to generate the page based on the previous actions and properties of your account (e.g. upvotes, favourites, etc.) with every HTTP request.
It actually feels like a periodic DNS outage. On Friday some of my servers weren't responding like they should; half an hour later everything was fine again, though some VMs running in Azure were still having problems. And today, this huge FB/WA/Instagram outage.
I bet we will read more about this in the news soon.
I can't load user profiles at the moment, and some pageloads are just spinning wheels. Not sure if that is related to network issues or if just a lot of us are rushing to HN to check/comment
EDIT: We're having some trouble serving your request. Sorry!
So...maybe HN has dependencies on FB (such as tracking pixel, etc.)...and with their outage, timeouts kill things for HN? Not blaming HN, just wondering if second order diminished experiences start happening because of FB's heavy gravitational pull?
I've always found it interesting that this fairly influential site rarely changes and hasn't kept up with modern internet standards.
I get that it's an ambiguously useful project for a venture capital firm and all, but you would think that they would throw it a bone or two given the level of influence it has.
It also seems like they're trying to make some kind of point about dead simple web design and running a web server off of old laptops or something. I'm not sure, but there's definitely some combination of neglect and hubris going on with this site.
Like increasingly long page load times, pervasive use of JS for even the simplest things, and throwing up a million banners before you can read anything?
I, for one, am very happy that HN is, for the most part, just plain old HTML that loads instantly and just works 100% of the time (when the servers can handle the load, that is).
I'm 90% with you, but I'd love them to do something to stop it being so easy to "fat finger" the hide link directly above main links when on mobile - just a tiny bit more space would work wonders.
> The design is perfect. It’s information dense while being really clear and easy to use. There is absolutely nothing wrong with the design.
I'd argue that unformatted quotes are a pretty clear design opportunity, especially since code blocks are misused for quoting often enough that mobile use is regularly impacted.
It is hard to see at first, but the lack of design and features is itself a feature, and perhaps crucial to what keeps HN chugging along. UX emerges not only from what you add, but also what you do not.
For what it's worth, my own desire to see if I can help out somehow has definitely only increased over the past few months as I've noticed the messages at the tops of threads about pagination. I don't have any particular skills that would make me a hand-in-glove fit (eg, I'd need to learn Arc...), but surely there are tons of little boring things I could usefully do that would take the heat off. If I'm thinking this way, surely there are (many?) others...?
For example, something tiny that I'd love to fix is to split vote() in half and move the UI update logic into an Image.onload handler, so that the UI only changes if the upvote absolutely committed. I'm OCD about this because I actually use voting as bookmarking, and recently realized that some unknown percentage of my upvotes/bookmarks have sadly been lost because the auth= value had expired (eg in the case of a weeks-old tab) or because I was on bad 4G on my phone, but the UI state has no bearing on the network response.
Something else that I'd be very happy to put effort into, after being told how to approach the problem :), would be to make the site screen-reader-accessible: https://www.youtube.com/watch?v=G1r55efei5c&t=386s (size-XL volume warning - unfortunately the speaker volume is like 5% of the screen reader volume).
A longer-term project that I also think would also be very useful would be to implement OAuth2, so that users would be able to safely attach external logins to their accounts without needing to supply their actual user passwords (which thankfully none of the alternate UIs have tried to do AFAIK). This could support fine-grained scopes like "can see own comment votes", "can vote", "can post replies", etc. IMHO the best way to do this would be to have a central pool of manually-approved app registrations; this is definitely the most complicated approach :/, but it means the entire system would depend on a human who could go "...that OAuth2 app in particular is behaving weird. [pause]" which would be very tricky to achieve with an autonomous system (where everyone independently creates their own tokens that have no semantic value). This sort of thing utterly fails at large scale (see also: Google, YouTube, etc), but I think it would be perfect for HN. While implementing this it would also make sense to support 2FA using TOTP.
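Purely as a sketch of what fine-grained scopes could look like (the scope names here are invented for illustration; HN has no such API today), the server-side check itself is simple:

```python
# Hypothetical scope names, invented for illustration only.
KNOWN_SCOPES = {"read:own_votes", "write:vote", "write:reply"}

def check_scope(token_scopes, required):
    """Return True iff the token's space-separated scope string grants
    `required`; reject tokens carrying scopes that were never issued."""
    granted = set(token_scopes.split())
    if not granted <= KNOWN_SCOPES:
        raise ValueError("unknown scope in token")
    return required in granted
```

All of the real complexity lives elsewhere: token issuance, the manual app-approval pool, and revocation, which is exactly why the human-in-the-loop design matters more than this check.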
A while back I read that one of the reasons the site was closed was to keep the voting logic private. The chances are there are probably a bunch of other things (like maybe the spam detection/handling systems, and maybe the moderation tools) would be similarly classified as (for want of a better word) "sensitive". Well... it could be very, very interesting to split the codebase in half, with all the sensitive stuff in one corner, and the remainder of the codebase capable of running locally without it. Maybe you've already considered this, and consider it nonviable :(
(NB. The reason for the Rube Goldberg OAuth2 architecture I suggested was precisely to make it that much harder for people to register throwaway/bot accounts etc, keeping in mind the voting logic thing. Couldn't figure out how to reword the above two paragraphs to resolve the info dependency :) so put this here instead. lol)
IIUC, there is a very small pool of enthusiasts around the K programming language (https://kx.com) who privately study the source code of Kdb, and I understand that Arthur Whitney et al. are actually open to newcomers taking an interest in the project. (I'm sure I saw a comment mentioning as much a while back, possibly by geocar, but I don't seem to be able to find it; I might've read it elsewhere.) At some point I hope to go down that rabbit hole, which looks genuinely interesting, but learning that it *was* actually accessible left a bit of an impression, given that (http://archive.vector.org.uk/art10501320):
> Whitney demonstrated his “research K interpreter” at the Iverson College meeting[5] in Cambridge in 2011. We had visitors from Microsoft Research. The performance was impressive as always. The tiny language, mostly familiar-looking to the APL, J and q programmers participating, must have impressed the visitors. Perhaps conscious that with the occasional wrong result from an expression, the interpreter could be mistaken for a post-doctoral project, Whitney commented brightly, “Well, we sold ten million dollars of K3 and a hundred million of K4, so I guess we’ll sell a billion dollars worth of this.”
> Someone asked about the code base. “Currently it’s 247 lines of C.” Some expressions of incredulity. Whitney displayed the source, divided between five text files so each would fit entirely on his monitor. “Hate scrolling,” he mumbled.
The above, combined with the project's niche accessibility (I understand that one does have to be genuinely interested) speaks to me of business and engineering focal points in perfect calibration and harmony with each other. (Hnng. :P) It also gives evidence that it is in fact possible in the first place to achieve and sustain this kind of calibration in contexts and situations that make use of niche technology. The (meta-?)question (to me), then, is how the same sort of niche accessibility context might be applicable/applied to news.arc (et al) to varying degrees.
I also wanted to incidentally mention that I've long had mixed feelings about using GitHub (in a sharing capacity). There's a bit of "but I don't have anything interesting enough!" in there, but it's mostly hesitancy about dumping stuff underneath The Giant Spotlight Of Doom, Inc™. This isn't GitHub's fault; it's more that the consumer end of open source has something of a demand/non-empathy problem toward the higher end of the long tail, that GitHub is the biggest platform, that everything not-GitHub correlates with an exponential drop-off in visibility... and the intrinsic lack of any nice middle ground in the resulting mess. Applying these considerations to HN's crazy popularity, I think using GitHub could be great for "pop culture accessibility", if you will - but at great potential cost to behind-the-scenes logistics (maintaining CI that merges closed-source modules, explaining Arc for the 5,287th time, etc) and a noticeably increased maintenance burden. While there are a variety of "alternative" Git hosting platforms, I think LuaJIT's approach is most interesting (https://luajit.org/download.html): the Git repo is only available over HTTPS as a firewall convenience, and there's no browser-accessible repository viewer, so everyone needs Git installed. You could also, for example, require everyone's Git client to provide an HTTPS client certificate. Such speedbumps would enable a scalable form of "proof of interest" (there's also the fact that everyone has to go learn Arc once they do finally get at the code...) and naturally rate-limit this new dimension to something hopefully maintainable.
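A minimal sketch of that client-certificate speedbump from the Git side. The host `code.example.org` and the cert paths are purely hypothetical; `http.<url>.sslCert` and `http.<url>.sslKey` are standard git config keys:

```shell
# Hypothetical host and paths: scope the client certificate to the
# hosting domain, so clones from other servers are unaffected.
git config --global http.https://code.example.org/.sslCert "$HOME/certs/contributor.pem"
git config --global http.https://code.example.org/.sslKey "$HOME/certs/contributor.key"

# A plain clone then presents the certificate automatically:
# git clone https://code.example.org/news.git
```

On the server side, something like nginx's `ssl_verify_client on;` would then reject anonymous fetches before the request ever reaches the repo - which is the "proof of interest" gate, since getting a cert issued is the part you control.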
Lastly, and as a bit of a continuation of the last point, regarding the question of licensing ("oh no"), I'd actually be in favor of something custom. Both because all the opportunity (read: $$$ xD, but also Bay Area) is available to properly figure that option out, and because virtually all existing licenses (and their wide use) bring a sort of reification to the table that makes politicking/taking sides ("ooooh! XYZ does ABC! that puts it in the same group as DEF!") all too easy, which may threaten HN's pragmatic ethos somewhat. A unique license has no reference points and thus less potential impact on cohesion, and also makes solving niche cases extremely easy; you just work backwards from whatever end state you want (which you've almost certainly had years to think about, or at least subconsciously gather context for). One concrete example (working with the limited context I have) might be disallowing mirroring or copying of the code, which would close the loop on the Git setup described above.
I'm not sure what bits of the above are interesting and what bits ultimately amount to excited bikeshedding :) - but I definitely want to convey that I, and probably (many) others, would be genuinely interested in helping out. Also, I realize stuff does actually get fixed, like * vs \*, which I am very appreciative of :D.
I've obviously misstepped or misread the situation/context somehow. *Sigh* I'm sometimes really bad at this, sadly, despite my best efforts to the contrary. Sorry.
If someone would like to point out where I went wrong I'd greatly appreciate it.
Edit: I've realized that maybe my comment was interesting, but not as a reply to the specific comment I replied to. As my comment grew I thought at a couple different points that maybe it wasn't a good fit for subthread I was replying to, but barreled onwards because this happened to be dang's most recent reply to the whole thread at the time. Maybe this amounted to a spot of tone deafness - and if this is indeed the case, I do actively/explicitly request that my reply be detached from https://news.ycombinator.com/item?id=28756111.
If you want to contribute to an Arc-based forum, the open source Anarki fork[0] is always open. It won't affect the original Arc distribution or HN, but several other forums do use it.
My only point of hesitancy with diving into something like this is my strong suspicion/fear that HN has diverged sufficiently from Arc that almost all improvements I made to news.arc et al would need manual porting/refactoring before they would be useful - and this would need to be done every time I tweaked something on my end.
My interest in Arc is motivated solely by the fact that it's what runs HN, and while part of my motivation to dive into HN from a technical standpoint is because I just like doing that :), the rest is just wanting to help the site get out of users' way, while retaining a pragmatic focus.
I think you’re well-intentioned, obviously, but in your offer to help, the first thing you do is ask him to take 5 minutes to digest your ideas, when he hasn’t really indicated he’s looking for help. That’s a very long time for an internet browser. So yes, I think they’re downvoting the context, not the content. They are interesting ideas, though.
Yeah, I definitely mushed the context here; this wasn't the correct spot to put this.
Thanks also for the reference to the time factor. The text took maybe a good hour (while doing other things) to put together - and at that point I just wasn't considering that it might actually take some time to read. Time to start paying attention to how long my comments take to read...
There is indeed also a bit of subtle intermixing of "I help!" vs self-centeredness in there too. That one seems to be proving a bit of a real challenge.
I need to find whatever volume control is set to "imperceptibly subtle" and turn it up, so I don't need the 20/20 focus of hindsight to act correctly on these sorts of cues.
Eh, ever since CDNs like Fastly introduced cache purging in milliseconds, caching pages for logged-in users stopped being a big deal, especially since most of the content HN serves to logged-in users doesn't differ between them.
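As a rough sketch of how that's usually wired up (the header names follow Fastly's Surrogate-Key/Surrogate-Control convention; the key values here are purely illustrative):

```
Cache-Control: private, no-store
Surrogate-Control: max-age=60
Surrogate-Key: item-28756111 front-page
```

Browsers treat the page as uncacheable, the CDN is allowed to keep its copy for up to a minute, and when a comment lands, a single purge request against the key `item-28756111` invalidates every cached variant of that page within milliseconds - which is what makes serving mostly-identical pages to logged-in users cheap.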
The recent iOS update changed the default appearance of buttons in Safari, and not many sites use the raw button appearance, but HN does. So that could be what you're seeing.
Sorry everyone—there are performance improvements in the works which I hope will make a big difference, but it's not something we can roll out today.
p.s. You can read HN in read-only mode if you log out. Should be way faster.