
If you want a single entry point into your repo's tasks, also consider my tool: dela [0]. It scans a variety of task definition files like pyproject.toml, package.json, Makefile, etc., and makes them available on the CLI via the bare name of the task. It has been very convenient for me so far on diverse repos, and the best part is that I didn't have to convince anyone else working on the repos to adjust each repo's structure.

Dela doesn't currently support mise as a source of tasks, but I will happily implement it if there is demand. In my latest census [1], I saw mise used on 94 out of the 100,000 most-starred GitHub repos.

Thank you for allowing this moment of self promotion.

[0] https://github.com/aleyan/dela

[1] https://aleyan.com/blog/2025-task-runners-census/#most-used-...


Sounds great, but does it support listing all tasks?

Whenever I enter a repository for a Node project, the first thing I do is run "npm run" to list the scripts. When I enter a repository with a Makefile, I look at it. If I see make targets where both the target and the dependencies are variables, I exit the repository again real quick though.


> The Khrushchev flats had communal kitchens

Khrushchyovkas did not have communal kitchens; I grew up in one [0]. Perhaps you are thinking about kommunalkas [1]?

[0] https://news.ycombinator.com/item?id=7935844

[1] https://en.wikipedia.org/wiki/Communal_apartment


Yes, I was completely wrong. Thanks for the correction.


The "view warrant canaries" link [0] at the bottom of the page goes to a Cloudflare 502 page. Bitrot is indistinguishable from a subpoena, but neither is a good indicator.

[0] https://files.velocifyer.com/Warant%20canaries/


I fixed it. Bitrot from a blog I started this month would be ridiculous.


Thanks for fixing. Out of curiosity, what made you think your blog needs a warrant canary?


I have been using SVGs for charts on my blog for a couple of months [0] now. Using SVGs has been satisfying, but in all honesty, I don't think anyone else cares. For completeness, the benefits are below:

* The charts are never blurry

* The text in the chart is selectable and searchable

* The file size can be small compared to PNGs

* The charts can use a font set by a stylesheet

* The charts can have a builtin dark mode (not demonstrated on my blog)

Additionally, as the OP has shown, the text in an SVG is indexed by Google, but it comes up in the image section [1].

The downside was hours of fiddling with system fonts, webfonts, and font settings in matplotlib. Also, the sizing of the text in the chart and how it is displayed on your page are tightly coupled and require some forethought.
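
For anyone attempting the same, these are roughly the matplotlib settings involved (a minimal sketch; the font names and data are just examples, and it assumes your page's stylesheet serves a matching sans-serif font):

    import matplotlib.pyplot as plt

    # Emit real <text> elements instead of outlined paths; this is
    # what makes the SVG text selectable, searchable, and indexable.
    plt.rcParams["svg.fonttype"] = "none"

    # Name a font that the page's stylesheet also provides, so the
    # chart matches the surrounding text.
    plt.rcParams["font.family"] = "sans-serif"
    plt.rcParams["font.sans-serif"] = ["Helvetica", "Arial"]

    fig, ax = plt.subplots(figsize=(8, 4.5))
    ax.bar(["npm", "pnpm", "bun"], [3500, 1100, 100])  # illustrative data
    fig.savefig("chart.svg", bbox_inches="tight")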

[0] https://aleyan.com/blog/2025-llm-assistant-census

[1] https://www.google.com/search?q=%22slabiciunea+lui+Nicu+fore...


That's the kind of thing you do not for the users but because of your own standards, and even if no one else appreciates it, you always do.


That's totally correct! I once replaced some blurry scans from the 6502 manual with SVG versions [1], and, while I was at it, I coded them by hand (really, because for this particular job it seemed easier than doing it in a drawing program). While nobody will notice, it's satisfying.

[1] https://www.masswerk.at/6502/6502_instruction_set.html#stack


Nobody will notice, because that's how it should be... personally I often notice when it's bad: blurry plots, JPEG noise that should not be there, and so on, and think "oh no, another one who has no idea about how to do images properly..."


90% of your users will benefit from it without even realising. 9.9% will silently appreciate it. If you're lucky, the remaining 0.1% will tell you they appreciate it!


Users definitely benefit from:

- smaller file sizes

- dark mode

- readable text

- selectable text


Another thing to watch out for with SVGs is how they appear in RSS readers or browser reader views. If you're using external SVG files then it should be fine. If you've optimized them by embedding into the HTML, then you need to be careful. If they rely on CSS rules in the page's CSS then it's not going to work well. For my website I try to make the SVGs self-sufficient by setting the viewBox, width, and height attributes, using a web safe font, and only relying on internal styles. You can still get some measure of light/dark mode support by setting fill or stroke to currentColor.
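
As a rough sketch, a self-sufficient inline SVG along those lines might look like this (the sizes, font, and values are just illustrative):

    <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100"
         width="200" height="100">
      <!-- Internal styles only, so the chart survives RSS readers
           and reader views that strip the page's stylesheet. -->
      <style>
        text { font-family: Verdana, sans-serif; font-size: 10px; }
      </style>
      <!-- currentColor inherits the surrounding text color, which
           gives a basic light/dark mode for free when inlined. -->
      <rect x="10" y="40" width="30" height="50" fill="currentColor"/>
      <text x="10" y="30" fill="currentColor">42%</text>
    </svg>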


My advice, for web pages: always specify the <svg> width and height attributes, or the width and height properties in a style attribute, because if non-inline CSS doesn’t load (more common, for various reasons, than most people realise), the SVG will fill the available width. And you probably don’t want your 24×24 icon rendered at 1900×1900.

(For web apps, I feel I can soften a bit, as you’re more likely to be able to rely on styles. But I would still suggest having appropriate width/height attributes, even if you promptly override them by CSS.)


This, so much this. It is extremely annoying when I have a slow connection and I have to scroll down ages to get to the page content.


They appear to have “solved” the RSS problem by only providing one sentence of content with each entry in the RSS feed.


To complete the test, the website needs an HTML page that is mostly SVG. I think that might stand a chance of getting into the main search results rather than just the image search.

Also of interest for me would be whether SVG description markup gets picked up in the index.

To cover the remaining possibilities, having the SVG generated by JavaScript on page load would also be of interest, for example with a JSON object of data that gets parsed to plot the SVG images.

Your SVG graphs are very neat, and nobody caring is a feature, not a bug. If they were blurry PNGs then people might notice, but nobody notices 'perfection', just defects.

I noticed you were using 'NASA numbers' in your SVGs. Six decimal places for each point on a path is a level of precision that you can cut down with SVGOMG or using the export features from Inkscape. I like to go for integers when possible in SVG.

The thing with SVG is that the levels of optimisation go on forever. For example, I would set the viewBox coordinates so that (0, 0) is where the graph starts. Nobody would ever notice or care about that, but it would be something I would have to do.


Oh man, this is a deep mine to dig. I haven't even thought about SVG size optimization. The default blog template I used really wants me to use hero images, and the JPGs are already hefty. I just looked at my network panel, and it seems the font files are loaded once per SVG on initial load and then are cached.

What is the motivation for the viewBox coordinates starting at (0, 0)? I have been thinking about setting chart gutters so that the graph is left-aligned with the text, but this seems like an orthogonal issue.


Okay, you did ask...

Rather than use matplotlib to create your bar charts, you could do something like this.

Here I am assuming you don't want standalone images that others can steal but you do want maximal SVG coolness.

Move the origin to 0,0 with viewBox voodoo.

Add a stylesheet in your HTML just for your SVG wizardry.

Create some CSS properties scoped to SVG for your colours, for example svg { --claude-code: red; --cursor: orange; --github-copilot: yellow; } and so on.

Put them in the stylesheet, and add some styles, for example .claude-code line { stroke: var(--claude-code); } and so on.

Rather than use paths in groups with clip paths and whatnot, just use a series of lines, made nice and fat. Lines have two points, and, since the viewBox is zeroed out to the origin, you only need to specify the y2 value, with y1, x1 and x2 taking the defaults of zero. The y2 value could be whatever suits, the actual value divided by 1000, 10000 or something.

Put each line in a group with the group having a class, for example claude-code.

Add the label to the group with its own transform to rotate the text 45 degrees.

Add a transform to the group to move the fat line and its label along the x axis using a translate.

Rinse and repeat for all entries on the graph.

Now do some labels for the other axis.

As for the title of the graph, move that out of the SVG file. Put the SVG file in a figure element and put the title in a figcaption element. Add CSS for the figcaptions.

With SVG in HTML there is no need to do xlink and version things, just keep it simple, with just the viewBox and no width/height. Scale your figures in CSS with the SVG set to fill the space of the figure, so we are going full width.

You can also use some title elements for mouseovers, so, hover over a bar and you get the actual data number.

Why do it this way?

Say you don't like the colours or you want to implement dark mode. You can do the usual prefers media query stuff and set your colours accordingly, for all the graphs, so they are all consistent.

Same goes with the fonts, you want all that in the stylesheet rather than baked into every SVG, so you can update them all with one master change.

As for the last graph with lots of squares, those squares are 'rect' not path, for maximum readability. The rectangles can be put in a defs container as symbols, so you have veryLightBlueSquare, lightBlueSquare, BlueSquare and so on. Then, with your text you can put each value in a group that contains a text node and a use tag to pull through the relevant colour square.
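
Pulling those steps together, a minimal sketch might look something like this (the class names, colours, and numbers are all just illustrative):

    /* In the page's stylesheet, scoped to SVG */
    svg { --claude-code: red; --cursor: orange; }
    @media (prefers-color-scheme: dark) {
      svg { --claude-code: salmon; --cursor: gold; }
    }
    svg line { stroke-width: 20; }
    svg text { fill: currentColor; }
    .claude-code line { stroke: var(--claude-code); }
    .cursor line { stroke: var(--cursor); }

    <!-- In the HTML, inside a figure -->
    <figure>
      <svg viewBox="0 -120 300 140">
        <g class="claude-code" transform="translate(30)">
          <title>Claude Code: 53</title>
          <line y2="-53"/>
          <text transform="rotate(45)">Claude Code</text>
        </g>
        <g class="cursor" transform="translate(80)">
          <title>Cursor: 47</title>
          <line y2="-47"/>
          <text transform="rotate(45)">Cursor</text>
        </g>
      </svg>
      <figcaption>Assistant popularity</figcaption>
    </figure>

The lines only specify y2, with x1, y1, and x2 taking their defaults of zero, and each group is translated along the x axis, as described above.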


Yes! We do that with svija.com/en — an all-SVG website with an HTML wrapper so it displays at the correct size.


> Also, the sizing of the text in the chart and how it is displayed on your page are tightly coupled and require some forethought.

I used to make a lot of charts with R/ggplot, and the big disadvantage is, as you mentioned, the sizing of elements, especially text. So I wrote a small function that would output the chart in different sizes and a tiny bit of JS to switch between them at different breakpoints. It worked pretty well, I think: the text was legible on all devices, though I still had to check that everything looked fine and elements weren't suddenly overlapping or anything.

Another advantage of SVGs is that they can have some interactivity. You can add tooltips, hovers, animation and more. I used ggiraph for that: https://ardata.fr/ggiraph-book/intro.html


It does come up in normal results for me; I don't need to go to the images section. The page has the keyword lmtbk4mh for testing; see the result at https://www.google.com/search?q=lmtbk4mh


Chartist is an amazing, tiny (10 kB) JS include that makes it super simple to create beautiful SVG charts.

It’s way undervalued and rarely gets updates.

https://gionkunz.github.io/chartist-js/


What are you using to produce the graphs?

I wrote a small graphing library for mine [1], but it has limitations.

[1] https://coffeespace.org.uk/projects/sound-source-delta.html


I have been excited about Bun for about a year, and I thought that 2025 was going to be its breakout year. It is really surprising to me that it is not more popular. I scanned the top 100k repos on GitHub, and for new repos in 2025, npm is 35 times more popular and pnpm is 11 times more popular than Bun [0][1]. The other up-and-coming JavaScript runtime, Deno, is not so popular either.

I wonder why that is? Is it because it is a runtime, and getting compatibility there is harder than just for a straight package manager?

Can someone who tried bun and didn't adopt it personally or at work chime in and say why?

[0] https://aleyan.com/blog/2025-task-runners-census/#javascript...

[1] https://news.ycombinator.com/item?id=44559375


It’s a newer, VC-funded competitor to the open source, battle-tested dominant player. It has incentives to lock you in, and ultimately it is just not that different from Node. There’s basically no strategic advantage to using Bun; it doesn’t really enable anything you can’t do with Node. I have not seen anyone serious choose it yet, but I’ve seen plenty of unserious people use it.


I think that summarizes it well. It's not 10x better, which is what would make the risky bet of vendor lock-in with a VC-backed company worth it. Same issue with Prisma and Next for me.


Tailwind uses it.

Considering how many people rely on a Tailwind watcher running on all of their CSS updates, you may find that Bun is used daily by millions.

We use Bun for one of our servers. We are small, but we are not goofing around. I would not recommend it yet for anything but the cases where it has a clear advantage, but there are areas where it is noticeably faster or easier to set up.


I really want to like Bun and Deno. I've tried using both several times and so far I've never made it more than a few thousand lines of code before hitting a deal breaker.

Last big issue I had with Bun was streams closing early:

https://github.com/oven-sh/bun/issues/16037

Last big issue I had with Deno was a memory leak:

https://github.com/denoland/deno/issues/24674

At this point I feel like the Node ecosystem will probably adopt the good parts of Bun/Deno before Bun/Deno really take off.


uh... looks like an AI user saw this comment and fixed your bun issue? Or maybe it just deleted code in a random manner idk.

https://github.com/oven-sh/bun/commit/b474e3a1f63972979845a6...


The Bun team uses Discord to kick off the Claude bot, so someone probably saw the comment and told it to do it. That edit doesn't look particularly good though.


I am also very curious what people think about this. To me, as a project, Node gives off a vibe of being mature, democratic, and community-driven, especially after successfully navigating the io.js fork drama a few years ago. Clearly neither Bun nor Deno is a community-driven, democratic project, since they are both VC-funded.


I am Bun's biggest fan. I use it in every project I can, and I write all my one-off scripts with Bun/TS. That being said, I've run into a handful of issues that make me a little anxious to introduce it into production environments. For instance, I had an issue a bit ago where something simple like an Express webserver inside Docker would just hang, but switching bun for node worked fine. A year ago I had another issue where a Bun + Prisma webserver would slowly leak memory until it crashed. (It's been a year, I'm sure they fixed that one).

I actually think Bun is so good that it will still net save you time, even with these annoyances. The headaches it resolves around transpilation, modules, workspaces etc, are just amazing. But I can understand why it hasn't gotten closer to npm yet.


Take a look at their issue tracker; it's full of crashes, because apparently this Zig language is highly unsafe. I'm staying on Node.


That's why, if I had to choose a Node competitor out of Bun and Deno, I'd choose Deno.


Good thing libuv is written in a "safe" language.


npm is a minefield that thousands of people traverse every day. So you are unlikely to hit a mine.

bun is a bumpy road that sees very low traffic. So you are likely to hit some bumps.


Zig isn’t inherently highly unsafe. It is a bit less safe than Rust in some regards, but arguably more safe in a few others.

But the language hasn’t even reached 1.0 yet. A lot of the strategies for writing safe Zig aren’t fully developed.

Yet, TigerBeetle is written in Zig and is an extremely robust piece of software.

I think the focus of Bun is probably more on feature parity in the short term.


There’s a `crash` label. 758 open issues.


Well, Node is C++, which isn’t exactly safe either. But it’s more tested.


Neither Bun nor Deno have any killer features.

Sure, they have some nice stuff that should also be added in Node, but nothing compelling enough to deal with ecosystem change and breakage.


bun test is a killer feature


I think part of the issue is that a lot of the changes have been fairly incremental, and therefore fairly easy to include back into NodeJS. Or they've been things that make getting started with Bun easier, but don't really add much long-term value. For example, someone else in the comments talked about the sqlite module and the http server, but now NodeJS also natively supports sqlite, and if I'm working in web dev and writing servers, I'd rather use an existing, battle-tested framework like Express or Fastify with a larger ecosystem.

It's a cool project, and I like that they're not using V8 and trying something different, but I think it's very difficult to sell a change on such incremental improvements.


This is a long-term pattern in the JS ecosystem; the same thing happened with Yarn.

It was better than npm, with useful features, but then npm just added all of those features after a few years, and now nobody uses Yarn.

You can spend hours every few years migrating to the latest and greatest, or you can just stick with npm/Node and get the same benefits eventually.


I have been using pnpm as my daily driver for several years, and am still waiting for npm to add a symlink option. (Bun does support symlinks).

In the interim, I am very glad we haven't waited.

Also, we switched to Postgres early, when my friends were telling me that eventually MySQL would catch up. In many ways it did, but I still appreciate that we moved.

I can think of other choices we made - we try to assess the options and choose the best tool for the job, even if it is young.

Sometimes it pays off in spades. Sometimes it causes double the work and five times the headache.


If Node becomes much better thanks to the existence of Bun, then I think Bun accomplished its goals. Same for C and Zig.


There are still a few compatibility sticking points... I'm far more familiar with Deno and have been using it a lot the past few years; it's pretty much my default shell scripting tool now.

That said, for many work projects I need to access MS-SQL, whose way of doing socket connections isn't supported by the Deno runtime, or some such. That limits what I can do at work. I suspect there are a few similar sticking points with Bun for other modules/tools people use.

It's also very hard to overcome inertia. Node+npm had over a decade and a lot of effort to build an ecosystem that people aren't willing to just abandon wholesale.

I really like Deno for shell scripting because I can use a shebang, reference dependencies and the runtime just handles them. I don't have the "npm install" step I need to run separately, it doesn't pollute my ~/bin/ directory with a bunch of potentially conflicting node_modules/ either, they're used from a shared (configurable) location. I suspect bun works in a similar fashion.

That said, with work I have systems I need to work with that are already in place or otherwise chosen for me. You can't always just replace technology on a whim.


I tried to run my project with Bun; it didn't work, so I gave up. Also, there needs to be a compelling reason to switch to a different ecosystem.


To beat an incumbent you need to be 2x better. Right now it seems to be a 1.1x better (for any reasonably sized projects) work in progress with kinks you’d expect from a work in progress and questionable ecosystem buy-in. That may be okay for hobby projects or tiny green field projects, but I’m absolutely not gonna risk serious company projects with it.


Seems awfully close to 2x, and that was last year.

https://dev.to/hamzakhan/rust-vs-go-vs-bun-vs-nodejs-the-ult...


> 1.1x better (for any reasonably sized projects)

2x in specific microbenchmarks doesn’t translate to big savings in practice. We don’t serve a static string with an application server in prod.


There are some rough edges to Bun (see sibling comments), so there's an apparent cost to switching, namely wasted developer time dealing with Node incompatibility. Being able to install packages 7x faster doesn't matter much to me, so I don't see an upside to making the switch.


Tried it last year. I spent a few hours fighting the built-in SQLite driver and found it buggy (silent errors), and the docs were very lacking.


Bun is much newer than pnpm; looking at 1.0 releases, pnpm has about a six-year head start.

I write a lot of one-off scripts for stuff in node/ts, and I tried to use Bun pretty early on when it was gaining some hype. There were too many incompatibilities with the ecosystem though, and I haven't tried since.


Honestly, it doesn't really solve a big problem I have, and it introduces all the problems of being "new" and less used.


> I wonder why that is?

LLMs default to npm


You sure it's not just because npm has been around for 15 years as the default package manager for node?


Didn't prevent me from switching to Bun as the cost is 0.


That's an amazing addition! Once I read about Simpson's paradox [0], I couldn't help seeing it or suspecting it everywhere. Luckily, it is not a true paradox, and it can be resolved if the underlying data is available and not just summary statistics.

I recommend putting the quintet together in one image, so that the original four charts plus the new one are all visible and interpretable together. It will be a learning aid for decades to come.

[0] https://en.wikipedia.org/wiki/Simpson's_paradox


Yes, not saying the data dinosaur isn't cool. But for real-world applications, the quartet with the addition of this fifth dataset is more useful for pedagogical purposes.


Setting aside the question of whether hydrating Django templates in Rust from Django is useful in ways that hydrating Jinja templates in Rust from Django isn't: Petcat's comment could be useful, and the author may not be aware of existing prior art. As engineers, we sometimes feel a huge urge to build without looking around first. I am guilty of this myself. When I started on dela [0], I didn't know about two alternatives to it; I only learned about them through comments.

[0] https://github.com/aleyan/dela


What an amazing set of data!

The "Generative AI services popularity" [1] chart is surprising. ChatGPT is being #1 makes sense, but Character.AI being #2 is surprising, being ahead of Anthropic, Perplexity, and xAI. I suspect this data is strongly affected by the services DNS caching strategies.

The other interesting chart is "Workers AI model popularity" [2]. `llama-3-8b-instruct` has been leading at 30% to 40% since April. That makes it hands down the most popular weights-available small "large language model". I would have expected Meta's `m2m100-1.2b` to be more used, as well as Alphabet's `Gemma 3 270M` to start appearing. People are likely using the most powerful model that fits on a CF worker.

As a shameless plug, for more popularity analysis, check out my "LLM Assistant Census" [3].

[1] https://radar.cloudflare.com/ai-insights#generative-ai-servi...

[2] https://radar.cloudflare.com/ai-insights?dateRange=24w#worke...

[3] https://aleyan.com/blog/2025-llm-assistant-census/


Character.AI is extremely popular among younger people, so it's not really surprising.


What exactly is Character.AI? There's literally no info on their website.


Chat for teens.


choose-your-own-adventure style chatbots


With a lot of characters/scenarios of a sexual nature. They are the market leader for NSFW LLM experiences. Or maybe it's more accurate to call them "dating" experiences.


Why would DNS caching skew results?

I don’t think Cloudflare is using DNS queries to compile the stats, considering they have visibility into the full HTTP requests for sites they proxy.

Edit: Another comment mentions DNS queries. Did I miss something about how they’re compiling the stats?


The heading says “Generative AI services popularity - Top 10 services based on 1.1.1.1 DNS resolver traffic”


1.1.1.1 will see the query regardless of caching by upstream servers. Downstream and client caching probably averages out quite nicely with enough volume.


If the TTL of one domain’s records are all shorter than the TTLs of another domain’s, what would make downstream and client caching cancel out? Do clients not respect TTLs these days?

(In this particular case, I don’t think the TTLs are actually different, but asking in general)


My sister read me the first chapter of this edition of The Hobbit and refused to read me any more. So I had to read the rest myself to find out what happens. It became the first "grown up" book I ever finished.

When I read LotR a few years later, these illustrations formed the images of what hobbits, dwarves, and Gollum looked like in my mind's eye. Decades later, having seen the Peter Jackson films several times, Bilbo still looks wrong to me, as I expect Leonov; Gollum looks wrong too, for that matter.


> Gollum looks wrong too for that matter.

“Down the face of a precipice, sheer and almost smooth it seemed in the pale moonlight, a small black shape was moving with its thin limbs splayed out. […] The black crawling shape was now three-quarters of the way down, and perhaps fifty feet or less above the cliff's foot.[…] They peered down at the dark pool. A little black head appeared at the far end of the basin, just out of the deep shadow of the rocks.”

No visual version of Tolkien’s works could ever be made now which depicts Gollum accurately.


What do you mean? This seems accurate to how Gollum was depicted in the Peter Jackson movies.

Do you mean the skin color? The reference to the color black here is clearly there because Gollum is in the shadows in a darkened cave.


I grant you that it’s not clear-cut, but nowhere in the book (that I can find) is Gollum described as being pale, or even lightly colored (except his eyes). Instead, Gollum is frequently, as I showed, described as being “black” in color. He is also misidentified as an Orc, and Orcs are similarly described.


Similar experience for me, except my imagery was influenced by the Brothers Hildebrandt. I collected all their cards and was obsessed with the detail in them.


The OP postulates two paths for the future: "1: LLM Labs go direct" and "2: LLMs become commodities, wrappers win". I just happened to have published a blog post on LLM code assistants used by GitHub repos [0][1]. Claude Code has just overtaken Cursor as the most popular. Gemini CLI and OpenAI Codex also have steeper growth curves than Cursor. So on just this question, it looks like the drugs are beating the dealers.

[0] https://aleyan.com/blog/2025-llm-assistant-census/

[1] https://news.ycombinator.com/item?id=44863713

