Hacker News | anonyfox's comments

Location: Hamburg, Germany

Remote: yes, when the timezone is reasonable

Willing to relocate: no

Technologies: Rust, Go, Elixir, JS/TS, Cloud stuff, AI/ML/Math, ...

Résumé/CV: https://anonyfox.com/cv

Email: max@anonyfox.com

---

Either Fullstack (senior/…/principal), DevOps, Architect, or Team Lead/CTO - whatever. I grew up having to fill all the roles back in the day, from startup growth hacking through scale-up optimization to enterprise executive. Preferably doing something _real_, not another purely digital product/service or crypto thing. A connection to something physical I could touch, that also has a future.


In Elixir/Erlang that's quite common, I think; at least I do this when performance matters. Put the specific subset of commonly used data into an ETS table (= in-memory cache, allowing concurrent reads) and have a GenServer (which owns that table) listen to certain database change events to update the data in the table as needed.

Helps a lot in high-read situations and takes considerable load off the database, with maybe an hour of coding effort if you know what you're doing.


There is zero reasoning in it so far; everything up to today is perfectly explainable with advanced statistics and NLP. They're large _language_ models after all, no matter the hype.

Still, I find it excellent for exploring new knowledge domains or comparing across knowledge domains, since LLMs by design (and training corpus) will spit out highly probable terms/concepts matching my questions and phrase them nicely. Search on steroids, if you will, where real-time results don't matter for me at all.

This is not intelligence, yet hugely valuable if used right. And because of this, I'm sure a lot of scientific discoveries will be made with today's LLMs used in creative ways, since most scientific discovery is ultimately looking at X within a setting Y, and there are a lot of potential X and Y combinations.

I am exaggerating a bit, but at some point someone (Niels Bohr?) had the thought of thinking about atoms the way we do about planets, with things orbiting each other. It's an X-but-in-Y situation. First come up with such a scenario (or an automated way to combine lots of Xs and Ys cleverly), then filter the results for something that would actually make sense, and then dig deeper in a semi-automatic way, with an actual human in the loop at least.


> everything up to today is perfectly explainable with advanced statistics and NLP

Is there some concrete task or behavior that, if demonstrated, you believe wouldn't be explainable by advanced statistics and NLP?

In my mind even human/animal behavior is in theory explainable with advanced statistics.


That's the jackpot question, I think.

Personally I'm leaning towards yes: the human mind is nothing magical here, just more advanced wetware.


I don't know. Recently I find myself drawn more and more to pure Go code, which is conceptually as simple as it gets, yet lets me achieve basically everything imaginable without hitting walls/limitations. Add some (embedded) SQLite for small/quick/cheap things, or Postgres when it gets more involved or critical (horizontal scaling, backups, ...), and everything is quickly possible. Now with LLMs it's outright trivial to have them generate functionality at will, and (pure) Go is exceptionally friendly to LLM codegen, the language being stupidly simple and the stdlib having been "complete" for years.

Essentially I prefer staying at the "you need to be a coder" abstraction level, but today's general tooling makes becoming one very easy. Once you mentally lift the requirement that something needs to be configurable by a non-technical end user, or even needs a UI at all, 90%+ of all dev effort can be saved directly. Plus there are no showstoppers in capabilities, nor the usual barriers of "no code"/"low code" tools I encountered quite often, like poor performance or config hell. And if you don't chase the latest webdev fads, things can stay maintainable for decades (looking at Go compared to NodeJS).

Rawdogging basic programming (yes, no framework if possible) has made "business" projects succeed first (and then stay alive easily) far more often than either a web framework or any kind of low-/no-code tool, at least among the things I encountered in the wild. Even bad spaghetti-code monstrosities can now be uploaded to an LLM and refactored into sanity quite efficiently.

The worst kinds of projects (with lots of pain and regret) have been either JS-based, misused framework projects (including Rails!) or Salesforce setups. You're often stuck in a dead end there.


Same thing I have settled on.

SQLite for quick and easy, Postgres for scaling.

For simple frontends I just use the built-in Go templates.

If I need complex frontends for larger apps I use SvelteKit/Svelte 5. For frontend data I export a single instance of a class that has state/derived fields for the data and an isInitialized field that returns a promise for loading it, plus methods for reloading data, changing data, or any actions I need.

So all I have to do is await classInstance, then use the class data in whatever way I need. Everything is reactive and simple thanks to states: you can read and update the fields directly like regular JS, but with global reactivity built in.

The data loads automatically the first time the module is imported, due to how ES module files work: I just call classInstance.LoadData() after the export.

Svelte 5 isn't as good with LLMs, but with some small instructions about how state, derived, and effect work, it does pretty well.


Not to disagree with your post, but I'd *love* to support a renaissance of RSS. It was/is essentially peak distribution of content in a proper decentralized manner, putting users first and letting providers use whatever they want freely to generate it. No walled gardens. No restrictions.


And no good tools. RSS readers in 2024 still keep failing with the same interfaces that failed in 1999.

No, I don't want a portal with a little box for every feed I follow.

No, I don't want a listing like an email client.

No, I never want it to show me a piece of content twice unless I ask for it. (e.g. as David Byrne says: "say something once, why say it again?")

Yes, I expect to subscribe to more RSS feeds than I can read entirely, so I expect it to learn my preferences like my YOShInOn agent does. In a cycle of a few days, YOShInOn might find 3000 or so articles in RSS feeds and choose 300 to show me, which I thumbs-up or thumbs-down. I knew such a thing was possible when I wrote this paper

https://pmc.ncbi.nlm.nih.gov/articles/PMC387301/

but now it is not only possible but easy.


A thing that took me several years to accept is that almost everything can be coded within 24 hours if really needed - gun-to-the-head situation. Will it be perfect/efficient/beautiful/...? Probably not. But it should roughly work like it's supposed to.

If something cannot be coded within those 24 hours, something else is odd, not the feature. Having transitioned from SWE to DevOps and then leadership roles, most of my day is actually spent on all the reasons/excuses why "it cannot be done", and on trying to eliminate them. My developers probably hate me for it, but I always push hard for an immediate first solution instead of days of soul-searching first. Over time we encounter and solve enough roadblocks (technical, social, educational, ...) that more often than not we end up with surprisingly fast (and good enough) solutions. That speed is a quality in itself, since it frees up time to come back and clean up messes without shipping pressure mounting over days/weeks - something working is already there on day two.

The trick, of course, is to _not_ ever sell this 24-hour turnaround to upper management, or else it will quickly become a hell of a mess once it becomes the outside expectation.


Nowadays I've come to the conclusion that "ease of maintenance" is the most important feature a project can have. The only thing more critical is that the project itself is valuable enough - too many engineers optimize things that shouldn't exist in the first place.

Easy to maintain is not only about keeping something alive with minimal effort over long periods of time. It also plays a pivotal role in scalability in any direction: adding more engineers/teams, adding unforeseeable features, iterating quickly in general, surviving more traffic/load, removing technical bottlenecks, ... everything is so much easier when the project is easy to work with and maintain.


Personal observation from a heavy LLM codegen user:

The sweet spot seems to be bootstrapping something new from scratch and getting all the boilerplate done in seconds. This is probably where the hype comes from; it feels like magic.

But the issue is that once it gets slightly more complicated, things break apart and run into a dead end quickly. For example, yesterday I wanted to build a simple CLI tool in Go (a language + stdlib outstandingly friendly to LLM codegen) that acts as a simple reverse proxy and (re)starts the proxied process in the background on file changes.

The AI was able to knock out _something_ immediately that indeed compiled - only it didn't actually work as intended. After lots of back-and-forth iterations (Claude, mostly) the code ballooned in size trying to figure out what the issue could be, adding all kinds of useless crap that kind of looks helpful but isn't. After an hour I gave up, went through the whole code manually (a few hundred lines, single file) and spotted the issue immediately: holding a mutex lock that gets released with `defer` doesn't play well with a recursive function call. After pointing that out, the LLM was able to fix it and produced a version that finally worked - still with tons of crap and useless complexity everywhere. And that's a simple, straightforward coding task that fits in a single greenfield file of a few hundred lines. All my Claude chat tokens for the day got burned on this, only for me to have to dig in myself at the end.

LLMs are great at producing things in small, limited scopes (especially boilerplate-y stuff) or refactoring something that already exists, when they have enough context and essentially don't really think about a problem but merely change linguistic details (ultimately rewriting text into a different format) - it's a large LANGUAGE model after all.

But full-blown autonomous app building? Only if you're doing something that has been done thousands of times before and is simple to begin with. There's lots of business value in that, though; most programmers at companies don't do rocket science or novel things at all. It won't build any actual novelty - the ideal case is building an X for Y ("Uber for cat-sitting"), but never an initial X.

My personal productivity has gone through the roof since GPT-4/Cursor, though, but I guess I know how and when to use it properly. And developer demand will surge when the wave of LLM-coded startups gets funded and realizes the codebase can no longer be extended with LLMs, due to the complexity and the raw amount of garbage in there.


> After lots of back-and-forth iterations (Claude, mostly) the code ballooned in size trying to figure out what the issue could be, adding all kinds of useless crap that kind of looks helpful but isn't. After an hour I gave up, went through the whole code manually (a few hundred lines, single file) and spotted the issue.

That's what experience with current-generation LLMs looks like. But you don't get points for making the code in the LLM look perfect; you get points for what you check in to git and the PR. So the skill is in realizing the LLM is running itself in circles before you run out of tokens and burn an hour, and then doing it yourself.

Why use an LLM at all if you still have to do it yourself? Because it's still faster than going without, and also that's how you'll remain employable: by covering the gaps an LLM can't handle (until they can actually do full-blown autonomous app development, which is still a while away, imo).


The most important signal of a degree is written proof that a person somehow managed to show up to something over a longer timespan without being constantly physically forced to. Basically a minimum baseline of reliability and self-organization.

The reality is that _a lot_ of applicants without any such proof don't last long, wasting lots of company resources (including on rehiring). Of course there are many great folks without a degree and plenty of idiots with one, but I've learned to trust that heuristic until a strong signal otherwise pops up.


The concept of freedom itself is kind of hard to pin down, even in a stricter context like politics.

Like, I cannot freely choose to do X today, because the environment (i.e. capitalism) demands that I do certain other things that bring in money to live comfortably. I could make a tradeoff somewhere, but that very tradeoff itself limits actual freedom.

Therefore, to maximize actual freedom, we're looking at eliminating the constraints that limit it, like the need to make money somehow - which would be a dramatic societal change not everyone agrees to.

It's hard. And wonky. So everyone uses it only through a personal belief lens.

