Hacker News | keerthiko's comments

i got $100 of credit at the start of the year, and have been spending about $1 more each month, starting at $2 in january with aider at the time. just switched to claude code this week, since it follows a similar UX. agentic CLI code assist really has been growing in usefulness for me as i get faster at reviewing its output.

i use it for very targeted operations where it saves me several roundtrips to code examples, documentation, and stack overflow, rather than spamming it for every task i need to do. i spend about $1/day on focused feature development, and it feels like it saves me about half as many hours as i spend coding with it.


What do you prefer between Aider and CC? I use Aider when I want to vibe code (I just give the LLM a high-level description and don't check the output, because it's so long), and Cursor when I want to AI code (I tell the AI to do low-level stuff and check every one of the five lines it gives me).

AI coding saves me a lot of time writing high-quality code, as it takes care of the boilerplate and documentation/API lookups, while I still review every line, and vibe coding lets me quickly do small stuff I couldn't do before (e.g. write a whole app in React Native), but gets really brittle after a certain (small) codebase size.

I'm interested to hear whether Claude Code writes less brittle code, or how you use it/what your experience with it is.


I tested Aider a few times and gave up because at the time it was so bad. It might be time to try it again. That said, seeing how Claude Code works for me while lots of other people struggle with it suggests that my style of working just meshes better with Claude Code than with Aider.

Claude Code was the first assistant that gelled for me, and I use it daily. It wrote the first pass of multi-monitor support for my window manager. It's written the last several commits of my Ruby X11 bindings, including a working systray example, where it both suggested the whole approach and implemented it, and tested it with me just acting as a clicking monkey (because I haven't set up any tooling to let it interact with the GUI) when it ran test scripts.

I think you just need to test the two side by side and see what works for you.

I intend to give Aider a go at some point again, as I would love to use an open source tool for this, but ultimately I'll use the one that produces better results for me.


Makes sense, thanks. I've used Claude Code but it goes off on its own too much, whereas Aider is more focused. If you do give Aider another shot, use the architect/editor mode, with Gemini 2.5 Pro and Claude 3.7, respectively. It's produced the best results for me.


If the IP address is hashed somehow it would no longer be personally identifying while still being unique enough for analytics purposes, correct?

Does geographic grouping data depend on the IP address? If so I suppose it would need to be extracted first before hashing the IP, and I wonder how much that weakens the anonymization.


You can precompute hashes of every IPv4 address to build a rainbow table, so it needs some salt.


According to the author, Rybbit hashes IPs with a daily rotating salt.

https://www.reddit.com/r/selfhosted/comments/1kgytl4/i_built...


Okay, but that doesn't mean the concept is bad.


Yes it does.

If a user can say "here's my IP address, what data do you have on me?" and you can answer that question, then that's personal data under GDPR. It's pseudonymized, but not anonymized, and pseudonymous data is personal data.


Even if you can't answer that question, if it can be answered, that's still personal data.


What's the minimum size of an operation before the GDPR kicks in? In other words, are all sites governed by GDPR, or are some companies considered too small to fall under its regulations? I know that some regulations give smaller outfits a pass. I know nothing about GDPR, as a European audience is not my target and I'm not kowtowing to them.


GDPR does not currently have explicit business size thresholds. Its provisions are all framed as personal rights of the data subject, so its provisions are always in effect. By contrast, CCPA in California is framed as a consumer protection law so it only applies to companies of a certain size.

In practice, small fries are not an enforcement priority. Regulators in most countries are not well-funded so they have to be frugal with their enforcement actions.

The EU is currently reviewing an option to relax GDPR requirements for smaller businesses. Not remove GDPR requirements, just streamline some of the process overhead.


yep (zen). same on arc/chrome


in my experience, people who grow up as the biggest fish in a small pond (whether in just the fields they care about, or in general) are, 99% of the time, one of two types when they end up a middling fish in the big pond: like you, happy to find peers and inspiring exemplars to collaborate with and learn from, or people who hate that they are not the best anymore.

the former group probably leads the healthiest, happiest, and most fulfilled life while pursuing their interests — i'm heavily biased though, because i too fall into this category and am proud of this trait.

the latter group consists of people who either spin their wheels real hard and more often than not burn out in their pursuit of being the best, or pivot hard into something else they think they can be the best at (often repeatedly every time they encounter stronger competition) like gates & co, or in rare cases succeed in being the best even in the more competitive environment.

this last .001% are probably people whose egos get so boosted from the positive reinforcement that they become "overcompetitive" and domineering like zuck or elon, and let their egos control their power and resources to suppress competition rather than compete "fairly" ever again.

i think there's a subset of people from both main groups that may move from one into the other based on life experiences, luck, influence of people close to them, maturity, therapy, or simply wanting something different from life after a certain point. i don't have a good model for whether this is most people, or a tiny percentage.


I think the more common outcome you're not seeing, for the "other" group, is that they just go back to smaller ponds where they excelled in the first place, and often make strong contributions there.

Once it's been observed that there are bigger fish, you can't really go back to the naive sense of boundless potentiality, but you can go back to feeling like a strong and competent leader among people who benefit from and respect what you have.

Your comment focuses on the irrepressibly ambitious few who linger in the upper echelons of jet-setting academia and commerce and politics, trying to find a niche while constantly nagged by threats to their ego (sometimes succeeding, sometimes not), but there are many more Harvard/etc alums who just went back to Omaha or Baltimore or Denver or Burlington and made more or less big things happen there. That road is not so unhealthy or unhappy for them.


this is a very good point, and a blind spot in my comment because IME people who left the small pond in the first place were dissatisfied and unfulfilled there.

it is absolutely possible that after experiencing the bigger pond, people can develop purpose in their "original" pond based on values like community and relationships, or even simply dislike the vibes in bigger ponds and want to undo as much as they can. this is a super valuable thing to society and humanity for the most part, as perhaps more change can happen this way than big things happening in big places.

personally i struggle with this, because whenever i re-enter a smaller ecosystem (including/such as the one i grew up around) i feel like everyone has a distorted view of the bigger pond and self-limit themselves, which is a contagious energy i can't stand.


well put


they have a ko-fi and a patreon, with about 1,000 "subscribers" across both at <unknown> amounts at the moment. it's not exactly enough to promise indefinite support, but tbh i don't really have much reason to have that faith in closed-source products i've paid for either.


The project's main owner said that the income from the project is enough for him to make it his main job after he finishes university.


That is a pre-stated stipulation of the green card's validity, not revocation based on the whim of an evaluating (non-immigration-judicial) official -- ie CBP and DHS and ICE cannot (read: should not be able to) revoke green cards.

The "basic US presence" requirement has always been part of a green card's validity conditions, alongside the 5-10 year expiry date, the prohibition on immigration fraud, and the other basic requirements for maintaining it. A comical number of European green card holders gloss over or forget this clause every year -- even though it is made explicit to them upon receiving the card -- and proceed to forfeit their green cards by not entering the US for over a year. That is not a revocation (which implies a subjective decision by an official); it is a lapse of validity (which implies some pre-stated condition was fulfilled).


I see your point. Thank you.

I think most non-legally-inclined people (like me) would say CBP yanked my GC.

Your point being that - nope, they just enforced the law.

Right?


Yes. People are generally familiar with rights that can lapse if eligibility is not maintained - consider the right to vote in state elections, which you lose if you fail to maintain residency in that state. Nobody yanked your state voter registration or your eligibility for in-state tuition, you abandoned it.


IMO the source engine codebase is probably chock-full of duct tape and cruft: so full of undocumented, legacy, bespoke, hacky, and deprecated stuff that it's not worth the dev resources for valve to bring it up to an OSS standard worthy of their reputation.

Contributing to this are probably:

- custom external hooks (eg: homemade test framework, patchnotes publishing, steamapp backdoor integrations, hardware-specific firmware interfaces, 3rd party closed source SDK hooks)

- assumptions about Valve's server architecture/implementation for most multiplayer stuff used by Valve games, the codebase(s) of which are probably as vast as Source itself and closed-source too

- bespoke engine modifications made for specific games like HL2 or CS 1.6 which haven't been touched in a decade, the authors of which may no longer be available to document them

Adding sufficient documentation to a massive closed-source system meant for internal use over multiple decades, to bring it up to par for functional OSS publication, is a monumental feat that honestly probably isn't worth the risk of bad publicity from the modder community, who'd just be mad about how unusable it would be.


Supposedly there's a texture of a coconut included in Source games, along with either a filename or a comment saying that if it's removed the game will break and no one knows why.


That was a throwaway joke on Reddit that the community has spread; it isn't true. It's just an unused texture from an old update.


Are you implying Valve has a reputation for high-quality code? Because I don't think users of Steam or their games share that view.


> A decelerating car has negative velocity.

not really your point, but ??

a decelerating car has negative acceleration, and until it starts reversing relative to its start, it has velocity in whichever direction it started in -- presumably positive if that was your initial frame of reference. of course if you decided positive was the opposite direction from which the car was already going in, well, it started with negative velocity.

also to the GP, if you owe someone a sheep but don't have any, you really do have -1 sheep.
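the velocity/acceleration distinction above is easy to sanity-check with a toy function (my own made-up numbers, constant acceleration assumed):

```javascript
// Constant-acceleration kinematics: v(t) = v0 + a*t.
// A car braking from +10 m/s at a = -2 m/s^2 has negative acceleration,
// but its velocity stays positive until t = 5 s, when it would start reversing.
function velocity(v0, a, t) {
  return v0 + a * t;
}
```

velocity(10, -2, 2) gives 6 (still moving forward while decelerating); velocity(10, -2, 6) gives -2 (only now is the velocity negative).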


localStorage and cookies, among other tools, provide nice front-end-only persistent storage for holding things like recommendation weights/scoring matrices. maybe a simple algorithm that can evaluate recommendations from a few bytes of stored weights would be all the more elegant.
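a minimal sketch of that idea (the key name and default weights are made up; the in-memory fallback is only there so the same code also runs outside a browser):

```javascript
// Persist a tiny recommendation-weight vector entirely client-side.
// Uses localStorage in the browser; falls back to an in-memory Map elsewhere.
const memoryStore = new Map();
const store = (typeof localStorage !== 'undefined')
  ? { get: (k) => localStorage.getItem(k),
      set: (k, v) => localStorage.setItem(k, v) }
  : { get: (k) => (memoryStore.has(k) ? memoryStore.get(k) : null),
      set: (k, v) => { memoryStore.set(k, String(v)); } };

function saveWeights(weights) {
  store.set('rec_weights', JSON.stringify(weights));
}

function loadWeights() {
  const raw = store.get('rec_weights');
  // hypothetical defaults for a first-time visitor
  return raw ? JSON.parse(raw) : { recency: 0.5, popularity: 0.5 };
}
```

a few bytes of JSON like this survive page reloads without any server-side state, which is the whole appeal.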


imo it's the platform's choice to have default-visible or default-sandboxed program outputs and data.

while possible, it is fairly non-trivial for iOS apps to have read/write access to a shared folder where they can drop arbitrary files, which can then be accessed by other apps, or be discovered by the user. it often requires copious permission negotiation handling codepaths by the developer, and a fearlessness of scary permission-warning dialogs by the end-user.

even on modern (commercially popular flavors of) Android, which no longer embodies the "free software" ethos of the linux core the OS was built around, you can't access formerly accessible application sandbox folders without installing third-party browsing tools or plugging into a desktop computer to mount the storage, and cross-application sandbox access is similar to iOS.

in the "personal computing way" mentioned by the article (even today on desktop environments, less so on macOS), program outputs are default-visible, and developers have to go out of their way to firewall, obscure, or encrypt them from being accessible by the user or other programs via OS-provided pathways.

i think this is 100% on the OS + hardware + application platform provider (with Apple as all three on iOS).

