Hacker News | ashwinsundar's comments

This is like the #1 cause of spaghetti, unmaintainable, deadlocked codebases - a single developer who knows every “best-practice” and optimization technique, and will not hesitate to apply it in every situation regardless of practicality or need, as a way to demonstrate their knowledge. It's the sign of an insecure developer - please stop.

How do you know this works?

Why should any given software be open-sourced? What is there to be gained by the company?

I'm not sure whether you're being serious or sarcastic. In the former case, the answers range from easing outside review, and hence improving the quality of the software (e.g. a cybersecurity company would gain from that, given that all major cryptographic algorithms are published by academia in public papers and their security depends on computational hardness, not on secrecy of the algorithm), to satisfying your ethics (I'd rather live in a world where everyone can see with their own eyes whether you fulfill your promises about your code; Stallman can teach us a thing or two about this matter). Moreover, the open source community has produced some of the best software we, as humanity, have created so far: that alone is a good reason to believe in the power of opening your system.

Adoption, Community and more

Caveats apply


    But how many of you wouldn’t hook up your website to Google?
Me. https://ashwinsundar.com/robots.txt

Your computer doesn't have the right to scrape what I say or do anything with it.
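
For reference, opting out of Google's crawlers via robots.txt looks something like this (a minimal sketch; the user-agent tokens are Google's published ones, and the exact rules in my file may differ):

    User-agent: Googlebot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /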

    I know one of the primary reasons that I do anything online is to provide an outlet for someone else to see it. If I didn’t want someone else to see it, I’d write it down on my notebook, not on the public web.
Sounds like the same spiel from the anti-privacy advocates who think that we should all expose everything we're doing because "you should have nothing to hide".

https://archive.is/WjbcU

This article was written for Wired by Moxie Marlinspike in 2013; he later went on to develop the Signal protocol.

I don't want my thoughts or ideas spread across the web promiscuously. The things I say publicly are curated and full of context. That's why I have my own website, and don't post elsewhere.

I'm not playing the same game you are, which appears to be to post liberally and have loose thoughts to maximize "reach".


On this day 34 years ago, Tim Berners-Lee replies to a question regarding research on "Hypertext links enabling retrieval from multiple heterogeneous sources of information". He proposes a CERN project called the WorldWideWeb (WWW), and welcomes collaborators to the project.


Git and GitHub are not the same thing. git repos can live independently of GitHub.

What features do you feel like git is missing?


Reviewable merge requests, review comments, etc.


You can propose this to the Git mailing list. I don't think this kind of feature should be the responsibility of Git, but you can try.
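
For what it's worth, reviews of Git itself already happen over email, using tooling that git ships with. A rough sketch of that flow (the list address is a placeholder):

    # turn the last 3 commits into patch emails, plus a cover letter for discussion
    git format-patch -3 --cover-letter -o outgoing/

    # mail them to the project's review list (git send-email must be configured first)
    git send-email --to=project-devel@example.org outgoing/*.patch

Reviewers then reply inline to the patch emails, which is the mailing-list equivalent of review comments.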


the real vibe codes were the list comprehensions we made along the way


Are you implying that people use “training data” to learn things?


I am joking with the terminology, but I don't believe B.S.'s claim that they were able to do human language learning without 'labelling'.

It's a valid way to learn a second language in the same script (see Lingua Latina, for example), but how can you possibly learn a first language, or a new script, without being told the sounds the characters make? You can learn to listen, comprehend, and speak through immersion like that, but not to read and write.


I think he's implying that humans require available information from which to learn new things, and that borrowing a term from AI research is one valid (if backwards-sounding) way to describe that fact.


Location: Denver, CO

Remote: Yes

Willing to relocate: No

Technologies: python, django, docker, git, postgres, react, next.js, gatsby, typescript, jQuery, vanilla javascript, objectstore (nosql db), prisma ORM, graphql, bash/zsh, github actions, amazon web services (AWS), digitalocean, go, mqtt (mosquitto), R, matlab

Résumé/CV: https://github.com/AshwinSundar/resume/blob/main/resume.md

Email: ashiundar@gmail.com

Seeking: Part-time contracting work (25%-75%)

Target billing rate: $125-$150/hour

About: Full-stack developer with a master's degree in biomedical engineering. Polyglot technologist, with a focus on Python and Django development. Wide variety of professional experience to draw from. Am currently launching a startup (while working a full-time job as a Sr SWE) - I'd like more time to spend on the startup, while continuing to pay my bills. Thus the desire for part-time contracting work.


heads up, your resume link is a 404


Oh man, thanks for the heads up! Fixed


No problem. Also, hello Denver neighbor! Cheers, best of luck.


Those aren’t mutually exclusive goals. You can serve efficient pages AND enable pre-fetch/pre-render. Let’s strive for sub-50ms load times.


Yeah, but that's a "fake" sub-50ms load when you do the loading up front, before it's shown.


I guess you could call it fake or cheating, but ahead-of-time preparation of resources and state is used all the time: speculative execution [0], DNS prefetching [1], shader pre-compilation, and so on.

[0]: https://en.wikipedia.org/wiki/Speculative_execution

[1]: https://www.chromium.org/developers/design-documents/dns-pre...
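
In the browser, the same trick is exposed as resource hints; a small sketch (hostnames and paths are placeholders):

    <!-- resolve DNS for a third-party host before it's needed -->
    <link rel="dns-prefetch" href="https://cdn.example.com">

    <!-- go further: DNS + TCP + TLS handshake ahead of time -->
    <link rel="preconnect" href="https://cdn.example.com">

    <!-- pull a likely-next resource into the cache at low priority -->
    <link rel="prefetch" href="/next-page.html">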


Also, DNS isn't changing every second where your website needs it.

Yeah, but don't you only "count" it from when it's shown, though?

I'm not saying it's not "valid", but when you only count from when it's shown, aren't we missing part of the reason we need a "cache" in the first place?


Every layer underneath tries really hard to cheat and keep things usable/fast.

This includes libraries, kernels, CPUs, devices, drivers and controllers. The higher the level at which you cheat, the greater the benefits.


Idk, if you are starting from prerender/prefetch `where href_matches "/*"`, maybe you are wasting resources, like swinging at a piñata in a different room.

This approach just forces the pre-loader/renderer/fetcher to be cautious, prepare only a couple of items (in document order, unless you randomise or figure out a ranking metric), and end up with low hit ratios.

I think existing preloading/rendering on hover works really well on desktop, but I'm not aware of an equivalent for mobile. Maybe you can just preload visible links, as there are fewer of them? But tradeoffs on mobile go beyond latency, so it might not be worth it.
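
To make that concrete: with speculation rules you can scope the pattern and dial the eagerness down, instead of speculating on everything under "/*". A sketch (the /posts/* path is made up):

    <script type="speculationrules">
    {
      "prerender": [
        { "where": { "href_matches": "/posts/*" }, "eagerness": "moderate" }
      ],
      "prefetch": [
        { "where": { "href_matches": "/*" }, "eagerness": "conservative" }
      ]
    }
    </script>

"moderate" roughly corresponds to the on-hover behavior mentioned above; "conservative" waits for pointer/touch down, which is about the closest equivalent on mobile.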


Not mutually exclusive, but they compete for resources.

Prefetch/prerender use server resources, which costs money. Moderate eagerness isn’t bad, but also has a small window of effect (e.g. very slow pages will still be almost as slow, unless all your users languidly hover their mouse over each link for a while).

Creating efficient pages takes time from a competent developer, which costs money upfront, but saves server resources over time.

I don’t have anything against prefetch/render, but it’s a small thing compared to efficient pages (at which point you usually don’t need it).


> Creating efficient pages takes time from a competent developer, which costs money upfront, but saves server resources over time.

Not trying to be a contrarian just for the sake of it, but I don't think this has to be true. Choice of technology or framework also influences how easy it is to create an efficient page, and that's a free choice one can make*

* Unless you are being forced to make framework/language/tech decisions by someone else, in which case carry on with this claim. But please don't suggest it's a universal claim

