This is like the #1 cause of spaghetti, unmaintainable, deadlocked codebases - a single developer who knows every “best practice” and optimization technique and will not hesitate to apply it in every situation, regardless of practicality or need, as a way to demonstrate their knowledge. It's the sign of an insecure developer - please stop.
I'm not sure whether you're being serious or sarcastic. In the former case, the answer may span from easing review from the outside, hence improving the quality of the software (e.g. a cybersecurity company would gain from that, given that all major cryptographic algorithms are produced by academia in public papers and depend on computational power, not secrecy of algorithms), to satisfying your ethics (I'd rather live in a world where everyone can see with their own eyes whether you fulfill your promises about your code; Stallman can teach us a thing or two about this matter). Moreover, the open source community has produced the best software we, as humanity, have created so far: that alone is a good reason to believe in the power of opening your system.
Your computer doesn't have the right to scrape what I say or do anything with it.
I know one of the primary reasons that I do anything online is to provide an outlet for someone else to see it. If I didn’t want someone else to see it, I’d write it down on my notebook, not on the public web.
Sounds like the same spiel from the anti-privacy advocates who think that we should all expose everything we're doing because "you should have nothing to hide".
This article was written for Wired by Moxie Marlinspike in 2013, who went on to develop the Signal protocol.
I don't want my thoughts or ideas spread across the web promiscuously. The things I say publicly are curated and full of context. That's why I have my own website, and don't post elsewhere.
I'm not playing the same game you are, which appears to be to post liberally and have loose thoughts to maximize "reach".
On this day 34 years ago, Tim Berners-Lee replied to a question regarding research on "Hypertext links enabling retrieval from multiple heterogeneous sources of information". He proposed a CERN project called the WorldWideWeb (WWW) and welcomed collaborators to the project.
I am joking with the terminology, but I don't believe B.S.'s claim that they were able to do human language learning without 'labelling'.
It's a valid way to learn a second language in the same script (see Lingua Latina, for example), but how can you possibly learn a first language, or a new script, without being told the sounds the characters make? You can learn to listen, comprehend, and speak through immersion like that, but not to read and write.
I think he's implying that humans require available information from which to learn new things, and that borrowing a term from AI research is one valid (if backwards-sounding) way to describe that fact.
About: Full-stack developer with a master's degree in biomedical engineering. Polyglot technologist, with a focus on Python and Django development. Wide variety of professional experience to draw from. Am currently launching a startup (while working a full-time job as a Sr SWE) - I'd like more time to spend on the startup, while continuing to pay my bills. Thus the desire for part-time contracting work.
I guess you could call it fake or cheating, but ahead-of-time preparation of resources and state is used all the time: speculative execution [0], DNS prefetching [1], shader pre-compilation, and so on.
Idk, if you are starting from prerender/prefetch `where href_matches "/*"`, maybe you are wasting resources, like swinging at a piñata in a different room.
This approach will just force the pre-loader/renderer/fetcher to be cautious, preparing only a couple of items (in document order, unless you randomise or devise a ranking metric) and getting low hit ratios.
I think existing preloading/rendering on hover works really well on desktop, but I'm not aware of an equivalent for mobile. Maybe you can just preload visible links, as there are fewer of them? But the tradeoffs on mobile go beyond latency, so it might not be worth it.
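For what it's worth, on browsers that support the Speculation Rules API (Chromium-based ones, at the time of writing), the `eagerness` field gives you something close to this without custom JS: `"moderate"` roughly means "act on hover or touch-down intent" rather than preparing every matching link up front. A minimal sketch, assuming same-origin navigation (the `/*` pattern is just illustrative):

```html
<!-- Sketch, not a drop-in recommendation: prerender same-origin links
     matching "/*", but only on an intent signal (hover / pointerdown),
     instead of eagerly for every link on the page. -->
<script type="speculationrules">
{
  "prerender": [{
    "where": { "href_matches": "/*" },
    "eagerness": "moderate"
  }]
}
</script>
```

On mobile, the intent signal is the touch starting, which buys less lead time than a desktop hover, so the latency win there is smaller.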
Not mutually exclusive, but they compete for resources.
Prefetch/prerender use server resources, which costs money. Moderate eagerness isn’t bad, but also has a small window of effect (e.g. very slow pages will still be almost as slow, unless all your users languidly hover their mouse over each link for a while).
Creating efficient pages takes time from a competent developer, which costs money upfront, but saves server resources over time.
I don’t have anything against prefetch/render, but it’s a small thing compared to efficient pages (at which point you usually don’t need it).
> Creating efficient pages takes time from a competent developer, which costs money upfront, but saves server resources over time.
Not trying to be a contrarian just for the sake of it, but I don't think this has to be true. Choice of technology or framework also influences how easy it is to create an efficient page, and that's a free choice one can make*
* Unless you are being forced to make framework/language/tech decisions by someone else, in which case carry on with this claim. But please don't suggest it's a universal claim