I aspire to have my site on this list one day. But according to their rules, it would need to get 100 upvotes on Hacker News, and that's not realistic.
5. ... some content from the website must have received at least 100 points on Reddit or Hacker News on at least one occasion ...
I guess they did not mean the linked site itself, but most of the linked sites. They seem to be personal sites of individual people (developers, bloggers), and typically contain a few paragraphs of unstyled text (black text on a white background) and a few links. That might be perfectly fine for such pages, but you can't build, for example, your company website this way. Even putting your company logo in decent resolution would push you over the limit :)
> Even putting your company logo in decent resolution would push you over the limit :)
Not wishing to disagree with the rest of your comment, but I expect many company logos can fit within the limit and look good when represented in compressed SVG (.svgz). Those can also be inlined on a home page to reduce initial view latency.
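For a rough sense of scale, here is a sketch using only the Python standard library (the logo markup below is a made-up placeholder, not any real company's mark), showing how a simple flat-color SVG compresses to what a .svgz file would contain:

```python
import gzip

# Hypothetical minimal logo markup (placeholder, for illustration only).
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 64 64">'
    '<circle cx="32" cy="32" r="30" fill="#0a66c2"/>'
    '<path d="M20 32h24M32 20v24" stroke="#fff" stroke-width="6"/>'
    '</svg>'
)

svgz = gzip.compress(svg.encode())      # the bytes a .svgz file would hold
print(len(svg), "bytes raw,", len(svgz), "bytes gzipped")
```

Real logos are larger, but flat-color vector marks routinely land in the hundreds of bytes to low kilobytes after gzip, comfortably inside the kind of page-weight budget these clubs enforce.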
Titles would have improved the UX at no cost.
A mailto link might have as well; personally I don't like them, but a lot of people find them useful.
Also styling for the resume would have improved the reading ease.
It's not just a matter of aesthetics, it's a matter of visual parsing.
We're using a novel Hamming code that doesn't require any interleaving, so the original data can be left unmodified. The new Hamming code is simple and easy to prove correct.
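The novel code itself isn't described here, so as a baseline for comparison, this is the classic textbook Hamming(7,4) in Python (the standard construction, not the novel variant mentioned above): 4 data bits, 3 parity bits, single-bit error correction via the syndrome.

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    p1 = d[0] ^ d[1] ^ d[3]   # covers codeword positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3    # 1-based error position; 0 means no error
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[5] ^= 1                      # flip one bit in transit
assert hamming74_decode(code) == data
```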
Take 20 milligrams of propranolol 1.5 hours (90 minutes) before your first interview.
It works for presentations and speeches, too.
Propranolol is a beta blocker, and you can easily get a prescription for "performance anxiety" if you ask your doctor. It has a long history of chronic use at higher doses for blood pressure control, so it's very safe at the 20 mg dosage.
It has negligible direct effect on the brain. It is NOT a tranquilizer.
But your body will be dead calm -- no adrenaline at all. And indirectly this will calm your mind.
It's interesting to consider how you might prevent training using a license without being too restrictive.
Here is an example of a license that attempts to directly prohibit training. The problem is that, as written, such software arguably can't be used in any part of a system that might be used for training or inference (in the OS, for example). Somehow you need to additionally specify that the software is used directly... but how, and what does that mean? This is left as an exercise for the reader, and I hope someone can write something better:
The No-AI 3-Clause License
This is the BSD 2-Clause License, unmodified except for the addition of a third clause. The intention of the third clause is to prohibit, e.g., use in the training of language models. The intention of the third clause is also to prohibit, e.g., use during language model inference. Such language models are used commercially to aggregate and interpolate intellectual property. This is performed with no acknowledgement of authorship or lineage, no attribution or citation. In effect, the intellectual property used to train such models becomes anonymous common property. The social rewards (e.g., credit, respect) that often motivate open source work are undermined.
How much hubris we have as a species to think that our professions will endure until the end of the stars. To think that the software we write will be eternal.
The thing that we do now is no different than spinning cotton.
I'd be shocked if the total duration of human-authored programming lasted more than a hundred years.
I'll also wager that in thirty years, "we'll" write more software in any given year than all of history up until that point.
I'm all on board if the Microsofts of the world are. But they choose to train their AI on OSS code and not their own codebase. So clearly they think similarly to the parent; they just want you to forget about that part when it suits them.
The spirit of this is good, but the implementation is garbage: you need a lawyer or a team of lawyers to do this right. You grandstand and soapbox in this weakly written paragraph, and it hurts the whole thing. You discuss social rewards, intentions, etc. It just reads like a Stallman-esque tirade.
Hmm, I am going to overlook the threats and strange response - I think what you have here, in this license you are trying to push, is a good thing. I was giving it feedback. Hire a lawyer, strip away the opinions, and you're cooking. I wish you luck with it.
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Presumably, as long as GitHub Copilot either:
a) fails to respect these terms itself, or
b) fails to present them to the user who is going to use its output verbatim or produce derivative code from it, so that the user can respect them,
then GitHub Copilot is either in violation of the license or a tool assisting in such a violation by stripping the license away†.
From TFA:
> David Heinemeier Hansson, creator of Ruby on Rails, argues that the backlash against Copilot runs contrary to the whole spirit of open source. Copilot is “exactly the kind of collaborative, innovative breakthrough that I’m thrilled to see any open source code that I put into the world used to enable,” he writes. “Isn’t this partly why we share our code to begin with? To enable others to remix, reuse, and regenerate with?”
I don't mean to disrespect DHH, but the "spirit of open source" isn't to wildly share code around as if it were public domain, because it is not. An author gets to choose the framework within which their code may be used and modified††; otherwise one would have used a public-domain dedication as a non-license (plus the WTFPL for those jurisdictions where one can't relinquish one's own creation into the public domain).
† depending on whether "AI"/Microsoft can be held liable for the automated derivative, or whether the end user is.
Finally, Hamburg's "attack" will not be feasible in the foreseeable future, even with quantum computers. Even if the "attack" were extended to the full 24 rounds, it would not contradict any security claims made in the Gimli paper.
Daniel J. Bernstein (djb) is a wizard, but use Gimli at your own risk.
Judging from the statement (I haven’t the cryptographic kung fu to distil the paper myself), the attack seems to be more of an exploration of how vulnerable the (relatively new) ways the Gimli suite builds everything out of a single crypto core can be when used badly, not something applicable to the usage that’s actually specified. Or is that still concerning? (Plus it does seem worse than brute force from the numbers given, though I can’t judge whether that makes it uninteresting in general, either.)
Yes, you clearly don't understand it. That's ok, but you probably shouldn't confuse the issue.
Basically the attack shows that Gimli's speed and simplicity introduce exploitable flaws and reduce its security:
Bernstein et al. have proposed a new permutation, Gimli, which aims to provide simple and performant implementations on a wide variety of platforms. One of the tricks used to make Gimli performant is that it processes data mostly in 96-bit columns, only occasionally swapping 32-bit words between them. Here we show that this trick is dangerous by presenting a distinguisher for reduced-round Gimli.
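To make the quoted structure concrete, here is a Python sketch of the Gimli permutation, transcribed from the published reference code (for illustration only, not a vetted implementation): the non-linear SP-box acts on each 96-bit column independently, and 32-bit words only cross columns in the occasional small/big swaps.

```python
MASK = 0xFFFFFFFF

def rotl(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK

def gimli(state):
    """Gimli permutation. state: 12 32-bit words, viewed as 3 rows x 4 columns."""
    s = list(state)
    for r in range(24, 0, -1):
        # SP-box, applied to each 96-bit column independently
        for j in range(4):
            x = rotl(s[j], 24)
            y = rotl(s[4 + j], 9)
            z = s[8 + j]
            s[8 + j] = (x ^ (z << 1) ^ ((y & z) << 2)) & MASK
            s[4 + j] = (y ^ x ^ ((x | z) << 1)) & MASK
            s[j]     = (z ^ y ^ ((x & y) << 3)) & MASK
        # words cross columns only here, every other round
        if r % 4 == 0:            # small swap + round constant
            s[0], s[1] = s[1], s[0]
            s[2], s[3] = s[3], s[2]
            s[0] ^= 0x9E377900 | r
        elif r % 4 == 2:          # big swap
            s[0], s[2] = s[2], s[0]
            s[1], s[3] = s[3], s[1]
    return s

out = gimli([0] * 12)             # demo: permute the all-zero state
```

The attack in question targets exactly this sparse cross-column mixing, for reduced-round variants.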
Yet the attack, as very clearly stated at your link, requires far more computing resources and time than a standard brute-force attack.
So I can only agree with DJB that the attack, in its present form, is completely useless.
At most, it can be argued that maybe someone will find a way to use the ideas from your link to conceive a new attack that is much more efficient.
I do not find this more convincing than the threat that someone will find an efficient attack based on completely different ideas.
Any more recent cryptographic algorithm is riskier than the older algorithms, because it is less understood.
However, Gimli is intended for slow microcontrollers, where the encrypted data cannot be very valuable; otherwise one would use a slightly more expensive CPU like a Cortex-A55 (a few dollars instead of less than a dollar for an MCU/MPU package) with standard cryptographic libraries.
So the damage done by an attacker decrypting the MCU's communication cannot be great; therefore it is an acceptable risk to use a less-trusted algorithm, if that greatly reduces the hardware cost.
So the attack must be useless, but even if it weren't, it wouldn't matter, because the software doesn't have to be secure to begin with. That's not a great mindset through which to view cryptography.
You are putting words in my mouth that I have not said, so I will state it more clearly:
1. That attack is useless.
2. Nevertheless, Gimli is relatively new, and it is designed for minimum cost rather than maximum security, so there is a risk that someone could discover a real attack, a risk that is greater than for older algorithms like AES or ChaCha.
3. There exists no practical 100% secure form of cryptography. Any choice of cryptographic algorithms is a compromise between the computational cost of the operations done to protect data (e.g., encryption/decryption/signing/verification) and the computational cost for an attacker who tries to decrypt or forge the protected data.
4. The compromise must be chosen for each application depending on the implications of a successful attack. Some data is so important that it has to remain secret even 10 or 20 years in the future; other data is ephemeral, and it does not matter if an attacker succeeds in decrypting it a week later.
The correct mindset in cryptography, like in any other domain, is to choose the right tool for the job.
If you want to use a $0.50 microcontroller, then you must use simpler cryptographic algorithms that can achieve acceptable performance on such low-cost hardware. If you want algorithms that are harder to break, then you must accept paying $5.00 for a more powerful device (at the latter price any decent device has hardware implementations of standard algorithms like AES and SHA-256, so you would have no reason to use anything less secure).
You can perform very strong cryptography on a 70 cent microcontroller. While that is more than 50 cents, the considerations are quite simply not purely cost (and certainly not to the extent that you quote). Given your explanation, I do not see how my original interpretation is any different from what you actually meant - given that, I must question your reasoning about absolute cost-effectiveness and how much you need to sacrifice for this in practice.
I do not disagree with the premise that sometimes you must make tradeoffs based on cost, but I do disagree with the premise that a platform you yourself say is not so good should be used because doing otherwise would (potentially) cost a lot of money. If we get to chips that are a few cents apiece and somehow still need encryption, perhaps you are correct; then we can be at peace with encryption that only defeats casual snooping. In all other cases, this seems like a poor tradeoff.
I should not be allowed into the same room as crypto development, and have certainly never tried to "attack" a crypto algorithm.
Still, reading that: against 22.5 (of 24) rounds of the Gimli computation, the attack is claimed to need 2^129 bits of memory. That is 77,371,252,455,336,267,181,195,264 TB if my math is right (reading TB as 2^40 bytes), which does seem to gently push that "attack" into a rather theoretical plane?
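The arithmetic does check out if "TB" is read as binary terabytes (2^40 bytes); a quick sanity check:

```python
bits = 2 ** 129
tib = bits // 8 // 2 ** 40    # 8 bits per byte, 2**40 bytes per binary TB
print(tib)                    # 77371252455336267181195264, i.e. 2**86
```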
Not sure what I'm missing, from your tone I would expect a smoking hole and this doesn't seem to be that.
Off-label use is fine. I'd be more worried about the fact that a patient is bringing it up out of the blue (i.e., "give me drugs plz"), and the evidence for life extension is shaky.
>There is some evidence metformin may be helpful in extending lifespan, even in otherwise healthy people. It has received substantial interest as an agent that delays aging, possibly through similar mechanisms as its treatment of diabetes (insulin and carbohydrate regulation). This is controversial, and an area of active research.[61][62]
Just ask your doctor for it. Say: "Please prescribe me Metformin, 2 grams daily."
If you're overweight, tell your doctor you want help with your appetite. Otherwise, say you want metformin for its cancer prevention and anti-aging effects.
Metformin is safe and it almost universally improves health and well-being.
Your doctor knows this, and she'll gladly prescribe it.
> Your doctor knows this, and she'll gladly prescribe it.
If your doctor gladly prescribes anything you walk in and ask for by name and dosage, and their office doesn't have a green cross symbol outside, then I'd highly suggest seeking out a new doctor.
I once heard about a drug that basically works by making food nauseating. Is this the one?
> Metformin is generally safe, but common side effects do include vomiting, nausea, and diarrhea. Those experiencing these unpleasant side effects might consume less food, resulting in modest weight reductions.
Admittedly, I’m on a fairly low dose (500 mg x 2), but I can still kill a bag of Haribo when the mood hits. The mood doesn’t hit quite as often, but I’ve noticed no changes in flavor.
I've never heard of this effect (let alone experienced it).
I do know that diabetes itself can reduce the efficiency of taste receptors, and metformin will sometimes cause people to have a metallic taste in their mouth (due to metformin in produced saliva).
It's easy to find studies connecting metformin to weight loss and loss of appetite. I'll leave that to you; just use Google and you'll be swamped with evidence.
You say you've never heard of metformin affecting food palatability and taste? Did you search the web at all? Just use Google if you want anecdotal examples. For example:
> I joined because, after 18 months on metformin, my appetite and enjoyment of food are constantly declining. I’ve lost about 10 kg over that time, which is not a problem. But I’ve lost my appetite, and, as a keen cook and someone who has always enjoyed good food, I no longer look forward to meals and get little pleasure from food.
> Metformin slowly changed the taste of food during the time I took it. It finally got to the point where all vegetables tasted like burnt plastic, which meant that I didn't want to eat them.
And so on. Now you've heard of it! All it took was a web search.
> Because the vast majority of research regarding metformin included only people with diabetes or prediabetes, it’s unclear whether these potential benefits are limited to people with those conditions, or whether people without diabetes may derive benefit as well.
A more appropriate drug for this is probably Ozempic/Wegovy (semaglutide), or the even more effective tirzepatide, available in the US. I've personally been on the former, and after 4.5 months I've lost ~20% of my body weight.
In contrast, GLP-1 analogues like semaglutide present real risks and are less well understood. For some people with serious health problems, GLP-1 analogues make sense. But generally you don't want to be taking them if you're healthy.
I think those kinds of blanket statements about metformin are a little misleading and a bit dangerous. For instance, I myself have been on metformin for almost 2 years now, and I can assure you I have no repulsion to sugar.
Yes, I see three responses along these lines, already. Some people don't lose weight. Some people's appetite isn't affected. Some people retain their taste for sweets. And so on.
It's like a very complicated form of linear interpolation:
a*x + (1-a)*y
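Spelled out as code, the formula above is plain linear interpolation (whether deep networks deserve that label is exactly what's disputed downthread):

```python
def lerp(x, y, a):
    """Weighted blend of two values: a = 1 returns x, a = 0 returns y."""
    return a * x + (1 - a) * y

lerp(0.0, 10.0, 0.25)   # 0.25 * 0.0 + 0.75 * 10.0 = 7.5
```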
These systems do not "think". Today I spent all day mulling an idea, experimenting with variations, feeling frustrated or excited, imagining it, simulating it, making mistakes, following paths of reasoning, deducing facts, revisiting dead-ends with new insight, daydreaming, talking to my wife about it, etc. That's human thought.
These models do not "think" like a human, they do not dream or imagine or feel. They run a feed-forward system of linear equations (matrix multiplications).
They INTERPOLATE HUMAN WORK.
They don't exist without training data (huge amounts of intellectual property) aggregated and interpolated in a monstrous perversion of "fair use":
> Starve the machine, it doesn't exist without having your work to interpolate.
But again… aren’t people the same way? No one exists in isolation. The Sir Isaac Newton quote comes to mind:
“If I have seen further, it is by standing on the shoulders of giants”
Edit: to be clear, these algorithms are specifically non-linear and are a far cry from ‘linear interpolation’. Yes, they do involve matrix multiplication; that does not make them interpolators, unless you want to water down the meaning of interpolation to be so generic it loses its meaning. Having said all that, the sophistication of the algorithm is beside the point here, as long as what they are generating is substantially transformative (which >99% of the possible outputs are, legally speaking).
Certainly; I don’t mean to imply they are (legal distinctions aside). A person can create an algorithm (or use one) and create new things, even works of art.
Yes… we all do that every day. Humans don’t exist in isolation; we build on and learn from others’ accomplishments, from the wheel to the printing press to the computer. Modern impressionists don’t owe royalties to Monet, but they certainly draw from and learn from his contributions to the art world. Brand-new material from art algorithms (frankly, regardless of their sophistication) certainly deserves and falls under this same legal treatment.
Rather than attacking me (calling me foolish, swearing at me), why don’t you rebut my ideas and have a conversation, if you actually have something to contribute.
I’ve read the papers, and I’ve worked personally with these systems. I understand them just fine. Notice that I said earlier: “regardless of how simple they are”. I understand you are trying to water them down to simple interpolation, which they definitely are not, but even if they were that simple it wouldn’t change the legal calculus here one bit. New art is being generated (far beyond any ‘transformative’ legal test precedent), and any new art that is substantively different from its inputs is legally protectable.
So is any scaled-up process unethical, or is there another comparison you are trying to make here?
Notably, in your example both are certainly legal; they vary only in the level of controversy around them, and on that level I would agree. Scaled-up processes do tend to attract more controversy.
Humans also interpolate human work. True originality is an illusion and all creative works are based on, inspired by, or contributed to by something else. Are you implying that human thought is required to create human-level art? Because if anything, I think AI-generated art is in the process of disproving this exact hypothesis. It is unnerving to realize that something we felt up til now was fundamentally exclusive to the human experience isn't actually exclusive, but it's becoming more and more apparent.
Humans, on the whole, aren't capable of hoovering up basically every piece of artistic content accessible on the web, storing all of it, and then creating near-faithful reproductions at a moment's notice.
It's a problem of scale.
> Because if anything, I think AI-generated art is in the process of disproving this exact hypothesis
But it's not creating anything; it's regurgitating its training material (through a suitably fine blender) in the way that scores best. These models are nothing without the actual art they've appropriated.
Memes (in the sense Dawkins used) have found easier replication in this new medium. Rather than jumping from brain to brain, with the intermediate step of writing, our old memes now replicate by language model. They do meaningful work when deployed without a human in the loop.
I think both humans and AI without training are stupid. Take a human raised alone, without culture: he or she will be closer to animals than to humans. It's the culture that is the locus of intelligence, and we're borrowing intelligence from it, just like the AIs.
> It's a sophisticated interpolation, not so different from linear interpolation: a*x + (1-a)*y
These algorithms are specifically non-linear, a far cry from ‘linear interpolation’, unless you want to water down the meaning of interpolation to be so generic it loses its meaning.
Having said all that, the sophistication of the algorithm is beside the point here, as long as what they are generating is substantially transformative (which >99% of the possible outputs are, legally speaking).
If it is interpolation, what kind of interpolation is it? Linear? Bilinear? Nearest-neighbor? Lanczos? No… because it isn’t, and it doesn’t resemble anything close to interpolation.
They even gave a linear equation in their example… again, not even close. If we can call what these algorithms do interpolation, then we can call what humans do interpolation too; it makes the word meaningless.
https://youtu.be/o22BAuQj3ds?t=37m40s