Hacker News — bugfix-66's comments

Explained by Don Knuth here, if you can't solve it yourself:

https://youtu.be/o22BAuQj3ds?t=37m40s


I aspire to have my site on this list one day. But according to their rules, it would need to get 100 upvotes on Hacker News, and that's not realistic.

  5. ... some content from the website must have received at least 100 points on Reddit or Hacker News on at least one occasion ...


I found the instructions for measuring a page's total download size useful.

Agreed: my home page clocks in at around 2 KB or less, but it's probably not of interest to anyone outside my very narrow silo.


The user experience is excellent.

Can you point to a specific problem?


I guess they did not mean the linked site itself, but most of the sites it links to. They seem to be the personal sites of individuals (developers, bloggers), and typically contain a few paragraphs of unstyled text (black text on a white background) and a few links. That might be perfectly fine for such pages, but you can't build, for example, your company website this way. Even putting your company logo up in decent resolution would push you over the limit :)


> Even putting your company logo in decent resolution would push you over the limit :)

Not wishing to disagree with the rest of your comment, but I expect many company logos can fit within the limit and look good when represented in compressed SVG (.svgz). Those can also be inlined on a home page to reduce initial view latency.
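As a rough illustration of that claim: a .svgz file is just a gzip-compressed .svg, so the savings can be estimated with a few lines of Python. The logo markup below is invented for illustration; real logos, which often repeat shapes, compress similarly well:

```python
import gzip

# Invented "logo": repetitive SVG markup, as real logos often are.
shapes = "".join(
    f'<circle cx="{10 + 20 * i}" cy="20" r="8" fill="#0366d6"/>'
    for i in range(8)
)
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 170 40">'
    + shapes
    + "</svg>"
)

raw = svg.encode("utf-8")
svgz = gzip.compress(raw, compresslevel=9)  # the entire .svgz "format"

print(f"plain .svg : {len(raw)} bytes")
print(f".svgz      : {len(svgz)} bytes")
```

Note that a standalone .svgz generally needs to be served with Content-Encoding: gzip for browsers to render it; inlining the plain SVG and letting the server compress the whole page achieves much the same saving.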



So, specifically what's wrong? It looks no-nonsense and to-the-point.

Were you unable to find the information you wanted?

Was the site confusing?

Or was it just aesthetically unpleasant?


68 bytes (0.07 KB :)

    <meta name="viewport" content="width=device-width, initial-scale=1">
Before and after: https://files.catbox.moe/yfylg0.jpg

In the pic the text is slightly cut off; looks like I zoomed a few pixels by accident. Still, a touch of padding would improve it, IMO :)


Titles would have improved the UX at no cost. A mailto link might have as well; personally I don't like them, but a lot of people find them useful. Styling the resume would also have made it easier to read.

It's not just a matter of aesthetics, it's a matter of visual parsing.


This is an attempt to improve on djb's fantastic libsecded:

https://pqsrc.cr.yp.to/libsecded-20220828/README.html

We're using a novel Hamming code that doesn't require any interleaving, so the original data can be left unmodified. The new Hamming code is simple and easy to prove correct.

Here is the checksum generator:

https://bugfix-66.com/6956d0447254a7adce9531669bb2d24e3d2e98...

Here is the corresponding SECDED scrubber, which uses the checksum to fix single bit flips and detect double bit flips:

https://bugfix-66.com/f0a66d0ba87bd566172e5a880cdebd57c50bb8...

Future work is to vectorize the index XOR loop (a tiny piece of code).
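The linked code isn't reproduced here, but the index-XOR idea behind such a scheme can be sketched in Python. This is my own toy reconstruction of the general technique, not the actual bugfix-66 or libsecded code:

```python
def checksum(bits):
    """XOR together the indices of all set bits, plus an overall parity bit.

    A single flip at index i changes the syndrome by i and toggles parity;
    a double flip leaves parity alone but changes the syndrome (i XOR j != 0).
    """
    syndrome, parity = 0, 0
    for i, b in enumerate(bits):
        if b:
            syndrome ^= i
            parity ^= 1
    return syndrome, parity


def scrub(bits, stored):
    """Correct a single bit flip in place, or detect a double flip."""
    s0, p0 = stored
    syn, par = checksum(bits)
    if (syn, par) == (s0, p0):
        return "clean"
    if par != p0:               # parity toggled: assume a single flip
        bits[syn ^ s0] ^= 1     # its index is the syndrome difference
        return "corrected"
    return "double flip detected"


data = [1, 0, 1, 1, 0, 0, 1, 0]
saved = checksum(data)
data[5] ^= 1                    # inject a single-bit upset
print(scrub(data, saved))       # corrects bit 5, prints "corrected"
```

Note the data bits themselves are never rearranged: there is no interleaving, matching the property claimed above. The scheme even handles a flip at index 0, since the syndrome difference is then 0, which is exactly that bit's index.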


HERE IS THE ANSWER, ACCEPT NO SUBSTITUTES:

Take 20 milligrams of propranolol, 1.5 hours (90 minutes) before your first interview.

It works for presentations and speeches, too.

Propranolol is a beta blocker and you can easily get a prescription for "performance anxiety" if you ask your doctor. It has a long history of chronic use at higher doses for blood pressure control, so it's very safe at the 20mg dosage. Just ask your doctor and you'll get the prescription.

It has negligible direct effect on the brain. It is NOT a tranquilizer.

But your body will be dead calm -- no adrenaline at all. And indirectly this will calm your mind.


It's interesting to consider how you might prevent training using a license without being too restrictive.

Here is an example of a license that attempts to directly prohibit training. The problem is that, as written, such software arguably can't be used in any part of a system that might be used for training or inference (in the OS, for example). Somehow you need to additionally specify that the software is used directly... But how? What does that mean? This is left as an exercise for the reader, and I hope someone can write something better:

  The No-AI 3-Clause License
This is the BSD 2-Clause License, unmodified except for the addition of a third clause. The intention of the third clause is to prohibit, e.g., use in the training of language models. The intention of the third clause is also to prohibit, e.g., use during language model inference. Such language models are used commercially to aggregate and interpolate intellectual property. This is performed with no acknowledgement of authorship or lineage, no attribution or citation. In effect, the intellectual property used to train such models becomes anonymous common property. The social rewards (e.g., credit, respect) that often motivate open source work are undermined.

  License Text:
https://bugfix-66.com/7a82559a13b39c7fa404320c14f47ce0c304fa...


This is such a Luddite behavior.

How much hubris we have as a species to think that our professions will endure until the end of the stars. To think that the software we write will be eternal.

The thing that we do now is no different than spinning cotton.

I'd be shocked if the total duration of human-authored programming lasted more than a hundred years.

I'll also wager that in thirty years, "we'll" write more software in any given year than all of history up until that point.


I'm all on board if the Microsofts of the world are. But they choose to train their AI on OSS code and not on their own codebases. So clearly they think similarly to the parent; they just want you to forget about that part when it suits them.


If we pass laws restricting the training on copyrighted information, the only organizations that will be able to train will be institutional.

Microsoft would benefit from restriction. Not us.


Would you pay for a product trained on, say, the MS Teams, SharePoint, or Skype codebases?

No, and no one else would either.


The spirit of this is good, but the implementation is garbage: you need a lawyer, or a team of lawyers, to do this right. You grandstand and soapbox in this weakly written paragraph, and it hurts the whole thing. You discuss social rewards, intentions, etc. It just reads like a Stallman-esque tirade.


[flagged]


Wow. That's aggressive.

You previously said:

"I work at the most important company in the "AI" industry, a company you hear about every day.

I write GPU kernels for transformers and convolutions. You probably use my BLAS kernels in your networks."

It's pretty easy for people to figure out it's NVIDIA. You probably work on the cuBLAS library.

All the personal attacks I've seen come from you, with your snarky comments.

All the people that responded to you did so in good faith, trying to engage in honest conversation.


Hmm, I am going to overlook the threats and strange response - I think what you have here, in this license you are trying to push, is a good thing. I was giving it feedback. Hire a lawyer, strip away the opinions, and you're cooking. I wish you luck with it.


What about fair use? (both in the copying made for training itself and the resulting output from the service)


We are witnessing a monstrous perversion of "fair use" and the greatest theft of intellectual property in human history.


Do you measure IP's value using the amount of work/effort that was put into creating it, or only the end result?

Currently US copyright law only cares about the end result. Effort has no meaning or bearing in any legal analysis of copyright matters.


Copyright infringement trials are tried in the infringer's jurisdiction.


This is the BSD 2-Clause License:

    1. Redistributions of source code must retain the above copyright
       notice, this list of conditions and the following disclaimer.

    2. Redistributions in binary form must reproduce the above copyright
       notice, this list of conditions and the following disclaimer in
       the documentation and/or other materials provided with the
       distribution.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Presumably, as long as GitHub Copilot:

a) fails to respect these itself, or

b) fails to present them to the user who is going to use its output verbatim or produce derivative code from it, so that the user can respect them,

then GitHub Copilot is either in violation of the license or a tool assisting in such a violation by stripping the license away†.

From TFA:

> David Heinemeier Hansson, creator of Ruby on Rails, argues that the backlash against Copilot runs contrary to the whole spirit of open source. Copilot is “exactly the kind of collaborative, innovative breakthrough that I’m thrilled to see any open source code that I put into the world used to enable,” he writes. “Isn’t this partly why we share our code to begin with? To enable others to remix, reuse, and regenerate with?”

I don't mean to disrespect DHH, but the "spirit of open source" isn't to wildly share code around as if it were public domain, because it is not. An author gets to choose the framework within which their code may be used and modified††; otherwise one would have used the public domain as a non-license (plus the WTFPL for those jurisdictions where one can't relinquish one's own creation into the public domain).

† depending on whether the "AI"/Microsoft can be held liable for the automated derivative, or the end user is.

†† cue GPL vs MIT/BSD


Published by Tesla?

It's unwise to use cryptographic code written by engineers working 72-hour weeks at an "extremely hardcore" company.

Here is the Gimli spec:

https://gimli.cr.yp.to/spec.html

Here is the attack illustrating weaknesses in the design:

https://eprint.iacr.org/2017/743

Here is a statement from the Gimli team arguing that it is still secure despite the published 22-round attack:

https://gimli.cr.yp.to/statement.html

Finally, Hamburg's "attack" will not be feasible in the foreseeable future, even with quantum computers. Even if the "attack" were extended to the full 24 rounds, it would not contradict any security claims made in the Gimli paper.

Daniel J. Bernstein (djb) is a wizard, but use Gimli at your own risk.


Judging from the statement (I haven’t the cryptographic kung fu to distil the paper myself), the attack seems to be more of an exploration of how vulnerable the (relatively new) ways the Gimli suite builds everything out of a single crypto core can be when used badly, not something applicable to the usage that’s actually specified. Or is that still concerning? (Plus it does seem worse than brute force from the numbers given, though I can’t judge whether that makes it uninteresting in general, either.)


Yes, you clearly don't understand it. That's ok, but you probably shouldn't confuse the issue.

Basically, the attack shows that Gimli's speed and simplicity introduce exploitable flaws and reduce its security:

Bernstein et al. have proposed a new permutation, Gimli, which aims to provide simple and performant implementations on a wide variety of platforms. One of the tricks used to make Gimli performant is that it processes data mostly in 96-bit columns, only occasionally swapping 32-bit words between them. Here we show that this trick is dangerous by presenting a distinguisher for reduced-round Gimli.

https://eprint.iacr.org/2017/743


Yet the attack, as very clearly stated at your link, does require far more computing resources and time than a standard brute-force attack.

So I can only agree with DJB that the attack, in its present form, is completely useless.

At most, it can be argued that maybe someone will find a way to use the ideas from your link to conceive a new attack that is much more efficient.

I do not find this more convincing than the threat that someone will find an efficient attack based on completely different ideas.

Any more recent cryptographic algorithm is riskier than the older algorithms, because it is less understood.

However, Gimli is intended for slow microcontrollers, where the encrypted data cannot be very valuable; otherwise one would use a slightly more expensive CPU like a Cortex-A55 (a few dollars instead of less than a dollar for an MCU/MPU package), with standard cryptographic libraries.

So the damage done by an attacker decrypting the MCU's communication cannot be great, and therefore it is an acceptable risk to use a less trusted algorithm if that greatly reduces the hardware cost.


So the attack must be useless, but if it was not it wouldn't matter because the software doesn't have to be secure to begin with. That's not a great mindset through which to view cryptography.


You have put words in my mouth that I did not say, so I will state it more clearly:

1. That attack is useless.

2. Nevertheless, Gimli is relatively new and it is also designed for minimum cost, not for maximum security, so there is a risk that someone else could discover a real attack, a risk that is greater than for older algorithms like AES or Chacha.

3. There exists no practical 100% secure form of cryptography. Any choice of cryptographic algorithms is a compromise between the computational cost for the operations done for protecting data, e.g. encryption/decryption/signing/verifying and the computational cost for an attacker that tries to decrypt or forge the protected data.

4. The compromise must be chosen for each application depending on the implications of a successful attack. Some data is so important that it has to remain secret even 10 or 20 years in the future, other data is ephemeral and it does not matter if an attacker would succeed to decrypt it a week later.

The correct mindset in cryptography, like in any other domain, is to choose the right tool for the job.

If you want to use a $0.50 microcontroller, then you must use simpler cryptographic algorithms that can have an acceptable performance on such low-cost hardware. If you want to use algorithms that are harder to break, then you must accept to pay $5.00 for a more powerful device (at the latter price any decent device would have hardware implementations for standard algorithms like AES and SHA-256, so you would not have reasons to use anything less secure).


You can perform very strong cryptography on a 70 cent microcontroller. While that is more than 50 cents, the considerations are quite simply not purely cost (and certainly not to the extent that you quote). Given your explanation, I do not see how my original interpretation is any different from what you actually meant - given that, I must question your reasoning about absolute cost-effectiveness and how much you need to sacrifice for this in practice.

I do not disagree with the premise that sometimes you must make tradeoffs based on cost, but I do disagree with the premise that a platform that you yourself say is not so good should be used because to do otherwise is to (potentially) lose a lot of money. If we get to chips that are a few cents a piece and somehow still need encryption, perhaps you are correct. Then we can be at peace with encryption that only defeats casual snooping. In all other cases, this seems like a poor tradeoff.


> You can perform very strong cryptography on a 70 cent microcontroller

Yes, thanks to Gimli.

I dunno why you changed the price from 50 cents to 70, but anyway: which MCU and algorithms are you thinking of?


I should not be allowed into the same room as crypto development, and have certainly never tried to "attack" a crypto algorithm.

Still, reading that, against 22.5 (of 24) rounds of the Gimli computation, the attack is claimed to need 2^129 bits of memory. That is 77,371,252,455,336,267,181,195,264 TB if my math is right, which does seem to gently push that "attack" into a rather theoretical plane?

Not sure what I'm missing, from your tone I would expect a smoking hole and this doesn't seem to be that.
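For what it's worth, the arithmetic above holds up: 2^129 bits works out to exactly 2^86 binary terabytes, which is the number quoted.

```python
bits = 2 ** 129
tib = bits // 8 // 2 ** 40      # bits -> bytes -> binary terabytes (TiB)
print(f"{tib:,} TiB")           # 77,371,252,455,336,267,181,195,264 TiB
assert tib == 2 ** 86
```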


We have an old, safe, cheap drug for humans already: Metformin.

Most people can get an off-label metformin prescription for anti-aging or appetite suppression.

Your interest in food will be reduced to the point you have trouble keeping weight on.

Sugar will be repulsive to you. Sweet food tastes "sickly sweet", nauseating.

Metformin also interferes with the absorption of carbohydrates in your gut.


> Most people can get an off-label metformin prescription for anti-aging.

Dumb question: How? I'm reasonably certain my GP would not be amenable to being asked to prescribe drugs for off-label use.


Off-label use is fine. I'd be more worried about the fact that a patient is bringing it up out of the blue (i.e., "give me drugs plz"), and the evidence for life extension is shaky.

>There is some evidence metformin may be helpful in extending lifespan, even in otherwise healthy people. It has received substantial interest as an agent that delays aging, possibly through similar mechanisms as its treatment of diabetes (insulin and carbohydrate regulation). This is controversial, and an area of active research.[61][62]


Just ask your doctor for it. Say: "Please prescribe me Metformin, 2 grams daily."

If you're overweight, tell your doctor you want help with your appetite. Otherwise, say you want metformin for its cancer prevention and anti-aging effects.

Metformin is safe and it almost universally improves health and well-being.

Your doctor knows this, and she'll gladly prescribe it.


> Your doctor knows this, and she'll gladly prescribe it.

If your doctor gladly prescribes anything you just walk in and ask for by name and dosage and doesn't have a green cross symbol outside their office, then I'd highly suggest seeking out a new doctor.


We're talking about a particular drug here (metformin) not just "anything".

I'm saying it's likely your doctor will be willing to prescribe METFORMIN (that drug, in particular) for off-label use if you ask.


I once heard about one drug that basically works by making you nauseous to food. Is it the one?

> Metformin is generally safe, but common side effects do include vomiting, nausea, and diarrhea. Those experiencing these unpleasant side effects might consume less food, resulting in modest weight reductions.


Many foods you used to like (e.g., sweets) will be nauseating.

Other foods remain delicious. For example, eggs and cheese and meat.

But basically, your interest in food will be drastically reduced, and you will eat just enough.

This is an important part of metformin's beneficial effect on lifespan and health.


Admittedly, I'm on a fairly low dose (500 mg x 2), but I can still kill a bag of Haribo when the mood hits. It doesn't hit quite as often, but I've noticed no changes in flavor.


Thanks for the anecdote. Biological variation, we're all different!


Can you point to any studies around this?

I've never heard of this effect (let alone experienced it).

I do know that Diabetes itself can reduce the efficiency of taste receptors, and metformin will sometimes cause people to have a metallic taste in their mouth (due to metformin in produced saliva)


It's easy to find studies connecting metformin to weight loss and loss of appetite. I'll leave that to you, just use Google and you'll be swamped with evidence.

You say you've never heard of metformin affecting food palatability and taste? Did you search the web at all? Just use Google if you want anecdotal examples. For example:

https://forum.diabetes.org.uk/boards/threads/loss-of-appetit...

I joined as after 18 months on metformin, my appetite and enjoyment of food is constantly declining. I’ve lost about 10 kg over that time, which is not a problem. But I’ve lost my appetite and, as a keen cook and someone who has always enjoyed good food, I no longer look forward to meals and get little pleasure from food.

or

https://www.diabetes.co.uk/forum/threads/anyone-experience-l...

Metformin changed the taste of food slowly during the time I took it. It finally got to the point where all vegetables tasted like burnt plastic. This meant that I didn't want to eat them.

And so on. Now you've heard of it! All it took was a web search.


You should not feel anything when you take Metformin. Its effects are subtle and very long term.


> Because the vast majority of research regarding metformin included only people with diabetes or prediabetes, it’s unclear whether these potential benefits are limited to people with those conditions, or whether people without diabetes may derive benefit as well.

https://www.health.harvard.edu/blog/is-metformin-a-wonder-dr...


A more appropriate drug for this is probably Ozempic/Wegovy (semaglutide), or the even more effective version available in the US, tirzepatide. I've personally been on the former, and after 4.5 months I've lost ~20% of my body weight.


Metformin is well-understood, old, safe, and has various health benefits.

Surprisingly, metformin seems to be almost a pure win, even for healthy people: https://www.health.harvard.edu/blog/is-metformin-a-wonder-dr...

In contrast, GLP-1 analogues like semaglutide present real risks and are less well understood. For some people with serious health problems, GLP-1 analogues make sense. But generally you don't want to be taking them if you're healthy.


Aren't those hard to get, and then super expensive if you do qualify?


No idea, in the EU I pay ~85 EUR/mo


I think those kinds of blanket statements about metformin are a little misleading and a bit dangerous. For instance, I myself have been on metformin for almost 2 years now, and I can assure you I have no repulsion to sugar.


Yes, I see three responses along these lines, already. Some people don't lose weight. Some people's appetite isn't affected. Some people retain their taste for sweets. And so on.

But "misleading and a bit dangerous"? Really?

Readers can find a summary of metformin's typical effects on Wikipedia, here: https://en.m.wikipedia.org/wiki/Metformin

YMMV: "your mileage may vary"


Yeah tell that to my stress eating family members who all have metformin prescriptions.

This may be true for some people but definitely not a universal


Here's a question to consider: How much worse would the situation be if they weren't on metformin?


So Metformin basically reduces to fasting which, as Hackernews knows, is the cure for everything.


These systems aggregate and interpolate human work. Interpolation: https://en.m.wikipedia.org/wiki/Interpolation

It's like a very complicated form of linear interpolation:

  a*x + (1-a)*y
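For reference, here is that baseline in runnable form; this only illustrates the analogy's starting point, not a claim about how these models actually compute:

```python
def lerp(x, y, a):
    """Return a*x + (1-a)*y: weight a on x, the remainder on y."""
    return a * x + (1 - a) * y

print(lerp(0.0, 10.0, 0.25))  # 0.25*0 + 0.75*10 = 7.5
```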
These systems do not "think". Today I spent all day mulling an idea, experimenting with variations, feeling frustrated or excited, imagining it, simulating it, making mistakes, following paths of reasoning, deducing facts, revisiting dead-ends with new insight, daydreaming, talking to my wife about it, etc. That's human thought.

These models do not "think" like a human, they do not dream or imagine or feel. They run a feed-forward system of linear equations (matrix multiplications).

They INTERPOLATE HUMAN WORK.

They don't exist without training data (huge amounts of intellectual property) aggregated and interpolated in a monstrous perversion of "fair use":

https://bugfix-66.com/7a82559a13b39c7fa404320c14f47ce0c304fa...

Starve the machine. Without your work, it's got nothing.


> Starve the machine, it doesn't exist without having your work to interpolate.

But again… aren't people the same way? No one exists in isolation. The Sir Isaac Newton quote comes to mind:

“If I have seen further, it is by standing on the shoulders of giants”

Edit: to be clear, these algorithms are specifically non-linear and a far cry from 'linear interpolation'. Yes, they involve matrix multiplication; that does not make them interpolators, unless you want to water down the meaning of interpolation to be so generic it loses its meaning. Having said all that, the sophistication of the algorithm is beside the point here, as long as what they are generating is substantially transformative (which, legally speaking, >99% of the possible outputs are).


People are the same, yes, but corporations aren’t people.


Certainly, don’t mean to imply they are (legal distinctions aside). A person can create an algorithm (or use an algorithm) and create new things, even works of art.


Are you foolishly suggesting that Sir Isaac Newton was just aggregating and interpolating others' work?

Like a feed-forward chain of matrix multiplications, trained to predict its training data?

No, of course you weren't. That would be FUCKING RIDICULOUS.


Yes… we all do that every day. Humans don’t exist in isolation, we build and learn from other’s accomplishments from the wheel to the printing press to the computer. Modern impressionists don’t owe royalties to Monet but they certainly draw from and learn from his contributions to the art world. Brand new material from art algorithms (frankly regardless of their sophistication) certainly deserve and fall under this same legal treatment.


[flagged]


> You just don't understand the math.

This is not in good faith, please read HN rules.

Rather than attack me (calling me foolish, swearing at me) why don’t you rebut my ideas and have a conversation if you actually have something to contribute.

I've read the papers, and I've worked personally with these systems; I understand them just fine. Notice that I said earlier: "regardless of how simple they are". I understand you are trying to water them down to simple interpolation, which they definitely are not; but even if they were that simple, it wouldn't change the legal calculus here one bit. New art is being generated (far beyond any 'transformative' legal-test precedent), and any new art that is substantively different from its inputs is legally protectable.


That or they do understand the math and they think what's going on in our own minds may not be that special.


Ethical meat business (single farm, limited scope) = good, industrial meat grinder (huge factories) = bad.


So is any scaled-up process unethical, or is there another comparison you are trying to draw here?

Notably, in your example both are certainly legal; they vary only in the level of controversy around them, and on that level I would agree. Scaled-up processes do tend to attract more controversy.


> So is any scaled up process unethical

If it abuses someone - yes.


Humans also interpolate human work. True originality is an illusion and all creative works are based on, inspired by, or contributed to by something else. Are you implying that human thought is required to create human-level art? Because if anything, I think AI-generated art is in the process of disproving this exact hypothesis. It is unnerving to realize that something we felt up til now was fundamentally exclusive to the human experience isn't actually exclusive, but it's becoming more and more apparent.


Humans, on the whole, aren't capable of hoovering up basically every piece of artist content accessible on the web, storing all of it, and then creating near-faithful reproductions at a moment's notice.

It's a problem of scale.

> Because if anything, I think AI-generated art is in the process of disproving this exact hypothesis

But it's not creating anything; it's regurgitating its training material (through a suitably fine blender) in whatever way scores best. These models are nothing without the actual art they've appropriated.


> Humans also interpolate human work.

Humans and artists aren't machines that other humans created; they interpret or copy, they don't interpolate.


Thank you for saying it.

We're surrounded by people who don't understand what's happening. They seem to think some kind of art intelligence has been invented.

No, it's the aggregation and interpolation of vast amounts of existing art.

The same thing is happening with software, through Microsoft's Copilot:

https://bugfix-66.com/7a82559a13b39c7fa404320c14f47ce0c304fa...

I think people just don't understand what they're seeing. They have no idea what it is.

They think it's really "intelligence", dreaming and imagining and simulating and feeling and experimenting and...

It's none of these things. It's a sophisticated interpolation, not so different from linear interpolation:

  a*x + (1-a)*y


Memes (in the sense Dawkins used) have found easier replication into this new medium. Rather than jumping from brain to brain, with the intermediate step of writing, our old memes now replicate by language model. They do meaningful work when deployed without a human in the loop.

I think both humans and AIs are stupid without training. Take a human raised alone, without culture: he or she will be closer to an animal than to a human. It's the culture that is the locus of intelligence, and we're borrowing intelligence from it just like the AIs do.


Lovely perspective. Esp. the first point.


> It's a sophisticated interpolation, not so different from linear interpolation: a*x + (1-a)*y

These algorithms are specifically non-linear and a far cry from 'linear interpolation', unless you want to water down the meaning of interpolation to be so generic it loses its meaning.

Having said all that, the sophistication of the algorithm is beside the point here, as long as what they are generating is substantially transformative (which, legally speaking, >99% of the possible outputs are).


> water down the meaning of interpolation to be so generic it loses its meaning.

"Interpolation" was always a very generic word.


If it is interpolation what kind of interpolation is it? Linear? Bilinear? Nearest Neighbor? Lanczos? No… because it isn’t and doesn’t resemble anything close to interpolation.

They even gave a linear equation in their example… again, not even close. If we can call what these algorithms do interpolation, we can call what humans do interpolation too; that makes the word meaningless.

