
You would be surprised, but NVIDIA's employee stock plan allows you to select the purchase price from within the last 2 years: https://www.nvidia.com/en-us/benefits/money/espp/


> allows you to select the purchase price from within the last 2 years

I don't think that's true. My reading of that is "you lock in the price on your start date and can keep that for the next 2 years going forward". That doesn't help anybody joining at >$1k / share. :D (and that's only ESPP, not standard stock compensation).


Can't speak for NVIDIA, but at another company I know of, they use the lowest price over the last 4 periods (so the lowest of 8 timestamps).


ESPP is a very small amount vs RSUs. You're limited to buying $25,000 per year (which you still have to shell out for, even if it's at a discount) vs just being given several hundred thousand (or more) in RSUs.
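To make the scale concrete, here's a rough sketch of the math (a minimal sketch assuming a typical 15% discount and a lookback to the lower of the offering and purchase prices; actual plan terms vary, and strictly the IRS cap is $25k of stock valued at the offering price):

    # Rough ESPP math; the terms here are assumptions, not NVIDIA's actual plan.
    def espp_purchase(offering_price, purchase_price, contribution, discount=0.15):
        basis = min(offering_price, purchase_price)   # lookback: lower of the two
        buy_price = basis * (1 - discount)            # discounted purchase price
        shares = contribution / buy_price
        paper_gain = shares * (purchase_price - buy_price)
        return buy_price, shares, paper_gain

    # Stock runs $400 -> $1,000 over the period, max contribution:
    print(espp_purchase(400, 1000, 25_000))
    # -> buy at $340, ~73.5 shares, ~$48.5k paper gain; real money,
    #    but nowhere near several hundred thousand in RSUs.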


ESPP is completely different from stock-based compensation.


I needed Word on Mac – you can't imagine how surprised I was to see Skype start up too.


> glorified token predicting machine trained on existing data (made by humans)

sorry to disappoint, but the human brain fits the same definition
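for the literal sense of "token predicting machine", here is a toy bigram predictor (a minimal sketch – nothing like a real LLM, just the same shape of objective: predict the next token from the previous ones):

    from collections import Counter, defaultdict

    # Toy "next token predictor": count which word follows which (bigrams).
    corpus = "the brain predicts the next word the brain is not a computer".split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict(prev):
        # Most likely next token after `prev` in the training data.
        return counts[prev].most_common(1)[0][0]

    print(predict("the"))  # -> 'brain' (seen twice, vs 'next' once)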


Sure.

> Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer

> To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system.

https://aeon.co/essays/your-brain-does-not-process-informati...


What are you talking about? Do you have any actual cognitive neuroscience to back that up? Have they scanned the brain and broken it down into an LLM-analogous network?


If you genuinely believe your brain is just a token prediction machine, why do you continue to exist? You're just consuming limited food, water, fuel, etc for the sake of predicting tokens, like some kind of biological crypto miner.


Genetic and memetic/intellectual immortality, of course. Biologically there can be no other answer. We are here to spread and endure, there is no “why” or end-condition.

If your response to there not being a big ending cinematic to life with a bearded old man and a church choir, or all your friends (and a penguin) clapping and congratulating you, is that you should kill yourself immediately, that's a you problem. Get in the flesh-golem, Shinji… or Jon Stewart will have to pilot it again.


I'm personally a lot more than a prediction engine, don't worry about me.

For those who do believe they are simply fleshy token predictors, is there a moral reason that other (sentient) humans can't kill -9 them like a LLaMa3 process?


Morality is just what worked as a set of rules for groups of humans to survive together. You can try to kill me if you want, but I will try to fight back and society will try to punish you.

And all of the ideas of morality and societal rules come from this desire to survive, and the desire to survive exists because it is what natural selection obviously selects for.

There is also probably a good explanation why people want to think that they are special and more than prediction engines.


Yes, specifically that a person's opinions are never justification for violence committed against them, no matter how sure you might be of your righteousness.


But they've attested that they are merely a token prediction process; it's likely they don't qualify as sentient. Generously, we can put their existence on the same level as animals such as cows or chickens. So maybe it's okay to terminate them if we're consuming their meat?


"It is your burden to prove to my satisfaction that you are sentient. Else, into the stew you go." Surely you see the problem with this code.

Before you harvest their organs, you might also contemplate whether the very act of questioning one's own sentience might be inherent positive proof.

I'm afraid you must go hungry either way.


> "It is your burden to prove to my satisfaction that you are sentient. Else, into the stew you go." Surely you see the problem with this code.

It's the opposite; I've always assumed all humans were sentient, since I personally am, but many people in this comment section are eagerly insisting they are, in fact, not sentient and no more than token prediction machines.

Most likely they're just wrong, but I can't peer into their mind to prove it. What if they're right and there are two types of humans: ones who are merely token predictors, and ones who aren't? Now we're getting into fun sci-fi territory.


And how would we discern a stochastic parrot from a sentient being on autopilot?

So much of what we do and say is just pattern fulfillment. Maybe not 100%, on all days.


Why would sentient processes deserve to live? Especially non-sentient systems that hallucinate their own sentience? Are you arguing that the self-aware token predictors should kill and eat you? They crave meat so they can generate more tokens.

In short, we believe in free will because we have no choice.


Well, yes. I won't commit suicide though, since keeping on living and reproducing is an evolutionarily developed trait; only the ones with that trait survived in the first place.


If LLMs and humans are the same, should it be legal for me to terminate you, or illegal for me to terminate an LLM process?


What do you mean by "the same"?

Since I don't want to die I am going to say it should be illegal for you to terminate me.

I don't care about an LLM process being terminated so I have no problem with that.


It's a cute generalization but you do yourself a great disservice. It's somewhat difficult to argue given the medium we have here, and it may be impossible to disprove, but consider that in the first 30 minutes of your post being highly visible on this thread no one had yet replied. Some may have acted in other ways.. had opinions.. voted it up/down. Some may have debated replying in jest or with some related biblical verse. I'd wager a few may have used what they could deduce from your comment and/or history to build a mini model of you in their heads, and used that to simulate the conversation to decide if it was worth the time to get into such a debate vs tending to other things.

Could current LLMs do any of this?


I’m not the OP, and I genuinely don’t like how we’re slowly entering the “no text on the internet is real” realm, but I’ll take a stab at your question.

If you made an LLM to pretend to have a specific personality (e.g. assume you are a religious person and you’re going to make a comment in this thread) rather than a “generic catch-all LLM”, it can pretty much do that. Part of Reddit is just automated PR LLMs fighting each other, making comments and suggesting products or viewpoints, deciding which comments to reply to, and so on. You just chain a bunch of responses together with pre-determined questions like “given this complete thread, do you think it would look organic if we responded to this comment with a plug for a product?”.

It’s also not that hard to generate these types of “personalities”, since you can use a generic one to suggest a new one that differs from your other agents.

There are also Discord communities that share tips and tricks for making such automated interactions look more real.
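A minimal sketch of that gate-then-reply chain (call_llm, maybe_plug, and the prompts are hypothetical stand-ins, not any particular vendor's API):

    # Sketch of the chaining pattern described above; call_llm is a
    # hypothetical stand-in for whatever completion API the operator uses.
    def call_llm(prompt):
        raise NotImplementedError("plug in a completion API here")

    PERSONA = "You are a casual, slightly skeptical commenter on a tech forum."

    def maybe_plug(thread, comment, product):
        # Step 1: the pre-determined yes/no gate, run on every candidate comment.
        gate = call_llm(
            "Given this complete thread:\n" + thread +
            "\nWould it look organic to reply to this comment with a plug for "
            + product + "? Answer yes or no.\nComment: " + comment)
        if not gate.strip().lower().startswith("yes"):
            return None  # skip: a plug wouldn't look organic here
        # Step 2: only then generate the in-persona reply.
        return call_llm(PERSONA + "\nReply to this comment, casually working in "
                        + product + ":\n" + comment)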


These things might be able to produce comparable output, but that wasn't my point. I agree that if we are comparing ourselves over the text that gets written, then LLMs can achieve superintelligence. And writing text can indeed be simplified to token predicting.

My point was we are not just glorified token predicting machines. There is a lot going on behind what we write and whether we write it or not. Does the method matter vs just the output? I think/hope it does on some level.


See, this sort of claim I am instantly skeptical of. Nobody has ever caught a human brain producing or storing tokens, and certainly the subjective experience of, say, throwing a ball, doesn't involve symbols of any kind.


> Nobody has ever caught a human brain producing or storing tokens

Do you remember learning how to read and write?

What are spelling tests?

What if "subjective experience" isn't essential, or is even just a distraction, for a great many important tasks?


Entirely possible. Lots of things exhibit complex behavior that probably don't have subjective experience.

My point is just that the evidence for "humans are just token prediction machines and nothing more" is extremely lacking, but there's always someone in these discussions who asserts it like it's obvious.


Any output from you could be represented as a token. It is a very generic idea. Ultimately, whatever you output is the result of chemical reactions that follow from the input.


It could be represented that way. That's a long way from saying that's how brains work.

Does a thermometer predict tokens? It also produces outputs that can be represented as tokens, but it's just a bit of mercury in a tube. You can dissect a thermometer as much as you like and you won't find any token prediction machinery. There's lots of things like that. Zooming out, does that make the entire atmosphere a token prediction engine, since it's producing, e.g., wind and temperatures that could be represented as tokens?

If you need one token per particle then you're admitting that this task is impossible. Nobody will ever build a computer that can simulate a brain-sized volume of particles to sufficient fidelity. There is a long, long distance from "brains are made of chemicals" to "brains are basically token prediction engines."


The argument that brains are just token prediction machines is basically the same as saying “the brain is just a computer”. It’s like, well, yes, in the same way that a B-21 Raider and a Cessna are both airplanes. That doesn’t mean that they are anywhere close to each other in terms of performance. They incorporate some similar basic elements, but when you zoom out they’re clearly very different things.


But we are bringing it up in regards to what people are claiming is a "glorified next token predictor, markov chains" or whatever. Obviously LLMs are far from humans and AGI right now, but at the same time they are much more amazing than a statement like "glorified next token predictor" lets on. The question is how accurate to real life the predictor is and how nuanced it can get.

To me, the tech has been an amazing breakthrough. The backlash and downplaying by some people seem like some odd type of fear or cope.

Even if it is not that world changing, why downplay it like that?


To be fair my analogy works if you want to object to ChatGPT being called a glorified token prediction machine. I just don’t agree with hyperbolic statements about AGI.


There are so many different statements everywhere that it's hard to tell what someone is specifically referring to. Are we thinking of Elon Musk, who is saying that AGI is coming next year? Are we thinking of people who believe that LLM like architecture could reach AGI in 5 to 10 years given tweaks, scale and optimisations? Are we considering people who believe that some other arch breakthrough could lead to AGI in 10 years?


>> Are we thinking of people who believe that LLM like architecture could reach AGI in 5 to 10 years given tweaks, scale and optimisations?

Yep, that’s exactly who I’m talking about! I’m pretty sure Sam Altman is in that camp.


I’m afraid the problem is not indexing but monetization. An alternative Google-like search engine will not be profitable (especially if it has to pay a share to Google for indexing) because no one will buy ads there – even for Bing it is a challenge.


The hope, though, is that splitting off indexing puts search providers on an equal footing in terms of results quality (at least initially). Advertisers go to Google because users go to Google. But users go to Google because, despite recent quality regressions, Google still gives consistently better results.

If search providers could at least match Google quality 'by default', that might help break the stranglehold wherein people like the GP are at the mercy of the whims of a single org.


People go to Google because it is the default search engine in most browsers; they don't seem to change it.


> Google still gives consistently better results

How sure are you about that? I find them to be subpar when compared to Bing, especially for technical search topics (mostly PHP, Go, and C related searches).


Wow! This is insanely cool!


The ecosystem around ChatGPT is the differentiator that Meta and Mistral can’t beat – so I’d say that Altman is more relevant today than ever. And, for example, if you’ve read Mistral’s paper, I think you would agree that it’s straightforward for every other major player to replicate similar results. Replicating the ecosystem is much harder.

Performance is never a complete product – neither for Apple nor for OpenAI (its for-profit part).


If you really need such an ecosystem, then you can build one right away, like Kagi Labs and Phind did. In the case of Kagi, no GPT is involved; in the case of Phind, GPT-4 is still vital, but they are closing the gap with their cheaper and faster LLaMA-2 34B-based models.

> Performance is never a complete product

In the case of GPT-4, performance - in terms of the quality of generation and speed - is the vital aspect that still holds competitors back.

Google, Microsoft, Meta, and countless research teams and individual researchers are actually responsible for the success of OpenAI, and this should remain a collective effort. What OpenAI is doing now by hiding details of their models is actually wrong. They stand on the shoulders of giants but refuse to share these days, and Altman is responsible for this.

Let us not forget what OpenAI was declared to stand for.


By ecosystem I mean people using ChatGPT daily on their phones and in their browsers, and developers (and now virtually anyone) writing extensions. For most of the world, all of the progress is condensed at chat.openai.com, and it will only get harder to beat this adoption.

Tech superiority might be relevant today, but I highly doubt it will stay that way for long, even if OpenAI continues to hide details (which I agree is bad). We could argue about the training data, but so much is publicly available that it is not an issue either.


52k after taxes or before?


before taxes, at 32h/week


How come refactoring is not fun??

