If she’s rich and married to a successful tech founder, I don’t understand why she didn’t get a lawyer to draft these investment papers to keep herself from getting fleeced. The amounts she was putting in could probably have been recouped in a buyout without much fuss if the contracts had been a bit more assertive.
As an angel, you do not get to set weird terms for your $10k check. The startup is raising on standard paper (these days almost always a SAFE), and you take those terms or you don’t invest.
But no terms will save you from the reality that a failing startup that needs to raise more money will have to accept dilutive terms. You might be able to restrict that, but then the alternative is that they can’t raise at all and shut down.
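To make the dilution point concrete, here is a minimal sketch with invented numbers (not anyone's actual cap table) showing how a down round shrinks an early angel's stake:

```python
# Hypothetical numbers to illustrate how a dilutive down round
# shrinks an early angel's stake; not a real cap table.

def ownership_after_round(my_shares, total_shares, new_money, price_per_share):
    """New shares are issued at price_per_share; everyone not buying is diluted."""
    new_shares = new_money / price_per_share
    return my_shares / (total_shares + new_shares)

# Suppose the angel's $10k bought 20,000 of 10,000,000 shares: 0.2%.
stake_before = ownership_after_round(20_000, 10_000_000, 0, 1.0)
print(stake_before)  # 0.002

# The struggling company raises $2M at $0.10/share (a steep down round),
# issuing 20M new shares:
stake_after = ownership_after_round(20_000, 10_000_000, 2_000_000, 0.10)
print(round(stake_after, 5))  # 0.00067, the stake cut by roughly two thirds
```

The angel can refuse such terms on paper, but as the comment says, the practical alternative is often that the round doesn't happen and the position goes to zero.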
Too blithe. There's a whole range of outcomes between failed and IPO.
TFA identified a few exits that should have returned some money but didn't. Ensuring that minority shareholder rights are protected so this doesn't happen is important "plug the leaks in your game" hygiene.
Most exits in that range are ones where the exit is less than the capital raised, or only a little more but where the last round was questionable and done on terms the minority shareholders may complain about, yet they had no other option: they were hosed no matter what.
I've seen hundreds of them. And it has little to do with the size of your position; it's where you sit in the preference stack. Anyone who thinks early-round investors should get a payout on an exit no matter what just doesn't understand pretty basic finance.
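The preference-stack point can be sketched with a toy waterfall. This is a simplified model (1x non-participating preferred paid senior-first, and it ignores the preferred holders' option to convert to common instead); all the names and dollar amounts are invented:

```python
# Toy liquidation-preference waterfall: senior preferences are paid first,
# and only what's left flows to common. Simplified (no participation, no
# convert-to-common election); numbers are invented for illustration.

def waterfall(exit_value, preference_stack, common_pct_by_holder):
    """preference_stack: list of (holder, preference_amount), senior first.
    common_pct_by_holder: fractional ownership of the common pool."""
    payouts = {}
    remaining = exit_value
    for holder, pref in preference_stack:
        paid = min(pref, remaining)
        payouts[holder] = paid
        remaining -= paid
    for holder, pct in common_pct_by_holder.items():
        payouts[holder] = payouts.get(holder, 0.0) + remaining * pct
    return payouts

# An $8M exit on $12M of preferences: Series B (senior) takes its full $7M,
# Series A gets the last $1M, and common, including the early angel, gets $0.
result = waterfall(8_000_000,
                   [("Series B", 7_000_000), ("Series A", 5_000_000)],
                   {"Common": 0.8, "Angel": 0.2})
print(result)
```

This is why an exit "above zero" can still return nothing to an early check: the money runs out before it reaches the bottom of the stack.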
You're correct about the weights: each machine could in fact store all of them. However, I think you still have to transfer the activations and the KV cache while performing inference.
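A back-of-envelope sketch of why the KV cache matters even when weights are fully replicated. The shapes below are assumptions loosely modeled on a Llama-2-70B-style config (80 layers, grouped-query attention with 8 KV heads, head dim 128, fp16), not any specific model card:

```python
# Rough size of the per-token KV-cache traffic that must be moved or kept
# coherent across machines, even if every machine already holds all weights.
# Config values are illustrative assumptions, not a real model card.

layers = 80
kv_heads = 8        # grouped-query attention: far fewer KV heads than Q heads
head_dim = 128
bytes_per_value = 2  # fp16

# Each token appends one K and one V vector per layer to the cache:
kv_bytes_per_token = layers * 2 * kv_heads * head_dim * bytes_per_value
print(kv_bytes_per_token)                      # 327680 bytes, i.e. 320 KiB/token
print(kv_bytes_per_token * 4096 / 2**20)       # 1280.0 MiB for a 4k context
```

So the weights being local doesn't make inference communication-free; the cache (plus inter-stage activations in pipelined setups) still grows with every generated token.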
This is silly. If I’m rebuilding only 1 file out of a 200-file source base, I guarantee the non-unity build will be faster. The sheer number of characters the compiler has to tokenize in this thought experiment should be enough to convince you. If that isn’t the case, then you’re doing some kind of n² C++/Boost “every header includes most other headers” shenanigans, and you need to stop that rather than reach for a unity build.
I'm a Spaniard, and to my ears it clearly sounds like "Es una manzana y un plátano" ("It's an apple and a banana").
What's strange to me is that, as far as I know, "plátano" is only commonly used in Spain, but the accent of the AI voice didn't sound like it's from Spain. It sounds more like an American who speaks Spanish as a second language, and those folks typically speak some Mexican dialect of Spanish.
Interesting, I was reading some comments from Japanese users and they said the Japanese voice sounds like a (very good N1 level) foreigner speaking Japanese.
At least IME, and there may be regional or other variations I’m missing, people in México tend to use “plátano” for bananas and “plátano macho” for plantains.
In Spain, it's like that. In Latin America, it was always "plátano," but in the last ten years, I've seen a new "global Latin American Spanish" emerging that uses "banana" for Cavendish, some Mexican slang, etc. I suspect it's because of YouTube and Twitch.
Mobile devices don’t have the memory or the memory bandwidth to run an LLM that’s big enough to be good at much. Plus the fixed battery and thermal constraints.
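The bandwidth claim can be quantified with a rough ceiling: autoregressive decoding must stream essentially all active weights from memory for each token, so tokens/sec is bounded by bandwidth divided by weight bytes. The SoC and model numbers below are illustrative assumptions, not benchmarks of any specific device:

```python
# Upper bound on decode speed from memory bandwidth alone: every generated
# token streams roughly all (active) weight bytes from memory. Illustrative
# numbers; real devices also hit thermal and battery limits sooner.

def max_tokens_per_sec(bandwidth_gb_s, params_billions, bytes_per_param):
    weight_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / weight_bytes

# A phone-class SoC (~50 GB/s) running a 7B model quantized to 4 bits
# (~0.5 bytes/param):
print(round(max_tokens_per_sec(50, 7, 0.5), 1))  # 14.3 tokens/s ceiling

# The same 7B model in fp16 on the same phone:
print(round(max_tokens_per_sec(50, 7, 2.0), 1))  # 3.6 tokens/s ceiling
```

And that's before the cache/activation memory from the earlier point, which is why models big enough to be broadly capable stay hard to run well on phones.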
Length of time doing something doesn't imply mastery. It can contribute to mastery, but I think only if you're always pushing your skill to the next level.
For example: I know people who have been skateboarding for decades and are just barely OK at it. How you practice matters.
Headphones don’t require a treated room to sound excellent. Speakers will engage the natural acoustics of whatever space they live in and comb filtering will result. Almost always a great pair of headphones will beat a great pair of speakers in a listening test.