I’m wondering if anything in their setup is worth incorporating into the “status quo”
The most immediately relevant thing is equity-based comp that doesn’t have a tax hit.
Another thing I find interesting is whether this comp might be more “mission-aligned”:
- It’s likely this is all just to accommodate their “non-profit with limited profit subsidiary” structure
- But there's still a deeper question there: do current specifics of how startups do comp / investment / exits somehow result in societally suboptimal outcomes?
- If so, how exactly, and is it fixable, or are those inherent properties of an ecosystem of competitive, high-growth startups?
- One way that might be the case is if there’s a pressure for “growth at all costs”, which is something OpenAI would want to avoid.
- The pressure for that could come from a lot of sources — intrinsic desire for growth; financial expectations from shareholders (employees, VCs, public market investors, even customers); regulatory requirements around fiduciary duty; existential risk from competition or from going under; or needing to hit a certain size to achieve whatever it is you want to achieve
- Another way this could be the case would be limited exit opportunities, or limited creativity around them / sticking to well-trodden paths, ultimately resulting in control of the company being handed over to either poor management or groups prioritizing growth at the expense of good product
- Thinking about Twitter and Reddit’s API shutdowns here — how much do unpopular product decisions like that come from legal specifics? Or perhaps those would have happened no matter what, to keep the companies from going under or to have enough profits to fund further innovation.
- It’s not obvious that this setup avoids a motivation for “growth at all costs” — employees’ pay is still based on increasing profit, and if the company doesn’t grow as fast as competitors it may just become irrelevant. But it’s possible it affects other strings attached to company control. Not sure — IANAL :)
"In the province of the mind, there are no limits."
I do wonder to what degree this is true
There are claims that other animals see more colors than we do. But what if that's not actually the case? Maybe conscious minds are only capable of seeing the same rainbow that we do.
The mapping could be quite different, and even if you can't add more letters to "RGB" you can add more letters to "HSL", making things look neon, stereoscopic, brushed, iridescent, hyperbolic, etc. But perhaps what we see is already all the possible "base" colors?
Apple wants the iPad OS developer experience to be bad
Their #1 focus is lock-in, avoiding commodification, and extracting value.
No apps that run operating systems, no apps with windowed environments, no app-store-like apps / third party app stores, no browser engines, no Flash, terrible mobile safari experience, no OS mods / jailbreaks, no desktop-style cursors, physical iPad keyboards lacking escape keys. Wanting 30% of all payments for digital goods that occur on an iPhone.
Being able to eg "run Docker on your iPad" would go against all that. I think it's also a big part of why they charge $99/yr for being a developer - otherwise people would be able to sideload apps much more easily
Hard to imagine just how different things might be right now if the platform were more open. iPhones are always on and connected to the internet - you could run servers from them! Maybe mesh networking could actually be a thing?
The mother of all demos would not be allowed on the app store. Squeak/smalltalk would not be allowed. Feels disrespectful to humanity that the most popular platform in the US is so locked down. Idk how things would change here without government intervention -- this all seems to be in shareholders' interests.
Seems like a good time to be running a competitor. iOS being so locked down means they'll have lackluster support for the long tail of desired AI use cases. And cross platform development via react native is really good now, imo.
Also, bun macros are very cool -- they let you write code that writes over itself with GPT-4. Just mentioning as a thing to keep on your radar as you keep pushing the boundaries of what's possible in javascript :) making it more lispy and preserving eval-ability is great
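A minimal sketch of the mechanism (the file names and the shout() helper are made up for illustration; the with { type: "macro" } import attribute is how Bun marks macro imports):

// macro.js -- runs at bundle time, not at runtime
export function shout(s) {
  return s.toUpperCase();
}

// app.js
import { shout } from "./macro.js" with { type: "macro" };
console.log(shout("hello")); // the bundled output just contains the literal "HELLO"

Swap toUpperCase() for a call to an LLM API and you get source that rewrites itself at build time.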
Had a homeless guy sit behind me on the bart repeatedly saying he'd slice my neck if I turned around or moved at all. It was a mostly-empty car, and it seemed like he was serious and that nobody else was going to do anything. Wound up sitting still and bolting out at the next stop. Tried calling the cops after and they didn't care in the slightest. Just one of many incidents.
I live in Cambridge, MA now, and it's a million times better. Only thing remotely close that I've seen was one time biking past the Woods-Mullen Shelter in Boston (saw two other comments here call it "Mass and Cass"), except it's just one block and extremely tame compared to what you find all over the Mission, SOMA, and Tenderloin.
Speaking of python performance, I recently benchmarked "numpy vs js" matrix multiplication, and was surprised to find js significantly outperforming numpy. For multiplying two 512x512 matrices:
You're right, it's not a fair comparison -- I think it's still interesting though, since numpy is the standard people would reach for, which made me think it would be the fastest / use the GPU. I expect a python library that uses the GPU would be just as fast as the others.
What I meant was I expected numpy to be faster than the js libraries I was testing, simply because people use it so much more, for real "scientific computing" work. And indeed it is very fast given it only uses the CPU, but that still leaves its matrix multiplication ~100x slower than what my Mac is capable of.
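For reference, a minimal sketch of how the JS side of a test like this can be timed with TensorFlow.js (assuming @tensorflow/tfjs-node; the harness and iteration count here are illustrative, not the exact benchmark above):

const tf = require('@tensorflow/tfjs-node'); // assumes tfjs-node is installed

const a = tf.randomNormal([512, 512]);
const b = tf.randomNormal([512, 512]);

tf.tidy(() => tf.matMul(a, b).dataSync()); // warm-up so initialization isn't timed

const t0 = Date.now();
for (let i = 0; i < 100; i++) {
  // dataSync() blocks until the result is ready; tidy() frees intermediate tensors
  tf.tidy(() => tf.matMul(a, b).dataSync());
}
console.log(`avg 512x512 matMul: ${(Date.now() - t0) / 100} ms`);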
TensorFlow.js matrices are immutable, which puts more restrictions on your programming style than standard NumPy. You can get immutable, GPU-enhanced matrices for Python, too.
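To make the immutability point concrete (a toy sketch, tf being TensorFlow.js):

const a = tf.tensor([1, 2, 3]);
// there's no in-place write like a[0] = 10; every op returns a new tensor
const b = a.add(1); // new tensor [2, 3, 4]
a.print(); // still [1, 2, 3]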
I find building react native apps much faster than building native apps. You don't need to recompile if you're just changing the javascript, and only a couple weeks out of the year have I actually needed to change any native code.
This is a misunderstanding of how Arrays work.
When an object has exactly two keys, iterating through them is very fast.
So what this code actually demonstrates is that looking up a billion keys is slower than looking up two keys on an object.
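Roughly this shape of comparison (my reconstruction -- the original snippet isn't shown here):

const small = { a: 1, b: 2 };
const big = {};
for (let i = 0; i < 1e6; i++) big['k' + i] = i;

console.time('small');
for (const k in small) {} // two keys to walk
console.timeEnd('small');

console.time('big');
for (const k in big) {} // a million keys to walk
console.timeEnd('big');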
const sparseArray = [];
sparseArray[9999] = 'foo'; // Creates an array with dictionary elements.
In this example, allocating a full array with 10k entries would be rather wasteful. What happens instead is that V8 creates a dictionary where we store key-value-descriptor triplets. The key in this case would be '9999', the value 'foo', and the default descriptor is used. Given that we don't have a way to store descriptor details on the HiddenClass, V8 resorts to slow elements whenever you define an indexed property with a custom descriptor:
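For example, something like this (the snippet that followed was omitted above; this is a representative sketch):

const array = [1, 2, 3];
// a custom descriptor on an indexed property forces dictionary-mode ("slow") elements
Object.defineProperty(array, 0, { value: 'fixed', writable: false });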
As an interesting fun fact, many languages, including JavaScript, inherit their complex-type memory organization from C. So the same fundamental performance implications show up in all these related languages, because this is more about what happens on the metal than about the syntax.
Please note this observation only applies to access during execution.
During the late 70s, a developer named Paul Heckel discovered that hash maps (which is what JavaScript objects are) were randomly accessed in memory until the specified key was found. That random access was faster than accessing a specified index in an array, because array indexes were accessed sequentially. Because arrays are ordered sequentially, even in memory, though, they are substantially faster to iterate over.
Interesting (and unexpected): I tried searching for "Paul Heckel" on Google and only found stuff about a jazz musician. Then I searched for "Paul Heckel's algorithm" and my writing about implementing the algorithm in JavaScript was one of the first Google results.
Sorry, I don't quite understand: running through a sparse array - running through the "undefined" - is much faster than running through a dense array. What have I missed?
They are trying to show that looping over a dense array is not much slower than looping over the sparse one. I just did a test on FF and it's 78% slower to loop over the dense one vs the sparse.
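A minimal sketch of that kind of test (my construction, not the exact one; forEach skips holes, which is one reason the sparse loop can win):

const N = 1e6;
const dense = new Array(N).fill(1);
const sparse = [];
sparse[N - 1] = 1; // one real element, N-1 holes

console.time('dense');
dense.forEach(x => x); // visits all N elements
console.timeEnd('dense');

console.time('sparse');
sparse.forEach(x => x); // holes are skipped entirely
console.timeEnd('sparse');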
This is such a wonderful initiative and could bring so many kids into programming! So many people start programming in order to make games, and I could see this developing into the new standard for that. Having it supported first-class in Replit makes it super simple to create and share your game.
I got a chance to try out kaboom last weekend, and while it's already great for games, I'd love for it to also have more support for visualizations -- something I used to spend a ton of time on in high school in Khan Academy's processing.js editor[1]. Kaboom's support for ES6 and WebGL, focus on gaming, and integration with Replit would make it a much better tool than KA for this. Some things in particular I'd like to see from kaboom for viz: it's missing basic raw functions like drawEllipse, the docs were difficult to read (I wish they would say what params something takes instead of "[conf]"), and in the end I failed to port over my voronoi viz[2][3]. If they decide this would be a good focus for them, I'd suggest making one of the built-in kaboom examples on Replit a visualization.