I feel like once the for-profit "genie" is out of the bottle, it can't be put back in. How many employees have they hired since 2018 that are the absolute best of the best of the industry? How influential was the massive equity package in their employment decisions?
Personally, I saw the for-profit subsidiary as a pretty decent middle ground. Obviously this action took place because the current board felt the company moved too far in the for-profit direction. While they do have the technical right and ability to fire Sam and Greg, and it was the "right" thing to do to bring the company back to its non-profit roots, do you really think the employees saw it the same way? How many hours a week do the board members, minus Ilya, actually contribute to OpenAI? None of the board minus Ilya were even around before the for-profit subsidiary was created, so can they truly align the new OpenAI with something they didn't experience?
Another aspect I didn't really see discussed anywhere: they had three board members leave in 2023. How would those board members have voted? I'd guess probably not to fire Sam, but I have no clue. Also, why did they not add more board members when those left? Did Sam and Greg want the board expanded again, while the other three kept voting not to expand and to keep the power?
It quite literally was. It had some of the fastest user growth ever; there were something like a million new users within days of ChatGPT's launch.
I'm not super informed on the space, but I do try to keep up with different 3D sensing tech. What makes this a big leap forward over what we already have? I mean, don't the iPhone and most flagships already do 3D sensing?
Hi - I'm one of the founders of Tangram Vision here. It's a good question. This sensor in particular is focused on robotics, where the capabilities of 3D sensors are fairly different from what you'd find on an iPhone. In the case of HiFi, the leaps are in resolution (much higher than other depth sensors for robots), AI compute (about 5x the amount of the next competitor), and ease of integration with a robotic platform.
It would perhaps be more accurate to say that this is a big leap forward compared to most existing off-the-shelf depth cameras for robotics. To address the iPhone specifically: you probably aren't going to mount iPhones on a bunch of production robots in the field.
Comparing to other alternatives in the robotics space (I've listed RealSense and Structure above, but there are others), there is somewhat of a laundry list of potential pitfalls and issues that we've seen folks trip over again and again.
Calibration is a big one, and a large part of what we're doing with HiFi is launching it with its own automatic, self-calibration process (no fiducials). There are some device failures that a process like this wouldn't be able to handle, but the vast majority of calibration problems in the field result from difficult tooling or requirements, a need to supply one's own calibration software, or a combination of hardware and software that makes the process difficult. If I had a nickel for every time someone had to train a part-time operator to fix calibration in the field, I'd own Amazon.
Depth quality and precision are another big pitfall — there are folks out there today using RealSense for their robots, but we've talked to a number of them who just don't rely on the on-board depth. It's too noisy, it warps flat surfaces, etc. Lots of little details that you might not think about on the surface when just looking at a list of cameras! Putting our edge AI capabilities aside, the improved optics and compute available on the HiFi allow us to build a sensor that always provides good depth. That sounds like a baseline for this kind of tech, but there are plenty of examples otherwise on the market today!
Software is probably the last big thing that we really want to leap forward on. We don't have too much to say about our SDK today, but when we launch it we hope to make working with these sensors a lot easier. I work with RealSense quite a bit (I am the maintainer of realsense-rust), and quite honestly, what has been a solid overall hardware package for many years (until HiFi, I hope) is let down by how confusing it is to use librealsense2 in any meaningful project.
Needless to say, I think HiFi stands on some solid merits and I'm not sure it can be directly compared to other 3D sensors in e.g. iPhones, mostly because the expected use-case is so utterly different.
Appreciate the detailed response! It definitely seems like we've come a long way from when I first heard about people using Kinect cameras. Looking forward to all the future advancements you'll contribute!
Great comment, thank you for sharing your insights. I don't think many people truly understand just how massive these weather models are and the sheer volume of data assimilation work that's been done for decades to get us to this point today.
I always have a lot of ideas about using AI to solve very small scale weather forecasting issues, but there's just so much to it. It's always a learning experience for sure.
It's about the attitude in the voice, not the audio quality. It's very noticeable at 1.5x speed; 1x is too slow anyway, with endless pauses for such low-density technical content.
The developer experience is lacking compared to other vector database providers, and the performance doesn't match those that prioritize performance over devex. You're also spending time writing plumbing around Postgres that isn't really transferable work.
For some people already in the ecosystem it will make sense.
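To make the "plumbing" point concrete, here's a hypothetical minimal sketch of the kind of glue code you end up writing yourself (brute-force nearest-neighbor search over embeddings) when your database doesn't provide similarity search natively. The function names are illustrative, not from any particular library, and a dedicated vector database would replace all of this with indexed queries:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, vectors, k=3):
    # Score every stored vector against the query, then keep the
    # k best indices. This is O(n) per query — exactly the kind of
    # work a vector DB's ANN index does for you.
    scored = [(cosine_similarity(query, v), i) for i, v in enumerate(vectors)]
    scored.sort(reverse=True)
    return [i for _, i in scored[:k]]

vectors = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(top_k([1.0, 0.0], vectors, k=2))  # indices of the 2 closest vectors
```

None of this is hard to write, but it (plus batching, filtering, index maintenance, etc.) is the non-transferable work the comment is pointing at.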
Yeah. Just realized it had that bug like an hour ago. Intro also got removed. Bit demotivated after getting removed from frontpage and show, but will update later.
Random thing that came to mind related to that: back in 2015–2017, when the Tesla P85D and then the P100D came out, lots and lots of car "experts" had their money taken in drag races and rolling street races. It was just a simple Google search away that it ran a 10.5-second quarter mile, yet people with slower cars that had no chance would legitimately bet hundreds on a quick race just to lose. Moral of the story: these people understood their street racing and drag racing worlds very well, but it took a few years to put their bias aside while breakthroughs were being made every year.