HALtheWise's comments

I believe it's a 360° planar lidar mounted in a vertical plane, with a motor that slowly rotates it to cover a full 4π sphere. There's also an integrated fisheye camera. This is a pretty common setup for scanning stationary spaces (usually tripod-mounted).

Do you have other sensors in the same price range that you'd recommend instead for most uses? How much accuracy improvement would you expect?

Yaw drift is my problem, so I tried a bunch of IMUs. Ones built around the BNO055 seem to be alright, and they're not that much more expensive. I ended up using Adafruit's.

AutoPallet Robotics (YC S24) | ONSITE | San Francisco | Full-Time and Intern | Robotics Software (Rust), Mechatronics Engineer, ML Intern, ??

We're building novel robots for case picking in warehouses: a huge, unsolved automation opportunity for a very labor-intensive, critically important, and sh*tty human job in the thin-margin industry of logistics. Our software is all Rust/PyTorch, spanning embedded dev through research-frontier applied algorithms, with an integrated development iteration time measured on a watch, not a calendar.

There are more than enough fun problems to go around, and we're looking to grow the team with a couple more great-to-work-with folks, including generalists or job descriptions we haven't realized we should be hiring for. We're also specifically looking for a summer applied-ML intern for robotic perception and controls. There's a lot more cool stuff we can't share publicly just yet. To apply, email a resume to <the-two-letters-of-this-site>@autopallet.bot with the role you're looking for in the subject line, and we'll take it from there. Bonus points if you include a link to or photo of something cool you've made.


awesome


It seems like a 24-hour delay for auto-upgrades would mitigate a lot of this, maybe with some way for a trusted third party to skip the delay for big-ticket zero-day patches?


I think what we need is first- and third-party notifications about vulnerabilities in specific versions, and a culture of cherry-picking security fixes onto previous versions. (In many cases, the same patch will apply to a previous version without any real difficulty.) First- and third-party notifications both play critical roles; I think we've leaned too heavily on first-party notifications alone, which makes them a single point of failure.


Not an expert here, but afaik a turbine section consists of alternating spinning blades attached to the shaft and stationary vanes attached to the duct, which de-spin the air coming off the blades and prepare it for the next set. I'm not sure why the vanes are often hidden in cutaway views.

If you had a spinning duct, you'd presumably need a stationary shaft in the middle for mounting the vanes, and would have similar tolerance issues between the tips of the stationary vanes and the rotating duct. There are reasons it might be easier to solve (the duct can run at a lower temperature) and reasons it's harder (bearings for a giant spinning duct). Not sure if anyone has tried such a design.


What's your perspective on variable-width SIMD instruction sets (like Arm SVE or the RISC-V V extension)? How do developer ergonomics and code performance compare to traditional SIMD? Are we approaching a world with fewer different SIMD instruction sets to program for?


Variable-width SIMD can mostly be written using the exact same Highway code; we just have to be careful to avoid things like arrays of vectors and sizeof(vector).

It can be more complicated to write things which are vector-length dependent, such as sorting networks or transposes, but we have always found a way so far.

On the contrary, the number of ISAs keeps increasing, including the two LoongArch extensions LSX/LASX, AVX-512 (which is really, really good on Zen 5), and three versions of Arm SVE. RISC-V V also has lots of variants and extensions. In such a world, I would not want to have to maintain per-platform implementations.


Presumably you could store the TID in every event, or otherwise check whether the TID has changed since the last time it was logged and push a (timestamp, TID) pair if so. Reading the TID should be cheap.
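As a rough sketch of that second idea (the names here are hypothetical, and it assumes some cheap way to read the current TID exists): the writer only emits a (timestamp, TID) record when the TID differs from the last one it wrote.

    // Hypothetical sketch: a trace buffer that records the TID only when it
    // changes, rather than storing it in every event.
    enum Event {
        ThreadSwitch { timestamp_ns: u64, tid: u32 },
        Sample { timestamp_ns: u64, value: u64 },
    }

    struct EventWriter {
        last_tid: Option<u32>,
        events: Vec<Event>,
    }

    impl EventWriter {
        fn log(&mut self, timestamp_ns: u64, value: u64, tid: u32) {
            // Push a (timestamp, TID) pair only when the TID has changed
            // since the last event written to this buffer.
            if self.last_tid != Some(tid) {
                self.events.push(Event::ThreadSwitch { timestamp_ns, tid });
                self.last_tid = Some(tid);
            }
            self.events.push(Event::Sample { timestamp_ns, value });
        }
    }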


In what sense should reading the TID be cheap? You would need either a syscall (not cheap) or thread-local storage (the subject of TFA). Avoiding the use of TLS by reading the TID can't really work.


It looks like the TID is stored directly in the pthread struct pointed to by %fs itself, at a fixed offset which you can somewhat-hackily compile into your code. [0]

In the process of investigating this, I also realized that there are a ton of other unique-per-thread pointers accessible from that structure, most notably the value of %fs itself (which is unfortunately unobservable, afaict), the addresses of the TCB and TLS structures, the stack guard value, etc. Since the goal is just to have a quickly-readable unique-per-thread value, any of those should work.

Windows looks similar, but I haven't investigated as deeply.
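For what it's worth, here's a sketch (not a recommendation) of what a fast per-thread identifier read could look like on x86-64 Linux with glibc. It leans on the glibc-internal detail that the TCB keeps a self-pointer at offset 0 from the %fs base; that layout is an assumption about internals, not a stable ABI.

    use std::arch::asm;

    /// Quickly-readable unique-per-thread value: the address of this thread's
    /// TCB, obtained from the self-pointer glibc keeps at offset 0 from the
    /// %fs base. One ordinary load, no syscall, no __tls_get_addr machinery.
    /// Relies on a glibc-internal layout (an assumption, not a stable ABI).
    #[cfg(all(target_arch = "x86_64", target_os = "linux", target_env = "gnu"))]
    fn thread_unique_id() -> usize {
        let tcb: usize;
        unsafe {
            asm!(
                "mov {tcb}, qword ptr fs:[0]",
                tcb = out(reg) tcb,
                options(nostack, pure, readonly, preserves_flags),
            );
        }
        tcb
    }

    fn main() {
        #[cfg(all(target_arch = "x86_64", target_os = "linux", target_env = "gnu"))]
        println!("per-thread value: {:#x}", thread_unique_id());
    }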

[0] https://github.com/andikleen/glibc/blob/b0399147730d478ae451...

[1] https://github.com/andikleen/glibc/blob/b0399147730d478ae451...


In addition to the other good answers: if the amount of state that's explicitly managed by software gets too large, it gets really expensive to save or restore that state. This happens, for example, when a syscall transfers control to the operating system. If the (many-MB) cache were software-managed, the OS would need to decide between flushing it all to main memory (expensive for quick syscalls) or leaving it in place and having OS code and data be uncached.

Function calls between libraries have similar problems: how is a called function supposed to know which cache space is available for it to use? If you call the same function multiple times, who's responsible for keeping its working data cached?

For a 32MB L3 cache, flushing the entire cache to memory (as would be required when switching between processes) could take over a millisecond (32MB at a few tens of GB/s of memory bandwidth works out to roughly a millisecond), let alone trying to manage caches shared by multiple cores.


The name is very intentional: this isn't "AI's Last Evaluation", it's "Humanity's Last Exam". There will absolutely be further tests for evaluating the power of AIs, but the intent of this benchmark is that any more difficult benchmark will be either:

- Not an "exam" composed of single-correct-answer closed-form questions with objective answers

- Not composed of questions that humanity is capable of answering.

For example, a future evaluation for an LLM could consist of playing chess really well or solving the Riemann Hypothesis or curing some disease, but those aren't tasks you would ever put on an exam for a student.


Isn't FrontierMath a better "last exam"? Looking through a few of the questions, they seem less reasoning-based and more fact-based. There's no way one could answer "How many paired tendons are supported by this sesamoid bone [bilaterally paired oval bone of hummingbirds]" without either having a physical model to dissect or just regurgitating the info found somewhere authoritative. It seems like the only reason a lot of the questions can't be solved yet is that the knowledge is specialized enough that it simply is not found on the web; you'd have to phone up the one guy who worked on it.


It sounds like one function in libmodem accepts a pointer to a configuration struct and stores that pointer (or an interior pointer from within it), which is then used by another libmodem function later. If all of libmodem were written in Rust, this could be done without any use of unsafe, but it would require the lifetime of the original "reference" to provably outlive the second function call, probably by being 'static.
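A minimal Rust sketch of that shape (the Modem/Config names are made up for illustration): the stored reference's lifetime is part of the type, so the borrow checker forces the configuration to outlive the later call, and a 'static configuration trivially satisfies that.

    // Hypothetical sketch of the pattern described above; `Modem`, `Config`,
    // and the method names are invented for illustration.
    struct Config {
        baud: u32,
    }

    // The stored reference's lifetime 'a is part of the Modem type, so the
    // borrow checker proves the Config outlives every later use of it.
    struct Modem<'a> {
        config: &'a Config,
    }

    impl<'a> Modem<'a> {
        fn set_config(config: &'a Config) -> Self {
            Modem { config } // store the pointer, like the C API does
        }

        fn connect(&self) -> u32 {
            self.config.baud // used again later; valid only while 'a is live
        }
    }

    fn main() {
        // A 'static (or otherwise long-lived) Config satisfies the constraint.
        static CONFIG: Config = Config { baud: 115_200 };
        let modem = Modem::set_config(&CONFIG);
        println!("connected at {} baud", modem.connect());
    }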

