How can vodka help with alcoholism?

PlayCanvas is a game engine that runs in browsers, but I'm not certain what its future will be: it was bought by Snapchat, and Snapchat has since shut down running games in the app.


A long time ago I had a company backchannel someone I actually would have listed as a reference, except I knew he was out of the country and on vacation. That left a sour taste in my mouth, and I ended up not going with their offer. So if you're going to backchannel, I'd suggest at least not cold calling.


Location: Los Angeles, CA

Remote: Ideally

Willing to relocate: No

Technologies: C#, JavaScript/TypeScript, Go, Python, React, Unity, Three.js, AWS, Maya, C4D, Substance, Photoshop

Résumé/CV: estes.es

Email: Michael@estes.es

Red Flags: I spend too much time picking fonts, I have trust issues with black-box code, I spend more time making things easier to make than just making the things, I pace when I'm thinking, and I can't solve most problems without drawing them out.


Depends on the asset.

Most code or UI tools would probably be a no without a good amount of effort to port. 2D and particle assets you can most likely rip the image files from and recreate in another engine with some work.

3D assets usually come with an FBX file (if they were made with ProBuilder or something else inside Unity, you can use Unity's FBX Exporter to get one), which you can easily transfer to another engine; there may be some edge cases where you'd have to re-rig the assets depending on the engine you're moving to. Animation assets will either be an FBX file you can transfer over or a Unity Animation Clip that you can also convert to an FBX using the FBX Exporter.

Shaders are a little vendor-locked if they were made with Shader Graph. You can get the generated source for the shaders, but it is generated, which can make it hard to read (single-letter variable/function names).

There are a ton of other edge cases you could run into depending on the engine you're moving to, like Z being up vs. Y being up, or engines using a different normal map tangent basis, but there are tools that can fix those issues when you come up against them.
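The tangent-basis one, for example, usually comes down to DirectX-style normal maps storing -Y in the green channel where OpenGL-style maps store +Y, so converting between them is just inverting green. A minimal sketch using Pillow (the library choice and file names here are illustrative, not from any engine's tooling):

    # Convert a normal map between DirectX- and OpenGL-style tangent
    # bases by inverting the green (Y) channel.
    # Assumes Pillow is installed: pip install Pillow
    from PIL import Image, ImageOps

    def flip_normal_map_green(src_path, dst_path):
        img = Image.open(src_path).convert("RGBA")
        r, g, b, a = img.split()
        g = ImageOps.invert(g)  # -Y <-> +Y
        Image.merge("RGBA", (r, g, b, a)).save(dst_path)

    flip_normal_map_green("normal_dx.png", "normal_gl.png")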


If you're on OS X, I haven't found any program better than Sequel Pro (sometimes referred to as Sequel Pancakes). It's one of the few programs that make keeping a Mac around worth it.


Does anyone know how accurate these are to the raw data a CT scan produces? They look really clean; have they been touched up any significant amount, or is this actually the quality of the machines?


I've seen slightly better scans from the Lumafield machine I have access to at $DAYJOB, but only slightly. The scans shown here are very high quality.

You don't get access to raw data, assuming you mean something like individual X-ray images. The service runs the tomography on some cluster in the cloud and you get access to the reconstruction through a web app.


The scans you see on this website are from Lumafield machines, so that does check out.


Radiologist?


No, I am a software engineer at a writing instruments company... long story.


Oh interesting. The writing instrument company needs scans this good?


Yes, and I wish it was even more accurate :)

Doing metrology on production parts normally means disassembling them and putting them under the microscope or X-raying them, but sometimes there are problems that only manifest when the pen is assembled and closed. There's a lot of geometry that isn't visible externally in a pen, more so in certain markers. The writing systems are very sensitive to manufacturing tolerances, and out of spec parts are perceived by users as a bad pen or marker (which we don't want). With a normal X-ray, it is very difficult to resolve internal geometry deep in the assembled pen with any degree of accuracy.

CT scans allow us to examine internal geometry non-destructively, and they are relatively fast to run. The scans shown in that blog post I would guess took about 6-8 hours of scanning plus 1 hour of reconstruction to generate. Once you start the machine, it's completely automated from there, so you don't need a technician or an engineer sitting at the X-ray machine (which, BTW, is running Windows XP or something worse) taking images of parts.


I’m a radiographer, but haven’t done CT in a long time.

These look cleaned up, as the metal artifact from dense things is minimal.

I've scanned things and then converted the DICOM files into a format suitable for printing (I'd broken a part of a coffee grinder). These images look like the item does when moved into a 3D-printable format.
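For anyone who wants to try this, the DICOM-to-printable step can be done in a few lines of Python. A rough sketch of the kind of pipeline I mean, assuming pydicom, scikit-image, and numpy-stl are installed; the paths and the isosurface threshold are placeholders you'd tune for your own scan:

    # DICOM series -> STL: stack the slices into a volume, extract an
    # isosurface with marching cubes, write out a printable mesh.
    # Assumes: pip install pydicom scikit-image numpy-stl
    import glob
    import numpy as np
    import pydicom
    from skimage import measure
    from stl import mesh

    # Load the slices and sort them by position along the scan axis.
    slices = [pydicom.dcmread(f) for f in glob.glob("scan/*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)

    # Extract a surface at a density threshold (placeholder value;
    # tune it until it separates the part from air in your scan).
    verts, faces, _normals, _values = measure.marching_cubes(volume, level=300.0)

    # Pack the triangles into an STL for slicing/printing.
    m = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    m.vectors[:] = verts[faces]  # (n_faces, 3 vertices, xyz)
    m.save("part.stl")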

Side story: finding out what's inside things is what CT is for. We used to scan the chip packets before loading them into the vending machine and sort out the ones with prizes inside.


Ha! I'm with Lumafield--we've actually done the same thing, scanning bags of Doritos, Cheetos, and Ruffles: https://www.lumafield.com/article/bite-into-doritos-ruffles-...

To answer the question about whether these are cleaned up, these scans aren't processed beyond what our software does automatically during the reconstruction. Industrial CT scanners are designed to scan a wider range of material densities than medical scanners. We use some copper filtration to scan parts with lots of dense materials, but no extra processing is required once we've reconstructed the model.


I've spent a good amount of time on YC's cofounder matching service, and my biggest problem is not anything that can be fixed by a better platform. Most of the "ideas" for products are dystopian, amoral get-rich-quick schemes that I'd have to abandon any semblance of ethics to get behind. Web3 especially has brought out the "music men" of the world and given them a platform. Fuck this "do anything for a buck" world we've built.


In computer rendering and simulation: They have an Oscar.

The Academy Awards have a separate event for scientific and technical awards. I don't think it's something you can really strive for, but as far as social proof goes, I bet it works really well.


I've mostly used ChatGPT for finding functions in more obscure or densely documented APIs. It shaves some time off when I go "I know Maya must have a Python function that can do this" and I can ask ChatGPT without having to dig through the documentation. I would say it's been about a 60% success rate: 20% of the time it gets it completely right, 40% of the time it gives me a good starting point, and the other 40% I'm stuck digging through documentation anyway.
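To give a concrete example of the kind of lookup I mean: asking for "the world-space bounding box of the current selection in Maya" should land you on something like this (it only runs inside Maya's Python interpreter, and it's exactly the sort of answer I'd still verify against the docs):

    # maya.cmds ships with Maya; run this in the Script Editor.
    import maya.cmds as cmds

    # World-space bounding box of the current selection:
    # returns [xmin, ymin, zmin, xmax, ymax, zmax].
    bbox = cmds.exactWorldBoundingBox(cmds.ls(selection=True))
    print(bbox)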

