votick's comments | Hacker News

Looks like I can't use it on mobile, which is annoying since we've had touchscreen phones for almost 20 years now. The good thing is I at least care enough to be annoyed that I can't use it. I'd imagine you'll get a lot more sharing if mobile works.

I will definitely support mobile today or tomorrow, but I'm sure it's almost useless on mobile phones.

I thought this was a "your mom is compatible with everyone" joke




I don't know whether to laugh at the joke or be triggered that USB takes three turns to insert.


This seems completely reasonable pre-revenue, and I've no doubt they'll make an AI app that 1000x's everyone's investment using the best AI technology and algorithms and maybe even some better data. Hard to know for sure tho, sometimes they lose money, sometimes they won't lose money, but AI big and I think AI stay big, so more AI.


Put this guy in charge of a VC firm, stat.


Hold on a second. We don’t even know whether he drinks Lacroix or owns a patagonia puff vest.


Lacroix's for closers. Only.


I only talk to investors who were bottle-fed Hint straight out of their mother’s surrogate’s womb


This investment makes no sense because AI is supposed to take us into a post-scarcity society?


It’s going to take Ilya there for sure.


What kind of AI movies did you grow up watching? Personally I'll never forget the robot foot crushing a human skull.

https://www.youtube.com/watch?v=DHKxoARmjLU


Could you explain more? I was never able to understand this argument. I can understand that AI might reduce labor costs in the future, but what about finite resources like minerals?


Well, imagine a robot that goes to work for you, everything else being equal.


My gpt detector is going off on this comment, but it's probably just plain old BS.


Your detector seems to have given up before the end.


I'll plug my own attempt at doing this a few years ago, https://www.youtube.com/watch?v=H8w0eFXaXjI

The moment I received my first packet on a cut-up wired headset I was using as a makeshift transceiver, something clicked and I began to understand a little more about how the universe works. I recommend projects of this type and wish more folks did them.


This is clean, nice work!


Thank you!


Hey HN, this is a lil side project I've been working on.

I've run an ML hosting platform (banana.dev) for the last few years.

Not being able to get GPUs has stressed me out a lot, so I took a crack at solving it.

This is a super alpha product; please don't trust it with sensitive workloads or production applications yet. Down to chat if you'd like to use this at scale!

Would love honest feedback, and feel free to AMA! -kyle


Oh neat, I'm coincidentally trying to set up Tier rn.

Bit stuck partway through following the Vercel template here: https://vercel.com/templates/next.js/tier

But I've already built out my own Stripe usage-based billing app before, and it sucks way more to debug that, so I get what you're trying to do here.


Would love to help. Let me know what's blocking you with the template. Also, you can join our community at https://tier.run/slack


I tried to implement the internet's TCP/IP stack from scratch, plus a radio transceiver.

https://kyle.af/internet-from-scratch
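
Not the project's actual code, but to give a flavor of what "from scratch" means here, below is a minimal Python sketch of one early step: unpacking the fixed 20-byte IPv4 header from a raw packet (field layout per RFC 791).

    import struct

    def parse_ipv4_header(packet: bytes) -> dict:
        """Unpack the fixed 20-byte IPv4 header (RFC 791) from a raw packet."""
        (version_ihl, tos, total_length, ident, flags_frag,
         ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
        return {
            "version": version_ihl >> 4,          # should be 4
            "ihl": version_ihl & 0x0F,            # header length in 32-bit words
            "total_length": total_length,         # header + payload, in bytes
            "ttl": ttl,
            "protocol": proto,                    # 6 = TCP, 17 = UDP
            "src": ".".join(str(b) for b in src),
            "dst": ".".join(str(b) for b in dst),
        }

Feed it the bytes coming off a raw socket (or a homemade radio link) and you get back the version, TTL, protocol, and addresses; layering TCP parsing and checksums on top of that is where the real work starts.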


author here:

hey HN, we used to run an ML consultancy for a year that helped companies build & host models in prod. We learned how tedious & expensive it was to host ML. Customer models had to run on a fleet of always-on GPUs that would often get <10% utilization, which felt like a big money sink.

Over time we built infrastructure to improve GPU utilization. Six months ago we pivoted to focus solely on productizing this infra into a hosting platform for ML teams, one that removes the pain of deployment and reduces the cost of hosting models.

We deploy on A100 GPUs, and you pay per second of inference. If you aren't running inferences, you pay nothing. A couple of points to clarify: yes, the models are actually cold-booted; we aren't just running them in the background. We boot models faster due to how we manage OS memory. And yes, there is still cold-boot time; it's not instant, but it's significantly faster (e.g., 15 seconds instead of 10 minutes for some transformers like GPT-J).

Lastly, model quality is not lost on Banana, because we aren't doing traditional weight quantization or network pruning, which make networks smaller and faster but sacrifice quality. You can think of Banana more as a compiler + hosting platform: we break down your code to run faster on GPUs.
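
To make the per-second billing point concrete, here's a rough back-of-the-envelope Python sketch comparing an always-on A100 against pay-per-second inference. The hourly rate and traffic numbers are illustrative assumptions, not Banana's actual pricing.

    # Rough cost sketch: always-on A100 vs. pay-per-second inference.
    # All rates and traffic figures below are illustrative assumptions,
    # not actual Banana pricing.

    A100_HOURLY_USD = 3.00                      # assumed on-demand A100 rate
    PER_SECOND_USD = A100_HOURLY_USD / 3600     # assumed per-second billing rate

    requests_per_day = 4_000
    seconds_per_request = 2                     # inference incl. amortized cold boots
    days = 30

    always_on_monthly = A100_HOURLY_USD * 24 * days
    billed_seconds = requests_per_day * seconds_per_request * days
    pay_per_second_monthly = billed_seconds * PER_SECOND_USD
    utilization = billed_seconds / (24 * 3600 * days)

    print(f"Always-on A100:  ${always_on_monthly:,.0f}/month")
    print(f"Pay-per-second:  ${pay_per_second_monthly:,.0f}/month")
    print(f"GPU utilization: {utilization:.1%}")   # ~9%, the <10% money sink above

Under these assumed numbers the always-on fleet costs roughly 10x more at single-digit utilization, which is the gap the platform is trying to close.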

Try it out and let us know what you think!


Hey all,

We're building toward a GPT-3-level moment in computer vision, and here's our v0 (demo video linked).

It's called Carrot. Request access here: https://banana-dev.typeform.com/carrot

We are starting with a Visual Question-Answer model, and plan to expand its capabilities to be increasingly general-purpose over time as we build in common CV features and scale up the parameter count.

This is a hybrid of vision and language models, which can extract semantic meaning from images and query against it using natural English. This v0 runs on 13B parameters, with 18B and 34B model iterations coming in the pipeline.

The API is in beta, so jump into the waitlist linked above to get early access.
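
Since the API is still in beta and undocumented, here's a purely hypothetical Python sketch of what a visual question-answer call could look like; the endpoint URL, auth scheme, field names, and response shape are all assumptions, not the real Carrot API.

    import base64
    import requests

    # Hypothetical VQA request -- endpoint, payload fields, and response shape
    # are illustrative assumptions; the real API is still in private beta.
    API_URL = "https://api.example.com/v0/carrot/vqa"   # placeholder endpoint
    API_KEY = "YOUR_API_KEY"

    with open("kitchen.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "image": image_b64,
            "question": "How many mugs are on the counter?",
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())   # e.g. {"answer": "..."} -- shape is an assumption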

