
Nice voice of reason, swyx. People who are not hooked on X will have selective memory: "Mistral has changed, I miss the old Mistral."

Last year Mistral watched as every provider hosted their models with little to no value capture for Mistral.

Nemo is Apache 2.0 licensed; they could easily have made it a Mistral Research License model.

It's hard to pitch VCs for more money to build more models when Apache 2.0 means you capture nothing.

Not everyone can be Meta.

Magnet links are cute, but honestly, most people would rather use HF to get their models.


OpenRouter gets close to that: https://openrouter.ai/models/meta-llama/llama-3.1-405b-instr...


Amazing, I just noticed the mention of precision.


Nice, I built something similar: https://huggingface.co/spaces/Whiteshadow12/llm-pricing-calc...

I like your charting; many have taken on this task and then lost interest.

Similar tools for inspiration: https://llmprices.dev/ and https://www.llmpricing.app/

What no one is doing is focusing on GPUs: what is the per-second cost of running L3-8B on an A100 or H100?
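A rough back-of-envelope makes the point. All hourly rates and throughput numbers below are illustrative assumptions, not measurements; substitute your own rental prices and benchmarked tok/s:

    # Rough $/token from GPU rental cost -- every number here is an
    # assumed placeholder, not a measured figure.
    GPU_HOURLY_USD = {"A100-80GB": 1.90, "H100-80GB": 3.50}
    THROUGHPUT_TOK_S = {"A100-80GB": 1500, "H100-80GB": 3000}  # assumed batched L3-8B throughput

    for gpu, hourly in GPU_HOURLY_USD.items():
        per_second = hourly / 3600
        usd_per_m = per_second / THROUGHPUT_TOK_S[gpu] * 1_000_000
        print(f"{gpu}: ${per_second:.5f}/s, ~${usd_per_m:.2f} per 1M tokens")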


Going to add my personal favorite: https://artificialanalysis.ai/models/llama-3-instruct-70b/pr...

What sets them apart is that they track speed and latency as well.


Thanks for bringing llmprices.dev to my attention. I also have a comparison page for models hosted on OpenRouter (https://minthemiddle.github.io/openrouter-model-comparison/). I do comparisons via regex, so "claude-3-haiku(?!:beta)|flash" will show you haiku (but not haiku:beta) vs. flash.
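For anyone curious how that filter behaves, here's a minimal sketch (the model names are illustrative, not a live OpenRouter list):

    import re

    # The negative lookahead keeps plain haiku but drops the :beta variant.
    pattern = re.compile(r"claude-3-haiku(?!:beta)|flash")
    models = [
        "anthropic/claude-3-haiku",
        "anthropic/claude-3-haiku:beta",
        "google/gemini-flash-1.5",
    ]
    print([m for m in models if pattern.search(m)])
    # ['anthropic/claude-3-haiku', 'google/gemini-flash-1.5']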

I wish that OpenRouter would also expose the maximum number of output tokens via API, as this is also an important criterion.


Yeah, we want to do exactly this: benchmark and add more data from different GPUs/cloud providers. We would appreciate your help a lot! There are many inference engines that can be tested and updated to find the best inference methods.
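One simple way to collect tok/s numbers is to time an OpenAI-compatible endpoint (vLLM, for example, exposes one). The URL, model name, and prompt below are placeholder assumptions, not a fixed setup:

    import time
    import requests

    URL = "http://localhost:8000/v1/completions"  # assumed local vLLM-style server
    payload = {
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",
        "prompt": "Explain KV caching in one paragraph.",
        "max_tokens": 256,
    }

    start = time.time()
    resp = requests.post(URL, json=payload, timeout=120).json()
    elapsed = time.time() - start

    out = resp["usage"]["completion_tokens"]
    print(f"{out} tokens in {elapsed:.2f}s -> {out / elapsed:.1f} tok/s")

Pair that tok/s figure with the GPU's hourly rate and you get the per-second cost discussed upthread.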


Good luck, companies would love that. Don't get depressed. Unlike with my tool, I think you should charge; that might keep you motivated to keep doing the work.

It's a lot of work. Your target users are companies that use Runpod and AWS/GCP/Azure, not Fireworks and Together; those two are in the game of selling tokens, while you are selling the cost of running seconds on GPUs.


This is true, especially if you are deploying custom or fine-tuned models. In fact, for my company I also ran benchmark tests where we measured cold starts, performance consistency, scalability, and cost-effectiveness for models like Llama 2 7B and Stable Diffusion across different providers: https://www.inferless.com/learn/the-state-of-serverless-gpus... It can save months of evaluation time. Do give it a read.

P.S.: I am from Inferless.


Thank you!


This is really pretty.


Perfect Streisand effect.


Sorry everyone. This was me.


What did you do?


Bounder


The beauty of HN is interactions like this.


Saw this for the first time; it's awesome. I love this place. I started coming here recently as a curious teenager :)


Yes, this was the longest period in the last 6 months.


Had to visit Twitter to be sure I wasn't the only one.

I hope dang will take some time to explain why it was down.

We might learn a few things :)


It was painful. Usually HN is where one goes to find out if a service is down; I also used Twitter to double-check it wasn't just me.



You are most likely not Sam Altman.


I doubt we would have clicked on it if they had called it a Modern Alternative to Yahoo! Directory. You are correct, though.

Can you allow people to add sites they have found? For example, adding something like Hydra or Citus would require you to already know about them.

This is perfect though, because I'm always finding fascinating tools online that I either favorite on HN or star on GitHub, but not consistently. For example, there's a Rust serverless company whose name I can't recall that would fit your use case nicely.

