
Off topic: are $1M/$2M seed rounds realistic during YC demo day?

Even with some traction/users/conversion/revenue and a grand long-term vision, the suggested 15%-25% dilution [1] gives a $4-10M valuation. Outside SV, that is already a Series A valuation. Any thoughts?

[1] https://blog.ycombinator.com/how-to-raise-a-seed-round/
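
For concreteness, the arithmetic behind that claim is post-money valuation ≈ amount raised / dilution. A quick sketch (illustrative numbers only):

    # post-money valuation implied by a raise at a given dilution
    for amount_m in (1, 2):              # raise, in $M
        for dilution in (0.15, 0.25):    # the suggested dilution range
            post_money = amount_m / dilution
            print(f"${amount_m}M at {dilution:.0%} dilution "
                  f"-> ${post_money:.1f}M post-money")
    # endpoints run from $1M at 25% -> $4.0M up to $2M at 15% -> $13.3M,
    # bracketing the $4-10M range cited above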


These are certainly realistic, and happen quite often during and after demo day.

FWIW, Series A valuations around the world certainly vary, but it is rare to see an A done in the $4-10M range, at least for companies that we've funded.


Yes, getting $1M to $2M raised because of demo day is reasonable. But closing $2M on demo day itself, without subsequent follow-up meetings, would be very impressive!


$1 million doesn’t get you very far these days.

That only pays the salaries of 5 engineers. And what about rent? That takes one salary by itself, which leaves you with enough money for only 4 engineers.


Give me 5 engineers and I will change the world.


Well, if you can encourage them to work 70 hours a week, and you work for free, then you’d get the equivalent of 10 engineers.


I will never encourage anyone to work 70 hours a week.


Is there any research suggesting that spiking/dopamine-type learning is good at "animal-level" behaviour, while abstract and complex thinking is enabled by different mechanisms?


On the AI side of the fence the approach has been "let's see just how far Reinforcement Learning can take us, and then start making up stories (hypotheses) about what the secret ingredients are that are missing." On the neuroscience side of things my sense is that that's not a question that can be empirically answered any time soon. This experiment was interesting because they knew what they were looking for going in. "What algo are these cells running?" is a hard question; "are these cells' firing activities consistent with this given algo?" is comparatively easy. Inference vs hypothesis testing.
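
(Not from the parent comment, but for a concrete example of testing against a "given algo": the classic candidate in this literature is temporal-difference learning, with dopamine firing hypothesized to track the reward prediction error. A minimal TD(0) sketch, toy values throughout:)

    # Minimal TD(0) on a 5-state chain; `delta` is the reward prediction
    # error that dopamine neurons are hypothesized to encode.
    ALPHA, GAMMA = 0.1, 0.9          # learning rate, discount factor
    V = [0.0] * 5                    # value estimate per state

    for _ in range(1000):
        s = 0
        while s < 4:
            s_next = s + 1
            r = 1.0 if s_next == 4 else 0.0        # reward only at the end
            delta = r + GAMMA * V[s_next] - V[s]   # prediction error
            V[s] += ALPHA * delta
            s = s_next

    print([round(v, 2) for v in V])  # values back up from the rewarded state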


Amazing open source effort. My immediate thought is building a generic AI player that can train itself and play (most of) the games.

What are your opinions?


I just wonder if the AI would be easier to train for Rome or for the Gauls!


Great tweets and write-up.

Can you mention how much human dev time is involved?

We have a bare-bones, single-machine deep reinforcement learning self-play setup. It takes about 24 hours to run a full experiment. The NN is the bottleneck. Using TensorFlow. Nothing fancy.

How much dev time would it take a good engineer (backend, kernel, multi-core experience) to get this down to, say, 1 hour?

Obviously a very general question. Thanks for any input.
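
(Adding a hedged sketch, since the question is general: when per-move NN inference is the bottleneck, the usual first step is to run many self-play games concurrently and batch their network calls. All names below are hypothetical, and the model is stubbed with numpy rather than TensorFlow:)

    import numpy as np

    class BatchedEvaluator:
        # Collects positions from many concurrent self-play games and
        # evaluates them in a single forward pass (one batched model call
        # instead of one call per game).
        def __init__(self, model_fn):
            self.model_fn = model_fn
            self.pending = []                     # (game_id, position) pairs

        def request(self, game_id, position):
            self.pending.append((game_id, position))

        def flush(self):
            if not self.pending:
                return {}
            ids, positions = zip(*self.pending)
            self.pending = []
            values = self.model_fn(np.stack(positions))   # one batched call
            return dict(zip(ids, values))

    def stub_model(batch):                        # stand-in for the real NN
        return batch.mean(axis=1)

    ev = BatchedEvaluator(stub_model)
    for g in range(64):                           # 64 games ask at once
        ev.request(g, np.random.rand(8))          # toy 8-dim "position"
    print(len(ev.flush()), "positions evaluated in one forward pass")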


Hi,

Checked out your profile/web page.

Can I email you about a small biz-dev opportunity?

We PAY YOU a monthly commission. We don't want any customer or account data. We don't compete with you or your customers.

I hope this doesn't sound too spammy...


Why don't you just use https://referdigital.com/homepage/contact? But since you are asking: it sounds a bit spammy ;)


Hopefully Hacker News is a better starting point for a conversation?

For what it's worth, it's a totally new startup, and this is my first (failed?) attempt at finding partners online.

Maybe we are like Robinhood? Paying millennials a higher interest rate AND giving them zero-commission stock trades :)


How do you recruit? Where do you advertise?

I think the real problem is cutting down the initial 1000 CVs, no?


HN is the best place to find devs, without exception.

> I think the real problem is cutting down the initial 1000 CVs, no?

Yeah, that's a time sink. Most applicants are unqualified. Good ones tend to stand out immediately, though. Whatever you're after, you can find someone with the exact requirements you asked for.


I took the title from the techmeme.com link:

> TikTok was splurging by spending nearly $1 billion on advertising for the year


> prevent our users from burning too much founder attention doing chargeback management

Yikes. Yes. Thank you. I wish I had this 2 months ago.


I never really tried multi-task learning. Is it so catastrophic?

Suppose I have several tasks, plenty of samples for each task, and a network that converges well on each task individually.

Can't I just mix and task-label these samples, train a slightly bigger network from scratch, and ta-da... a multi-modal network or whatever?


Yes you can! The "mix" part is key. It's sequential learning which screws up networks today. If you randomly sample from tasks you're fine, or if you can replay older tasks while you're learning new ones (essentially another form of random sampling), the network can learn multiple tasks. But the moment you drop a task from the distribution of training data, you're going to start losing competency on it. By default, neural networks don't have mechanisms to protect data (weights) from being overwritten.
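
(To make the "mix" concrete, here's a minimal sketch, with hypothetical names, of drawing every batch uniformly across tasks so that no task ever drops out of the training distribution:)

    import random

    def mixed_batches(task_datasets, batch_size, steps):
        # task_datasets: {task_name: list of (x, y) samples}.
        # Every batch mixes samples from all tasks, which is what keeps
        # the network from forgetting tasks it saw earlier.
        names = list(task_datasets)
        for _ in range(steps):
            batch = []
            for _ in range(batch_size):
                t = random.choice(names)                 # pick a task
                x, y = random.choice(task_datasets[t])   # pick a sample
                batch.append((t, x, y))
            yield batch

    # toy usage: two "tasks" with scalar samples
    data = {"task_a": [(0.1, 0), (0.2, 0)], "task_b": [(0.9, 1), (0.8, 1)]}
    for batch in mixed_batches(data, batch_size=4, steps=2):
        print(batch)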


https://updatefy.co/documentation#types

Err... so there are only 3 widget types right now? This looks like a neat idea. More widgets needed.

But I am not a UI or front-end guy. Am I misunderstanding? Are users supposed to design their own widgets?


Thanks :) At the moment we have 3 widget types; we are planning to release more. No, all widgets come from us :)

