A long-standing problem for European companies is access to capital. Raising billions of USD for AI is relatively easy in the USA. Raising millions of EUR for AI is comically difficult.
That means you need to grow organically, and thus slowly, which means a competitor with access to easy money can outgrow you without a problem and take the whole market for themselves.
You're too cheap. Anyone who won't pay for a proper enterprise support contract, you should tell to pound sand. You'll be surprised: when you start charging more, people will actually take you more seriously and be more inclined to sign up. It's counterintuitive from your side, but perception is reality. A $20k/yr enterprise support agreement is more believable to deliver results than a $2k/yr deal.
Quite. I remember one of my first corporate customers being very suspicious of $2K/week, because, they said, nobody could do it that cheap. It was nothing extraordinary, just some integration work, tests, etc. for the project; they wanted it to work with their other suppliers' systems.
A lot of Reddit users have drifted over here, too. Bringing with them a culture of juvenile humor and unreserved antagonism. I'm sure it has made Dang's job harder.
>A lot of Reddit users have drifted over here, too. Bringing with them a culture of juvenile humor and unreserved antagonism. I'm sure it has made Dang's job harder.
Did you mean "undeserved" antagonism?
I think HN has as good a culture as some of the better-moderated subreddits, but HN has more experts, and Reddit has more folks who are eager to learn and share their thoughts.
Dang has certainly done a good job keeping it a place to discuss technical matters rather than descending into a sea of memes à la Slashdot. But I think one of the few failings of HN (and this is not dang's fault) is that a small subset of the folks here are very fragile -- you can tell from interacting with them that they are not used to being challenged in meatspace, and because of that the mildest criticism gets "rounded up" to antagonism or worse.
Maybe part of why folks are like that is we have a lot of "rockstars" posting, and just like in the entertainment space, when most of your interactions consist of people catering to you, any form of criticism can feel like a personal attack.
They haven't improved at all since then in my experience. I poke at them every now and then and they still can't refrain from feeding me false info (and likely never will be able to, because they are stochastic parrots without any actual understanding). They are useless to me because I will take more time checking their output than I will just doing the task myself.
> and they still can't refrain from feeding me false info
If that's your metric, and even then only if you've got a boolean yes/no measurement, then I agree.
If you measure "false info" as a percentage, they're better. If you measure scores on IQ tests, on general knowledge, on exams, on the size of coding problem they can complete before they have a 20% chance of failure, on the quality of their translations, on new modalities like being able to both consume and respond with images, on mathematical olympiad questions, then they're significantly better.
Unfortunately, we can tell by the general public reaction (not just you) that even all those things combined still don't fully capture what normal people mean by "intelligence".
> They are useless to me because I will take more time checking their output than I will just doing the task myself.
What size problem do you give them? I use them for software, and I try to keep each task I give them to something that would take a human about 90 minutes. I can check the quality of an attempt at a human-would-take-90-minutes-to-do task in about 5 minutes.
When I've accidentally let an LLM do bigger tasks than that, then the difficulty of checking goes way up and the quality of the output goes way down.
Conveniently, one of the tasks that generally takes a human less than 90 minutes is breaking a bigger task down into sub-tasks that themselves take less than 90 minutes. Fail to do this and I get exactly what you experience.
That's not the gotcha you think it is, because everyone else reading this realizes that these things are able to combine concepts to make a previously non-existent thing. The same technology that can put clothing onto people who never wore it can mash together the concept of children and the concept of naked adults. I doubt a red panda piloting a jet exists in the dataset directly, yet it can generate an image of one because those separate concepts exist in the training data. So it's gross and squicks me to hell to think too much about it, but no, it doesn't actually need to be fed CSAM in order to generate CSAM.