code51's comments

How convenient that insiders jumped on INTC with this "rumor" 3 days ago.

They made about 25% from this in 3 days. Easiest money from a rumor.


The SEC really doesn’t like such plays.

No wait that was a couple months ago.


Is the SEC still around? I thought Elon announced he was eliminating all federal agencies whose names contain a letter from the word Valentine this month.

The husk of the SEC with its brains scooped out is a useful tool to sic upon one's competitors or ideological enemies. I wouldn't be surprised by an SEC investigation into OpenAI in the next few years.

Can the SEC investigate private companies?

Yes. Public companies just have more reporting requirements, but as long as an organization deals in securities such as shares, it falls under the SEC's ambit, public or private.

I think OpenAI's non-profit-to-profit transition would have warranted scrutiny from any administration; I suspect additional personal animus will be driving this specific set of players.


Yeah I was thinking the same thing. It's only a matter of time before Musk goes after Altman with the newly weaponized agencies.

The SEC will probably still exist to punish enemies.

So update your bio with a MAGA photo and something racist.


If this were a real thing that would play out, the stock would go up far more than 25%. There's still time for normal laymen to get in. Or it's not going to happen, in which case it's going to go back down.

Coinflips for everyone!


Sometimes the play is only about the announcement itself, not the implementation: news about possible big news, just like the upcoming Trump tariffs.

Daniel Kahneman, Nobel laureate in Economics, described this in his book, "Thinking, Fast and Slow".

Saddam Hussein's capture caused oil industry stocks to climb because of the association between oil and the Middle East. After people realized that Saddam Hussein actually had no relevance to the price of oil or the product, the stocks fell back down.

The stock market is quite often pushed by emotion and asinine assumptions rather than logic and reasoning. Enough social media echo chambers can easily sway stock prices momentarily to make a quick buck.


Despite all its worth, Suno doesn't approach the problem well.

After a few laughs and cheap replicas, people realize that it's damn hard to produce a good sounding, creative piece. Suno almost always adds noise and you can feel most of this is coming from fingerprinting.

With Suno and Udio, you lose control. You can generate starters and helpers but sooner or later, you want real control. That's not editing a section and having a conditioned piece of garbage seemingly fitting to the rest. No, control means completely changing progressions, sudden but calculated change of beats, removing any instrument for the shortest time and putting it back with razor-sharp studio detail.

I know a few of these are already addressable: you can take the output, separate it into channels (if it's simple enough), quantize, edit, and end up with a good one. Yet you're not really supported anymore. What should have happened is for the core music production software to get cheaper and/or far more effective.

Suno and Udio take a top-down approach. Maybe one day Logic Pro, Ableton, Melodyne, etc. will fill in the details up to this point, coming from the ground up with AI, I don't know. We're not there yet, and it just pulls the mask off the mainstream music industry with its endlessly repeated shallow beats marketed to hell. Hearing mainstream music was awful before, but it suddenly got even more awful.


I don’t believe any of these companies are seriously trying to create a tool for musicians/artists.

When they say they’re catering to “artists”, what they mean is that you can become an artist by prompting their models for the small fee of X.99/mo.

Their goal is to train a model of the well-trodden one-shot text<->art form, and just sell prompting access directly to the mass market consumer audience. Way less work, way bigger market (and less discerning, too)


The tradeoffs between automation and empowerment are tricky to navigate.


Much of this "bad actor" activity is actually customer needs left hanging - for either the customer to automate herself or other companies to fill the gap to create value that's not envisioned by the original company.

I'm guessing investors actually like a healthy dose of open access and a healthy dose of defence. We see them (YC, as an example) betting on multiple teams addressing the same problem. The difference is their execution, the angle from which they attack.

If, say, the financial company you work for is capable in both product and technical aspects, I assume it leaves no gap: it's the main place to access the service and all the side benefits.


> Much of this "bad actor" activity is actually customer needs left hanging - for either the customer to automate herself or other companies to fill the gap to create value

Sometimes the customer you have isn't the customer you want.

As a bank, you don't want the customers who will try to log in to 1000 accounts, and then immediately transfer any money they find to the Seychelles. As a ticketing platform, you don't want the customers who buy tickets and then immediately sell them on for 4x the price. As a messaging app, you don't want the customers who have 2000 bot accounts and use AI to send hundreds of thousands of spam messages a day. As a social network, you don't want the customers who want to use your platform to spread pro-Russian misinformation.

In a sense, those are "customer needs left hanging", but neither you nor other customers want those needs to be automatable.


With the current state of the legal system, a real challenge can happen only around 10 years from now. By then the AI players will have gathered immense power over the law.


Why is Apache DataFusion not there as an alternative?


Initially, LLM researchers were saying that training on code samples made the "reasoning" better. Now, if the "language to world model" thesis holds, shouldn't chess actually be the smallest test case for it?

I can't understand why no research group is going hard at this.
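
One cheap probe of that idea (a hypothetical sketch, not something from this thread): measure how often a model's proposed moves are even legal, with python-chess doing the rule-keeping. propose_move is a placeholder for whatever LLM call you'd wire in.

    import chess

    def propose_move(fen: str) -> str:
        """Placeholder for an LLM call: return a move in UCI, e.g. 'e2e4'."""
        return "e2e4"

    def legal_move_rate(fens: list[str]) -> float:
        """Fraction of positions for which the proposed move is legal."""
        legal = 0
        for fen in fens:
            board = chess.Board(fen)
            try:
                move = chess.Move.from_uci(propose_move(fen))
            except ValueError:
                continue  # unparseable output counts as illegal
            if move in board.legal_moves:
                legal += 1
        return legal / len(fens)

    print(legal_move_rate([chess.STARTING_FEN]))

Legal-move rate is a crude floor, of course; the interesting question is whether the model tracks board state over long games, but even this floor would be a start.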


I don't think training on code and training on chess are even remotely comparable in terms of available data and linguistic competency required. Coding (in the general case, which is what these models try to approach) is clearly the harder task and contains _massive_ amounts of diverse data.

Having said all of that, it wouldn't surprise me if the "language to world model" thesis you reference is indeed wrong. But I don't think a model that plays chess well disproves it, particularly since there are chess engines using old-fashioned approaches that utterly destroy LLMs.


So it'll turn into yet another arms race, similar to captchas, cybersecurity, and nuclear weapons. SEOs will use AI to fill in fluff inside AI-generated content (which is already being done).

It won't directly match ChatGPT logs, and OpenAI would just be pouring precious compute into a bottomless pit trying to partial-match.


Serve Claude-generated version to OpenAI bots. Serve OpenAI-generated version to Claude bots. Problem solved.

Serve users a random version and A/B test along the way.
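
A minimal sketch of what that routing could look like, assuming the crawlers identify themselves by User-Agent (the variant filenames and the exact UA substrings are illustrative, not from the comment above):

    import random

    # Pre-generated page variants; filenames are made up for the sketch.
    VARIANTS = {
        "claude_made": "page.claude.html",   # serve to OpenAI crawlers
        "openai_made": "page.openai.html",   # serve to Anthropic crawlers
        "human_a": "page.a.html",
        "human_b": "page.b.html",
    }

    def pick_variant(user_agent: str) -> str:
        ua = user_agent.lower()
        if "gptbot" in ua:                          # OpenAI's crawler
            return VARIANTS["claude_made"]
        if "claudebot" in ua or "anthropic" in ua:  # Anthropic's crawlers
            return VARIANTS["openai_made"]
        # Real visitors land in a random A/B bucket.
        return VARIANTS[random.choice(["human_a", "human_b"])]

In practice you'd want the human A/B bucket to be sticky per visitor (cookie or hashed IP) rather than random per request, but random is enough for the joke to work.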


Then you are still left with self-hosted models, which are pretty good at this task.


Do the previous investments still count as "donations"? Elon must have something to say...


And the tax office! If this works, many companies will be founded as non-profit first in the future.


Landing page, yes, but is it actually working with production performance? I have strong doubts after running your own demo query, "Summarize the reviews you have on products in the OCR category.", in the system.

If the idea is just a demo and collecting leads, then you could have at least cached your demo queries.

Everybody's too landing-page focused these days.


Hm can you elaborate on this - I do not fully understand. Was there no response for "Summarize the reviews you have on products in the OCR category"?

The demo is not "cached" or anything; this is production.

Would be cool if you could give us more info on this.


For Mac and iOS, you can install the ChatGPT app.

Why they won't enable search for their main web user crowd is beyond me.

Perhaps they are just afraid of scale. With all their might, it's still possible that they can't estimate the scale and complexity of queries they might receive.


A user's personal data really does not have that much scale. Worst case they can cache everything locally. I've imported thousands of chat sessions into a local AI chat app's database, total storage is under 30MB. Full text search (with highlights and all) is almost instant.
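
As a rough sketch of how little machinery that takes, here is local full-text search over chat messages with SQLite's FTS5 (the table layout and sample rows are made up, not taken from the app mentioned above):

    import sqlite3

    conn = sqlite3.connect("chats.db")
    # FTS5 virtual table; ships with the SQLite bundled in CPython.
    conn.execute(
        "CREATE VIRTUAL TABLE IF NOT EXISTS msgs "
        "USING fts5(session, role, content)"
    )
    conn.executemany(
        "INSERT INTO msgs VALUES (?, ?, ?)",
        [
            ("2024-03-01-infra", "user",
             "How do I profile a slow Postgres query?"),
            ("2024-03-01-infra", "assistant",
             "Start with EXPLAIN (ANALYZE, BUFFERS) ..."),
        ],
    )
    conn.commit()

    # Full-text match with highlighted snippets, best matches first.
    for session, snip in conn.execute(
        "SELECT session, snippet(msgs, 2, '[', ']', '...', 10) "
        "FROM msgs WHERE msgs MATCH ? ORDER BY rank",
        ("slow query",),
    ):
        print(session, snip)

At the tens-of-megabytes scale described above, queries like this come back effectively instantly.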


They did staged rollouts for almost every recent feature.

I think it might be in their interest if you just ask the LLM again? Old answers might not be up to their current standards, and they don't gain feedback from you looking at old answers.

