Hacker News | evrydayhustling's comments

Best lines in this article. But it doesn't get to what is, IMO, a very important point: why can't these processes easily be structured? Here are some good reasons:

- Your process interacts with an unstructured external world (physical reality, customer communication, etc.)

- Your process interacts with differently structured processes, and unstructured is the best agreed transfer protocol (could be external, like data sources, or even internal between teams with different taxonomies)

- Your process must support a wild kind of variability that is not worth categorizing (e.g. every kind of special delivery instruction a customer might provide)

Believing you can always solve these with the right taxonomy and process diagram is like believing there is always another manager to complain to. Experienced process design instead pushes semi-structured variability to the edges, acknowledges those edges, and watches them like a hawk for danger.

We should ABSOLUTELY be applying those principles more to AI... if anything, AI should help us decouple systems and overreach less on system scope. We should get more comfortable building smaller, well-structured processes that float in an unstructured soup, because it has gotten much cheaper for us to let every process have an unstructured edge.


This write-up on a paper about business process redesign is such a fine read: https://ferd.ca/notes/paper-moving-off-the-map.html

Ok, so you define violence to include advocating for a law that tramples someone's rights. The next person says that advocating for laws is its own inalienable right, so you trampled him. And the whole semantic redefinition snake just eats its own tail.

If we want constitutional rights to have any force, we have to push for a world where words mean something.


Words do mean something - which is exactly why “violence” already has recognised psychological and coercive forms in law and medicine. Pretending otherwise isn’t defending meaning, it’s narrowing it for comfort. People who’ve lived under regimes of fear understand that harm doesn’t need batons to leave marks. But sure, if the only kind of wound you acknowledge is one that bleeds, then the rest of us must be imagining things.


Most people just mean physical violence when they say violence; if you use the word differently, you will trick many people into thinking you're saying something you're not.


“Most people” once thought depression was laziness and marital rape was impossible. Appealing to what most people think isn’t clarity, it’s inertia. Language changes because our understanding of harm does. The fact that many still default to the physical doesn’t make the rest untrue - it just shows how far denial can pass for common sense.


This is so true. When you get DSA wrong, you end up needing insanely complex system designs to compensate -- and being great at testing just can't keep up with the curse of dimensionality from having more moving parts.


Those human imperfections likely decrease randomness - for example, leaving cards that started adjacent more likely to remain adjacent than strict chance would suggest.
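A quick sketch of what I mean - a made-up "sloppy riffle" model, not anything from the article, and the run-length parameter is purely an illustrative assumption:

    import random

    def sloppy_riffle(deck, max_run=4):
        # One imperfect riffle: cut near the middle, then interleave,
        # letting runs of 1..max_run cards fall from each half. Bigger
        # runs model a sloppier shuffler who drops clumps together.
        cut = len(deck) // 2 + random.randint(-3, 3)
        left, right = deck[:cut], deck[cut:]
        out = []
        while left or right:
            for half in (left, right):
                if half:
                    run = min(random.randint(1, max_run), len(half))
                    out.extend(half[:run])
                    del half[:run]
        return out

    def adjacency_kept(original, shuffled):
        # Fraction of pairs adjacent in `original` that stay adjacent.
        pos = {card: i for i, card in enumerate(shuffled)}
        kept = sum(abs(pos[a] - pos[b]) == 1
                   for a, b in zip(original, original[1:]))
        return kept / (len(original) - 1)

    deck = list(range(52))

    sloppy = deck[:]
    for _ in range(3):          # a few imperfect riffles
        sloppy = sloppy_riffle(sloppy)

    uniform = deck[:]
    random.shuffle(uniform)     # baseline: uniform random permutation

    print("adjacency kept after 3 sloppy riffles:", adjacency_kept(deck, sloppy))
    print("adjacency kept after uniform shuffle: ", adjacency_kept(deck, uniform))

Pairs inside each clump that falls together survive a riffle intact, so after a few riffles the kept fraction should sit well above the roughly 2/52 you'd expect from a truly uniform shuffle.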


They most definitely decrease randomness.

But I guess the article’s point is that human imperfections offset that with less correlated failure modes.


This all day. Programmer since the C64: C++, Java, F#, Python, JavaScript, and everything in between. Code was never the point, but it wasn't just commerce either - it's fun making machines do things they couldn't before. AI is an S-tier upgrade to that mission.


Feels like cultivating acceptance and indifference to your own entanglements is the most isolationist thing you can actually do. To be entangled is to be biased about what's happening to you... do we think the crocodile was indifferent to the escape of his prey, or to being culled in an act of revenge?

Anyway, if folks enjoy this theme I recommend Scavengers Reign, which does a beautiful job of illustrating struggle with biological entanglement.


Drawing on your other comment about spurious correlations, might there be a more direct mathematical test for an unexpectedly high number of aligned correlations?
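Something like a permutation test on the count of same-sign correlations, maybe? Rough sketch of what I have in mind - the threshold, toy data, and names here are all just illustrative assumptions:

    import numpy as np

    def count_aligned(X, y, threshold=0.0):
        # Number of columns of X whose correlation with y has the
        # expected (positive) sign and exceeds the threshold.
        corrs = np.array([np.corrcoef(X[:, j], y)[0, 1]
                          for j in range(X.shape[1])])
        return int(np.sum(corrs > threshold))

    def permutation_pvalue(X, y, n_perm=2000, threshold=0.0, seed=0):
        # P(null count >= observed count) when y is shuffled, which
        # breaks any real relationship while keeping the marginals.
        rng = np.random.default_rng(seed)
        observed = count_aligned(X, y, threshold)
        null_counts = [count_aligned(X, rng.permutation(y), threshold)
                       for _ in range(n_perm)]
        p = (1 + sum(c >= observed for c in null_counts)) / (1 + n_perm)
        return observed, p

    # toy example: 30 noisy features, 5 of which genuinely track y
    rng = np.random.default_rng(1)
    y = rng.normal(size=200)
    X = rng.normal(size=(200, 30))
    X[:, :5] += 0.5 * y[:, None]

    obs, p = permutation_pvalue(X, y, threshold=0.1)
    print(f"aligned correlations: {obs}, permutation p-value: {p:.4f}")

If the observed count lands in the far tail of the shuffled-label distribution, the alignment really is more than chance would give you.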


Trust was its own reason. It's useful for the whole world to have a currency and business environment that operates by rules, even when the rules aren't perfect or fair.

That environment isn't being outcompeted by better, more fair rules - it's just getting vandalized for a few people's gain, and creating risk for everyone else.


I'm totally with you on the value prop at the time we signed up. I was more surprised that it sounds like you are reluctant to delete now, when the company is going through an unpredictable transition.

Did I get that right? If so, is there an ongoing value you want to maintain, or is it more out of respect for the organization that provided you value in the past?


Nice and provocative read! Is it fair to restate the argument as follows?

- New tech (e.g. RL, cheaper inference) is enabling agentic interactions that fulfill more of the application layer.

- Foundation model companies realize this and are adapting their business models by building complementary UX and withholding API access to integrated models.

- Application layer value props will be squeezed out, disappointing a big chunk of AI investors and complementary infrastructure providers.

If so, any thoughts on the following?

- If agentic performance is enabled by models specialized through RL (e.g. Deep Research's o3+browsing), why won't we get open versions of these models that application providers can use?

- Incumbent application providers can put up barriers to agentic access of the data they control. How does their data incumbency and vertical specialization weigh against the relative value of agents built by model providers?


Hi. Yes this is wholly correct.

On the two points:

* Well, I'm very much involved in making more open models: pretrained the first model on free and open data without copyright issues, released the first version of GRPO that can run on Google Colab (based on Will Brown's work). Yet, even then, I have to be realistic: open source RL has a data issue. We don't have the action sequence data nor the recipes (emulators) that could make it possible to replicate even on a very small scale what big labs are currently working on.

* Agreed on this, and I'm seeing this dynamic already in a few areas. Still, it's going to be uphill, as some of the data can be bought and advanced pipelines can shortcut some of the need for it, since models can be trained directly on simulated environments.


Thanks for the reply - and for the open AI work!

> We don't have the action sequence data nor the recipes (emulators) that could make it possible to replicate even on a very small scale what big labs are currently working on.

Sounds like an interesting opportunity for application-layer incumbents that want to enable OSS model advancement...


Answering the first question, if I understand it correctly.

The missing piece is data, obviously. With search and code, it's easier to get the data, so you get such specialized products. What is likely to happen is:

1/ Many large companies work with some early design partners to develop solutions. They have the data + subject matter expertise, and the design partners bring in the skill. This way we see a new wave of RL agent startups grow. My guess is that this engagement would look different compared to a typical SaaS engagement. Some companies might do it in-house, some won't, because maintaining such systems is a task.

2/ These companies open source part of their dataset, which can be consumed by OSS devs to create better agents. This is more common in tech, where a path to monopoly is to commoditize the immediately previous layer. Might play out elsewhere too, though I do not have a high degree of confidence here.


Why will application layer value props be squeezed out? And if so, where does value accrue going forward in an RL first world?

