Hacker News | _heimdall's comments

I'd argue that it was all downhill after we moved away from using HTML as the state representation.

Moving state out of HTML and into JS means we now have to walk a ridiculous tightrope, trying to force state changes back into the DOM and our styles to keep everything in sync.

Given that problem, reactivity isn't the worst solution in my opinion. It tries to automate that syncing problem with tooling and convention, usually declaratively.

If I had to do it all again though, DOM would still be the source of truth and any custom components in JS would always be working with DOM directly. Custom elements are a great fit for that approach if you stick to using them for basic lifecycle hooks, events, and attribute getters/setters.
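A minimal sketch of that approach, with an attribute as the only source of truth. `CounterElement` is a made-up example, and the small `HTMLElementStub` stands in for the browser's `HTMLElement` so the sketch runs outside a browser; in a real page you'd extend `HTMLElement` and register the tag with `customElements.define()`:

```javascript
// Stand-in for HTMLElement so this runs outside a browser.
// In a real page, extend HTMLElement and use customElements.define().
class HTMLElementStub {
  constructor() {
    this.attrs = new Map(); // stand-in for the element's DOM attributes
    this.textContent = "";
  }
  getAttribute(name) {
    return this.attrs.has(name) ? this.attrs.get(name) : null;
  }
  setAttribute(name, value) {
    this.attrs.set(name, String(value)); // the DOM only holds strings
    this.attributeChangedCallback?.(name);
  }
}

// The "count" attribute is the single source of truth: JS never
// holds a shadow copy of the state, so there is nothing to sync.
class CounterElement extends HTMLElementStub {
  // Attribute getter/setter pair: reads parse the DOM, writes go to the DOM.
  get count() { return Number(this.getAttribute("count") ?? 0); }
  set count(n) { this.setAttribute("count", n); }

  attributeChangedCallback() {
    // Re-render straight from the attribute.
    this.textContent = `Count: ${this.count}`;
  }
}

const el = new CounterElement();
el.count = 3;
console.log(el.textContent); // "Count: 3"
```

Because the attribute holds the state, CSS attribute selectors (e.g. `counter-element[count="0"]`) can also react to it with no extra wiring.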


Wasn’t that the Lit framework? It was okay. Like a slightly more irritating version of React.

I recall the property passing model being a nasty abstraction breaker. HTML attributes are all strings, so if you wanted to pass objects or functions to children you had to do that via “props” instead of “attributes.”
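That string-only constraint is easy to see with a plain JSON round trip (a generic sketch, not Lit-specific code):

```javascript
// Attributes can only carry strings, so structured data passed
// "through HTML" has to survive a serialize/parse round trip:
const items = [{ id: 1, label: "first" }];
const attrValue = JSON.stringify(items); // what would land in the markup
const received = JSON.parse(attrValue);  // what the child element reads back

console.log(received[0].label); // "first"

// Functions don't survive serialization at all, which is why
// callbacks have to travel as properties rather than attributes:
const withCallback = { onClick: () => {} };
console.log(JSON.stringify(withCallback)); // "{}"
```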

I also recall the tag names of web components being a pain. Always need a dash, always need to be registered.

None of these problems broke it; they just made it irritating by comparison. There wasn’t really much upside either. No real performance gain or superior feature, and you got fewer features and a smaller ecosystem.


The point of Lit is not to compete with React itself, but to build interoperable web components. If your app (Hi Beaker!) is only using one library/framework, and will only ever use one in eternity, then interoperability might not be a big concern. But if you're building components for multiple teams, mixing components from multiple teams, or ever dealing with migrations, then interoperability might be hugely important.

Even so, Lit is widely used to build very complex apps (Beaker, as you know, Photoshop, Reddit, Home Assistant, Microsoft App Store, SpaceX things, ...).

Property bindings are just as ergonomic as attributes with the .foo= syntax, and tag name declaration has rarely come up as a big friction point, especially with the declarative @customElement() decorator. The rest is indeed like a faster, less proprietary React in many ways.


Kind of? Lit does add some of the patterns I'm talking about, but it adds a lot more as well. I always avoided it because of the heavy use of TypeScript decorators required to get a decent DX, and in my experience the framework is pretty opinionated about your build system.

I also didn't often see Lit being used in a way that stuck to the idea that the DOM should be your state. That could very well be because most web devs are coming to it with a background in react or similar, but when I did see Lit used it often involved a heavy use of in-memory state tracked inside of components and never making it into the DOM.


Lit is not opinionated about your build system. You can write Lit components in plain JS, going back to ES2015.

Our decorators aren't required - you can use the static properties block. If you think the DX is better with decorators... that's why we support them!

And we support TypeScript's "experimental" decorators and standard TC39 decorators, which are supported in TypeScript, Babel, esbuild, and recently SWC and probably more.

Regarding state: Lit makes it easier to write web components. How you architect those web components and where they store their state is up to you. You can stick to attributes and DOM if that's what you want. Some component sets out there make heavy use of data-only elements: something of a DSL in the DOM, like XML.

It just turns out that most developers and most apps have an easier time representing state in JS, since JS has much richer facilities for that.


Don't get me wrong, I'm a pretty big believer in interop, but in practice I've rarely run into a situation where I need to mix components from multiple frameworks. Especially because React is so dominant.

HTML simply can't represent the complex state of real apps. Moving state to HTML actually means keeping the state on the server and not representing it very well on the client.

That's an ok choice in some cases, but the web clearly moved on from that to be able to have richer interaction, and in a lot of cases, much easier development.


I'm sure you could find examples to prove me wrong here, so I'm definitely not saying this is a hard line, but I've always found that if app state is too complex to represent in the UI, or isn't needed in the UI at all, that's state that belongs on the back end rather than the frontend.

My usual go-to rule is that business logic belongs where the state lives - almost always on the back end for state of any real complexity.

With true web apps like Figma, I consider those entirely different use cases. They're really building what amounts to a native app that leverages the web as a distribution platform; it has nothing to do with HTML at all, really.


State in HTML is a horrible mistake. Now everything has to be constantly serialized/deserialized into strings.

It's a bit more nuanced than that. State in Qite is held both in HTML and in the JS component. The HTML serialization is sort of a consequence of changing a field (like when you want to update textarea content, for example). You can completely ignore it, or you can also use it for CSS, for example. Another use case is when the user interacts with the page and changes the text in said textarea, and that automatically updates the JS component field. Finally, there are also flags, which aren't stored in the DOM. I'd like to point out this architecture isn't random; it came from building apps and realizing how everything interacts.
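The dual representation described above might be sketched like this. The class and method names are invented for illustration and are not Qite's actual API, with a plain `Map` standing in for the element's DOM attributes:

```javascript
// Hypothetical sketch of the dual-representation pattern: a component
// field is the JS-side state, and writes are mirrored into an attribute
// map standing in for the DOM. Neither side is ever updated by hand twice.
class TextAreaComponent {
  constructor() {
    this.attrs = new Map(); // stand-in for the element's DOM attributes
    this.flags = {};        // flags: JS-only state, never serialized to the DOM
  }
  get value() { return this.attrs.get("value") ?? ""; }
  set value(text) {
    // Changing the field serializes into the "DOM" as a consequence...
    this.attrs.set("value", String(text));
  }
  // ...and user input flows the other way, back into the JS field.
  handleUserInput(text) { this.value = text; }
}

const ta = new TextAreaComponent();
ta.value = "hello";                 // JS write, reflected to the attribute
ta.handleUserInput("hello world");  // simulated user edit, same path
console.log(ta.attrs.get("value")); // "hello world"
```

The serialized copy is then freely available for CSS hooks or inspection, while the flags stay in memory only.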

> This is a more narrow version of my belief that general AI tools like LLMs fundamentally don't fit as additions to products, but rather subsume products

That seems reasonable; it's just yet to be seen whether LLMs are a form of artificial intelligence in any meaningful sense of the word.

They're impressive ML for sure, but that is in fact different from AI despite how companies building them have tried to merge the terms together.


What I'm saying is not (directly) related to whether or not LLMs are "true AI". It's sufficient that they are fully general problem solvers.

A software product (whether bought or rented as a service) is defined by its boundaries - there's a narrow set of specific problems, and specific ways it can be used to solve those problems, and beyond those, it's not capable (or not allowed) to be used for anything else. The specific choices of what, how, and on what terms are what companies stick a name on to create a "software product", and those same choices also determine how (and how much) money it will make for them.

Those boundaries are what LLMs, as general-purpose problem solvers, break naturally, and trying to force-fit them within those limits means removing most of the value they offer.

Consider a word processor (like MS Word). It's solving the problem of creating richly-formatted, nice-looking documents. By default it's not going to pick the formatting for you, nor is it going to write your text for you. Now, consider two scenarios of adding LLMs to it:

- On the inside: the LLM will be able to write you a poem or rewrite a piece of document. It could be made to also edit formatting, chat with you about the contents, etc.

- From the outside: all the above, but also the LLM will be able to write you an itinerary based on information collected from maps/planning tool, airline site, hotel site, a list of personal preferences of your partner, etc. It will be able to edit formatting to match your website and presentation made in the competitor's office tools and projected weather for tomorrow.

Most importantly, it will be able to do both of those automatically, just because you set up a recurring daily task of "hey, look at my next week's worth of calendar events and figure out which ones you can do some useful pre-work for me, and then do that".

That's the distinction I'm talking about, that's the threat to software industry, and it doesn't take "true AI" - the LLMs as we have today are enough already. It's about generality that allows them to erase the boundaries that define what products are - which (this is the "mortal wound to software industry" part) devalues software products themselves, reducing them to mere tool calls for "software agents", and destroying all the main ways software companies make money today - i.e. setting up and exploiting tactics like captive audience, taking data hostage, bundled offers, UI as the best marketing/upsale platform, etc.

(To be clear - personally, I'm in favor of this happening, though I worry about consequences of it happening all at once.)


> That's the distinction I'm talking about, that's the threat to software industry, and it doesn't take "true AI" - the LLMs as we have today are enough already.

They most certainly are not. With the current state of LLMs, anyone who puts them in charge of things is being a fool. They have zero intelligence, zero ability to cope with novel situations, and even for things in their training data they do worse than a typical skilled practitioner would. Right now they are usable only for something where you don't care about the quality of the result.


I hadn't said anything about true AI though, and I'm not sure how we would define it.

That's part of the problem: we as an industry have dived straight into the deep end without pausing for even the basics, like agreeing on definitions.

What is intelligence, and how do we recognize it? What is consciousness, if it even exists? How do we measure intelligence - is it really just economic value, as OpenAI argues, and if so, can that only be measured 6+ months after we unleash it on society?


> and it doesn't take "true AI" - the LLMs as we have today are enough already.

I believe that relatively few people would agree with you on that point. LLMs aren’t good enough (yet?), and very obviously so, IMO, to be autonomous problem solvers for the vast majority of problems being solved by software companies today.


What you lose is control. Even in the case of an actually-intelligent agent, if you task a subordinate with producing a document for you, they are going to come up with something that is different from exactly what you had in mind. If they are really good, they might even surprise you and do a better job than you'd have done yourself, but it still will be their vision, not yours.

Your notion of a "mortal wound" to the software industry seems to assume that today's SaaS portals are the only form that industry can take. Great software is "tool calls for agents". Those human agents who care about getting exactly the result they want will not be keen on giving up Photoshop for Photoshop-but-with-an-AI-in-front-of-it.


I'm surprised GitHub got by acting fairly independently inside Microsoft for so long. I'm also surprised GitHub employees expected that to last.

The real problem today IMO is that Microsoft waited so long to drop the charade that they then felt like they had to rip the bandaid off. From what I've heard the transition hasn't gone very smoothly at all, and they've mostly been given tight deadlines with little to no help from Microsoft counterparts.


If this were a place for memes, then I'd share that swimming pool meme with Microsoft holding up copilot while GitHub is drowning.

Then Azure DevOps (formerly known as Visual Studio Team System) lies dead on the ocean floor.

Although given how badly GitHub seems to be doing, perhaps it's better to be ignored.


Why is Azure DevOps on the floor? I'm having to choose between the client's existing Azure DevOps and our internal GitLab for where to host a pipeline, and I don't know what would be good at all.

It works fine, it just feels like it has been in a kind of maintenance mode for a while.

There's clearly one small team that works on it. There are pros and cons to that.

It hasn't even got an obnoxious Copilot button yet, for example, but on the other hand it was only relatively recently that you could properly edit comments in markdown.

If the client has existing AzDo Pipelines then I'd suggest keeping them there.


It operated with an independent CEO for a long while.

When I saw his interview: https://thenewstack.io/github-ceo-on-why-well-still-need-hum... I thought "oh, there is some semblance of sanity at Microsoft".

This was after seeing those ridiculous PRs where microsoft engineers patiently deconstructed AI slop PRs they were forced to deal with on the open source repos they maintained.

When he was gone a few months later and github was folded into microsoft's org chart the writing was firmly on the wall.


He was never truly independent though. The org structure was such that the GitHub CEO reported up through a Microsoft VP and Satya. He was never really a CEO after the acquisition; it was in name only.

Also of note is that the Microsoft org chart always showed GitHub in that structure, while the org chart available to GitHub stopped at their CEO. It's not that they were finally rolled into Microsoft's org chart so much as they lifted the veil and stopped pretending.


I never said he was "truly independent" nor meant to imply it.

Nonetheless it looks like he was both willing and able to push back on a good deal of the AI stupidity raining down from above and then he was removed and then, well, this...


You said he was independent, I didn't include "truly" intending to make a distinction there. How could one be an independent CEO while reporting to a VP who reports to another CEO?

I don't personally know him and wouldn't begin to assume what he pushed back on, or how. Though Microsoft had AI in the GitHub org well before the leadership change: the AI leader now in charge of GitHub was previously in charge of an AI org that was moved over in the org chart to report with a dotted line, as embedded employees or whatever they would have been called.


The article mentions some concerns related to migrating their MySQL clusters off bare metal.

I've been confused by this with many LLM products in general. Sometimes infrastructure is part of it so there's that, but often it seems like the product is a magic incantation of markdown files.

Solving for infrastructure is a huge part of the problem too. Curious to know what you think about it?

Here I'm mostly considering the seemingly countless services that are little more than some markdown files and their own API passing data to/from the LLM provider's API.

By no means is that every AI product today, and I'm not saying the OP's QA service falls into that bucket.

More of a general comment related to the GP, maybe too off topic here though?


That requires a high level of trust in your current government and whomever is in charge in the future.

It's worth remembering how the Nazis so efficiently found Jews in the Netherlands. The Dutch government kept meticulous records, including things like your name, address, and religious affiliation. That wasn't a big deal until the Nazis rolled in; throw in some number of Nazi sympathizers in the Dutch government, and it wasn't hard for them to track down anyone they wanted to find.


That's an argument against any mass collection/concentration of data in anyone's hands. Not against gov. collection of data in particular.

Sure, there's a good reason to avoid centralizing data in general but in this case we're talking about governments. Governments are particularly dangerous for mass data collection because they also come with the authority, and military, of a state.

And with the money (or else: the authority) to get the data from private businesses. So they get the full data without any restrictions that they themselves would face.

Based on your other comments, I’m curious what your solution is?

The government needs our records to collect taxes. So at the minimum the government must have some information. We can argue over the mechanism and trust factor but that’s not the core issue here.

The private companies doing this is the core problem. This is a service that the government could provide for free with the most safeguards.

Or perhaps you have some other proposal? And I’m not interested in the no government anarchy you propose elsewhere.


> That requires a high level of trust in your current government and whomever is in charge in the future.

Some entity has to be trusted with our data anyway. At least the government is supposed to have some accountability to its citizens; corporations have much stronger incentives for profit.


Why is it a given that we need to trust an entity with our data? Most of human history got by without data collection, centralized or otherwise; there's no innate law of nature requiring it.

It doesn't require only trusting the government (or another corporation) today; it requires trusting all future iterations of them as well. It might be a different story if the data were periodically purged, say after each administration.


> Some entity has to be trusted with our data anyway

Why?


Because the government needs to know who you are to do anything involving you. Taxes, drivers' licenses, passports, courts, etc.

There are still a lot of underlying assumptions here worth noting though. You're assuming we must have a government and what it must be able to do, like charge me taxes or gatekeep certain activities behind licensing systems.

I'm not arguing we don't need a government. But to silently take for granted that everything from income taxes to public roads and travel restrictions are a given jumps ahead here.

We could decide, for example, that the government shouldn't be allowed to centralize certain data and remove some of what we expect them to do instead.


> We could decide, for example, that the government shouldn't be allowed to centralize certain data and remove some of what we expect them to do instead.

How exactly the government manages our data is a valid concern, and in the modern world it needs to be reevaluated.


Yes, I think one ID, presented only as necessary for those interactions, is enough for them to do their job.

It would be good to clamp down on private companies collecting that data.


Does it? We can live without anyone knowing our age except the entities we tell it to.

Is it actually a crime to upload a fake ID photo to a private company for age verification?


When I was at Microsoft my org had a 100% pass rate as a launch gate. It was never expected that you would keep 100% but we did have to hit it once before we shipped.

I always assumed the purpose was leadership wanting an indicator that implied that someone had at least looked at every failing test.


Once model providers started releasing "reasoning" models, and later roles and multi-agent systems, it seemed pretty clear to me they are just automating the process of prompt engineering.

They track everything we all do in a chat, then learn the patterns that work and build them in. Rinse and repeat.


It's also inevitable given that we still don't even really know how these models work or what they do at inference time.

We know input/output pairs, when using a reasoning model we can see a separate stream of text that is supposedly insight into what the model is "thinking" during inference, and when using multiple agents we see what text they send to each other. That's it.


> separate stream of text that is supposedly insight into what the model is "thinking" during inference

Taking a look at those streams is almost disturbing and hilarious at the same time... like looking into the mind of a paranoiac.

Said leaders are only really democratic based on the literal name of the party they signed with when running for office. There's nothing democratic about these types of programs and I have to assume that a plainly explained referendum spelling this out on a ballot would fail miserably.
