
> In the new design system, windows now have a softer, more generous corner radius, which varies based on the style of window. Windows with toolbars now use a larger radius, which is designed to wrap concentrically around the glass toolbar elements, scaling to match the size of the toolbar. Titlebar-only windows retain a smaller corner radius, wrapping compactly around the window controls. These larger corners provide a softer feel and elegant concentricity to the window…


Just a bunch of words that raised no red flags, and maybe even sounded like a decent idea, but when you actually see it, how is your reaction not “oh, that’s bad”?

I feel like this is the design process. You have ideas, they sound ok, you try them out, and then immediately you revert a lot of them. Having the ideas without the taste to know when not to do something is becoming the new Apple way.


I think what they're saying is that larger radii are for 'real windows' that have toolbars and such but there are 'mini windows' and those get smaller radii. It doesn't seem well enough baked for them to release it like it is but there are other UI problems that I've been annoyed about for a long time (in particular shadows around window boundaries so you can never get a truly flat tiled experience).


Rounded corners (and the utterly massive drag area next to them) are touchbar 2.0: features that no one asked for, that have questionable value, and that provide marginal benefit even for their intended audience (touchscreen Macs, no doubt).


So, there was no reasoning.


> At the moment it is a mysterious, occasionally fickle, tool - but if you provide the correct feedback mechanisms and provide small tweaks and context at idiosyncrasies, it's possible to get agents to reliably build very complex [systems].

This sounds like arguing you can use these models to beat a game of whack-a-mole if you just know all the unknown unknowns and prompt it correctly about them.

This is an assertion that is impossible to prove or disprove.


No, it's more like: if you knew how to build it before, LLM agents help you build it faster. There's really no useful analogy I can think of, but it fits my current role perfectly, because my work is constantly interrupted by prod support, coordination, planning, context switching between issues, etc.

I rarely have blocks of "flow time" to do focused work. With LLMs I can keep progressing in parallel and then when I get to the block of time where I can actually dive deep it's review and guidance again - focus on high impact stuff instead of the noise.

I don't think I'm any faster with this than my theoretical speed (LLMs spend a lot of time rebuilding context between steps, I have a feeling the current level of agents is terrible at maintaining context for larger tasks, and I'm also guessing the advertised model context length is quite a lie: they might support working with 100k tokens, but agents keep reloading stuff into context because old stuff is ignored).

In practice I can get more done because I can get into the flow and back onto the task a lot faster. We'll see how this pans out long term, but in my current role I don't think there are alternatives; my performance would be shit otherwise.


You could probably replace LLM with "junior engineer" here as it sounds like you're basically a manager now. The big negative that LLMs have in comparison with junior engineers is that they can't learn and internalise new information based on feedback.


"The big negative that LLMs have in comparison with junior engineers is that they can't learn and internalise new information based on feedback."

No, but they can take "notes" and load those notes into context. That does work, but it is of course not as easy as it is with humans.

It is all about cleaning up and maintaining a tidy context.
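The "notes" pattern described above can be sketched in a few lines. This is a minimal illustration, not any real agent framework's API; the file name, note format and prompt wording are all made up:

```python
# Minimal sketch of the "notes" pattern: the agent appends lessons
# to a plain-text file and reloads them into the prompt each session.
from pathlib import Path

NOTES = Path("agent_notes.md")

def save_note(note: str) -> None:
    """Persist a lesson learned so it survives across sessions."""
    with NOTES.open("a") as f:
        f.write(f"- {note}\n")

def build_context(task: str) -> str:
    """Prepend the accumulated notes to the task prompt."""
    notes = NOTES.read_text() if NOTES.exists() else ""
    return f"Known project quirks:\n{notes}\nTask: {task}"

save_note("tests must run with DB_URL unset")
```

The "tidy context" part then amounts to periodically pruning that file so stale notes don't crowd out the actual task.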


I don't like that analogy. If I had to work with a Claude-like junior, I would ask for them to be removed from my team: inability to learn, completely unexpected/unrelatable failure modes and performance.

On the other hand, Claude's tenacity, stamina and sustained speed are superhuman. The more capable models become, the more valuable this is.


The same is true with human engineers - isn't this just what engineering is?


>This is an assertion that is impossible to prove or disprove.

This is a joke right? There are complex systems that exist today that are built exclusively via AI. Is that not obvious?

The existence of such complex systems IS proof. I don't understand how people walk around claiming there's no proof? Really?


The assertion was "if you really know how to prompt, give feedback, do small corrections and fix LLM errors, then everything works fine".

It is impossible to prove or disprove because if everything DOES NOT work fine you can always say that the prompts were bad, the agent was not configured correctly, the model was old, etc. And if it DOES work, then all of the previous was done correctly, but without any decent definition of what correct means.


>And if it DOES work, then all of the previous was done correctly, but without any decent definition of what correct means.

If a program works, it means it's correct. If we know it's correct, we must have a definition of what correct means; otherwise how could we classify anything as "correct" or "incorrect"? Then we can look at the prompts, see what was done in them, and call that a "correct" way of prompting the LLM.


You don’t know it works. That you so glibly speak about products working is proof that your engineering judgment is impaired. You can’t infer the exact contents of a black box merely by looking at outside behavior.

The fundamental fallacy you are exhibiting here is similar to saying that rolling a six sided die and getting a “6” means that you will always get a 6 any time you roll it. And that if you get a 6 and wanted a 6, you must have therefore rolled those dice “correctly” and had you not gotten a 6 that would have meant you rolled them “wrong.”

You know that is not true.


>You don’t know it works. That you so glibly speak about products working is proof that your engineering judgment is impaired. You can’t infer the exact contents of a black box merely by looking at outside behavior.

I don't know the exact internals of a car. But I can infer my car works by driving it.

>The fundamental fallacy you are exhibiting here is similar to saying that rolling a six sided die and getting a “6” means that you will always get a 6 any time you roll it. And that if you get a 6 and wanted a 6, you must have therefore rolled those dice “correctly” and had you not gotten a 6 that would have meant you rolled them “wrong.”

Bro, we rolled that die MULTIPLE times. It's not a one-time thing. And the "rolling" of the die is done with a CHAIN of MULTIPLE queries strung together. This is not one roll. It's multitudes of data points. Yes, results can be inconsistent from a technical standpoint, but the general result converges on a singular trend.

We know that much is true: a statistic. And that is at most all we can say about reality as we know it, since science as formalized can only give a statistic as an answer.


"I don't know the exact internals of a car. But I can infer my car works by driving it."

No, you can't infer that it "works." Only that it CAN work. The car may be poisoning you with carbon monoxide. Your rear brakes may have become disconnected (happened to me). The antilock braking system may have a faulty sensor that only fails at very low speed, leading to them engaging when making a normal stop, but also preventing the mechanic from seeing the problem, because he didn't listen to your bug report and instead tried to repro the effect with high speed panic stops (also happened to me).

If I use a product and have a good experience, I can conclude that SOMETHING must be going well, but not that EVERYTHING is going well.

This is reasoning about evidence 101.


>No, you can't infer that it "works." Only that it CAN work. The car may be poisoning you with carbon monoxide. Your rear brakes may have become disconnected (happened to me). The antilock braking system may have a faulty sensor that only fails at very low speed, leading to them engaging when making a normal stop, but also preventing the mechanic from seeing the problem, because he didn't listen to your bug report and instead tried to repro the effect with high speed panic stops (also happened to me).

This is called pedantic reasoning. You look like a drowning person trying to stay afloat.


Sorta?

The data being written to the disk is the same in CAV or CLV disks, but the player just needs to know how to spin the disk at the right speed so that the laser can read the pits/lands correctly. It is purely a detail about the speed that the disk is spun at so they can cram more data on it with CLV disks.
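As a rough sketch of that difference (the numbers here are ballpark figures for illustration, not exact LaserDisc spec values): a CLV player has to vary the spindle speed with radius to keep the track passing the laser at a constant linear speed, while a CAV player spins at one fixed rate.

```python
import math

def clv_rpm(track_speed_m_s: float, radius_m: float) -> float:
    """Spindle RPM needed so the track passes the laser at a
    constant linear speed (CLV)."""
    circumference_m = 2 * math.pi * radius_m
    return (track_speed_m_s / circumference_m) * 60

# The same ~11 m/s track speed needs a much faster spin at the
# inner edge of the disc than at the outer edge:
inner_rpm = clv_rpm(11.0, 0.055)   # ~1910 RPM near the center
outer_rpm = clv_rpm(11.0, 0.145)   # ~724 RPM near the rim
```

A CAV disc instead fixes the RPM (1800 for NTSC, one video frame per revolution), which wastes capacity on the outer tracks but makes every frame land at the same angular position on the disc.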

What CAV LaserDiscs allow for, though, is to make it extremely obvious where scanlines and blanking intervals are in the video signal.


It is really quite something how many people that have earned credibility designing well-loved tools seem to be true believers in the AI codswallop.


it's fascinating / astonishing


> nonzero risk of unfair judgement from a computer

I feel like this is a really poor take on what justice really is. The law itself can be unjust. Empowering a seemingly “unbiased” machine with biased data, or even just assuming that justice can be obtained from a “justice machine”, is deeply flawed.

Whether you like it or not, the law is about making a persuasive argument and is inherently subject to our biases. It’s a human abstraction to allow for us to have some structure and rules in how we go about things. It’s not something that is inherently fair or just.

Also, I find the entire premise of this study ludicrous. The common law of the US is based on case law. The statement in the abstract that “Consistent with our prior work, we find that the LLM adheres to the legally correct outcome significantly more often than human judges. In fact, the LLM makes no errors at all,” is pretentious applesauce. It is offensive that this argument is being made seriously.

Multiple US legal doctrines now accepted, which form the basis of how the Constitution is interpreted, were just made up out of thin air, and the LLMs are now consuming them to form the basis of their decisions.


I think you’re correct that they’re good at just ripping the band-aid off, but the details seem off. AFAIK, Apple has always had a license with ARM and a very unique one since they were one of the initial investors when it was spun out from Acorn. In fact, my understanding is that Apple is the one that insisted they call themselves Advanced RISC Machines Ltd. because they did not want Acorn (a competitor) in the name of a company they were investing in.


Correct, from the ARM Wikipedia entry:

> The new Apple–ARM work would eventually evolve into the ARM6, first released in early 1992. Apple used the ARM6-based ARM610 as the basis for their Apple Newton PDA.


He has been proven to be an extremely unreliable narrator on multiple occasions and is prone to changing his story. I think he has always had such inclinations, but other folks kept him restrained, and I’m not sure what happened there in the end.

I’m reminded that he is on the record as having initially said that he enjoyed working on the Dilbert TV show, but it was too much work and had the misfortune of being moved to one of those “death” time slots. Then at some point he started baselessly claiming it was killed due to DEI.

Also, he has a very bizarre history of sockpuppeting that just raises more questions. He was called out by Metafilter for this and acted like he was playing some kind of 4D chess with them [1].

[1] https://mefiwiki.com/wiki/Scott_Adams,_plannedchaos


Or perhaps it was killed due to DEI but he didn't feel comfortable being honest about it at the time because there is a powerful taboo against white men claiming discrimination.


It’s really notable that post-Trump Scott Adams is the only one who is speaking truth to you here.


What is the approach here? LLM generated; human validated?


Yes


Aptos has been the default font for Microsoft Word since 2023.


With all the fanfare made over Calibri back when it was announced, TIL about Aptos


I enjoyed the argument that this is going to open up a new time point for digital forensics. Many people have doctored documents pretending to have made them in the past. Except they did not realize that the vintage software used font X, but the modern default is now Y. There have been a few court cases where essentially someone is able to say, “This font is clearly Calibri which did not exist at the time this document was supposedly printed.”

If you are a Deep Space 9 fan, this is where you get to scream, “It’s a fake!!!”



The more famous example being the Pakistani Prime Minister forging documents in Calibri dated before its release.

https://www.bbc.com/news/blogs-trending-40571708


Aptos is slightly wider and taller but looks very, very similar to Calibri, especially Calibri at a point size larger.


The early history of AI/cybernetics seems poorly documented. There are a few books, some articles and some oral histories about what was going on with McCulloch and Pitts. It makes one wonder what might have been with a lot of things. Including if Pitts had lived longer, been able to get out of the rut he found himself in the end (to put it mildly) and hadn’t burned his PhD dissertation, but perhaps one of the more interesting comments that is directly relevant to all this lies in this fragment from a “New Scientist” article[1]:

> Worse, it seems other researchers deliberately stayed away. John McCarthy, who coined the term “artificial intelligence”, told Piccinini that when he and fellow AI founder Marvin Minsky got started, they chose to do their own thing rather than follow McCulloch because they didn’t want to be subsumed into his orbit.

[1] https://www.newscientist.com/article/mg23831800-300-how-a-fr...


> The early history of AI/cybernetics seems poorly documented.

I guess it depends on what you mean by "documented". If you're talking about a historical retrospective, written after the fact by a documentarian / historian, then you're probably correct.

But in terms of primary sources, I'd say it's fairly well documented. A lot of the original documents related to the earlier days of AI are readily available[1]. And there are at least a few books from years ago that provide a sort of overview of the field at that moment in time. In aggregate, they provide at least a moderate coverage of the history of the field.

Consider also that the term "History of Artificial Intelligence" has its own Wikipedia page[2], which strikes me as reasonably comprehensive.

[1]: Here I refer to things like MIT CSAIL "AI Memo series"[3] and related[4][5], the Proceedings of the International Joint Conference on AI[6], the CMU AI Repository[7], etc.

[2]: https://en.wikipedia.org/wiki/History_of_artificial_intellig...

[3]: https://dspace.mit.edu/handle/1721.1/5460/browse?type=dateis...

[4]: https://dspace.mit.edu/handle/1721.1/39813

[5]: https://dspace.mit.edu/handle/1721.1/5461

[6]: https://www.ijcai.org/all_proceedings

[7]: https://www.cs.cmu.edu/Groups/AI/html/rep_info/intro.html

