bigstrat2003's comments

I'm not the same person, but I live in Denver and I go to a store to buy my components. We have a Micro Center here and I enjoy having a physical location I can go to, so I make sure to give them my patronage when I purchase stuff.

It's not exactly Valve's fault what Rockstar has decided to do with sales of their games.

It's because nobody was trying to take video game behavior scripts and declare them the future of all things technology.

Ok? I'm not going to change the definition of a 70-year-old field because people are annoyed at ChatGPT wrappers.

> Turns out experience can be self-limiting in the face of paradigm-shifting innovation.

It also turns out that experience can be what enables you to not waste time on trendy stuff which will never deliver on its promises. You are simply assuming that AI is a paradigm shift rather than a waste of time. Fine, but at least have the humility to acknowledge that reasonable people can disagree on this point instead of labeling everyone who disagrees with you as some out of touch fuddy-duddy.


What you described isn't a shallow dismissal. They tried it, found it to not be useful in solving the problems they face, and moved on. That's what any reasonable professional should do if a tool isn't providing them value. Just because you and they disagree on whether the tool provides value doesn't mean that they are "failing at their job".

Private ownership is a necessary, but not sufficient, condition to have a business which has a healthy relationship with its customers. You also need the owners to be people of reasonably good character who understand that the best way to run a business is a win-win approach on both sides, not people who see nothing wrong with extracting maximum profit from the business no matter whom it hurts. The PE horror stories you hear are cases where the owners are in the latter group.

Your hypothesis, then, is that there is not a _single_ public company that has a healthy relationship with its customers? Not one, in the entire global public space?

When does this relationship with customers happen? Is it at the IPO? When they file the paperwork? When they contemplate going public for the first time? Or is it that any founder who might one day decide to contemplate going public was doomed to unhealthy customer relations from birth?

So the obvious next thing we in society should do is abolish public equity as a concept, as a customer-protection mechanism?


> Not one, in the entire global public space?

It is genuinely hard to think of one. I treat all companies as adversarial relationships, where I fully expect them to treat me as disposable at least over any time horizon greater than 1-2 years. There are certainly some companies that are more likely to find a mutually beneficial equilibrium; I think of Target, IKEA, sometimes Apple. But I don't trust any of those companies to take care of me in the future, and I wouldn't be the least bit surprised if my next interaction with any of them was bad. I just typically expect it to be more mutually beneficial than with Comcast, Hertz, or Verizon.


Costco is public, according to Wikipedia.

That is a good point. I wonder how they have managed not to succumb to the pressure to squeeze their users more.

Fortune 500 companies are a particularly neurotic example of 'all' companies.

From what I can see, it's often when the founder loses control of the company (either voluntarily (e.g. retirement) or not) and it falls to the board (representing the shareholders) to appoint the CEO. At that point it's at best a toss up whether they'll appoint someone who actually intends to create value or someone who intends to extract value.

> The obvious next thing we in society should do is abolish public equity as a concept as a customer protection mechanism?

Abolishing public equity is quite drastic, but there are lots of other things we could (and IMO should) be doing to protect society from the negative externalities it causes. For example:

- Mandating worker representation on company boards. So shareholders still have some power, but less.

- Progressive corporation tax (larger companies pay more tax). This would bias the economy towards smaller companies which generally have less problematic externalities.


It's not impossible to run a publicly owned company in the US that isn't insanely hostile towards its customers or employees... it's just really damn difficult because of bad legal precedent.

Dodge v. Ford is basically the source of all these headaches. The Dodge Brothers owned shares in Ford, and Ford refused to pay the dividends he owed them, suspecting that they'd use the money to start their own car company (he wasn't wrong about that part). The Dodge Brothers sued, upon which Ford's defense for not paying out dividends was "I'm investing it in my employees" (an obvious lie; it was very blatantly about not wanting to pay out). The judge sided with the Dodge Brothers, and the legal opinion included a remark that the primary purpose of a director is to produce profit for the shareholders.

That's basically been US business doctrine ever since, twisted into the idea that the director's job is to maximize profits for the shareholders. It's slightly bunk doctrine as far as I know; the actual precedent mostly translates to "the shareholders can fire the directors if they think they aren't doing a good job", since it can be argued that as long as any solid justification exists, producing profit for the shareholders can be assumed[0] (Dodge v. Ford was largely about Ford refusing to honor his contracts with money that Dodge knew Ford had in the bank). But nobody in the upper areas of management wants to risk facing lawsuits from shareholders arguing that they made decisions that go against shareholder supremacy[1]. And so the threat of legal consequences morphs into the worst form of corporate ghoulishness, pervasive across every publicly traded company in the US. It's why short-term decision making dominates long-term planning at pretty much every public company.

[0]: This is called the "business judgement rule", where courts will broadly defer judgement on whether a business is run competently to the executives of that business.

[1]: Tragically, the fact that it's bunk legal theory doesn't change the fact that the potentially disastrous consequences of lawsuits in the US are a very real thing.


It is not broadly believed in corporate governance circles that there is a legal requirement to maximize shareholder value. Nor will you find court judgements that require it.

If anything, Milton Friedman is more responsible for this idea that shareholder maximization is the corporate goal. That is an efficient-market argument, though, not a legal one, and he framed it long after the Dodge suit. He needed to frame that argument because so many firms were _not_ doing that.

But just because a Chicago school economist says something about governance doesn't mean it's broadly applicable, in the same way that an Austrian economist's opinions about inflation aren't iron rules of monetary policy.


It's not instant (well, sometimes it is), more of a slow but inexorable push down a hill. Some public companies are farther along the path than others, but if the company continues to exist and profit it's inevitable. For example, there are no S&P 500 companies with healthy customer relationships.

You could classify your approach as a theory of abundance, rather than artificial scarcity to exercise market power.

Could also consider: employee ownership and public ownership

People complain about the latter because they have higher expectations: the institution is supposed to serve them, and it often has all the diseases of true scale without being able to pick and choose its customers. Private industry skates by because people assume it's out to screw them and because it can cherry-pick.


This shit right here is why people hate AI hype proponents. It's like it never crosses their mind that someone who disagrees with them might just be an intelligent person who tried it and found it was lacking. No, it's always "you're either doing it wrong or weren't really trying". Do you not see how condescending and annoying that is to people?

> But in the end engineers do appear to become more productive 'pairing' with an LLM

Quite the opposite: LLMs reduce productivity, they don't increase it. They merely give the illusion of productivity because you can generate code real fast, but that isn't actually useful when you then spend your time fixing all the mistakes it made. It is absolutely insane that companies are stupid enough to require people to use something which cripples them.


> What I never understand is the population of coders that don’t see any value in coding agents or are aggressively against them, or people that deride LLMs as failing to be able to do X (or hallucinate etc) and are therefore useless and every thing is AI Slop, without recognizing that what we can do today is almost unrecognizeable from the world of 3 years ago.

I don't recognize that because it isn't true. I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1. AI hype proponents love to make claims that the tech has improved a ton, but based on my experience trying to use it, those claims are completely baseless.


> I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1.

One of the tests I sometimes do of LLMs is a geometry puzzle:

  You're on the equator facing south. You move forward 10,000 km along the surface of the Earth. You rotate 90° clockwise. You move another 10,000 km forward along the surface of the Earth. Rotate another 90° clockwise, then move another 10,000 km forward along the surface of the Earth.

  Where are you now, and what direction are you facing?
They all used to get this wrong all the time. Now the best ones sometimes don't. (That said, the only one to succeed just as I write this comment was DeepSeek; the first I saw succeed was one of ChatGPT's models, but that's now back to the usual error they all used to make.)

Anecdotes are of course a bad way to study this kind of thing.

Unfortunately, so are the benchmarks, because the models have quickly saturated most of them, including traditional IQ tests (on the plus side, this has demonstrated that IQ tests are definitely a learnable skill, as LLMs lose 40-50 IQ points when going from public to private IQ tests) and stuff like the maths olympiads.

Right now, AFAICT the only open benchmarks are the METR time horizon metric, the ARC-AGI family of tests, and the "make me an SVG of ${…}" stuff inspired by Simon Willison's pelican on a bike.


Out of interest, was your intended answer "where you started, facing east"?

FWIW, Claude Opus 4.5 gets this right for me, assuming that is the intended answer. On request, it also gave me a Mathematica program which (after I fixed some trivial exceptions due to errors in units) informs me that using the ITRF00 datum the actual answer is 0.0177593 degrees north and 0.168379 west of where you started (about 11.7 miles away from the starting point) and your rotation is 89.98 degrees rather than 90.

(ChatGPT 5.1 Thinking, for me, gets the wrong answer because it correctly gets near the South Pole and then follows a line of latitude 200 times round the South Pole for the second leg, which strikes me as a flatly incorrect interpretation of the words "move forward along the surface of the earth". Was that the "usual error they all used to make"?)


> Out of interest, was your intended answer "where you started, facing east"?

Or anything close to it, so long as the logic is right, yes. I care about the reasoning failure, not the small difference between the exact quarter-circumferences of these great circles and 10,000 km (the quarter meridian is about 10,002 km and the equatorial quarter about 10,019 km, so each leg falls slightly short). (Not that it really matters, but now you've said the answer, this test becomes even less reliable than it already was.)

> FWIW, Claude Opus 4.5 gets this right for me, assuming that is the intended answer.

Like I said, now the best ones sometimes don't [always get it wrong].

For me yesterday, Claude (albeit Sonnet 4.5, because my testing is cheap) avoided the south pole issue, but then got the third leg wrong and ended up at the north pole. A while back ChatGPT 5 (I looked the result up) got the answer right; yesterday GPT-5-thinking-mini (auto-selected by the system) got it wrong the same way you report on the south pole, but then also got the equator wrong and ended up near the north pole.

"Never" to "unreliable success" is still an improvement.


Yeah, I'm pretty sure that's correct. Just whipped this up, using the WGS-84 datum.

  ;; geo vincenty: geodesic calculations on the WGS-84 spheroid.
  (use-modules (geo vincenty))
  
  ;; State p is (latitude longitude bearing), all in degrees. Each
  ;; iteration turns 90 degrees clockwise, then advances 10,000 km
  ;; (10,000,000 m) along the geodesic; the result of vincenty is
  ;; used as the new (lat lon bearing).
  (let walk ((p '(0 0 180))
             (i 0))
    (cond ((= i 3)
           (display p)
           (newline))
          (else
            (walk (apply vincenty
                         (list (car p) (cadr p) (+ 90 (caddr p)) 10000000))
                  (+ i 1)))))
Running this yields:

  (0.01777744062090717 0.16837322410251268 179.98234155229127)
Surely the discrepancy is down to spheroid vs sphere, yeah?
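
For what it's worth, here's a rough way to test that: run the same walk on a perfect sphere and see whether the offset shrinks away. A minimal sketch in plain Guile, no vincenty module needed; `step` is my own little helper solving the spherical direct problem, and I'm using the puzzle's move-then-turn order, which is slightly different bookkeeping from the snippet above:

  ;; Degree/radian helpers.
  (define pi (* 4 (atan 1)))
  (define (rad d) (* d (/ pi 180)))
  (define (deg r) (* r (/ 180 pi)))
  (define (wrap360 x) (- x (* 360 (floor (/ x 360)))))
  
  ;; Spherical direct problem: from (lat, lon) with initial bearing brg
  ;; (all in degrees), travel an angular distance sigma (radians) along
  ;; the great circle; returns (lat lon bearing) at the destination.
  (define (step lat lon brg sigma)
    (let* ((f1 (rad lat)) (l1 (rad lon)) (a1 (rad brg))
           (f2 (asin (+ (* (sin f1) (cos sigma))
                        (* (cos f1) (sin sigma) (cos a1)))))
           (l2 (+ l1 (atan (* (sin a1) (sin sigma) (cos f1))
                           (- (cos sigma) (* (sin f1) (sin f2))))))
           ;; arrival bearing = reverse of the bearing back to the start
           (back (atan (* (sin (- l1 l2)) (cos f1))
                       (- (* (cos f2) (sin f1))
                          (* (sin f2) (cos f1) (cos (- l1 l2)))))))
      (list (deg f2) (deg l2) (wrap360 (+ 180 (deg back))))))
  
  ;; Each 10,000 km leg as an angle, with R = 6371 km; pi/2 would be an
  ;; exact quarter circumference.
  (define sigma (/ 10000.0 6371.0))
  
  ;; The puzzle's sequence: move, turn 90 clockwise, move, turn, move.
  (let* ((p1 (step 0.0 0.0 180.0 sigma))
         (p2 (step (car p1) (cadr p1) (wrap360 (+ 90 (caddr p1))) sigma))
         (p3 (step (car p2) (cadr p2) (wrap360 (+ 90 (caddr p2))) sigma)))
    (display p3)
    (newline))
If I've got the trigonometry right, this should print a point within roughly a tenth of a degree of (0 0), heading close to due east; the residual that remains is just 10,000 km falling slightly short of a quarter of this sphere's circumference (pi * 6371 / 2 ≈ 10,007 km).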

This fascinates me. Just observing, but because it hasn't worked for you, everyone else must be lying? (I'm assuming that's what you mean by "baseless".)

How does that bridge get built? I can provide tangible real-life examples, but I've gotten pushback on that in other online conversations.


> it hasn't worked for you, everyone else must be lying?

Well, some non-zero amount of you are probably very financially invested in AI, so lying is not out of the question

Or you simply have blinders on because of your financial investments. After all, emotional investment often follows financial investment

Or, you're just not as good as you think you are. Maybe you're talking to people who are much better at building software than you are, and they find that the stuff the AI builds doesn't impress them, while you, not being as skilled, are impressed by it.

There are lots of reasons someone might disagree without thinking everyone else is lying


My boss has been passing off Claude-generated code and documentation to me all year. It is consistently garbage. It consistently hallucinates. I consistently have to rewrite most, if not all, of what I'm handed.

I do also try to use Claude Code for certain tasks. More often than not, I regret it, but I've started to zero in on the tasks it's helpful with (configuration and debugging, not so much coding).

But it's very easy then for me to hear people saying that AI gives them so much useful code, and for me to assume that they are like my boss: not examining that code carefully, or not holding their output to particularly high standards, or aren't responsible for the maintenance and thus don't need to care. That doesn't mean they're lying, but it doesn't mean they're right.


"Claude Code" by itself is not specific enough; which model are we talking about?

What have you tried? How much time have you spent? Using AI is its own skill set, separate from programming.

The difference is that the Internet was actually useful technology, whereas AI is not (so far at least).

In the last month I personally used (as in, it was useful) AI for this:

- LLM-powered transcription and translation made it so I could have a long conversation with my airport driver in Vietnam.

- Helped me turn 5-10x as many ideas into working code and usable tools as I used to.

- Nano Banana restored dozens of cherished family photos for a Christmas gift for my parents.

- Helped me correctly fix a ton of nuanced aria/accessibility issues in a production app.

- Taught/explained a million things to me: the difference between an aneurysm and a stroke, why the rise of DX12/Vulkan game engines killed off nVidia SLI, political/economic/social parallels between the 1920s and 2020s, etc.

Maybe everyone isn't using it yet, but that doesn't mean it isn't useful. Way too many people find real use every day in a lot of AI products. Just because MS Office Copilot sucks (and it does), doesn't mean it is all useless.


I think you're exaggerating a little, but aren't entirely wrong. The Internet has completely changed daily life for most of humanity. AI can mean a lot of things, but a lot of it is blown way out of proportion. I find LLMs useful to help me rephrase a sentence or explain some kind of topic, but it pales in comparison to email and web browsers, YouTube, and things like blogs.

More use cases for AI than blockchain so far.

Quite a low bar.

Blockchain is more like some gooey organic substance on the ground than a bar.
