
Agreed. The beauty of programming is that you're creating a "mathematical artifact." You can always drill down and figure out exactly what is going on and what is going to happen with a given set of inputs. Now with things like concurrency that's not exactly true, but I think the sentiment still holds.

The more practical question, though, is: does that matter? Maybe not.


> The more practical question is though, does that matter?

I think it matters quite a lot.

Specifically for knowledge preservation and education.


Yep, LLMs still run on hardware with the same fundamental architecture we’ve had for years, sitting under a standard operating system. The software is written in the same languages we’ve been using for a long time. Unless there’s some seismic shift where all of these layers go away, we’ll be maintaining these systems for a while.

Yes, but not in all cases, really. There are plenty of throwaway, one-off, experimental, trial-and-error, low-stakes, spammy, dumb-assery, manager-replacing instances that are acceptable as quickly created and forgotten black boxes of temporary, barely working trash. (Not that people will limit themselves to the proper uses.)

In a way this is also a mathematical artifact — after all, tokens are selected through beam search or some random sampling of likely successor tokens.

> AI browser automation is going to blow open all these militarized containers that use our own data against us.

I'm not sure what you mean by this. Do you mean that AI browser automation is going to give us back control over our data? How?

Aren't you starting a remote desktop session with Anthropic every time you open your browser?


There are a million ways. Just off the top of my head: unified calendars, contacts, and messaging across Google, Facebook, Microsoft, Apple, etc. The agent figures out which platform to go to and sends the message without you caring about the underlying platform.

> Do you mean that AI browser automation is going to give us back control over our data? How?

Narrator: It won't.


Yes and the scale was far larger, over $80 billion given to GM and Chrysler. GM was majority owned by the US government for a number of years after that too.

AI prose has been mediocre since the release of ChatGPT. My layman's interpretation is that there are just no strong creativity/humor/etc. signals to train on, compared to, say, math or coding. Current models are "smarter," so when asked to produce e.g. a joke they think harder, but the end result always misses the mark just the same.

There's a difference between AI being bad at prose and being bad at storycraft. Good prose is totally achievable; it just hasn't been a priority for the tech shops, and I think they also often don't understand what makes really good prose, so they're not good at optimizing for it anyhow. Given people's aversion to slop, I expect the big labs will start to push hard on it soon and get their act together.

There are some game systems that lend themselves to unit testing, like, say, map generation: ensure that the expected landmarks are placed reasonably, or rooms are connected, or whatever. But most game interactions are just not easily "unit testable" since they happen across frames (i.e., over time). How would you unit test an enemy that spawns, moves towards the player, and attacks?

I'm sure you could conjure up any number of ways to do that, but they won't be trivial, and maintaining those tests while you iterate will only slow you down. And what's the point? Even if the unit-move-and-attack test passes, it's not going to tell you if it looks good, or if it's fun.

Ultimately you just have to play the game, constantly, to make sure the interactions are fun and working as you expect.


It would depend on how things are architected, but you could definitely test the components of your example in isolation (e.g. spawn test, get the movement vector in response to an enemy within a certain proximity, test that the state is set to attacking, whatever that looks like). I don't disagree that it's a hard problem. I run into similar issues with systems that use ML as some part of their core, and I've never come up with a satisfying solution. My strategy these days is to test the things that it makes sense to test, and accept that for some things (especially dynamic behavior of the system) you just have to use it and test it that way.

> How would you unit test an enemy that spawns, moves towards the player, and attacks?

You use a second enemy that spawns, moves towards the "enemy", and attacks.


> How would you unit test an enemy that spawns, moves towards the player, and attacks?

You can easily write a 'simulation' version of your event loop and dependency inject that. Once time can be simulated, any deterministic interaction can be unit tested.


Others would quibble that those are integration tests, "UI" tests, or other higher-level tests, etc.

Which is exactly what "unit test" originally meant.

You're right that "unit test" has taken on another, rather bizarre definition in the intervening years, one that doesn't reflect any tests people actually write in the real world, except where they're writing "unit tests" specifically to please that bizarre definition. But anyone concerned enough about definitional purity to quibble about it will use the original definition anyway...


A lot of games aren't deterministic within a scope of reasonable test coverage.

Set the same seed for the test?

> Third, some people are more sensitive to the kind of errors or style that LLMs tend to use. I frequently can't stand the output of LLMs, even if it technically works; it doesn't live up to my personal standards.

I've noticed the stronger my opinions are about how code should be written or structured, the less productive LLMs feel to me. Then I'm just fighting them at every step to do things "my way."

If I don't really have an opinion about what's going on, LLMs churning out hundreds of lines of mostly-working code is a huge boon. After all, I'd rather not spend the energy thinking through code I don't care about.


> To make this possible, we're building DeltaDB: a new kind of version control that tracks every operation, not just commits.

Let me guess: DeltaDB is free to use as long as we host your data and have free rein to train AI on your editor interactions.


Not sure what this guess is based on. Would that be a guess for git also, if mentioned by a company versus an individual?

My read was that they are pulling a Linus Torvalds with the Linux->Git move, where both are innovations on their own but work great together (without dystopian universe instantiation).

CRDTs mentioned: https://zed.dev/blog/crdts


Definitely sounds very eerie. Luckily there are open source solutions that do just this with no AI integration:

https://github.com/atuinsh/atuin


How is atuin doing what DeltaDB does?

Ah, not much at all; too much selective reading on my end. I should have read the entire blog and not just the quoted selection.

this doesn't seem related to the above post

that said atuin is excellent


Wouldn't it be more efficient to centralize the generation of electricity and take advantage of economies of scale?

Solar generation benefits little from economies of scale: PV arrays scale linearly, unlike turbines and electromechanical generators. Batteries also scale basically linearly; maybe you get a better deal if you buy a truly massive amount of batteries, but I'm not certain the difference is dramatic.

Transmission costs seem to dominate the price structure; I currently pay a generating company about $0.10/kWh, and pay Con Ed $0.25/kWh for transmission of that energy. And this is in dense New York City; in suburbia or the countryside the transmission lines have to be much longer.

Centralized generation makes sense when the efficiency scales wildly non-linearly with size, like it does with nuclear reactors.


Does solar scale linearly when you have to get onto roofs to install it? And when each roof is available for installation at different times, so only small crews can do it piecemeal?

It certainly complicates things a bit, but the roofs are independent, so several small teams can independently work in parallel. So yes, it's sort of linear.

Building a large solar installation may scale a bit sub-linearly, if you can e.g. order things in bulk at better prices, and have some electric assemblies done at a factory, more efficiently.


Upgrading transmission infrastructure costs a lot of money (and bureaucracy). Especially in Oregon and northern California where the lines probably should be buried to stop risking wildfires. I’m not sure which path is actually more cost effective for solar+battery.

Centralized generation is the riskiest option for any economy: the targets to bomb (or hit with local drones) are very well known, and taking them out is a super easy way to disrupt the entire economy. Solar on every roof is the most resilient and cheapest form of energy.

Centralization also leads to economies of lobbying scale: the well-connected super rich can oil the machinery to suit their purposes and maximize wealth extraction from everyone, resulting in monopolies/oligopolies, laws that remove competition, and laws that maximize profit (under the pretense of protecting people).

Warren Buffett does not own utilities out of the goodness of his heart, they are such spigots of money with zero competition.


Return on equity for utilities is relatively low due to capital intensity. They make a lot of money in absolute terms because 5% of a huge revenue figure is billions.

It's not just the generation; it's also the maintenance. If you own your own rooftop panels and a few go out, it's relatively expensive to bring someone out to replace them...if a mechanically and electrically equivalent replacement exists in 5 years. At utility scale, you're always replacing panels, so you have dedicated staff doing it.

Roofing is among the top 10 deadliest jobs in America. It’s cheaper to drive out into a field and work on ground-level equipment than to climb to height.

Solar panels last 25-40 years. Mechanically equivalent just means "sits on a roof"; electrically equivalent just means "connects to a wire".

Manufacturing defects happen, trees fall, and panels get dusty. They don't merely "sit on a roof." They're anchored, the anchors have spacing and a form factor, and the anchors pierce the roof's waterproofness. They're not electrically equivalent if they output a different voltage range.

Others have replied saying why this may not be the case, but even if it is — you also need to balance efficiency with other values, such as independence and resiliency.

I would gladly trade a bit of efficiency to not be dependent on the grid or on providers who can jack up the price on a whim outside of my control.


AI is only different if it reaches a hard takeoff state and becomes self-aware, self-motivated, and self-improving. Until then it's an amazing productivity tool, but only that. And even then we're still decades away from the impact being fully realized in society. Same as the internet.

Realistically most people became aware of the internet in the late 90s. Its impact was significantly realized not much more than a decade later.

In fact the current trends suggest its impact hasn't fully played out yet. We're only just seeing the internet-native generation start to move into politics where communication and organisation has the biggest impact on society. It seems the power of traditional propaganda centres in the corporate media has been, if not broken, badly degraded by the internet too.

The internet did not take away jobs (it only relocated support/SWE work from the USA to India/Vietnam).

These AI "productivity" tools are straight up eliminating jobs, and in turn the wealth that otherwise supported families and powered the economy. They are directly removing humans from the workforce and from what that work was supporting.

Not even a hard takeoff is necessary for collapse.


Do we not have any sense of wonder in the world anymore? Referring to a system which can pass the Turing test as an "amazing productivity tool" is like measuring human civilization purely by GDP growth.

Probably because we have been promised what AI can do in science fiction since before we were born, and the reality of LLMs is so limited in comparison. Instead of Data from Star Trek we got a hopped up ELIZA.

"Zig doesn’t have lambdas"

This surprises me (as a C++ guy). I use lambdas everywhere. What's the standard way of say defining a comparator when sorting an array in Zig?


Normal function declarations.

This is indeed a point which makes Zig inflexible.


By adopting a syntax like

    fn add(x: i32, y: i32) i32

they have said perma-goodbye to lambdas. They should have at least considered

    fn add(x: i32, y: i32): i32


Why “perma goodbye”?

Go has a similar function declaration, and it supports anonymous functions/lambdas.

E.g. in go, an anonymous func like this could be defined as

foo := func(x int, _ int) int { … }

So I’d imagine in Zig it should be feasible to do something like

var foo = fn(x: i32, y: i32) i32 { … }

unless I’m missing something?


Anonymous functions aren't the same as lambda functions. People in the Go community keep asking for lambda functions and never get them: there should be no need for func/fn and an explicit return. One of the reasons given is that the arrow syntax would break existing code.

See

https://github.com/golang/go/issues/59122

https://github.com/golang/go/issues/21498

    res := is.Map(func(i int) int { return i + 1 }).
        Filter(func(i int) bool { return i%2 == 0 }).
        Reduce(func(a, b int) int { return a + b })

vs

    res := is.Map((i) => i+1).Filter((i) => i%2 == 0).Reduce((a, b) => a+b)


They could still fix it with arrow functions, but it’s always gonna look weird.

Some other people have tried to explain how they prefer types before variable declarations, and they’ve done a decent job of it, but it’s the function return type being buried that bothers me the most, since I read method signatures far more often than method bodies.

fn i32 add(…) is always going to scan better to me.


OK, but with generics, return-type-first tends to become a monster.


I used to be very enthusiastic about generic types. Now, well what else would you do? I don’t mean that as a rhetorical question. If someone came up with another way to represent functions that can take multiple types and knows what it will return, I’d be all over them.

Elixir is trying something, I don’t know yet whether it will be better. But their solution is based on a decision about how to do overloading that I suspect makes for maintenance problems later. So it’s gonna have to be good to offset the consequence.


You can declare an anonymous struct that has a function and reference that function inline (if you want).

There's a little more syntax than a dedicated language feature, but not a lot more.

What's "missing" in zig that lambda implementations normally have is capturing. In zig that's typically accomplished with a context parameter, again typically a struct.


So basically, Zig doesn't have lambdas, but because you still need lambdas, you need to reinvent the wheel each time you need it?

Why don't they just add lambdas?


> So basically...

Well, not really.

Consider lambdas in C++ (that was the perspective of the post I replied to). Before lambdas, you used functors to do the same thing. However, the syntax was slightly cumbersome and C++ has the design philosophy to add specialized features to optimize specialized cases, so they added lambdas, essentially as syntactic sugar over functors.

In zig the syntax to use an anonymous struct like a functor and/or lambda is pretty simple and the language has the philosophy to keep the language small.

Thus, no need for lambdas. There's no re-inventing anything, just using the language as it is designed to be used.


Because to use lambdas you're asking the language to make implicit heap allocations for captured variables. Zig has a policy that all allocation and control flow are explicit and visible in the code, which you call re-inventing the wheel.

Lambdas are great for convenience and productivity. Eventually they can lead to memory cycles and leaks. The side-effect is that software starts to consume gigabytes of memory and many seconds for a task that should take a tiny fraction of that. Then developers either call someone who understands memory management and profiling, or their competition writes a better version of that software that is unimaginably faster. ex. https://filepilot.tech/


Nothing about lambdas requires heap allocation. See also: C++, Rust


If the lambda captures some value, and also outlives the current scope, then that captured value has to necessarily be heap allocated.


No, (in C++) the lambda can capture the variable by value, and the lambda itself can be passed around by value. If you capture a variable by reference or pointer that your lambda outlives, your code has a serious bug.


And in Rust, it will enforce correct usage via the borrow checker - the outlive case simply will not compile.

If you do want it, you have the option to, say, heap allocate.


That would be a bug, so just... don't do that?

If you return a pointer to a local variable that outlives the scope, the pointer would be dangling. Does that mean we should ban pointers?

If you close over a pointer to a local variable that outlives the scope, the closure would be dangling. Does that mean we should ban closures?


Same as C, define a named function, and pass a pointer to the sorting function.


Unlike C you can stamp out a specialized and typesafe sort function via generics though:

https://ziglang.org/documentation/master/std/#std.sort.binar...


And what if you need to close over some local variable?


Not possible, you'll need to pass the captured variables explicitly into the 'lambda' via some sort of context parameter.

And considering the memory management magic that would need to be implemented by the compiler for 'painless capture' that's probably a good thing (e.g. there would almost certainly be a hidden heap allocation required which is a big no-no in Zig).


As far as I know lambdas in C++ will not heap allocate. Basically equivalent to manually defining a struct, with all your captures, and then stack allocating it. However if you assign a lambda to a std::function, and it's large enough, then you may get a heap allocation.

Making all allocations explicit is one thing I do really like about Zig.


If the lambda is a value type, you can just store whatever captures you want in the fields of this type, no need for heap allocations - they'll go on the stack just like anything else. You can even ask the user to explicitly specify which variables to capture, like in C++ lambdas, to be very explicit about the size of the lambda structure.


Why would capturing require a heap allocation? Neither Rust nor C++ do this.


You need to store the captured data somewhere if the lambda is called after the outer function returns. AFAIK C++ (or rather std::function) will heap-allocate if the capture size goes above some arbitrary limit (similar to small-string optimizations in std::string). Not sure how Rust handles this case, probably through some "I can't let you do that, Dave" restrictions ;)


The trick is that “if”. Rust won’t ever automatically heap allocate, but if your closure does get returned to a place where the capture would be dangling, it will fail to compile. You can then choose to heap allocate it if you wish, or do something else instead.

Heap allocating them is fairly rare, because most usages are in things like combinators, which have no reason to enlarge their scope like that.


But that's not a closure; a closure/lambda is not an std::function. It's its own type, basically syntactic sugar for a struct with the captured vars and operator().

Of course if you want to store it on a type-erased container like std::function then you may need to heap allocate. Rust's equivalent would be a Box<dyn Fn>.


...or escape analysis and lifetimes, but we already have Rust %)

