corytheboyd's comments

I just recently decided to replace iterm2 with wezterm when I started moving my MacBook over to nix. iterm2 is about the only one that didn’t work well for this, since you can’t source-control the configuration (import/export doesn’t cut it).

Any of the ones you mentioned would probably work well with nix too. I don’t really care about the config being scriptable at all; it was just the first terminal that easily let me set all of the keyboard shortcuts I wanted, so I stuck with it.
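
To give a rough idea of what that looks like (a sketch from memory, so double-check the action names against the wezterm docs), the whole config is one wezterm.lua you can keep in your dotfiles:

    -- ~/.config/wezterm/wezterm.lua (illustrative, not my exact config)
    local wezterm = require 'wezterm'
    local config = wezterm.config_builder()
    -- keybindings are plain Lua data, so they diff and source-control cleanly
    config.keys = {
      { key = 'd', mods = 'CMD', action = wezterm.action.SplitHorizontal { domain = 'CurrentPaneDomain' } },
      { key = 'k', mods = 'CMD', action = wezterm.action.ClearScrollback 'ScrollbackAndViewport' },
    }
    return config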


iTerm2 does support source control; I've got my settings in a git repo managed by Chezmoi. In the settings dialog, under "General" -> "Settings", there's an "External settings: Load settings from a custom folder or URL" option.


I've also gone from iterm2 to wezterm, and you hit upon why. Don't ever make me click on things. I can't script => modify => export clicking on things. When you make me click on things, you've defined my interface for me, which I did not ask for.

I suppose I'm a bit of an extremist, though.


Nah, I want my configurations to be deterministic.

I put config in dir, launch app. App should look like config.

If it doesn't, it's the app's fault.

There are some applications I tolerate this behaviour from, but not many.


Almost entirely with you on that, actually. But OS and other environment differences frequently demand some sort of tweaking, which I absolutely do not want to do by hand if I've done it before.


Chezmoi can do conditional templating on config files, which is super nice.
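
For example, a dot_zshrc.tmpl in the chezmoi source directory can branch on OS (a minimal sketch; the template variables are the standard chezmoi ones, and the dotfile contents here are just placeholders):

    # dot_zshrc.tmpl in the chezmoi source directory
    {{ if eq .chezmoi.os "darwin" }}
    eval "$(/opt/homebrew/bin/brew shellenv)"
    {{ else }}
    # linux-only setup goes here
    {{ end }}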

But it's always better when the application itself is cross-platform and uses just a single config file.

(Just setting all of the knobs on macOS is a massive hassle, and only some of them can be automated in a deterministic way...)


It's about time I started using a dotfile manager that I didn't make entirely myself - thanks for the recommendation.

For more and more of the cross-platform headaches, I've actually found myself treating the OS as more of a virtual host, and spending the plurality of my time configuring the layers that run in it (modify .zshrc where it can do the work of iterm/wezterm; if it can be done in .emacs, then do it there).

I get the feeling that I'm not far off shipping personal nix containers around, but there's still a little too much friction between having containers work and working on the OS itself.


I spent a long time messing about with different solutions and Chezmoi is the one that's least confusing for me and fits the way I think the best.

I've been using it for years and didn't actually touch the templating until this spring; I just had if statements in my setup bash scripts =P


Completely agree; I hate the “hackathon” for so many reasons, so I guess I’ll vent here too. All of this is from the perspective of one frustrated software engineer in web tech.

First of all, if you want innovation, why are you forcing it into a single week? You very likely have smart people with very good ideas, but they’re held back by your number-driven bullshit. These orgs actively kill innovation by reducing talent to quantifiable rows of data.

A product cobbled together from shit prototype code very obviously stands out. It has various pages that don’t quite look/work the same; cross-functional things that “work everywhere else” don’t in some parts.

It rewards only the people who make good presentations, or who pick the “current hype thing” to work on. Occasionally something good that addresses real problems is at least mentioned, but the hype thing will always win (if judged by your SLT).

Shame on you if the slop prototype is handed off to a team other than the hackathon presenters. The presenters take all the promotion points, then the implementers have to sort out a bunch of bullshit code, very likely being told to just ship the prototype: “it works, you idiots, I saw it in the demo, just ship it.” Which is so incredibly short-sighted.

I think the depressing truth is your executives know it’s all cobbled-together bullshit, but that it will sell anyway, so why invest time making it actually good? They all have their golden parachutes; what do they care about the suckers stuck on-call for the house of cards they were forced to build, despite possessing the talent to make it stable? All this stupidity happens over and over again, not because it is wise, or even the best way to do this; the truth is just a flaccid “eh, it’ll work though, fuck it, let’s get paid.”


You touched on this but to expand on "numbers driven bullshit" a bit, it seems to me the biggest drag on true innovation is not quantifiability per se but instead how organizations react to e.g. having some quantifiable target. It leaves things like refactoring for maintainability or questioning whether a money-making product could be improved out of reach. I've seen it happen multiple times where these two forces conspire to arrive at the "eh, fuck it" place--like the code is a huge mess and difficult to work on, and the product is "fine" in that it's making revenue although customers constantly complain about it. So instead of building the thing customers actually want in a sustainable way we just... do nothing.

We have to do better than that before congratulating ourselves about all the wonderful "innovation".


Maybe, maybe not; it’s hard to tell from articles like this from OSS projects what is generally going on, especially with corporate work. There is no such rhetoric at $job, but also, the massive AI investment seemingly has yet to move the needle. If it doesn’t, they’ll likely fire a bunch of people again and continue.


> […] and then ask about a list of foo

Not OP, but this is the part that I take issue with. I want to forget what tools are there and have the LLM figure out on its own which tool to use. Having to remember to add special words to encourage it to use specific tools (required a lot of the time, especially with esoteric tools) is annoying. I’m not saying this renders the whole thing “useless” because it’s good to have some idea of what you’re doing to guide the LLM anyway, but I wish it could do better here.


I've got a project that needs to run a special script, not just "make $target" at the command line, in order to build, and even with instructions in multiple .md files, codex w/ gpt-5-high still forgets and runs make blindly, which fails and gets it confused annoyingly often.

Ooh, it does call make when I ask it to compile, and it's able to call a couple of other popular tools without having to refer to them by name. If I ask it to resize an image, it'll call ImageMagick or run ffmpeg, and I don't need to refer to ffmpeg by name.

So at the end of the day, it seems they are their training data. Better to write a popular blog post about your one-off MCP and the tools it exposes, and maybe the next version of the LLM will have your blog post in the training data and will automatically know how to use it without being told.


Yeah, I've done this just now.

I installed ImageMagick on Windows.

Created a ".claude/skills/Image Files/" folder

Put an empty SKILLS.md file in it

and told Claude Code to fill in the SKILLS.md file itself with the path to the binaries.

and it created all the instructions itself including examples and troubleshooting

and in my project I prompted:

"@image.png is my base icon file, create all the .ico files for this project using your image skill"

and it all went smoothly.
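
For anyone who wants to do the same thing by hand, turning a PNG into a multi-resolution .ico with ImageMagick is essentially one call, something like this (file names and sizes are just examples, not what Claude actually generated):

    magick image.png -define icon:auto-resize=256,128,64,48,32,16 app.ico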


I’ll give it a fair go, but how is it not going to have the same problem of _maybe_ using MCP tools? The same problem as trying to add to your prompt “only answer if you are 100% correct”? A skill just sounds like more markdown that is fed into context, but with a cool name that sounds impressive, and some indexing of the defined skills on start (same as MCP tools?).


That is a really neat interview format, the lightning round of varying themes of common tasks! This right here proves to me you are a good interviewer:

> I'm not worried about whether the string split command takes its parameters in this or that order, I just want to know you know it exists

I’ve run quite a few “can you write code” interviews in the age of practical AI, and I don’t know if I’ve been lucky, am good at breaking through nonsense, or if internet claims are exaggerated, but I can hardly tell the difference between now and the before times. You get someone on a call, you explain a problem, you see how they approach it, you probe along the way. I don’t work for a giant FAANG-like, maybe that’s part of it.


> […] and then having a conversation about what is done, reasoning, etc.

Isn’t this where it would likely unravel?

The interviewer will know what the interesting parts of the exercise are, and ask the deep questions about them. Observe some more: do they know how to use an IDE, run their own program, cut through code to the parts that matter. Basically, can they do the things someone who wrote the code should trivially be able to do?

Since it was mentioned in a sibling comment: Even if the candidate used an LLM to write the code at home, I don’t care, so long as they ace the explanation part of the interview.


Agreed. It's one thing to ask the AI to solve the problem; it's another thing to be able to explain the way the problem was solved in real-time.

(Though you have to watch out for folks that are using the AI to answer your questions.)

In fact, I'm okay with people using AI to solve coding problems, as long as that is acceptable behavior at work as well. That should all be spelled out in the interview expectations.


> Though you have to watch out for folks that are using the AI to answer your questions.

Heh, I do think that happened once (that I was aware of), but it was on a topic I knew a lot about, and it fell apart after layer one. Still, pretty lame; I’d much prefer an “I don’t know,” which I usually suggest if they start guessing.


Already a very neat project, but it would be really interesting to:

1. Display a progress bar for the memory limit being reached

2. Feed that progress back to the model

I would be so curious to watch it run up to the kill cycle and see what happens, and the display would add tension. Something like the sketch below is all I have in mind.
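
(Hypothetical sketch: it uses psutil for the memory reading and an arbitrary budget, since I have no idea how the project actually tracks its limit.)

    import psutil  # assumption: psutil is available on the Pi
    LIMIT_BYTES = 6 * 1024**3  # hypothetical memory budget before the kill cycle
    def memory_status() -> str:
        vm = psutil.virtual_memory()
        frac = min((vm.total - vm.available) / LIMIT_BYTES, 1.0)
        bar = "#" * int(frac * 20) + "." * (20 - int(frac * 20))
        return f"[{bar}] {frac:.0%} of the memory budget used"
    # 1. render memory_status() in the UI as the progress bar
    # 2. append the same string to the model's context so it can react to the pressure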


Wrong thread, you probably meant to comment on this other RasPi LLM post? https://news.ycombinator.com/item?id=45396624


Oh shoot, yes I did! Thank you stranger


CONGRATULATIONS, YOU WON!

