
I'm sure it's manageable with proper software, as I had no such issue back in the day with my Xperia Z5(?) compact.

(That said, I get similarly cranky about various gestures that just don't reliably work in some cases. I despair of the eventual day when they (Google, in my case) no longer offer the 3-button home row on Android phones.)


I really appreciate this method of sharing workflows. Well catered to the audience. I was actually slightly hoping there'd be sound on the video, too, but reading the list of actions after the fact was reasonable. I learned a few things I could do and/or approach differently in my own flows.

You mentioned the arcane keyboard shortcuts of tmux. I'm curious if you or others here have tried or use byobu (which I think of as a wrapper around tmux, basing most commands on the F-key row). I was shown it a decade ago and have used it since (after a couple of prior years of primitive tmux use).


glad you've enjoyed it :) i was trying to find something that was clear while still being easy to skim.

> You mentioned the arcane keyboard shortcuts of tmux.

oh, i've remapped almost all the shortcuts in tmux. `ctrl-k` is not the default prefix and `h` is not the default key for "select pane left".
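
for reference, a minimal sketch of that kind of remapping in `~/.tmux.conf` (illustrative, not my actual config):

```
unbind C-b
set -g prefix C-k          # use ctrl-k as the prefix instead of ctrl-b
bind h select-pane -L      # vim-style pane movement after the prefix
bind j select-pane -D
bind k select-pane -U
bind l select-pane -R
```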

i haven't tried byobu, but from skimming the readme i expect it doesn't offer much beyond nicer default key bindings, and i'd rather not add more layers to my terminal.


Gotcha! My main reason for advocating for byobu is that it's more beginner/defaults-friendly. I've never customized it, nor fully learned its hotkeys.

In my case, we have dev robots with byobu installed, and it's much easier to train non-SW engineers (i.e. HW folks, technicians, QA) on its use (primarily for remote session persistence).
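
For a sense of scale, the set of default bindings we actually train people on is tiny (quoting these from memory, so worth double-checking against byobu's built-in help):

```
byobu    # start, or reattach to, a session on the robot
# F2 = new window, F3/F4 = previous/next window
# F6 = detach, leaving the session (and whatever's running) alive
```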

(This is also why I don't do much/heavy customization these days: for uniformity between local and robot machines...)


Regarding your edit:

The setup you describe, "how do I get my motor to turn", is lower level than ROS. Typically, if I had a robot with a microcontroller controlling a motor, I'd write some bespoke code for my particular hardware (or find existing libraries, e.g. Adafruit's). Most recently, I've just asked ChatGPT for such code (having done it myself the harder way in years past, so admittedly I know reasonably well what to ask...).

Once you have code that moves the motor at various speeds and directions, you might connect the Arduino to a computer (e.g. a Raspberry Pi) and write a Python "ROS node" script that listens to a ROS topic and sends serial commands from the Pi to the Arduino.
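
A minimal sketch of what that bridge node might look like (the topic name, message type, serial port, and line protocol here are all made up for illustration):

```python
#!/usr/bin/env python
# Hypothetical ROS1 node: forwards speed commands from a ROS topic
# to an Arduino over serial. Requires pyserial.
import rospy
import serial
from std_msgs.msg import Float32

ser = serial.Serial('/dev/ttyACM0', 115200, timeout=1)

def on_speed(msg):
    # Send the commanded speed using a simple line-based protocol
    # that the Arduino sketch would have to parse on its end.
    ser.write(('SPEED %.2f\n' % msg.data).encode())

rospy.init_node('motor_bridge')
rospy.Subscriber('motor_speed', Float32, on_speed)
rospy.spin()  # hand control to ROS; callbacks fire as messages arrive
```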

Then, if you attach a laser range-finder or 2D lidar to the Pi, you'd look for a corresponding ROS node for that laser hardware (on GitHub, via Google) and run that additional ROS node...

And finally you might write the "main" script as a third ROS Node that:

- Interprets the laser data it gets from the laser node's published ROS topic

- Has some logic for the robot that interprets that data and chooses to set the speed/direction of the motor.
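
Put concretely (with made-up topic names and a deliberately dumb policy), that third node might be as small as this:

```python
#!/usr/bin/env python
# Hypothetical "main" ROS1 node: reads laser scans, applies trivial
# logic, and publishes a motor speed for the bridge node above.
import rospy
from sensor_msgs.msg import LaserScan
from std_msgs.msg import Float32

rospy.init_node('brain')
speed_pub = rospy.Publisher('motor_speed', Float32, queue_size=1)

def on_scan(scan):
    # Crude policy: stop if the nearest return anywhere in the sweep
    # is within half a meter, otherwise cruise slowly.
    nearest = min(scan.ranges)
    speed_pub.publish(Float32(0.0 if nearest < 0.5 else 0.3))

rospy.Subscriber('scan', LaserScan, on_scan)
rospy.spin()
```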

This is all a very ad hoc description, but hopefully somewhat helpful... You also asked "what is ROS", to which my own typical answer is: a framework for nodes to communicate with each other, which happens to have a lot of open source nodes available for common hardware.

(I write this with ROS1 in mind, having hardly touched ROS2)


> > a few years ago, the IRS stopped allowing the $1M to be deducted

> It was Trump's 2017 Tax Cuts and Jobs Act, which amended IRS code.

And took effect in 2022 (per what I've read elsewhere, and other comments on this post; could be off by a year)

(Just clarifying that the effect did kick in "a few years ago"; I agree it's important to know the origin, which you were pointing out.)


I think partially dismissing the question because the bill happened "under Trump" doesn't help the conversation here. If the bill was sponsored by particular reps/senators, then it's worth identifying them, so their voters can factor this bill into their decision to vote for/against them in the future, etc.


It feels like you're comparing two things: how LLMs handle the unstandardized, incomplete marketing crap that makes up virtually all product pages on the internet, and how LLMs handle the corpus of code on the internet, which can generally be trusted to be at least semi-functional (it compiles or at least lints, and is often easily fixed when not 100%).

Two very different combinations, it seems to me...

If the former combination were working, we'd be using ChatGPT to fill our Amazon carts by now. We'd probably be sanity-checking the contents, but expecting pretty good initial results. That's where the suitability of AI for lots of coding-type work feels like it is.


Product ingredient lists are mandated by law and follow a standard. It's hard to imagine a better-codified NLP problem.


I hadn't considered that, admittedly. It seems like that would make the information highly likely to be present...

I've admittedly got an absence of anecdata of my own here, though: I don't buy things with ingredient lists online much. I was pleasantly surprised to see a very readable list when I checked a toothpaste page on Amazon just now.


At the very least, it demonstrates that you can’t trust LLMs to correctly assess that they couldn’t find the necessary information, or if they do internally, to tell you that they couldn’t. The analogous gaps of awareness and acknowledgment likely apply to their reasoning about code.


Another comment mentioned something along the lines of "it's the go-to used by developers in readmes", and I suspect it's more specifically JavaScript-adjacent developers (as is the case where I work).

The "render locally" situation was enough friction to keep me happy with my .jpgs and .pngs generated from various sources and/or screenshotting.


I don't know if this helps you, but the Mermaid plugin in JetBrains has an export feature which can save you a step. That said, I find Mermaid diagrams so limiting, and the syntax more immature than PlantUML's, that it's very rare that I bother.

The "in readmes" is a special case because the markdown rendering in both GitHub and GitLab support it without drama


What single emoji would you choose instead?

It seems reasonable to me.


Beach umbrella, couch, person in lotus position, a book…

Nothing outrageous, but it’s an interesting shift of perspective.


The couch emoji I'd agree with! I suspect the others are far less common activities than browsing phone or lounging :)


The dancing or partying ones, although admittedly we all scroll our phones more.


> You need to be strategic about where you live (e.g. buying the house ...

I wonder what % (presumably low) of the population can live in SFHs and achieve this in cities like Seattle.

I should try to find whether there's existing work visualizing this sort of thing ("how many homes could be within X miles or minutes of A, B, and C" for SFHs, quadplexes, 5-over-1s, etc.).
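
The back-of-envelope version is trivial to sketch; the per-acre densities below are my rough assumptions, not sourced figures:

```python
# Rough count of homes within a 1-mile radius at assumed net densities.
import math

ACRES_PER_SQ_MILE = 640
area_acres = math.pi * 1 ** 2 * ACRES_PER_SQ_MILE  # ~2,010 acres

# units per acre: crude guesses for illustration only
densities = {"SFH": 5, "quadplex": 20, "5-over-1": 80}
for kind, units_per_acre in densities.items():
    print("%-9s ~%d homes within 1 mile" % (kind, area_acres * units_per_acre))
```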


Walk Score can provide an estimate of walkability for any given address.

https://www.walkscore.com/


You aren’t exactly going to find an SFH in the suburbs that is much cheaper. So you have a point, but you have to choose between an SFH, a similarly priced townhome (basically an SFH without a yard), or a condo with an HOA, all basically unaffordable unless you want to commute from Kent or Marysville. Seattle still has density (the townhome I live in, in Ballard, is one of three that used to be one SFH).


Indeed. My intended purpose of such a tool would be to crudely illustrate the impracticality of everyone aspiring to such housing ;)


GIMP did this too.

Bless IrfanView and Inkscape for still having color icons...

