koreth1's comments

> I've seen people who prefer to say "hey siri set alarm clock for 10 AM" rather than use the UI. Which makes sense, because language is the way people literally have evolved specialized organs for.

I don't think it's necessary to resort to evolutionary-biology explanations for that.

When I use voice to set my alarm, it's usually because my phone isn't in my hand. Maybe it's across the room from me. And speaking to it is more efficient than walking over to it, picking it up, and navigating to the alarm-setting UI. A voice command is a more streamlined UI for that specific task than a GUI is.

I don't think that example says much about chatbots, really, because the value is mostly the hands-free aspect, not the speak-it-in-English aspect.


Even when my phone is in my hand I'll use voice for a number of commands, because it's faster.


I'd love to know the kind of phone you're using where the voice commands are faster than touchscreen navigation.

Most of the practical day-to-day tasks on the Androids I've used are 5-10 taps away from the lock screen, and tapping draws far fewer dirty looks from those around me.


My favorite voice command is to set a timer.

If I use the touchscreen I have to:

1 unlock the phone - easy, but takes an active swipe

2 go to the clock app - I might not have been on the home screen, maybe a swipe or two to get there

3 set the timer to what I want - and here it COMPLETELY falls down, since it probably is showing how long the last timer I set was, and if that's not what I want, I have to fiddle with it.

If I do it with my voice I don't even have to look away from what I'm currently doing. AND I can say "90 seconds" or "10 minutes" or "3 hours" or even (at least on an iPhone) "set a timer for 3PM" and it will set it to what I say without me having to select numbers on a touchscreen.
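Free-form durations like those are straightforward to normalize, which is part of why the voice path skips the number-picker fiddling entirely. A toy sketch in Python (nothing like Siri's actual implementation):

```python
import re

# Seconds per unit for the handful of units a spoken timer uses.
UNITS = {"second": 1, "minute": 60, "hour": 3600}

def parse_duration(phrase: str) -> int:
    """Return the duration in seconds for phrases like '90 seconds' or '3 hours'."""
    m = re.match(r"\s*(\d+)\s+(second|minute|hour)s?\s*$", phrase.lower())
    if not m:
        raise ValueError(f"unrecognized duration: {phrase!r}")
    return int(m.group(1)) * UNITS[m.group(2)]

print(parse_duration("90 seconds"))  # 90
print(parse_duration("10 minutes"))  # 600
print(parse_duration("3 hours"))     # 10800
```

The point is that "90 seconds" needs no clock-app UI at all: any phrasing the grammar accepts maps straight to a number.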

And 95% of the time there's nobody around who's gonna give me a dirty look for it.


And less mental overhead. Go to the home screen, find the clock app, go to the alarm tab, set the time, set the label, turn it on, get annoyed by the number of alarms there that I should delete so there aren't a million of them. Or just ask Siri to do it.


One thing people forget is that if you do it by hand, you can do it even when people are listening or when it’s loud, meaning it works more reliably. And in your brain you only have to store one way of doing it instead of two. So I usually prefer the more reliable approach.

I don’t know anyone who uses Siri except people with really bad eyesight.


God I miss physical buttons and controls. Being able to do something without even looking at it.


I'd sort of roughly approached this technique with my own channel organization over time without thinking about it systematically, but this is a helpful crystallization of what I'd been trying to achieve. I'm glad this was posted.

Definitely agree with others that Slack needs a richer selection of notification mechanisms, both for new content in channels and for mentions. For mentions, there's no level between "I demand immediate attention from this person" and "the characters that make up this person's name happen to be in the text of my message."


> I'll start by saying I'm skeptical of the answer and ask it to state its reasoning.

How do you tell if it's actually stating the reasoning that got it to its answer originally, as opposed to constructing a plausible-sounding explanation after the fact? Or is the goal just to see if it detects mistakes, rather than to actually get it to explain how it arrived at the answer?


The act of making it state its reasoning can help it uncover mistakes. Note that I'm asking a second model to do this, not the original one; otherwise I would not expect a different result.


I would totally expect a different result even on the same model, especially if you're doing this via a chat interface (vs. the API), where you can't control the temperature parameter.

But yes, it'll be more effective on a different model.
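To illustrate the temperature point: at nonzero temperature a model samples among its likely tokens, so rerunning the same prompt can give different answers, while temperature 0 degenerates to greedy argmax and is repeatable. A toy decoder sketch (not any vendor's actual implementation):

```python
import math, random

def sample(logits, temperature=1.0, rng=random):
    """Sample an index from logits; temperature 0 means deterministic argmax."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(exps)             # roulette-wheel selection
    for i, e in enumerate(exps):
        r -= e
        if r <= 0:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.5]
print(sample(logits, temperature=0))   # always index 0 (the argmax)
print(sample(logits, temperature=1.0)) # may be 0, 1, or 2 on any given run
```

A chat UI typically runs at a nonzero temperature you can't change, which is exactly why "ask again" is a meaningful check even against the same model.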


When I was living in China I got used to crossing large streets one lane at a time. Pedestrians stand on the lane markers with cars whizzing by on either side while they wait for a gap big enough to cross the next lane. It's not great for safety, to put it mildly, but the drivers expect it and it's the only way to get across the road in some places. I was freaked out by it but eventually it became habit.

Then I came back to the US and forgot to switch back to US-style street crossing behavior at first. No physical harm done, but I was very embarrassed when people slammed on their brakes at the sight of me in the middle of the road.


As a satisfied customer of yours, the prospect of having to give up Graphite is the main thing keeping me from giving jj a try at my day job.

Ironic, since if there are a bunch of people in my boat, the lack of us in jj's user base will make it that much harder for jj to cross the "popular enough to be worth supporting" threshold.


My ideal is really just a version of `gt sync` and `gt submit` that handles updating the Graphite + GitHub server-side of things and lets you use `jj` for everything else. I think it could feel super nice. Probably not as simple as my dreams, but hopefully something we can get to with enough interest!


I didn't get a diagnosis at Kaiser SF, but I was able to get meds through them. Maybe this will be of use to you.

I was diagnosed by a non-Kaiser psychiatrist I found on my own. After trying different prescriptions, we eventually settled on Concerta. I stayed on that (and continued seeing the same psychiatrist, whose service I paid for out of pocket) for about 4 years.

Then my psychiatrist had some family stuff come up and had to move out of California. Since she was no longer going to be licensed here, she couldn't keep prescribing my meds to me. But she was able to write a letter describing my situation and laying out how she'd arrived at the prescription I was on, with particular emphasis on the fact that she hadn't seen any evidence of misuse on my part. I gave that letter to my Kaiser primary care doctor, who agreed to take over the prescription. After that I was able to get my meds from Kaiser each month without any issues.

I imagine this kind of setup depends on your primary care doctor; I may have just gotten lucky with mine.


This looks really useful! Wish I'd had something like this when I was learning Mandarin.

I'm curious what determines whether or not you add a given language to the list. DeepL and Claude, at least, have usable translation ability in more languages than the app currently supports. Is there a lot of manual effort required for each language, or do you want to keep the list limited just to avoid overwhelming users?


Thanks! I'm glad people like it; I'm hopeless at marketing it to Language Learners:tm:, but programmers seem to love it and it's nice to get some positive feedback.

DeepL is actually pretty limited in what it supports. Unless I've missed a new language, Nuenki supports all of DeepL's languages.

Some of the additional ones are supported via Claude only and, where possible, Groq. Groq is far faster than Claude; in languages that DeepL supports, DeepL handles visible text and Claude handles text that you haven't scrolled to yet. Claude-only languages are a bit of a worse experience.

It's pretty easy for me to add a language. It's all stored in a centralised toml file, which happens to be open source - https://github.com/Alex-Programs/nuenki-languages/blob/maste... - and it's about a 20 minute job to add a language, test it, etc. Then it's about half an hour and 5 USD to benchmark whether Llama is any good at it, and if so enable Groq and make the experience a bit more pleasant. I'm currently working on improving the translation quality benchmark (https://nuenki.app/blog/the_best_translator_is_a_hybrid_tran...), because people seem to like it and there's definitely a lot of room for improvement.

That 20 minute number is without updating the big language cloud on the website, which is a bit finicky; iirc I haven't added Vietnamese to it yet.
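To make the registry idea concrete, here's a hypothetical sketch in Python (the field names and selection rule are invented for illustration, not Nuenki's actual schema): each language carries a code and its enabled providers, and Groq only gets switched on once it's been benchmarked.

```python
# Hypothetical per-language registry entries, mirroring what a central
# TOML file might hold. Field names are invented, not Nuenki's schema.
LANGUAGES = {
    "german":     {"code": "de", "providers": ["deepl", "claude", "groq"]},
    "vietnamese": {"code": "vi", "providers": ["claude"]},  # Groq not yet benchmarked
}

def pick_provider(name: str) -> str:
    """Simplified selection rule: prefer the faster providers where enabled."""
    providers = LANGUAGES[name]["providers"]
    for candidate in ("groq", "deepl", "claude"):
        if candidate in providers:
            return candidate
    raise KeyError(f"no provider configured for {name}")

print(pick_provider("german"))      # groq
print(pick_provider("vietnamese"))  # claude
```

With a layout like this, "adding a language" really is just appending one table and flipping a provider flag after the benchmark, which fits the 20-minute estimate.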

If anyone here has any requests, I'd gladly add them!


A request to add Tamil, a widely spoken language and one of the earliest classical languages! Thanks!!


This is maybe not in the spirit of OP's question, but I do it by having successfully made the case early in my company's lifetime that we should open-source most of our code.

Nearly every piece of code I write at work is part of one of those public, Apache-licensed code bases. Which means I spend most of my time working on OSS.

Are these projects the kind of thing anyone else will ever use? Probably not, so long as the company stays in business. The business case my team made was focused on transparency and long-term viability: our customers can see exactly what we're doing with their data and how our systems work, and if we go under, they have a realistic way to continue using our software. This hasn't ended up being a huge selling point, but customers have definitely mentioned it as one of the things they liked about us.


On some level, though this isn't quite what the person you're replying to was saying, it doesn't really matter whether AI actually can do any entry-level jobs. What matters is whether potential employers think it can.

To impact the labor market, they don't have to be correct about AI's performance, just confident enough in their high opinions of it to slow or stop their hiring.

Maybe in the long term, this will correct itself after the AI tools fail to get the job done (assuming they do fail, of course). But that doesn't help someone looking for a job today.


> That people like video formats isn't really surprising to me since it's everywhere, but I still don't fully understand the appeal.

Me either, but I have a hunch about why.

Are you a fast reader?

I am, at least compared to the population at large. And one of the reasons I can't stand video as a format for learning about coding topics is that it is so frustratingly slow compared to my reading speed. To get anywhere close, I have to crank the playback speed up so high that I start having trouble understanding what the presenter is saying. That's on top of other things like poor searchability and no way to copy-paste code snippets.

The decline of reading skills, at least in the US, is pretty well-documented. And my hunch is that for the increasingly large number of people coming into the industry who don't read quickly or well, the efficiency of learning from videos is closer to parity with text. What's more, I suspect there's high correlation between lower reading skills and dislike of the act of reading, so videos are a way to avoid having to do something unpleasant.

I have no solid evidence to back any of this up, but it seems at least as plausible to me as any other explanations I've run across.


That’s a really interesting take. I say that as I’m the opposite — a slow reader — and I, too, cannot stand learning via video.

I’m by no means a weak reader, I love reading and do so often. I just find myself re-reading complex sections to ensure that I understand 100%.

I also like to be able to read something and then follow it on a train of thought. For example, if a post/article says that X causes Y because of Z I want to find out why Z causes it. What causes Z to be etc.

With a video I find this sort of learning inefficient and less effective, while also making the whole experience a bit rigid. I also find that videos tend to leave out the less glamorous details, as they don’t translate well to video, if that makes sense.


I'm also a slow reader by your standards; re-reading, to me, is part of the learning process. Going over text with your eyes is not reading, let alone learning.

I think your dislike of video over text is because you're a quick learner. Like you said, going on a tangent and researching some word or sentence or statement makes you a thorough learner I think. Eventually you have a quicker and bigger grasp of the subject at hand, which is the whole point if you ask me.


Thanks mate! I think I consider myself a slow reader because I grew up with my mother and sister, who both read at some ungodly pace. They’ll finish 5 books for every one I finish.

I do agree with the thorough learner aspect. I think having come from physical engineering backgrounds helps a lot with that.

When studying aerospace, for example, there was a lot of ‘but why’ which usually ended up leading to ‘natural phenomenon’ after abstracting far enough.


Alternatively: you can listen to audio while commuting or driving or cleaning or working out. I love audio for higher level things and to get an overview of the topic. Then text to dive into the details.


Another big driver of the move from text to video: it is easier to monetise video via YouTube than a blog. People with millions of subscribers on YouTube aren't creating FE learning material out of the goodness of their hearts; it is a big business. Also, video is almost always lower information density than text, so it is easier for your net to capture more customers.


And you can't just search in it. It's a truly trashy format for anything other than a presentation or a lecture. For simple information sharing it's horrible.


We millennials ruined the doorbell industry by texting “here”. (Always connected)

Gen Z just sends you a picture of your door. (Mobile broadband)

What we perceive as the best way is often just driven by the technology available when we learned how to operate in the world.


I think you nailed it.

Another example of advertising destroying the world.


It can be quite difficult to follow programming topics over audio only, so it's not interchangeable with video in this case.


I have a fairly fast reading speed, but I mostly consume my non fic (not technical) books in audio format.

Why? Attention span. If someone is reading to me, I tend to get 'pushed along' and it makes it easy to slog through a non fiction book that really could have been a pamphlet but the author needed it to be 400 pages. If I space out listening, it's usually not a problem because most non fic books are so repetitive. I suspect that's the secret behind video's popularity, people's attention is in short supply.


I’m a pretty slow reader. I tend to reread sections, pause and play around with the ideas that come into my head, get lost while doing that and have to start over… I still prefer reading specifically because it allows me to do all that at my own pace. I don’t have to feel rushed along by a presenter or actively pause, rewind, try to scrub the timeline to find a point I want to rehash etc.


I really think you have got a point. I'd add, however, that reading takes more cognitive effort than watching a video at a basic level (that is, setting aside the information in the text or video).

Just see how hard it is to read more than a few paragraphs when tired before bed vs. how hard it is to watch something in the same state.

I think this gets added to the point you are making about reading skills declining.


People learn best in different ways. Some learn best by reading, some by tinkering, some by watching and listening. I heard this over and over in school and college.

I don’t think it has anything to do with reading speed. When taking in complex technical information, you spend more time thinking and trying to understand than actually reading.

If you’re finding that you can quickly race through content, it probably just means you find the content easy to understand because you’re already familiar with many concepts.


> no solid evidence

IMO you don’t need any. The correctness of your conclusion is self-evident. Proof by common sense, QED.


I happen to agree with the conclusion also. And you don't need a rigorous proof to do what you want to do. But I often find that people appeal/resort to "common sense" when they don't have a coherent argument, and just can't conceive of any other point of view.

