aantix's comments | Hacker News

And genetics. Caffeine is a net negative for slow metabolizers.

And other environmental factors that affect enzyme levels, as seems to be the case for me.

Not if you’re a slow metabolizer. 15% of the population.

CYP1A2

Increased heart attack risk: A 2006 study found that slow metabolizers who drank four or more cups of coffee per day had a 64% increased risk of a nonfatal myocardial infarction (heart attack) compared to those drinking less than one cup daily. The risk was even higher for slow metabolizers under age 50, who experienced more than four times the risk.

No increased risk for fast metabolizers: In the same study, fast metabolizers did not experience an increased risk of heart attack, even with high coffee consumption.


How does one find out whether they're a slow or a fast metabolizer? A DNA test?

Get your DNA genotyped. A simple 23andMe test would do.

https://www.geneticlifehacks.com/caffeine-metabolism-and-you...

As the other commenter alluded to, if you consume caffeine and your BP remains really elevated past two hours, you're probably a slow metabolizer.
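For intuition, caffeine clearance is roughly first-order, so the genetic difference shows up as elimination half-life. A minimal sketch in Python, assuming illustrative half-lives of ~4 hours (fast) and ~10 hours (slow); the real values vary a lot from person to person:

    def caffeine_remaining(dose_mg, half_life_h, hours):
        # First-order elimination: remaining = dose * 2^(-t / half-life)
        return dose_mg * 2 ** (-hours / half_life_h)

    dose = 200.0  # mg, roughly two cups of coffee
    for label, half_life in [("fast, ~4 h", 4.0), ("slow, ~10 h", 10.0)]:
        levels = ", ".join(
            f"{caffeine_remaining(dose, half_life, t):.0f} mg at {t} h"
            for t in (2, 6, 12)
        )
        print(f"{label}: {levels}")
    # fast, ~4 h: 141 mg at 2 h, 71 mg at 6 h, 25 mg at 12 h
    # slow, ~10 h: 174 mg at 2 h, 132 mg at 6 h, 87 mg at 12 h

Under those assumed numbers, a slow metabolizer still carries most of a morning dose into the evening, which lines up with the elevated-BP heuristic.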


You can do the basic continuous blood pressure monitoring. Not super precise but should work for categorizing.

It's a progression.

To achieve simplicity, you must first outline the complex process, identify commonalities, and then simplify.

You can't reach simplicity without going through and organizing your messy, internal thoughts.


Yeah this exactly.

Simplicity is born out of messy, complex stuff. One then has to do the work of removing the stuff that doesn't matter and rethinking how things fit together.

Most people don't really have the mental energy or discipline to go through this process, so by and large, complexity exists in the world.


Will there ever be an official Google Doc/Google Drive MCP server?

Something with OAuth authentication.

Our org isn't interested in running a local, unofficial MCP server and having users create their own API keys.


They have hinted at it?


Has anyone run with `--dangerously-skip-permissions` and had something catastrophic happen?

Are there internal guardrails within Claude Code to prevent such incidents?

`rm -rf`, `DROP DATABASE`, etc.?


I don't know about Claude Code, but here's my story. With Replit, I have a bunch of tasks that I want Replit to do at the end of a coding session -- push to GitHub, update user-visible changelogs, etc. It's a list in my replit.md file.

A couple of weeks ago I asked it to "clean up" instead of the word I usually use, and it ended up deleting both my production and dev databases (a little bit my fault too -- I thought it had deleted the dev database, so I asked it to copy over from production, but it had actually deleted the production database, so it copied the now-empty production back to dev, leaving me with no data in either; I was at least able to reconstruct my content from an ETL export I had handy).

This was after the Replit production database wipe-out story that had gone viral (which was different; that dev was pushing things on purpose). I have no doubt it's pretty easy to do something similar in Claude Code, especially as Replit uses Claude models.

Anyway, I'm still working on things in Replit and having a very good time. I have a bunch of personal purpose-built utilities that have changed my daily tech life in significant ways. What vibe coding does allow me to do is grind on "n" unrelated projects in mini-sprints. There is a personal, intellectual, and project cost to this context switching, but I'm exploring some projects I've had on my lists for a long time, and I'm also building my base replit.md requirements to match my own project tendencies.

I vibe coded a couple of things that I think could be interesting to a broader userbase, but I've stepped back and re-implemented some of the back-end pieces to a more specific, higher-end standard for a vibe-coded environment. I've also re-started a few projects from scratch with my evolved replit.md... I built an alpha, saw some issues, upgraded my instructions, built it again as a beta, saw some issues... now working on a beta+ version.

I'm finding the process to be valuable. I think this will be something I commit to commercially, but I'm also willing to be patient to see what each of the next few months brings in terms of upgraded maturity and improved devops.


Claude Code has minimal internal guardrails against destructive operations when using --dangerously-skip-permissions, which is why it's a major security risk for production environments regardless of how convenient it seems.
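If you still want a backstop in that mode, Claude Code's hooks can veto a tool call before it runs. Here's a minimal sketch of a PreToolUse hook, assuming the documented hook contract (pending tool call arrives as JSON on stdin; exit code 2 blocks it); the blocked patterns are illustrative, not a real security boundary:

    #!/usr/bin/env python3
    # Sketch of a Claude Code PreToolUse hook: reads the pending tool
    # call as JSON from stdin and exits with code 2 to block it
    # (stderr is fed back to the model). Patterns are illustrative.
    import json
    import re
    import sys

    BLOCKED_PATTERNS = [
        r"\brm\s+-[a-zA-Z]*r[a-zA-Z]*f",  # rm -rf (one common form)
        r"\bdrop\s+(database|table)\b",   # destructive SQL
        r"\bgit\s+push\s+.*--force\b",    # force pushes
    ]

    event = json.load(sys.stdin)
    if event.get("tool_name") == "Bash":
        command = event.get("tool_input", {}).get("command", "")
        for pattern in BLOCKED_PATTERNS:
            if re.search(pattern, command, re.IGNORECASE):
                print(f"Blocked by hook: matched {pattern!r}", file=sys.stderr)
                sys.exit(2)  # exit code 2 = deny this tool call
    sys.exit(0)

Even then, a hook like this only narrows the blast radius; backups and least-privilege credentials matter more than any deny list.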


An overeager helm update led to some "uh oh, I hope the volume is still there", and it was. Otherwise no, I haven't had anything bad happen. Of course, it's just a matter of time, and with the most recent version it's easy to toggle permissions back on without having to restart Claude Code, so for spicy tasks I tend to disable YOLO mode.


I run it locally all the time. Nothing catastrophic happened so far.


It commits sometimes when I'm not ready, that's about it.


I don't think "IDK Claude did that" is a valid excuse. Immediate rejection.

AI may be multi-threaded, but there's still a human global interpreter lock in place. :D

If you put the code up for review, regardless of the source, you should fundamentally understand how it works.

This raises a broader point about AI and productivity: while AI promises parallelism, there's still the human in the middle who is responsible for the code.

The promise of "parallelism" is overstated.

Hundreds of PRs should not be trusted. Or at least not without the C-suite understanding such risks. Maybe you're a small startup looking to get out the door as quickly as possible, so... YOLO.

But it's going to be a hot mess. A "clean-up on aisle nine" level mess.


> I don't think "IDK Claude did that" is a valid excuse. Immediate rejection.

I strongly agree; however, managers^x do not, and they want to see reports of the massive "productivity" gains.


Sane CTOs think "Claude did that" is invalid. I assure you: those leaders exist. Refuse to work for idiots who think bots can be held accountable. You must understand every line of code yourself.

"Claude did that" is functionally equivalent to "idk I copied that from r/programming" and is totally unacceptable for a professional


> You must understand every line of code yourself.

I have never seen this standard reached for any real codebase of any size.

Even in projects with a reputation for a strong review culture, people know who the "easy" reviewers are and target them for the dicey stuff (they are often the most overloaded... which only causes them to get more overloaded). I've seen people explicitly state they are just "rubber stamping" PRs. Literally no one reviews every line of third-party dependencies, and especially not when they are updated routinely. I've seen over a million lines of security-sensitive third-party code integrated and pushed out to hundreds of millions of users by a handful of developers in a matter of months. I've seen developers write their new green-field code as a third-party library to circumvent the review process that would have been applied if it had been developed as a series of first-party PRs. None of that had anything to do with AI. It all predated AI coding tools. That is how humans behave.

Does this create ticking time-bombs? It absolutely does. You do the best you can. You triage and deal with the most important things according to your best judgment, and circle back to the rest as time and attention allow. If your judgment is good, it's mostly okay. Some day it might not be. But I do not think that you can argue that the optimal level of risk is zero, outside of a few specialized contexts like space shuttles and nuclear reactors.

I know. It hurts my soul, too. But reality isn't pretty, and worse is better.


I think the "you" in the quote is referring to the programmer of the PR, not the reviewer. I agree that it's probably unrealistic to expect reviewers to understand every line of code in a PR. That's why it's crucial that the programmers of said PRs themselves understand every line of code. I'll go one step further:

If you submit a PR and you yourself can not personally vouch for every line of code as a professional…then you are not a professional. You are a hack.

That is why these code generation tools are so dangerous. Sure, it's theoretically possible that a programmer can rely on them for offering suggestions of new code and then "write" that code for a PR such that full human understanding is maintained and true craft is preserved. The reality is, that's not what's happening. At all. And it's a full-blown crisis.


This.

I know many who have it from on high that they must use AI. One place even has bonuses tied not to productivity, but to how much they use AI.

Meanwhile, managers ask: if AI is writing so much code, why aren't they seeing it in topline productivity numbers?


How does maximizing AI use prevent developers from reading their code? Especially if bonuses are not tied to productivity, as you say. Just treat AI as a higher-level IDE/editor.


There's more code to read as unskilled or sleepy developers push tons of sloppy changes. The code works, mostly, so either one loses more time chasing subtle issues or one yolos the approvals to have time for one's own coding workload.


I don't understand how your comment relates to what I've been responding to.

>> I know many who have it from on high that they must use AI. One place even has bonuses tied not to productivity, but to how much they use AI.

> How does maximizing AI use prevents developers from reading their code?

In my mind developers are responsible for the code they push, no matter whether it was copy pasted or generated by AI. The comment I responded to specifically said "bonuses tied not to productivity, but how much they use AI". I don't see that using AI for everything automatically implies having no standards or not holding responsibility for code you push.

If managers force developers to purposefully lower standards just to increase PRs per unit of time, that's another story. And in my opinion that's a problem of engineering & organisational culture, not necessarily a problem with maximizing AI usage. If an org is OK with pushing AI slop no one understands, it will be OK with pushing handwritten slop as well.


> If managers force developers to purposefully lower standards just to increase PRs per unit of time

That's basically what I'm referring to.


You tell them Clippy's Revengeance PR caused an outage worth millions of dollars because of the push for productivity, and they shouldn't bother you for a couple of months.


> The promise of "parallelism" is overstated.

100% my takeaway after trying to parallelize using worktrees. While Claude has no problem managing more than one context instance, I sure as hell do. It’s exhausting, to the point of slowing me down.


That's an intended effect. It doesn't matter to those in power who know what AI is really for. Once you get so exhausted that you can't work any more, there will be a hundred bright-eyed naïve programmers who will step into your place and who think they can do better. Until they burn out in a few years' time.


I have been wondering when I would start to feel aged out of the tech industry… gosh is it here already?


I don't know if it is but I'm certainly glad I left tech a long time ago...


As someone who doesn't use AI for writing code, why can't you just ask Claude to write up an explanation of each change for code review? Then at least you can look at whether the explanation seems sane.


Because the explanations will often not be sane; when they are sane, they will focus on irrelevant details and be maddeningly padded out unless you put inordinate effort into trying to control the AI's writing style.

Ask pretty much any FOSS developer who has received AI-generated (both code and explanations) PRs on GitHub (and when you complain about these, the author will almost always use the same AI to generate responses) about their experiences. It's a huge time sink if you don't cut them off. There are plenty of projects out there now that have explicit policy documentation against such submissions and even boilerplate messages for rejecting them.


It will fairly confidently state changes are "correct" for whatever reason it makes up. This becomes more of an issue with things that might be edge cases or vague requirements, in which case it's better to have AI write tests instead of the code.


This can be dangerous, because Claude doesn't truly understand why it did something. Whatever it writes is a post-hoc justification which may or may not be accurate to the "intent". This is because these are still autoregressive models: they have only the context to go on, not prior intent.


Indeed. Watching it (well, Anthropic, really) cheat at Baba Is You and then try to give a rationalization for how it came up with the solution (qv. https://news.ycombinator.com/item?id=44473615) is quite instructive.


Claude also doesn't know, because Claude dreamt up changes that didn't work, then "fixed" them, "fixed" them again, and in the process left swathes of code that are never reached.


I've been experimenting with Claude, and feel like it works quite well if I micromanage it. I will ask it: "Ok, but why this way and not the simpler way?" And it will go "You are absolutely right" and implement the changes exactly how I want them. At least I think it does. Repeatedly, I've looked at a PR I created (and reviewed myself, as I'm not using it "on production"), and found some pretty useless stuff mixed into otherwise solid PRs. These things are so easily missed.

That said, the models (or, to be more precise, the tools surrounding them and the craft of interacting with them) are still improving at a pace where I now believe we will get to a point where "hand-crafted" code is the exception in a matter of years.


AI is not human. If it understands things at all, it doesn't understand them the way you or I do. This means it can misunderstand things in ways we can't understand.


AI is not sentient, so it does not “understand” anything. I don’t expect the autocomplete of my messenger app to understand its output, so why should I expect Claude to understand its output?


Yeah, in a perfect world I absolutely agree. But the reality I'm observing is that everything continues to be behind schedule (not a new phenomenon), and if anything, expectations from project leads and management are only getting less realistic. That leaves junior devs even more in a position of "no time to be curious or learn deeply, just get it done", and team leads reviewing PRs in a maybe even worse position of "no time for a deep review / mentorship session, just figure out whether it breaks anything". And ultimately clients are still living in fantasy land in terms of expectations, both for build-out time for basic features/patches and for how much AI razzle-dazzle they expect in new project proposals.

Nothing can move fast enough to keep up with these hype-fueled TED talk expectations all the way up the chain.

I don't know if there's any solution and I'm sure it's not like this everywhere but I'm also sure I'm not alone. At this point I'm just trying to keep my feet wet on "AI" related projects until the hype dust settles so I can reassess what this industry even is anymore. Maybe it's not too late to get a single subject credential and go teach math or finger painting or something.


> I don't think "IDK Claude did that" is a valid excuse. Immediate rejection.

That will work, but only until the people filing these PRs go crying to their managers that you refuse to merge any of their code, at which point you'll be given a stern reprimand from your betters to stop being so picky. Have fun vibe-reviewing.


It's insane that any company would just be OK with "IDK Claude did that" any more than a 2010 version of that company would be OK with "IDK I copy pasted from StackOverflow." Have engineering managers actually drank this Kool-aid to the point where they're actually OK with their direct reports just chucking PRs over the wall that they don't even understand?


It's even funnier when you realize that Claude and all AI models are trained on data that includes Stack Overflow.

So I guess if you asked Claude why it did that, the truth of it might be "IDK, I copy pasted from StackOverflow".

The same stuff pasted with a different sticker. Looks good to me.


Ha, I genuinely laughed at that. Thank you!


This is exactly how I see it. It's not about the tool, it's how it's used. In 1990 that would have been "IDK, I got it from a BBS" and in 1980 "got it from a magazine". It doesn't matter how you got there, you have to understand it. BTW, I had a similar problem when I was a manager in HW development, where the value of a resistor had no documented calculation. I would ask: where did it come from? If the answer was "I tried it and it worked", or "tested in the lab until I found it", or in the 2000s "I ran many simulations and this was the best value", I would reject it and ask for proper calculations, with a WCA (worst-case analysis).


As vibe coding becomes more commonplace you'll see these historical safeguards erode. That is the danger IMO.

You're right, saying you got something off SO would get you laughed out of programming circles back in the day. We should be applying the same shame to vibe coding, not encouraging it, if we want human-parseable and maintainable software.


> That is the danger IMO.

For whom is this a danger?

If we're paid to dig ditches and fill them, who are we to question our supreme leaders? They control the purse strings, so of course they know best.


> If we're paid to dig ditches and fill them

This is a very cruel punishment sometimes used in forced labor camps. You are describing torture.


I don't think it's common, but I've definitely seen it

I've also seen "Ask ChatGPT if you're doing X right", and people basically signing off on whatever it recommends without checking

At this point I'm pretty confident I could trojan horse whatever decision I want from certain people by sending enough screenshots of ChatGPT agreeing with me


I don't think this is an AI specific thing. I work in the field, and so I'm around some of the most enthusiastic adopters of LLMs, and from what I see, engineering cultures surrounding LLM usage typically match the org's previous general engineering culture.

So, for example, by and large the orgs I've seen chucking Claude PRs over the wall with little review were previously chucking 100% human written PRs over the wall with little review.

Similarly, the teams I see effectively using test suites to guide their code generation are the same teams that effectively use test suites to guide their general software engineering workflows.


How long are you spending with a given team, and where, exactly, in their "AI lifecycle"? I would expect (for example) a sales engineer to see this differently than a support engineer, if support engineers still existed.


Depends on your incentives. People anecdotally seem far more impressed with buggy stuff shipped fast than good stuff shipped slowly.

Lots of companies just accept bugs as something that happens.


Depends on your problem space too.

Calendar app for local social clubs? Ship it and fix it later.

B2B payments software that triggers funds transfers? JFC I hope you PIP people for that.


If it pushes some nice velocity metric, most managers would be ok. Though you have to word it a bit differently of course.


The same developers making Claude-generated submissions could take 1-2 minutes to ask for an explanation of what they're submitting and how it works. They might even learn something.

Stack Overflow at least gave you provenance for what you copied and pasted. Models may not. Provenance remains a thing; without it, the code carries added risk.


"Look, the build is green and CI belongs to another team, how perfectionist do you need us to be about this?" is the sort of response I would generally expect, and also in the case where AI was used.


Of course they are okay with it. They changed the job function to be just that with forced(!) AI adoption.


What about "the compiler did that" ?


At least, I would not accept that from my team. It's borderline infuriating. And I would promptly insinuate that if that is the answer, then next time I don't need you; I'll ask Claude directly and you can stay home!


Maybe check what's their workload otherwise. Most engineers I've worked with want to do a good job and ship something useful. It's possible they're offloading work to the LLM because they're under a lot of pressure. (And in this case you can't make them stay home)


> I don't think "IDK Claude did that" is a valid excuse.

It's not, and yet I have seen that offered as an excuse several times.


Did you push back?


> If you put the code up for review, regardless of the source, you should fundamentally understand how it works.

Inb4 the chorus of whining from AI hypists accusing you of being a coastal elitist intellectual jerk for daring to ask that they might want to LEARN something.

I am so over this anti-intellectual garbage. It's gotten to such a ridiculous place in our society and is literally going to get tons of people killed.


I understand and agree with your frustration, but this is not what discourse here is supposed to look like.


I would love to see a HUD that allows me to see the change that corresponds to Claude Code's TODO item.

I don't want inline comments, as those accumulate and don't get cleaned up appropriately by the LLM.


There’s a “feel” to the way Claude Code outputs text, and to input as well.

Sadly, this is lost with conductor.

I just don’t feel as joyful using it.


Ah, we’d love for Conductor to feel magical to use — we spend a lot of time thinking about this. Any chance you could say more about what’s lost?


Is some of the intermediary output being suppressed? That output gives the feeling of CC "working" on my behalf.

Being able to quickly hit escape to interject. Escape again to see the conversation history.

The key bindings should be exactly the same.

I don't think I'm looking for an interface to replace CC - it's a great interactive terminal app.

I just want a better way to manage the sessions.


There needs to be a mind shift. It will probably take a generation.

Being online is not the same as being in the real world.

You have to take risks, including speaking with people, face to face, and forming meaningful relationships.

Swiping right is not the same as approaching someone attractive in person.

Complaining on Reddit is not the same as talking directly with lawmakers.

Interpersonal communication, persuasion, is hard work that should be re-embraced.

