
For a set of primitive operations (such as those in a Turing machine), how can you be sure that it spans all possible computations? It sounds like that's one of the challenges Turing had - so he went to great lengths to show it by various "soft arguments", e.g. that it's possible to build a VM (a Turing machine that runs other Turing machines, IIRC). Later on, it was shown that the Turing machine is equivalent to machines other folks came up with (Goedel's recursive functions, lambda calculus, etc.) and a consensus emerged that they are all describing the full set of computations.


>The only way to mitigate this is to use computer-readable proofs, which can be verified by algorithms to ensure correctness.

This seems like a strong overstatement. We got by for more than 2000 years without computer-readable proofs, relying on intuition and validation.


With lots of major mistakes along the way.


If I'm understanding this correctly, with your Lisp IDE you could have an image sub-editor which may expose functions for image modification (eg. cropping), which could be called from the parent-editor. Is this accurate?

Also, what is it about Lisp specifically that makes it suitable for this undertaking?


> If I'm understanding this correctly, with your Lisp IDE you could have an image sub-editor which may expose functions for image modification (eg. cropping), which could be called from the parent-editor. Is this accurate?

Yep, and, presumably, you could then interact with it with your mouse, like draw something in it.

(One correction though: it doesn't have to be a Lisp IDE, but just any Runic document.)

There are a few facts at play: 1. Lenses are cells, which means they are just graphical objects responsible for their own graphical output and input handling (among many other things). 2. An image editor would be a cell as well. 3. A lens could, at runtime, dynamically inherit from the image editor via an :is-a relationship, and thus become an image editor too.

Of course this would require some UI programming to get right, but that's the idea.

> Also, what is it about Lisp specifically that makes it suitable for this undertaking?

Please, see: https://project-mage.org/why-common-lisp

TLDR: It's an image-based language, and interactivity is a top priority for power use. For instance, if something goes wrong, you don't want the application to crash. Incremental development for GUIs in general is pretty crucial. So, the only other candidate could be Smalltalk, but I like Lisp better.


Very interesting! My understanding is that you're thinking about this more in terms of an 'application environment for power-users' than in terms of a 'multi-faceted IDE with lensing'.

To clarify, the specificity of Emacs is that it fully exposes its internal function sets to the world. This could be done by other applications in an organized way. For example, in the picture-editing app example, it would amount to allowing scripting over the features that the app exposes. The scripting feature would come from the environment, not from anything specific the app itself does (apart from being built in that environment). The previously mentioned IDE could then be thought of simply as the multi-tasking environment in which such generic applications are running.

Does this roughly correspond to what the project is about?


You are absolutely on point, yes. The building blocks themselves are what the users will extend and use. That plays a very big role in composition and reuse. There will also be configurations and contexts (which are quite simple mechanisms, really) that will factor into this, too (for the purposes of customization, so that users don't have to modify the original code to change some behavior or slot). Of course, prototype OO itself plays a key role here.

I also like to think about this in terms of "building blocks", not just an exposed API. So, Emacs has the notion of a buffer as its building block (the only one, I believe). Cells and lenses will be building blocks.


> A trisector is a person who has, he thinks, succeeded in dividing any angle in three equal parts using straightedge and compass alone. He comes when he sends you his trisection in the mail and asks your opinion, or (worse) calls you to discuss his work, or (worse still) shows up in person.


The undecidability property proven here doesn't imply that there exists at least one Diophantine equation for which we'll never know if it's solvable or not, does it?


In [0], Carl and Moroz give an explicit polynomial in 3639528+1 variables such that: a well-formed formula is a theorem in the first order predicate calculus if and only if the polynomial parametrized by the Diophantine coding of the formula (a single natural number) has a solution in N^{3639528}.

From this, they get an explicit Diophantine equation such that: the Godel-Bernays set theory is consistent if and only if that Diophantine equation has no solutions (and thus the same is true for ZFC, since NBG is a conservative extension of ZFC).

[0] https://link.springer.com/article/10.1007/s10958-014-1830-2


Does there exist a set of yes/no problems such that:

- there's no general algorithm that can solve an arbitrary problem from the set (the whole thing is undecidable)

- each problem in isolation _can_ be solved. There's no single problem that's impossible to solve


I don’t think so, at least if you assume that each concrete solution can be expressed in finite length in a formal language with a finite alphabet, and can be mechanically checked (which is generally the case for mathematical proofs). Because then you could just enumerate all strings of that language until you find one that describes the solution to the given problem, which by your second item would be guaranteed to exist, and thus the procedure be guaranteed to terminate, contradicting your first item.
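
(A minimal sketch of that enumeration procedure, in Python, assuming a hypothetical mechanical checker verifies(candidate, problem) that decides whether a finite string is a valid proof settling the given yes/no problem - the names here are made up purely for illustration.)

    from itertools import count, product

    ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 ()=+-*"  # any finite alphabet works

    def all_strings():
        """Enumerate every finite string over the alphabet, shortest first."""
        for length in count(1):
            for chars in product(ALPHABET, repeat=length):
                yield "".join(chars)

    def solve(problem, verifies):
        """If some finite, mechanically checkable proof settles `problem`
        (as a yes or a no), this loop finds one after finitely many steps."""
        for candidate in all_strings():
            if verifies(candidate, problem):  # mechanical proof check
                return candidate              # first valid proof settles the problem
        # Unreachable under the stated assumption; drop the assumption and the
        # loop may run forever, which is exactly where undecidability bites.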


Thanks for the answer, it helps piece together the puzzle. I believe there's a problem with this reasoning:

> each concrete solution can be expressed in finite length in a formal language with a finite alphabet

Suppose that's the case; the problem is that the resulting language of all the finite proofs can still be infinite, and we again cannot enumerate and check all the solutions (since the final set is infinite). Therefore we're short of a method to decide yes/no for each question.

This appears to be exactly the case in the example provided by @ykonstant

> Consider the sequence of yes/no problems P_K = {Is there a solution to Q=0 in [-K,K]^n?} parametrized by a positive integer K.

For each yes/no question, the workload is finite, but for the union of all yes/no questions, the workload is not finite.


> Suppose that's the case; the problem is that the resulting language of all the finite proofs can still be infinite, and we again cannot enumerate and check all the solutions (since the final set is infinite).

The set of all strings in the language is only countably infinite (= same size as the natural numbers), which means you will reach the solution string after a finite time (just count 1, 2, 3, ... until you reach the corresponding natural number).

What I'm saying is that if each individual problem has a solution in the form of an individual finite proof (the premise of your second point), then the above gives you a general algorithm for finding the proof for any given of those individual problems (contradicting the premise of your first point).

What this doesn't give you is a way to prove that a proof exists for all individual problems, because that indeed would take infinite time.

Ykonstant is correct in that your use of "undecidable" was confused; I just ignored that.


You are right, I was wrong. If there's a way to evaluate each single yes/no question in a finite number of steps, then the set of all problems is decidable, since any question can be resolved in a finite number of steps.

That would seem to imply that each problem (seen as a set of sub-problems) contains some undecidable sub-problems.. so is there something like the most primitive undecidable problem..


The way you are phrasing your question is confusing; without the parenthesis, your two statements are identical. The only meaningful way to interpret "The whole thing is undecidable" is whether you can or cannot decide all (infinitely many) statements at once.

If that is what you mean, then yes: take any undecidable Diophantine equation Q=0 of, say, n variables. Consider the sequence of yes/no problems P_K = {Is there a solution to Q=0 in [-K,K]^n?} parametrized by a positive integer K. Each of those problems is decidable in isolation, but the totality cannot be, since that would decide if Q=0 has a solution or not.
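
(A quick sketch, in Python, of why each P_K is decidable in isolation: the search space is the finite box [-K,K]^n, so brute force always terminates. Q is represented as an arbitrary callable purely for illustration.)

    from itertools import product

    def decide_P_K(Q, n, K):
        """Return True iff Q(x1, ..., xn) = 0 has a solution with all |xi| <= K."""
        for point in product(range(-K, K + 1), repeat=n):  # (2K+1)^n candidates, finite
            if Q(*point) == 0:
                return True
        return False

    # Toy usage: x^2 + y^2 - 25 = 0 has a solution inside [-10, 10]^2, e.g. (3, 4).
    print(decide_P_K(lambda x, y: x**2 + y**2 - 25, n=2, K=10))  # True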


Thank you for suggesting this example.

For the undecidable Diophantine equation Q=0 of n variables - I'm assuming that the fact that someone could hit a solution randomly (by randomly generating the n variables and getting lucky) does not contradict its undecidability.

Still I'm not clear what happens if such a solution would be randomly hit for an equation that's mapped to another hard question such as ZFC consistency. It would imply that a question such as ZFC consistency could be solved by randomly generating an answer, which seemingly doesn't make sense?


My understanding:

There's no algorithm to decide. But for any particular equation we might get lucky and find a solution or a proof that there's no solution.

But this doesn't prove that there is an equation for which we'll never know if it's solvable or not.


From my understanding, while that is technically true, given a consistent axiomatic system (like ZFC[1], the foundation of mathematics we use) there exists a Diophantine equation that can't be proven to have no solutions in that system (even though it has no solutions). This mathoverflow answer[2] gives the equation and a link to the paper that shows how to calculate the constants (the numbers are huge!).

What that means in practice is that although what you wrote is true, for some Diophantine equations we'd have to come up with new axioms to be able to write a proof of the inexistence of their solutions. But then, how can we be sure that the new axioms are consistent?

[1] I'm assuming ZFC is consistent; if it's not then it can prove anything, including the existence of solutions for any equations at all

[2] https://mathoverflow.net/a/81986


Thanks.

I'm somewhat lost, but it seems to work in a Gödel-like way.

The statement is true (equation has no solution) but we can't prove it.


See https://news.ycombinator.com/item?id=34384838, I think it disproves your last sentence — at least when assuming that all solutions and all proofs of non-existence of a solution are expressible in a shared formal language.


I believe the culprit is the assumption the grants/tenure-track systems make about applicants/assistant professors. That assumption is negative, as if new hires will try _not to do research_. It's also about the number of grad school offerings, which just seems so huge at the moment and which again forces introducing such metrics for what is considered "success" in academia.


Of course researchers will want to do research. The question is, how do you select for those whose research will ever amount to something truly interesting, and avoid giving resources to those whose research will not?

You can try looking at the past, and derive criteria for it from that, via machine learning for example, but I would be hesitant to leave something like that up to a machine. Also times change, so criteria that worked in the past might not work now. Also, if you learn those criteria once, and then fix them, people will just game them.


>The question is, how do you select for those whose research will ever amount to something truly interesting

You can't, as "truly interesting" is context dependent, changes over time and something that everyone deems as futile may become interesting - that's the point of research. You just increase the bar of entry to get people who work very hard and leave it to them to decide.


Working very hard is not well-defined in this context. What does it even mean? You can work hard when you have a clear goal, let's say put those 10 barrels onto that truck over there. Or, let's write a new web browser within 2 years. Putting out 20 papers per year can also be considered working very hard.

You don't want people who work very hard. You want people who will EVENTUALLY put out new, original, and truly interesting research. HOW they do this is not up to you.

And there is a difference between truly interesting research and just busy-work research. It's not that easy to identify truly interesting research without the benefit of hindsight. It is somewhat easier for an objective subject-matter expert to identify busy-work research (but of course this is not 100% either, and personal preferences can definitely cloud the expert's judgement).


You need people who got excellent grades in their undergrad program, who have somehow demonstrated that they like the field they’re going into academia for (as in clubs, extracurriculars, and whatnot), and whom peers recommend as being likely to do novel research. The last one requires, gasp, talking to the applicant and seeing if they’re full of shit.


That's already done now. People in Academia have excellent grades, and they like what they do. How would peers know about their ability to do novel research, not having done any themselves?


The current system is like measuring programmer productivity by lines of code, so I think it is hard to do worse than what we currently do.


> Richard stallman himself got repetitive strain injury from emacs

Is this really true? Some notes at https://stallman.org/stallman-computing.html indicate it was a different type of strain injury, unrelated to the ctrl/meta keys:

> In the mid 90s I had bad hand pain, so bad that most of the day I could only type with one finger. The FSF hired typists for me part of the day, and part of the day I tolerated the pain. After a few years I found out that this was due to the hard keys of my keyboard. I switched to a keyboard with lighter key pressure and the problem mostly went away.

> My problem was not carpal tunnel syndrome: I avoid that by keeping my wrists pretty straight as I type. There are several kinds of hand injuries that can be caused by repetitive stress; don't assume you have the one you heard of.


Mine was more ulnar tunnel (emacs pinky) than carpal. I too had the problem of the hard keys. From what I can tell, my problem was that my muscles grew so tight that there was no space for my nerves, causing ulnar tunnel numbness. A neurologist tested my nerves and found no damage, and the massage therapist I later visited told me that this could happen. It also explains why rock climbers' stretches helped me make the problem go away. But to this day it comes right back the minute I turn on emacs.


I really want a basically traditional keyboard, except with a thinner space key that permits two more modifier keys under each thumb. The anatomy of the hand is meant for the thumb to work in opposition with the other four fingers.

Sadly the ErgoDox doesn't fit my hands so it's uncomfortable to use the thumb clusters. The Kinesis does if I place it very low, but I'd like to retain the rest of my muscle memory for key location.


At what point in time did encrypting microcode in processors become common? Is this a relatively recent practice?


IIRC, it happened when IME was introduced. Maybe 2008?

Then AMD PSP did the same starting in 2013.


> Our financial regulatory system still hasn’t fully figured out how to address the risks of the derivatives, securitizations, and money market mutual funds that comprised Shadow Banking 1.0, but we’re already facing the prospect of Shadow Banking 2.0 in the form of decentralized finance, or “DeFi.”

>The stakes are much higher when money is involved, and if DeFi is permitted to develop without any regulatory intervention, it will magnify the tendencies towards heightened leverage, rigidity, and runs that characterized Shadow Banking 1.0.


>An "elite controller" is a person living with HIV who is able to maintain undetectable viral loads for at least 12 months despite not having started antiretroviral therapy (ART) . Elite controllers are rare: for every two hundred people living with HIV, approximately one may be an elite controller (0.5%).

> It is not entirely understood why some patients are able to achieve undetectable viral loads without ART.

https://www.aidsmap.com/about-hiv/faq/what-elite-controller


And undetectable === untransmittable. Just in case people here didn't know.


> And undetectable === untransmittable. Just in case people here didn't know.

In no way saying this is not a good thing, but do we know if this can be sustained for the whole lifetime?

Or is it that 20, 30 years later, as people get older the person eventually gets AIDS?


As long as they take their daily ART pills, they are safe from AIDS specifically, which is basically a death sentence. But things are not black and white. Someone who has ever been HIV positive counts as immunocompromised for their entire life. Studies have shown that there is basically no shortening of life expectancy if you start ART early enough in the process.

> if the person with HIV started ART with a CD4 count above 500, they would be expected to live to the age of 87 – a little longer than those without HIV.

https://www.aidsmap.com/news/mar-2020/yes-same-life-expectan...


I've heard HIV patients describe ART as "aging" their organs. obviously preferable to dying horribly from AIDS, but still gnarly to contend with.

it's really interesting to me that their health becomes poorer earlier, but that it doesn't affect mortality.


This might be because a lot of effort is now being poured into mid- to end-of-life care, with the improving life expectancy? So the science behind pushing along a decaying body gets better?


I sure hope so! I don't have HIV but I do have a genetic disorder that's wrecking my body. I'm determined to live as long as possible.


My lovely wife is a live-liver-transplant kid and I too hope she outlives us all. Alas, the 'organ preserving' drugs (the ones you take to help kidneys handle immunosuppressants) have difficult side effects. I'm not talking discomfort or pain, but pregnancy-terminating. Not the best way to end your first pregnancy...

And as it's 'just' prevention, it's hard to know whether it will reliably improve this individual outcome...


I didn't know this. Could you explain the reason?


Viral load. Number of viral copies per volume of fluid.

Merely exposing someone to a virus does not necessarily result in widespread infection. A low enough number of copies could be neutralized by the host's immune response before it has a chance to spread significantly. How low this number is depends on the virus. For example, COVID-19 load is higher in symptomatic patients.

Chronic viral infections are managed by reducing as much as possible the number of viral copies in circulation. Risk of transmission is mitigated when number of copies is low enough. There are studies showing risk of HIV transmission approaches zero when HIV load < 200 copies/ml. Ideal would be HIV load < 50 copies/ml or undetectable.

https://en.wikipedia.org/wiki/Viral_load

https://en.wikipedia.org/wiki/Minimal_infective_dose

https://en.wikipedia.org/wiki/Management_of_HIV/AIDS

https://en.wikipedia.org/wiki/Viral_load_monitoring_for_HIV


At that low virion count it’s more about the probability that a virion will attach to the correct CD4 receptor and be able to work inside the cell. It’s not about the host's immune system defeating or being defeated on a number basis.


To add to this, how many virions are needed for a successful replication to start depends on the virus.

https://www.virology.ws/2011/01/21/are-all-virus-particles-i...


There is an "independent action hypothesis" in virology. It states that each virus particle (virion) has an independent probability of starting the infection in the body.

https://royalsocietypublishing.org/doi/10.1098/rspb.2009.006...


So if it turns out to be true, there is no "safe level" - we could talk about an extremely small probability, but it would still be higher than 0 - not exactly a risk one would like to take.


Yes. Assume 1 virion has a 0.001 chance to start an infection. You could infect 1 person with 692 virions, resulting in a 50% chance of them getting ill (1-0.999^692). Or you could infect 692 people each with 1 virion, with a 50% chance of at least 1 of them getting ill (35% chance: 1 ill, 12% chance: 2 ill, 3% chance: 3 ill, ...).
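
(For anyone who wants to check those figures, a short Python sanity check under the same independent-action assumption of p = 0.001 per virion:)

    from math import comb

    p, n = 0.001, 692

    # One person exposed to 692 virions: chance of infection.
    print(1 - (1 - p) ** n)  # ~0.50

    # 692 people exposed to 1 virion each: number infected is Binomial(692, 0.001).
    for k in range(4):
        print(k, comb(n, k) * p**k * (1 - p) ** (n - k))
    # k=0 -> ~0.50, k=1 -> ~0.35, k=2 -> ~0.12, k=3 -> ~0.03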


CDC backing up this claim. Having an undetectable viral load prevents transmission during sex - but likely not during needle sharing. The answer to why seems to be an obvious “because there are fewer virus particles”. I’m linking a couple of scientific studies following couples having condomless sex where one partner is HIV positive, on ART with a low viral load. Among the roughly 2000 couples in these studies there were no transmissions.

https://www.cdc.gov/hiv/basics/livingwithhiv/protecting-othe... https://www.nejm.org/doi/full/10.1056/NEJMoa1600693#t=articl... https://pubmed.ncbi.nlm.nih.gov/27404185/


It does sound suspiciously like wishful thinking though, doesn’t it? You can’t sample all parts and all fluids of someone’s body all the time. Maybe trace undetectable levels of the virus can be transmitted, and of course they are going to look like a “no transmission” case if you lack the ability to detect that amount of virus. And that’s either not a problem (if trace levels = no symptoms), or a ticking time bomb waiting to find a host with the conditions that would enable mass viral replication back up to detectable levels.


There is no such thing as a trace/undetectable transmission. These studies have been going on a long time now with large numbers of participants, replicated in multiple countries/different populations. The results are very strong and not based on measuring viral levels.


> You can’t sample all parts and all fluids of someone’s body all the time. Maybe trace undetectable levels of the virus can be transmitted…

You don’t need to be a 100% sure. Once the risk becomes small enough you can spend your time worrying about other risks.


Now that may be entirely the right posture, but the triple equals signs in the post upthread did rather imply certainty.


I took that to just be because we’re on a programmer-heavy site, rather than an extreme display of certainty.


The standard you are applying is literally not possible to satisfy anywhere.

Hell, an HIV virion could be created spontaneously through quantum tunneling in your body with some non-zero probability. I think it would be legitimate to triple-equals that to "not going to happen" though.


Risks like these are orthogonal and additive. There’s no reason to discard any risk but to hedge them.


Activity-based risks are not additive.

Time spent driving is not time spent having sex (well, hopefully).

So if risk(dying while driving) ~ risk(dying because of sex) ~ 0, you can easily ignore both of those.


Be careful. The infinite sum of epsilons over an infinite domain and over the time domain can and usually does converge to a finite value in real life.

Also, you can avoid having sex with an HIV-positive person. You can probably not avoid driving. So the fact that a risk may be low is not a sufficient reason to take such a risk if you can hedge against it.


It is because everyone dies relatively quickly that you can ignore ultra low risks. Suppose your lifespan was potentially 1 trillion years, how low would your risk tolerance need to be?

Meanwhile, doing something once a day with a 1 in 1,000,000,000 chance of possibly killing you in 40 years just isn’t worth considering.


> Suppose your lifespan was potentially 1 trillion years, how low would your risk tolerance need to be?

Depends what your quality lifespan is. If I get sick of everything, that time isn't worth much.

There's also a "time-value" of life.

Cool stuff I can do today may be worth a lot more to me than things I can do 500B years from now. Especially since the person that would be doing them is someone I cannot identify with at all.


I don’t think most people’s 25-year-old self can really identify what their 85-year-old self will be like. Yet saving for retirement isn’t just about them some time in the future; it’s about making you feel better now.


> I don’t think most people’s 25-year-old self can really identify what their 85-year-old self will be like.

Yes, and I believe this is a reason why people make choices that adversely affect their 50 year old self. Who's that guy in the future going to be? Why should I care about making it better for him?


You could say the same thing about people drinking heavily only to wake up the next day with a massive hangover. So it isn’t the differences between them that’s causing people to screw over their future self.


I can't comprehend 1000-year-from-now me. I don't think I would take many actions to improve life for 1000-year-from-now me even if I knew I could live millennia. Doing so does not buy me any comfort today and does not let me "give" to any person that I identify with or recognize as part of me.

I can imagine time scales on a par with how long I've lived and to a lesser extent to the ages of people I identify with.

(500 year from now me would see the first paragraph as exceptionally foolish... and also inevitable. But that fact doesn't change the truth of what I'm saying).


risk(dying while (driving while having sex)) >> 0...


We feel fairly comfortable in stating that HIV is not transmissible surface-to-surface despite the mathematical fact that strictly there is some minuscule non-zero chance that it could be; this is basically the same thing.


You need to transmit a certain number of HIV particles to cause infection. It is called the inoculum. In normal sex, it is quite a lot; I wasn't able to find an exact number, but it is in the tens of thousands at the least.

The vast majority of virions seems to be killed by the immune system before they can actually take hold.

https://journals.plos.org/plospathogens/article?id=10.1371/j...


So some other points.

Some sex lubes can make it easier for infections to get into the body. https://en.wikipedia.org/wiki/Personal_lubricant#Water-based

Thing is, I wonder if a higher vitamin A and zinc intake can help, because all epithelial cells need retinol (vitamin A), and zinc is needed to move retinol around the body as Retinol Binding Protein (RBP) for epithelial cells. Alcohol can reduce fat-soluble vitamins stored in the liver, which can then reduce the VitA stored in the liver; you see this most easily with alcoholics and hypovitaminosis A, but even short-term binge drinking can reduce levels enough to increase risks of transmission, which probably explains man flu!

I found a cadaver study where they measured VitA content in the livers of dead people but didn't say how they died, knowing that you can die from ingesting too much VitA in one go, like some Arctic explorers did decades ago when eating polar bear livers. In order to get the highest amounts of VitA seen in some of these cadavers, you'd have to consume 30,000 IU of VitA every day for at least a year without your body using any! That's a lot of VitA!!!

Anyone who paid attention to the Ebola outbreak in Africa a few years back may remember that some viruses like Ebola can hide in parts of the body where the immune system cannot go, namely the eyes and the testes and sometimes the brain, so they may have remembered that blokes testing negative for Ebola who previously had it could still be passing it on through semen.

The thing with HIV is that it's typically passed through anal intercourse, not vaginal where there is sufficient lubrication, but it's passed more frequently in Africa because they have a tendency to have dry vaginal intercourse, because they actually try to reduce the wetness of the vagina, which then creates microtears and sores, and then it's easy for HIV to get into the bloodstream, just like cuts and sores in the mouth also making it easy for HIV to get into the blood.

The whole 80's govt advice/warnings on HIV were directed at everyone because they didn't want to stigmatise those engaging in non-vaginal intercourse.

The biggest risk from HIV is getting it into the blood, like most things which are pathogenic.

Now you can still have a highly effective immune system which can lock down viruses, but there is always the risk it gets into those parts of a body where the immune system can't go, and I don't know what they test nowadays.


> The whole 80's govt advice/warnings on HIV were directed at everyone because they didn't want to stigmatise those engaging in non-vaginal intercourse.

Gay people were so stigmatized at the time that the government did nothing when they thought it was a 'gay disease'; they just let an epidemic spread and people die. Very few in the 1980s worried about offending gay people. Once it was spreading to people who weren't gay, then I'm sure health information was provided to those people.


I'm not going to respond to a good chunk of this even though I'm itching to.

But I just have to laugh out loud at the thought of governments in the 80s not stigmatizing the disease and specifically queer people having anal sex.

Just so laughably the opposite of the real experience of an entire lost generation of gay men.


I had no idea "dry sex" was a thing. I am speechless. wtf?


It's like genital mutilation. These are legacy practices from when legacy forms of govt/social control like religions, royalty, or cults were the main educational source for surviving back then. Ironically there still is some truth in things like religious fasting, provided your glutathione levels are high enough.

There's no excuse for it in today's day and age, but what can you do? I sometimes think democracy is a soap opera for serious people, whilst the military run the real show.


Some cultures believe lubrication is a sign of poor virtue or infidelity. Women use chalk, cornstarch, etc, to ensure they stay dry and to (supposedly) enhance the pleasure of their partner.


It makes zero sense, but yeah.

There was a thing not long ago where talcum powder, I think Johnson & Johnson, had some asbestos contamination (asbestos being another mineral that can come from the same mines, apparently). And people were suing because it gave them cancer of the uterus and other internal parts of the female reproductive system. Which makes you go: Wait a second. How the hell does that happen?

That's how.


I find it hard to believe. Wouldn't it make more sense that the powder was in contact with the external reproductive parts for extended amounts of time (which is its normal usage) and as a result some of it managed to get inside?


I don't think people are intentionally pushing talc through their cervix, if that's what you mean. But, pack your vagina with the stuff every day, and I imagine you can have all kinds of problems. If I were to go any further than that about exactly how it works I'd be speculating.


OK, I see. Because back in the day it was more or less standard to use powder in the groin area and leave it for hours in order to prevent skin irritation in babies, especially in the summer. So since it was standard usage, people should reasonably expect that the powder sold for this purpose was sufficiently examined and fit for the purpose.

When I first heard about asbestos in children's talc powder, I simply thought it was just one of these "bad pharma" conspiracy theories (there was another one - about asbestos in sanitary towels). It seemed so far-fetched - who in their right mind wouldn't make sure to thoroughly check a product designed to have long-term contact with babies' skin against a well-known carcinogenic substance? Well, it turns out this time it happened not in China but right in the heart of the USA.


> Anyone who paid attention to the Ebola outbreak in Africa a few years back may remember that some viruses like Ebola can hide in parts of the body where the immune system cannot go, namely the eyes and the testes and sometimes the brain, so they may have remembered that blokes testing negative for Ebola who previously had it could still be passing it on through semen.

You might find this interesting, the "few years back" might be now again: https://www.nature.com/articles/s41586-021-03901-9


Nothing would surprise me! I know a mad scientist who works at the UK's Porton Down; we discussed a lot, but the systems are not in place to stop people shipping stuff around. Until such time as that happens, any country with the know-how can release anything anywhere.

I think this Hollywood film Don't Look Up will be a poignant message to everyone.


> Some sex lubes can make it easier for infections to get into the body. https://en.wikipedia.org/wiki/Personal_lubricant#Water-based

I would not use Wikipedia for health information with serious consequences.


Presumably the viral load required for transmission is greater than the viral load required for detection.


My guess is it is more of a probabilistic thing than a strict threshold.


Yes. Super equals was inappropriate imo.


I don't know that it's inappropriate.

I mean, technically, there's always a possibility of getting HIV. Particles in your body could spontaneously quantum tunnel at the same time into the form of an HIV virus. This is basically impossible, but not actually (p = 0.00) impossible.

At some point you have to look at a probability and say - that's basically not transmissible.


I mean, just based on the definition, that seems like the literal meaning to me. But I'm not a PhD.


That seems like a big if? Maybe 0.5% of people actually are susceptible to the lowered viral load. Have studies been done that would detect such a population?


One reason we believe this is that we can typically detect virus quantities way too small to reliably replicate in culture, and believe that in vivo infection is even harder than in culture (because of innate immune response and the body not being composed just of the most susceptible cells).

You generally can't measure infectious dose directly without a highly unethical challenge study. You can sometimes know concentrations that did or didn't result in infection in various real world scenarios, and sometimes you have circumstantial evidence (e.g. you can know how much virus a person sheds, and what proportion of a room with certain ventilation quantities got infected).


The usual way of achieving undetectable viral load is via anti-retroviral drugs and I think this works in most people, at least so long as they get early enough treatment and keep getting it. The main reason to measure viral load in the first place is to get an idea of how well the drugs are working and if they need to be changed.


People on ART also often (perhaps usually) have undetectable viral loads


That's what I would think; it's got to be quantifiable: some sensitivity metric of the concentration of virus particles the device can detect vs the minimum quantity which is needed for infection. It's clear we can articulate these matters in the abstract, but how does one go about actually measuring such a thing?


Undetectable sometimes means less than 200 or less than 50 viral copies per ml (depending on device sensitivity). In contrast, the peak in the acute phase is 10^7 copies per ml (and it settles to around 10^5 in the chronic phase), so the probability of transmission is infinitesimal. There are many studies following very high-risk cohorts (sex workers, intravenous drug users, etc) that have been able to empirically confirm this.



No detectable virus means no transmissible virus.


To reiterate the question, why is that?


Detection means there is enough virus physically present in the sample for the detection technology to identify it.

Infection requires virus particles to be physically present in transmission vectors (fluids, droplets, etc). There's generally also a dose-response effect where more particles means more chance of evading the immune system well enough to establish replication in the victim.

So a lack of anything to detect, means a lack of anything to spread an infection.


> So a lack of anything to detect, means a lack of anything to spread an infection.

One should add that for that the threshold for detection has to be lower than the threshold for infection.

Example: let the detection threshold be 10 particles/ml and the infection threshold be 100 particles/ml (*) -> then undetectable implies that it is very improbable that an infection will take place.

(*) This is a very crude description. Think of it like this: every single virion (virus particle) has a very low probability of causing an infection itself, but there is a high number of them and for one of them it might just work out (higher viral load -> higher risk of successful infection)
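
(A toy Python model of that footnote; the per-virion probability below is made up purely for illustration. Same per-virion chance, very different overall risk as the number of transferred virions grows.)

    def infection_probability(p_per_virion, n_virions):
        """P(at least one virion establishes infection) = 1 - (1 - p)^N."""
        return 1 - (1 - p_per_virion) ** n_virions

    for n in (10, 100, 10_000, 1_000_000):  # hypothetical numbers of transferred virions
        print(n, infection_probability(1e-5, n))
    # 10 -> ~0.0001, 100 -> ~0.001, 10_000 -> ~0.095, 1_000_000 -> ~0.99995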


> One should add that for that the threshold for detection has to be lower than the threshold for infection.

This statement is logical and makes perfect sense - but it's narrower than the original one.


Thank you, makes sense!


I'm not a biologist and this is based on general knowledge and some quick google searches but:

We're quite good at detecting viruses if we really want to. PCR can amplify DNA so much that we can detect even 50 viruses per milliliter of blood. A milliliter is actually a lot of blood to get into someone else, and your innate immune system is capable of finding and neutralizing small amounts of contagions relatively well even if it has never seen them before. I do suspect this is a statistical impossibility, though probably you could somehow get incredibly unlucky - for example, in your blood, momentarily, all the free-floating viruses end up in the same bit of blood and that somehow gets into someone else - but I think we can all realize that probability is tiny, and in practice I don't think there are any examples of transmission with undetectable levels of HIV.


Well put. We can actually detect well below 50 copies/mL in the laboratory setting as well; we don’t go below this to maintain adequate test specificity and sensitivity in the clinical setting.


Not to put you on the spot, but from the cellular bio lab course I took, it felt like PCR should be capable of detecting literally a single copy of the targeted bit of DNA in the sample. Is the problem with detecting below 50 that the sample is too easily contaminated, or is it that the primers can spontaneously bind to things they aren't supposed to at a high enough rate to invalidate the results?


> A milliliter seems like actually a lot of blood to get into someone else

1 cm^3 is indeed pretty big. That’s 1,000 little millimetre cubes, and we can detect as few as 50 virions in that space!

I will point out though that 1 ml is low for semen (more like 2.5 ml or more).


That's the million dollar question


False. Detectability is just an experimental limitation of a given measurement system. Even one single virion can infect a host.


There is no data that supports this assertion whatsoever. There is a lot of evidence going against it. Theoretically it may be possible, but the probability of this is on the order of 0 and so close to 0 it will likely never happen.

Think of it like chemistry. Sure, two covalently bonded hydrogen atoms at STP could have nuclei 1cm from each other due to random chance. However, the probability of this happening is so low that it is effectively 0.


The assertion is still false then.


It's effectively true.

If one bounds the risk of HIV from an undetectable partner to an unmeasurably small epsilon, dwarfed by other risks by many orders of magnitude, then the edges where the assertion is false don't matter.


It depends on if your body is able to detect it better than a machine…


I'm going from memory, and I can't remember how many years ago I saw this on television, but I thought it had something to do with a genetic mutation that likely gained a foothold in the Western European population sometime around the plague of the Middle Ages. If I remember correctly, if a person carried one copy of the gene, they could get infected with HIV, but it wouldn't develop into AIDS. If a person carried two copies, they would not even get infected with HIV.

Anybody remember hearing something like this?

