Hacker News | xooxies's comments


Avoiding manufactured food is absurd to a level I didn't think needed explanation.

Even from a logistics POV, we're 8 billion people on this planet, concentrated in cities. Everyone following that philosophy would bring about a food-chain collapse.


If we were on a less censored forum I'd just ask you to "post body". Since that's not considered a valid argument here, I'll instead gesture to the countless innovations developed by humans that turned out to have massive negative health consequences. What gives you the confidence that our current food-manufacturing techniques won't turn out to be one of those things? Would you have made this argument about cigarettes in the '20s?


People are forgetting we had famines a few centuries ago in the West, and still have famines in many places. Sure, they are also usually associated with governance issues and other complicating factors, but still. The number of people alive is my answer to whether food manufacturing is a net negative or not.

While the pendulum has swung way past the equilibrium for us, rejecting whole categories of food that tend to be nutritious and easily preserved is just not realistic.

To me there are dozens of other levers we can pull to improve health. As pointed out in the other threads, not all OECD countries are facing what the US is facing.

PS: do I get all my points accepted as truth if I can prove a BMI that satisfies you? Would 22 do it?


Not necessarily, since BMI doesn't take muscle mass into account. But 22 does mean that we can't just discard your opinion out of hand.


Good thing you don't need to follow what everyone else does :)


I can only hope that the 0days used were reported.


From their PDF report [0]:

>>> In the case of Kaspersky Lab, the attack took advantage of a zero-day (CVE-2015-2360) in the Windows Kernel, patched by Microsoft on June 9, 2015, and possibly up to two other, currently patched vulnerabilities, which were zero-day at that time.

[0] https://securelist.com/files/2015/06/The_Mystery_of_Duqu_2_0...


This is completely unacceptable. There really should be more pentesting of medical gear and industrial control systems in general.


Related: Karen Sandler's LC2012 keynote, which discusses the challenges of finding anything out about life-sustaining medical technology: https://www.youtube.com/watch?v=5XDTQLa3NjE


It's surprising that the FDA doesn't require that as part of their certification procedure.


I can think of many words to describe my reaction to this, but "surprise" is not one of them. I have never seen a regulatory body impose any meaningful security testing. Even supposedly "secure" standards are often self-certified, not tested.


If you take that too far, you end up with medical gear so locked down that the owner can't make any changes themselves, but must ask for permission first.


I was under the impression that medical devices were already locked down like that due to e.g. FDA regulations. Not having security either seems like a "worst of both worlds" situation.


With no security, people get access on their own...


...at the risk of bricking devices and legal threats? Sure. Both of which are concerns with some teeth for such mission-critical devices as... your Xbox.

Jailbroken medical devices sound like a great way for a doctor to lose their license, a developer to end up with manslaughter charges, and patients to be unable to tell their doctors about their full medical situation, lest the doctors run screaming from the impending liability lawsuit.

I think the correct answer to "let people modify their own devices" is "fix the regulations" not "intentionally weaken security in matters of potential life and death."


I'd argue that the very fact that it's physically possible to hack these sort of devices is a huge red flag.

There shouldn't be pentesting, because there shouldn't be anything to pentest.

With something like a medical device, you keep the outside-communication part airgapped from anything that could cause harm. If that means you have to duplicate things, then it means you have to duplicate things.


Implantable devices typically have to have a wireless interface of some sort. The alternative is to put a physical port on someone's body, which is a great way to cause infections.


So you have a wireless interface for the non-able-to-kill-you stuff.

But if you're having to change the code of something that's been implanted, you've already done something horribly wrong.


For example, I think pacemakers can set the range of acceptable heart rates by radio. So now, you have to have logic checking that it doesn't get set to 0. But what happens if there's a buffer overflow in that checking?

Edit: here's a paper demonstrating similar attacks: http://www.secure-medicine.org/public/publications/icd-study...
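The concern above is that even the validation layer can be the attack surface. A minimal defensive sketch of that validation idea (all packet layout, opcodes, and clinical limits here are illustrative assumptions, not taken from any real device): fixed-length commands leave nothing variable to overflow, and out-of-range values fail closed.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical command packet; every name and limit is an assumption. */
#define CMD_LEN 4          /* fixed-length command: nothing to overflow */
#define RATE_MIN_BPM 30    /* assumed clinical floor */
#define RATE_MAX_BPM 220   /* assumed clinical ceiling */

/* Returns 1 and writes the new limits only if the packet is well-formed
 * AND both values are clinically sane; otherwise leaves state untouched. */
static int parse_rate_cmd(const uint8_t *pkt, size_t len,
                          uint8_t *lo_out, uint8_t *hi_out)
{
    if (len != CMD_LEN)            /* reject anything but the exact size */
        return 0;
    if (pkt[0] != 0x01)            /* opcode "set rate limits" (assumed) */
        return 0;

    uint8_t lo = pkt[1], hi = pkt[2], csum = pkt[3];

    if ((uint8_t)(pkt[0] + lo + hi) != csum)  /* trivial integrity check */
        return 0;
    if (lo < RATE_MIN_BPM || hi > RATE_MAX_BPM || lo >= hi)
        return 0;                  /* 0 bpm (or an inverted range) fails */

    *lo_out = lo;
    *hi_out = hi;
    return 1;
}
```

The point of the fixed length and the fail-closed checks is that a malformed or hostile packet can't reach the write path at all; it can only be dropped.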


See, that's an example of something that I do not think should be settable via radio. Maybe inductive or sonic communication.

Alternatively, you have the software check the set range, but the hardware (or a second non-connected processor) also checks it separately to make sure it's sane anyways.
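The "check it twice, independently" idea above can be sketched as follows; the second check stands in for a hardware comparator or a second unconnected processor, and all limits here are made-up illustrations:

```c
#include <stdint.h>

#define HW_MIN_BPM 30    /* would live in fuses/hardware, not writable RAM */
#define HW_MAX_BPM 220

/* Software-configurable policy check. */
static int sw_check(uint16_t bpm) { return bpm >= 40 && bpm <= 180; }

/* Independent sanity check, modeling the hardware/second-processor layer. */
static int hw_check(uint16_t bpm) { return bpm >= HW_MIN_BPM && bpm <= HW_MAX_BPM; }

/* Apply only if BOTH independent checks agree; a bug (or exploit) in one
 * layer alone cannot push an insane value through. */
int apply_rate(uint16_t bpm, uint16_t *current)
{
    if (!sw_check(bpm) || !hw_check(bpm))
        return 0;
    *current = bpm;
    return 1;
}
```

Compromising the software check still leaves the hardware bound intact, which is the whole value of keeping the two layers independent.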


Pacemakers and defibrillators are configured to match an individual patient. Part of that configuration includes what parts of the heart to stimulate. This configuration may need to change as the patient's condition changes.

You also need to be able to turn off the pacemaker entirely to monitor how the heart operates without stimulation. You may need to turn off the defibrillator during certain medical procedures...


But... but... I want a datajack at the base of my skull!


Me too, I've been searching for an excuse to get the Motoko haircut...


> There shouldn't be pentesting, because there shouldn't be anything to pentest.

On its face this seems like the right approach, but you end up with security-through-obscurity protecting (yet another) critical system. If you don't hire pentesters, how can you be sure there is no way to maliciously interact with the device? Because you didn't intend for there to be? It's a good thing vulnerabilities are never unintentional, right?

It's also not very resilient to a change in the software's requirements; maybe it's okay for a pacemaker not to have a password when you need to open up a patient's chest to connect to it. Maybe those extra microseconds you save mean less blood loss and pain for the patient undergoing surgery, an undeniable win. But this operation turns out to be expensive (by every conceivable measure), so the next model ships with Bluetooth, and now we have a problem.

See also:

http://blog.ioactive.com/2013/02/broken-hearts-how-plausible...


Perhaps I was overstating things. I'm not saying "don't hire pentesters". I'm saying "don't implement security in software when it can be done in hardware". If all interfaces that could cause damage are not physically connected to an external interface, the chances of anything being able to coerce them into doing something bad are... slim, at best. Not none, never none. But slim.

Certainly slimmer than just stuffing everything in the same basket and calling it a day.

And your second part is exactly why medical devices do not, or at the very least should not, have changes made to them after the fact.


  > And your second part is exactly why medical devices are not, or at
  > the very least, should not, have changes done to them after-the-fact.
I don't think this is self-evident. There are already a number of devices on the market that basically require physician tuning post-implantation. Deep brain stimulators and cochlear implants spring immediately to mind, and there are likely many other examples. There are many reasons for this: there may be too many parameters for the physician to realistically adjust intraoperatively, assessing efficacy may require activities that are impossible to perform in the OR (ie observing gait), the parameters may be time-varying, it may not be possible to have the patient awake during surgery, the patient may be pre-lingual, etc.

Even worse, both DBS and cochlear implants have relatively short loop-latencies between the physician "turning the knob" and observing the effect (seconds). There are emerging medical implants where the loop-latency may be in the hours-to-weeks range. That's pretty much going to require tuning post-surgery which in turn basically requires a wireless interface of some kind.

Finally, while updating the firmware of a medical device (e.g. to give it new capabilities) should certainly not be done lightly, it is far and away preferable to going under the knife again to receive a new device.

Bottom line, the right thing to do is get security right, not limit what physicians can do with the devices.


Security necessarily comes in layers. Some will fail; hopefully not all of them. Adding protections at the hardware part of the stack is a fantastic idea, but insufficient for something that lives literally, directly depend on.

Medical devices do need to be accessed. Diagnostics need to be performed to make sure they're functioning. Sometimes batteries even need to be changed. Insulin pumps and the like need to be given more medicine, and the dosage may need to be adjusted. Et cetera.


Diagnostics are a good example of something that can be separated from the actual safety-critical stuff. Have a dumb output from the safety-critical processor that the radio interface can read.

If you're changing batteries or medicine, you're going to be accessing the device physically anyways, and so can change settings / dosages then.

Et cetera.
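The "dumb output the radio side can only read" idea might look like this. This is a rough sketch under assumptions: the struct fields and function names are invented, and in real hardware the one-way property would come from a mailbox or dual-port RAM writable only by the safety core, not from the C type system alone.

```c
#include <stdint.h>

/* Read-only snapshot the safety-critical core periodically publishes. */
typedef struct {
    uint16_t battery_mv;   /* battery voltage, millivolts */
    uint16_t paced_bpm;    /* current pacing rate */
    uint32_t seq;          /* incremented on each update */
} diag_snapshot_t;

static diag_snapshot_t shared_diag;

/* Safety core: the ONLY writer of the shared snapshot. */
void safety_publish(uint16_t battery_mv, uint16_t paced_bpm)
{
    shared_diag.battery_mv = battery_mv;
    shared_diag.paced_bpm  = paced_bpm;
    shared_diag.seq++;
}

/* Radio core: gets a const view; there is no call that writes back, so
 * nothing arriving over the air can influence the safety core's state. */
const diag_snapshot_t *radio_read_diag(void)
{
    return &shared_diag;
}
```

The design point is the direction of the data flow: diagnostics flow out, and no command path flows in.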


A battery change of a pacemaker or an ICD is required every 6 years or so, if not longer. You may need to change settings far earlier than that.


Things like pacemakers often have radio interfaces, because the alternative is to cut someone open.


I didn't say "don't have a radio interface". I said, "don't have a radio interface to the able-to-kill-you parts". There is a distinction.

Also, how does that work? Water is a pretty good attenuator of radio waves.


A lot of them are more "inductive" than "radio" and work by magnetic coupling rather than electromagnetic coupling.

Put a coil outside the metal body of the device and not too far from the skin. Then have another coil that you place on the skin, in the same region as the under-skin coil. Run a 1kHz sine wave through the external coil and you'll make SOME voltage on the internal coil. That allows you to charge.

For bonus points you can also run a communication protocol at say 100kHz (gotta get a couple of decades of frequency difference) and since you're charging the device, you can afford to "waste" a lot of power to get the signal out.

I don't do this stuff myself, but my dad did for quite a number of years.


Yep. NFC is wonderful for that sort of thing. And better than radio since it's inherently shorter range (O(r^-4) versus O(r^-2), for the same reasons as active radar). But still not perfect.


They also need to be "able to kill you" so that the patient can be defibrillated if their heart stops beating. If you do that when their heart is functioning, it tends to have the opposite effect.


Right. But that functionality can be airgapped from the readout parts.


Some sandboxing could be done, but if an attacker roots a pacemaker or an insulin pump it will be extremely difficult if not impossible to prevent them from convincing the device to perform its intended function at an unintended time.


Airgapped. Not just sandboxed. As in two separate processors, with no overlapping RAM / etc.

You can't do your example, because there's no way to get at the internal clock from the radio.


The problem here is that the medical community (especially hospitals) is increasingly pressured to increase productivity to offset cost increases. One way is to automate some of the dreary stuff, like monitoring of medical devices in a hospital ward. If you've been in a relatively modern hospital lately, you'll find they have wall-mounted dashboards which can monitor all the vitals and devices for patients without the nurse needing to walk around and check rooms. This lets them respond to changes as they come up, as opposed to waiting until they make their rounds to the room.

This sort of thing needs a network. I'm not familiar with the details, but I thought in the US, HIPAA covered some of this stuff... not sure how stringent it is, or if it covers scenarios like this. Perhaps it doesn't at all (I'm thinking of something like PCI, where certain data must be encrypted, etc.).


It's only a fallacy if there is no basis for the causative chain that results in the effect we find objectionable. That is to say, if there is no clear way to get from point A to point B, it's a fallacy. If the chain of causality from point A to point B is quite clear, it's not.


It's also not a fallacy when an extensive history of similar encroachment exists. At some point, the burden of proof has to shift to the party who maintains that the government won't eventually abuse a given power.

IMHO there is no more certain indication that someone has been educated beyond their intelligence than when they sign onto an Internet forum and start braying about how the slippery-slope argument is a "logical fallacy." While technically correct, they betray an ignorance of how human beings -- a decidedly non-logical species -- actually behave.


It's still a need to know if you work in the reverse engineering, security, vulnerability analysis, etc. realms.

