
It may be that travel never has big enough margins to support a 10k-employee middleman.


Expedia and Booking have been around nearly 25 years and both have well over 20k employees. Seems like travel has more than enough margins to support multiple companies that size.


There’s never been an extended global shutdown of travel like now though.


Yes, everyone fully understands that with travel at a complete standstill, none of these companies would be viable if the current situation were the norm.

But the current situation is not the norm, people will eventually get back to travelling a lot, and when they do there will be plenty of economic activity to support large companies in the travel business.


An extended shutdown with closed borders may have long-term effects on consumers' willingness and desire to travel. If the travel industry remains shut down for the next 12 months, it's not unreasonable to think that it wouldn't fully recover for 10 years.


Or maybe pent up demand will have the opposite effect. There's no way to know.

I've had the same thoughts about the restaurant industry. Will lockdowns create a generation of home cooks, or will people be so tired of staying in that the restaurant industry booms after this is over?


9/11 had a similarly chilling impact that shook the airline industry to its core. Covid is likely going to be worse for the industry than that, but it's not without precedent.


Kinda makes you wonder why they aren't using methods from the multi-armed bandit literature.


Multi armed bandit methods work best with immediate success-fail metrics. This one has time delays.

An example of how machine learning goes wrong is if a treatment slows down the progression but increases the death rate. Given exponential ramp up in the incoming cases, it will look good until the final horrifying numbers are in. You need to slice and dice the numbers by cohort to detect/react to this.


I decided that some numbers on how things go wrong would help.

Suppose that the treatment increased deaths by 50% but delayed death by a week. And suppose we have a doubling time for the disease of 1 week.

Back of the envelope, that means the deaths observed under treatment at any point in time are 1.5x the deaths from a cohort that was only 0.5 times as large, for 0.75 of the deaths you'd otherwise see at that point in time. It looks like the treatment saves 25% of lives when in fact it kills 50% more people. The raw numbers will look good until you follow a cohort over time.
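
To make that concrete, here's a minimal sketch of the cohort arithmetic. The 1.5x death rate, one-week delay, and one-week doubling time are the assumptions above; the 2% base fatality rate is just a made-up illustration number:

    # Sketch of the cohort arithmetic above. Assumptions: cases double every
    # week, the treatment kills 1.5x as many patients but delays those deaths
    # by one week. The 2% base fatality rate is an arbitrary illustration value.
    weeks = range(8)
    cases = [100 * 2 ** w for w in weeks]            # new cases arriving each week
    base_cfr = 0.02                                  # untreated fatality rate (assumed)

    untreated_deaths = [base_cfr * c for c in cases]      # deaths occurring in week w
    treated_deaths = [1.5 * base_cfr * c for c in cases]  # deaths delayed to week w + 1

    for w in weeks:
        observed_untreated = untreated_deaths[w]
        # deaths observed this week under treatment come from last week's (smaller) cohort
        observed_treated = treated_deaths[w - 1] if w > 0 else 0.0
        print(f"week {w}: treated/untreated deaths observed = "
              f"{observed_treated / observed_untreated:.2f}")
    # Prints ~0.75 from week 1 onward: the treatment looks 25% better in the raw
    # point-in-time numbers even though it kills 50% more of each cohort.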

Current doubling time for deaths has been about 3 days. My assumption of a week is therefore optimistic. Perhaps we get there with social distancing.


"Multi armed bandit methods work best with immediate success-fail metrics. This one has time delays."

Well, sure, but everything works best with immediate success-fail metrics. One of the most basic results from learning theory is that the longer the latency between stimulus and response, the slower the learning rate can be. I'm not sure how multi-armed bandits are special in this regard in any particular dimension. All learning techniques are going to be susceptible to the problem you outline in your second paragraph.

This is one of those "there is no perfect solution" situations. It's really easy to say that out loud. It's quite difficult to internalize it.

(Also, just as a note to your other post, bear in mind that our hard-core "social distancing" efforts in the US are just about to reach approx. 1 incubation period. It is only just this week that we're going to start seeing the results of that, and it'll phase in as slowly as our efforts 1-2 weeks ago did. My state just went to full lockdown today, though we've been on a looser lockdown for a week before that.)


Everything works better with immediate success/fail metrics. However the simplest approach is easiest to analyze, and is easiest to analyze after the fact in as many ways as you want. The more complex the decision making, the less we should be willing to put it under the control of a computer program. (Unless that program has been well-studied for our exact problem so that we trust it more.)

Which medicine looks effective? Which medicine gets people out of the hospital faster? What underlying conditions interacted badly with given medicines? These questions do not have to be asked up front. But they can be answered afterwards. And knowing the answers, matters.

Here is an example. Suppose that we find one medication that gets people out of bed faster but kills some of them. In areas with overwhelmed hospitals, cycling people through beds faster may save lives on net. If your hospital is not overwhelmed, you wouldn't want to give that medicine. Now I'm not saying that any of these medicines will turn out like that. But they could. And if one did, I definitely want human judgement to be applied about when to use it.


I don't think anyone is proposing actually removing all humans from the loop, so I think that's an argument against a strawman.

Even if they were proposing it, there's no realistic chance of it happening.

I don't want people blindly copying "standard" scientific procedures either, where we run high-statistical-power studies for months with double-blind scenarios, then carefully peer-review it and come up with some result somewhere in 2022.


So, hopefully there will be blinded researchers who analyse the data.

They'll probably use sequential stopping rules to take samples of incoming data.

If one of the treatments works much much better, then they'll almost certainly recommend that (but doctors will probably figure this out first, anyway).
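
For anyone curious what a sequential stopping rule can look like, here's a rough sketch of Wald's sequential probability ratio test for a single arm. The recovery rates, error levels, and simulated outcomes are all made-up illustration numbers, not anything from an actual trial design:

    import math
    import random

    # Rough sketch of a sequential stopping rule (Wald's SPRT) on one treatment arm.
    p0, p1 = 0.50, 0.65          # H0: 50% recovery rate, H1: 65% (made-up numbers)
    alpha, beta = 0.05, 0.20     # tolerated false-positive / false-negative rates

    upper = math.log((1 - beta) / alpha)   # cross this: stop and accept H1
    lower = math.log(beta / (1 - alpha))   # cross this: stop and accept H0

    random.seed(0)
    llr = 0.0                    # running log-likelihood ratio
    for n in range(1, 10_000):
        recovered = random.random() < 0.65        # simulated patient outcome
        llr += math.log(p1 / p0) if recovered else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            print(f"stop after {n} patients: evidence favours the treatment")
            break
        if llr <= lower:
            print(f"stop after {n} patients: evidence favours no improvement")
            break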


For someone unfamiliar with machine learning literature, can you please briefly explain how it helps here?


In a world where you have many options and have to figure out which is best by repeated experimentation, but where experimentation itself has some cost, you have a multi-armed bandit problem. (The name is supposed to evoke a room full of slot machines -- you want to find the one with the highest payouts by repeatedly playing them, while losing as little money as possible before you find it.)

For example, if you have a few medications, you might start by trying them all equally at random and then, as data comes in, use a bandit algorithm to gradually shift more and more new patients onto the ones that prove most effective, in a way that optimally trades off accurately estimating the effects with wasting time testing the less effective drugs.

Interestingly, the first formulation of the problem is due to Dr. Thompson at the Yale Pathology Department in the 1930s; he came up with Thompson sampling. So these are techniques that were originally designed for medical trials.

I think that designers of medical trials probably do have a good grasp of this stuff (some statistical estimators that originated in the medical world have even been successfully imported into reinforcement learning/MAB research), so probably they would be using a bandit-like technique if they felt it made sense.
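
As a rough sketch of how that could look in code, assuming hypothetical drugs, made-up true success rates, and an immediate per-patient outcome (the simplification discussed elsewhere in this thread):

    import random

    # Thompson sampling over three hypothetical treatments with made-up success rates.
    true_success = {"drug_A": 0.55, "drug_B": 0.60, "drug_C": 0.40}  # unknown in reality
    successes = {d: 1 for d in true_success}   # Beta(1, 1) uniform prior per drug
    failures = {d: 1 for d in true_success}
    assignments = {d: 0 for d in true_success}

    random.seed(0)
    for patient in range(5_000):
        # sample a plausible success rate for each drug from its current posterior
        draws = {d: random.betavariate(successes[d], failures[d]) for d in true_success}
        chosen = max(draws, key=draws.get)     # treat this patient with the best-looking drug
        assignments[chosen] += 1
        # observe the (simulated) outcome and update that drug's posterior
        if random.random() < true_success[chosen]:
            successes[chosen] += 1
        else:
            failures[chosen] += 1

    print(assignments)  # most patients end up on the drug with the highest true rate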


Coordination is complicated. Bandit problems also assume a clear/instant payout per lever pull. I suppose WHO could do this on the back end.


Every patient would be treated with a randomly chosen drug/treatment. As treatment results accumulate, a multi-armed bandit algorithm would adjust the probabilities so that the most effective treatment is used more often.

For example, in Thompson sampling the probability of choosing an option is equal to the probability of that option being the best one given the evidence so far.

The aim is to maximize reward (successful treatments) while spending as little time as possible on exploration (testing less effective treatments).
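
A small illustration of that probability-matching property, with made-up counts of recoveries and non-recoveries for two treatments:

    import random

    # Estimate P(treatment A is better than treatment B | data so far) by sampling
    # from Beta posteriors. The counts below are made up purely for illustration.
    a_succ, a_fail = 30, 20   # treatment A: 30 recoveries, 20 non-recoveries
    b_succ, b_fail = 25, 25   # treatment B: 25 recoveries, 25 non-recoveries

    random.seed(0)
    samples = 100_000
    a_best = sum(
        random.betavariate(a_succ + 1, a_fail + 1) > random.betavariate(b_succ + 1, b_fail + 1)
        for _ in range(samples)
    )
    print(f"P(A is the better treatment) ~ {a_best / samples:.2f}")
    # Under Thompson sampling this is also the probability that the next
    # patient would be assigned treatment A.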


Delay is a real problem here.


The trial would take longer.


Do you have a source for the last part?


A good synopsis of current understanding (as of last Friday) was a UCSF Medicine Grand Rounds broadcast. I've bookmarked the most relevant slide to our discussion in particular: https://youtu.be/bt-BzEve46Y?t=2349

There is significant lung and myocardial injury, and papers have homed in on ARDS as a real problem.


This is a wonderful resource. Thank you for pointing it out.


Not OP, but here's a Lancet correspondence on the topic (includes references):

https://www.thelancet.com/journals/lancet/article/PIIS0140-6...


Last night I watched a YouTube video about COVID-19 symptoms and stages of the disease and it did touch on the points brought up by OP.


Could you share the URL?


Not GP but this video gives a good overview: https://www.youtube.com/watch?v=BtN-goy9VOY


Maybe OP is referring to this one?

https://www.youtube.com/watch?v=BtN-goy9VOY


This is specifically the video I watched yesterday.

https://www.youtube.com/watch?v=OOJqHPfG7pA


oh I should have guessed :)

if you want some interesting details from American MDs there's this podcast

http://www.microbe.tv/twiv/twiv-593/

lots of subtle details that you can't get on simplified stats


This doesn't need a source. It's common clinical knowledge and OP is right. Unfortunately, preliminary results using our usual immunomodulatory drugs have been catastrophic.


No, sorry, you still need a source. In fact cytokine storms aren't the only driver of pneumonia and we don't know yet exactly what the mechanism(s) behind ARDS is. Immunology is outrageously complicated, and not well suited to that kind of pronouncement.

You just have to do the science, there's no way around it. And realistically we may not have time.


I will reformulate, then.

In the current state of knowledge about both ARDS and Covid and given the time available, clinicians rely on traditional teachings to care for covid-related ARDS until solid evidence can be provided.


Which company do you work for and does it share your values in this question?


I work for a small robotics company in the UK, I haven't worked in social media or communication if that's your question.


"The grapes are sour anyway" is the standard open source answer for decades..


Because I don't have a static IP.


Most people keep the same IP address for months. Your machine can update the address a public domain or subdomain points to on the rare occasion it reboots. If content is cached by hosts that have previously viewed it, then this won't even cause an interruption.

With IPv6 one would suppose that it would be easy to keep the same address given that there is virtually no chance that anyone could accidentally be assigned it.
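
As a sketch of what that machine-side update can look like: the update endpoint, token, and hostname below are hypothetical placeholders; every dynamic-DNS or DNS-provider API has its own real URL and auth scheme.

    import urllib.request

    # Sketch of a self-hosted dynamic-DNS updater. UPDATE_URL, TOKEN and HOSTNAME
    # are hypothetical placeholders; substitute your provider's real API.
    UPDATE_URL = "https://dns.example.com/update"
    TOKEN = "replace-with-a-real-api-token"
    HOSTNAME = "home.example.com"

    def current_public_ip() -> str:
        # api.ipify.org returns the caller's public IP as plain text
        with urllib.request.urlopen("https://api.ipify.org") as resp:
            return resp.read().decode().strip()

    def update_record(ip: str) -> None:
        req = urllib.request.Request(
            f"{UPDATE_URL}?hostname={HOSTNAME}&ip={ip}",
            headers={"Authorization": f"Bearer {TOKEN}"},
        )
        urllib.request.urlopen(req)

    if __name__ == "__main__":
        # run from cron or a boot script so the record follows the rare IP change
        update_record(current_public_ip())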


Comparing a stable sort to a normal sort is unfair. Compare it with std::stable_sort.


That is a fair point, I missed that quadsort is claimed to be stable.


I agree it's unfair. The point of my benchmark was that the OP's claim that "quadsort is faster than quicksort" is false.


libstdc++'s std::sort doesn't implement quicksort per se. It implements introsort[1]. I'm curious how a pure C implementation of introsort would fare against std::sort.

[1]: <https://en.wikipedia.org/wiki/Introsort>


" It begins with quicksort, it switches to heapsort when the recursion depth exceeds a level based on (the logarithm of) the number of elements being sorted and it switches to insertionsort when the number of elements is below some threshold. "


It's just quicksort with insertion sort for small base cases.


You forgot about heapsort. It's a combination of three sorting algorithms, not two. However trivial the difference may seem, I'd still prefer to look at a "C introsort vs C++ introsort" benchmark than a "C quicksort vs C++ often quicksort, but not really" benchmark.


Too late to edit, but I read the article again and they compare it favorably to qsort, so it makes sense to point out that it's worse than std::sort.


The reason cafes offer wifi is that otherwise people would stop coming.


[citation needed]

When was the last time you chose your coffee shop based on whether the WiFi was offered?


When my homework assignment required internet for research and the coffee shop was my choice for studying.


Yesterday.


Not every coffee shop has wifi, apparently. I wouldn't go to a coffee shop without wifi.


If you base it on origin, couldn't that have false positives too? Like, say, other less privileged apps?


A cost of doing business is one of the most essential expenses there is.


The absurdity of an “essential price”... essential to what, class?


To doing business? Doesn't seem absurd at all. Do you think business should just land in your lap for free without having to spend anything for it? That's what seems absurd to me.


"business" is just one mode of industrial production, even in a marketplace. So, essential for business, perhaps, but hardly essential in a general sense.


If the owner of a business cannot afford, or won't frivolously spend, $5/day on a coffee, he or she won't spend money on a SaaS feature that he or she can get for free. Even if he would, to get to $500/mo the SaaS would need to find one hundred such owners.

If the SaaS instead prices its service at $100/mo, then people for whom an extra $5/day is a big deal won't be looking at this service at all, and won't be contacting the owner about implementing yet another feature, etc. To get to $500/mo in revenue, the SaaS would need to find only 5 such owners.

