> seems dumb to have electricity needing to be wasted when there is seawater to desalinate
That's a much more complicated problem. On an energy market, you have only one price to look at, and the battery operator can always buy, sell, or hold energy. The article here talks about optimizing this problem at 5-minute to several-hour intervals.
If you drop excess power into desalination, however, now you have two prices to worry about: energy and water. I also doubt we have 5-minute spot markets for water, so the operator would probably have to commit to some medium-term water delivery regardless of price.
This means that a desalinating firm takes on much more risk. This might still be profitable, but it's a long-term play based on a deep model of expected energy prices (i.e. knowing that energy is "always" almost free at noon in summer) rather than short-term time-shifting.
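To make the single-price decision concrete, here's a toy threshold policy for the battery operator: buy when cheap, sell when expensive, hold otherwise. All thresholds and sizes are made-up illustrative numbers, not anything from the article:

```python
# Toy sketch of battery arbitrage at 5-minute intervals.
# Buy below a low price, sell above a high one, hold otherwise --
# the one-price decision that makes batteries simpler than desal.

def arbitrage(prices, capacity_kwh=100.0, power_kwh_per_step=10.0,
              buy_below=20.0, sell_above=60.0):
    """Total profit from a simple threshold policy (prices in $/kWh)."""
    stored = 0.0
    profit = 0.0
    for price in prices:
        if price <= buy_below and stored < capacity_kwh:
            bought = min(power_kwh_per_step, capacity_kwh - stored)
            stored += bought
            profit -= bought * price
        elif price >= sell_above and stored > 0:
            sold = min(power_kwh_per_step, stored)
            stored -= sold
            profit += sold * price
    return profit

# One cheap interval followed by one expensive one:
print(arbitrage([10.0, 80.0]))  # buys 10 kWh at $10, sells at $80 -> 700.0
```

A desalinating operator would need a second price series (water) and delivery commitments in this loop, which is exactly the added risk described above.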
Desal plants are also extraordinarily expensive and need to operate at very high 'capacity factors' in order to pay off the capital investment required to build them. Operating for a few hours every day because your operating costs are low/negative only works if you don't have a hugely expensive piece of infrastructure depreciating as you wait for those prices to come down.
could we build them different if the goal is just to waste excess energy?
Why couldn’t it just be a giant heating element and some sort of steam condenser at the top and some way to flush it periodically?
It might burn some laughable 3 kWh per kg of water, but who cares? Every water utility on the coast could add a few megawatts of tea kettles and get opportunistic little splashes of water, in volumes small enough that they can probably already handle them. The brine discharge would be so small, disperse, and infrequent that it'd be easier to deal with, and it'd basically cost nothing.
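Running the numbers on that "tea kettle" idea, using the comment's own (deliberately wasteful) 3 kWh-per-kg figure; the plant size and dump duration below are made-up for illustration:

```python
# Back-of-envelope for the tea-kettle desalinator.
# 3 kWh/kg is the comment's assumed cost, far above the
# thermodynamic minimum -- the point is it doesn't matter.

ENERGY_PER_KG_KWH = 3.0

def water_from_dump(power_mw, hours):
    """Kilograms (roughly litres) of fresh water from dumping excess power."""
    energy_kwh = power_mw * 1000 * hours
    return energy_kwh / ENERGY_PER_KG_KWH

# A hypothetical 5 MW dump sustained for 2 hours:
print(water_from_dump(5, 2))  # 10,000 kWh -> about 3,300 kg (~3.3 m^3)
```

A few cubic metres per event really is a "little splash" next to a utility's daily throughput, which is consistent with the claim that existing infrastructure could absorb it.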
It doesn't only need to make economic sense now; you also need to be fairly certain that all the battery capacity likely to be added to the grid in the future will still allow this to be profitable by your expected break-even point.
> Adjust for those factors and the increased incidence disappears.
'Adjusting' for those factors builds in the assumption that they're independent of the thing you're trying to measure. If living near a smokestack is undesirable, then poorer/marginalized people will live there even if it also causes cancer.
I assume they meant if you look at roughly the same socioeconomic group that lives 500 miles from refineries as opposed to 500 meters you'll find similar numbers for cancer/other stuff. I'm not on either side of the fence because I don't know, just pointing out what was meant. I'd welcome statistics from either case.
The challenge is that it’s very unlikely that race/socioeconomic factors are causal in and of themselves, the reason why you would adjust for those variables is because they are tightly correlated with other causal factors that aren’t being observed directly, e.g. poorer healthcare availability, poorer access to healthy foods, etc.
Environmental pollution very reasonably can be hypothesized to be a causal mechanism behind cancer rates. Exposure to which is going to be heavily correlated with race and socioeconomics.
I may be misinterpreting OP, but their statement came off as “cancer maps are just maps of where poor non white people live, so it’s not the pollution”, but you can’t just “control” for things that way. Given the fact that environmental pollution is a hazard, there’s a reason why that demographic lives there that makes the exposure to pollution not independent from the demographic characteristics of the population.
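A toy simulation (all numbers made up) of why the adjustment is suspect when demographics and exposure aren't independent: here only proximity raises cancer risk, and socioeconomic status has no direct effect at all, yet SES "predicts" cancer whenever the pollution variable is left out of the model:

```python
# Ground truth in this simulation: proximity to the hazard causes
# cancer; low SES is correlated with proximity but causes nothing.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
near = rng.binomial(1, 0.5, n).astype(float)                 # lives near the smokestack
low_ses = (rng.random(n) < 0.2 + 0.6 * near).astype(float)   # poorer people end up nearer
cancer = (rng.random(n) < 0.01 + 0.05 * near).astype(float)  # risk driven by proximity only

def ols(y, *xs):
    """Least-squares coefficients: intercept first, then one per regressor."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(ols(cancer, low_ses)[1])         # ~0.03: SES looks harmful if pollution is omitted
print(ols(cancer, near, low_ses)[1:])  # near ~0.05, low_ses ~0 once proximity is modelled
```

The demographic variable soaks up the pollution signal precisely because who lives near the hazard isn't random, which is the non-independence point being made above.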
Isn't causality transitive though? It sounds like you're saying that low socioeconomic status causes poorer access to healthcare and healthy foods, and that those cause worse health outcomes. Yet you're claiming that low socioeconomic status doesn't cause worse health outcomes. That seems wrong to me.
Surely that's incorrect. The most obvious scenario is A causes B, B correlates with A, but B does not cause A. Whether causality is transitive is irrelevant.
The quote is typically brought up when there isn’t a direct causal relationship between two variables, not when the causality is reversed. e.g. ice cream sales and drownings. In both cases heat drives behavior, but neither cause each other.
> There isn't clear caselaw on this yet, but given that material you share with third parties other than your lawyers almost always is and sharing your strategy musing would probably be disastrous, you should plan accordingly.
Wouldn't this have largely been settled already with cloud computing? If I use gmail to write an e-mail to my lawyer, then I can't imagine that e-mail is unprivileged because it's "shared" with Google.
> First they were supposed to prevent COVID. That turned out to not be true,
In fact, the vaccines did a fantastic job at preventing covid's spread. The problem was that they were developed for (and had clinical trial results for) the variant-free covid-classic. By the time they were widely deployed, the virus in circulation was already one of several more-easily-spread variants.
Had covid not mutated into variants, it seems very likely that widespread vaccination would have given us true 'herd immunity'.
> That turned out to not be true, so then they were supposed to lessen the symptoms.
They were also always supposed to do this, by making infection less severe and more easily fought-off. Many vaccines operate this way, including the seasonal flu vaccine.
Remember that until clinical trial results came in, we weren't sure exactly what the vaccine was going to do. Something that "merely" helped keep elderly people alive would have been good enough, and by that standard what we ended up with was a fantastic success.
> Everyone I know who still frequently gets COVID had the shot.
As phrased, you've walked right into selection bias; perhaps those people you know choose to get the booster vaccines because they recognize they're more likely to be exposed to covid.
> There were tons of surprise side effects especially relating to women’s periods getting screwed up.
Treatments have side effects, but during the pandemic the right comparison was "vaccine plus none-to-mild covid" versus "no vaccine plus unmitigated covid."
Infection with covid has plenty of 'side effects', and women frequently report disrupted menstrual cycles from infections.
> The official story was that we were trying to make COVID die out before it mutated again, but to do that would have required vaccinating all animals as well.
As far as I'm aware, every circulating covid variant that we've traced has likely come from mutations inside humans, not zoonotic intermixing.
> Despite the above, there was tremendous pressure from left wing institutions to vaccinate everyone, proving that their values were about pharmaceutical profits instead of my body my choice.
Now you're just being disingenuous. "Left wing institutions" were at the forefront of vaccine equality among nations, arguing strongly for coordinated buying approaches to ensure access to developing nations. If anything, that contributed to early politicization of the vaccine as the counter-reaction was "if this vaccine is so good, we need to have it for ourselves first."
The values involved were simply that of the common/societal good against individualist freedom, the same ones at play in lockdown/distancing rules.
> Anyone who investigated the origin story became spontaneously racist. But then later morality reversed and it was fine.
Note that this has absolutely nothing to do with vaccines.
> > The official story was that we were trying to make COVID die out before it mutated again, but to do that would have required vaccinating all animals as well.
> As far as I'm aware, every circulating covid variant that we've traced has likely come from mutations inside humans, not zoonotic intermixing.
IIRC the original Omicron has a strange lineage that looks like a second spillover event, from some animal that had been infected with one of the first variants instead of any that were going around at the time.
> Many times it is very difficult to “get into” a new field if you don’t have an author known by the editors.
Although there's plenty of critique to go around about the review system, machine learning typically uses double-blind peer review for the major conferences. That blinding is often imperfect (e.g. if a paper very obviously uses a dataset or cluster proprietary to a major company), but it's not so leaky that a paper would be rejected simply because its authors are unknown.
> The reason could be that clever titles add "novelty", but not much substance.
Another reason might be that clever titles stand out as bold claims, working counter to the common practice of academic humility. If a paper seems to be downplaying its own significance, then why should a casual reader (or reviewer, at first impression) give it the benefit of the doubt?
That's not to say that papers should over-claim, and I suspect that doing so might lead to a harsh counter-reaction from reviewers who feel like they've been set up to have their time wasted. Nonetheless, "project confidence" might be good practice in academia as well as one's social life.
Looking at the differences between the rejected and accepted papers, I don't think it's quite a matter of 'avoiding citations'. The changes seem to break along two lines.
1. Avoid overly general citations. The rejected paper leads with references to image captioning tasks in general and visual question-answering, neither of which is directly advanced by the described study. The accepted paper avoids these general citations in favour of more specific literature that works directly on the image-comparison task.
2. Don't lead with citations. The accepted paper has its citations at the end of the introduction, on page 2.
I think that each change is reasonably justified.
As for avoiding overly general citations: the common practice in machine learning is to publish short papers (10 pages or fewer for the main body), and column inches spent on an exhaustive literature review are inches not spent clearly describing the new study.
Placing citations towards the end of the introduction is consistent with the "inverted pyramid" school of writing, most commonly seen in journalism. Leaving the review process out of it for the moment, an ordinary researcher reading the article probably would rather know what the paper is claiming more than what the paper is citing. A page-one that can tell a reader whether they'll be interested in the rest of the article does readers a service.
Without taking a stand on which metric is better for social and lifestyle comparisons, the grandparent poster said 'valuation of assets' rather than inflation. Per Yahoo Finance (https://finance.yahoo.com/quote/%5ESP500TR/), the S&P500 is about 340% of its mid-2015 level on a total return basis (reinvesting dividends).
Today's $700-millionaire would have been a 'mere' $200-millionaire in mid-2015 if invested fully in equities. That allocation is probably ballpark reasonable, since with that kind of net worth investment horizons are very long (multiple generations of inheritors) and the investors themselves can be nearly risk-neutral rather than risk-averse.
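The arithmetic behind that comparison, using the roughly 3.4x total-return ratio quoted above (not independently recomputed here):

```python
# S&P 500 total return, mid-2015 -> today, per the figure in the comment.
TOTAL_RETURN_RATIO = 3.4

net_worth_today = 700e6
equivalent_mid_2015 = net_worth_today / TOTAL_RETURN_RATIO
print(f"${equivalent_mid_2015 / 1e6:.0f}M")  # prints "$206M", the 'mere' $200M figure
```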
> And is this a real thing that can be done? How do you make the sonic boom go away?
You can't completely make the sonic boom go away, but you can change how it's experienced on the ground through careful design of the airplane shape. The X-59 Quesst (https://en.wikipedia.org/wiki/Lockheed_Martin_X-59_Quesst#De...) is an experimental supersonic aircraft that reportedly reduces the sound of the sonic boom to something comparable to a 'car door closing', mostly by ensuring that different shockwaves created by different parts of the plane don't combine into a single, stronger boom.
> can it be used to make non-supersonic planes quieter, too?
Sadly no; the principles of design are very different for sonic booms and ordinary plane noise. For ordinary aircraft, the big causes of noise are the engines themselves and turbulence over the airframe.
That being said, the problem of ordinary aircraft noise is usually limited to the areas near an airport. Sonic booms are noticed underneath the entire flight path of a supersonic aircraft, even when it's flying at altitude.
> A weakness that goes hand-in-hand with the lack of peer review
Peer review is not well equipped to catch fraud and deliberate deception. A half-competent fraud will result in data that looks reasonable at first glance, and peer reviewers aren't in the business of trying to replicate studies and results.
Instead, peer review is better at catching papers that either have internal quality problems (e.g. proofs or arguments that don't prove what they claim to prove) or are missing links to a crucial part of the literature (e.g. claiming an already-known result as novel). Here, the value of peer review is more ambiguous. It certainly improves the quality of the paper, but it also delays its publication by a few months.
The machine learning literature gets around this by having almost everything available in preprint with peer-reviewed conferences acting as post-facto gatekeepers, but that just reintroduces the problem of non-peer-reviewed research being seen and cited.