What's the granularity of the prediction (I'm not sure if that's the correct word? I'm not a meteorologist)? Region level (100s km)? City level (a few 10s of kms)? Block level (a few kms)?
When you say it's on par with global weather models, how is "weather prediction accuracy" measured?
Very good question! These models are trained on ~40 years of ERA5 data. You can think of ERA5 as past forecasts from numerical models integrated with real observational data, giving a continuous record of weather parameters (temperature, wind, etc.). The model resolution is therefore the ERA5 grid resolution: 0.25 degrees (roughly 28 km x 28 km at the equator).
The way accuracy is measured is by picking targets (say, 2-meter temperature at a given lat/lon, forecast 24h earlier) and comparing the forecast against what actually happened, using RMSE and ACC (anomaly correlation coefficient). For instance, in Google's GraphCast paper they pick 1380 targets and the model outperforms NWPs on about 90% of them.
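For a very simplified picture of those two metrics, here is a toy sketch (not the papers' actual evaluation code: real scoring is latitude-weighted and averaged over many forecasts, and `climatology` here just stands in for the long-term mean of the target variable):

    // Toy sketch of the two headline metrics over a set of grid points.
    function rmse(forecast: number[], observed: number[]): number {
      const sumSq = forecast.reduce((s, f, i) => s + (f - observed[i]) ** 2, 0);
      return Math.sqrt(sumSq / forecast.length);
    }

    // ACC correlates the *anomalies* (departures from climatology), so a model
    // only scores well if it captures how the weather differs from average.
    function acc(forecast: number[], observed: number[], climatology: number[]): number {
      const fA = forecast.map((f, i) => f - climatology[i]);
      const oA = observed.map((o, i) => o - climatology[i]);
      const dot = fA.reduce((s, v, i) => s + v * oA[i], 0);
      const norm = Math.sqrt(fA.reduce((s, v) => s + v * v, 0)) *
                   Math.sqrt(oA.reduce((s, v) => s + v * v, 0));
      return dot / norm;
    }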
To add to this, there are other ML models with higher resolution. For example, Google's MetNet-3 uses satellite and radar imagery plus ground measurements, and its resolution is 1 km x 1 km. And we are currently working on training a "nano" version of this!
1. Glue4 is not query storage, nor technically a "real-time" database, so the trade-offs are different (you don't have to use a proprietary SDK to query your data; everything is in your Redux store as a JSON object). It's also a plug-and-play Redux add-on: you don't need to change anything about your current application.
2. It's represented as JSON-serializable data, and the storage abstraction is an implementation detail. Currently it's literally serialised to disk, but we could just as well store it in DynamoDB, Mongo, PostgreSQL's JSONB column, etc. We will eventually support complex data (images, videos, BigInt, etc.) and transparently serialise/deserialise it and persist it in the appropriate storage (e.g. if you add an image to your Redux state as a user avatar, we will upload it to blob storage and serve it through a CDN for future fetches, all transparent to you as the developer). There's a rough sketch of the general idea after this list.
2.a.: Yes, for the current version that is the case. Local persistence (offline first) is definitely something we want to support down the road!
3. A high-level roadmap is actually available on our website if you scroll down :).
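To make points 1 and 2 concrete, here's a minimal sketch of the "plug and play" idea. This is not Glue4's actual API (the names and backend are invented for illustration); it just shows a middleware serialising the JSON state after each action and handing it to a pluggable storage backend:

    import { createStore, applyMiddleware, Middleware } from "redux";

    // Pluggable persistence target: disk today, DynamoDB/Mongo/JSONB later.
    interface StorageBackend {
      save(snapshot: string): Promise<void>;
    }

    // Stand-in backend for the sketch; a real one would write somewhere durable.
    const logBackend: StorageBackend = {
      save: async (snapshot) => console.log("persisted:", snapshot),
    };

    // After every action, serialise the (JSON-serialisable) state and persist it.
    const persist = (backend: StorageBackend): Middleware =>
      (store) => (next) => (action: any) => {
        const result = next(action);                          // reducers run first
        void backend.save(JSON.stringify(store.getState()));  // fire-and-forget
        return result;
      };

    // The app itself doesn't change: persistence is bolted on at store creation.
    const counter = (state = { count: 0 }, action: any) =>
      action.type === "inc" ? { count: state.count + 1 } : state;

    const store = createStore(counter, applyMiddleware(persist(logBackend)));
    store.dispatch({ type: "inc" }); // state is now serialised and persisted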
You are referring to Pokémon (notice the o where there was an a previously). And you are correct, the é has an accent on it (though I've always wondered why -- Pokémon is an abbreviation of "Pocket Monsters", and neither of those words contains an accented e!).
My complaint is that a number on the stack before a command can mean two different things in different contexts. Apparently it is the responsibility of each command to consume numbers off the stack and choose what to do with them.
...Which doesn't sound as bad to me today as it did yesterday. I dunno.
> were HIV to mutate away from being able to bind to those chunks it too would become less virulent when attacking the real proteins.
Forgive my ignorance, but are there any indicators to predict whether a virus will evolve one way or another?
What I mean is, once it comes in contact with these mimic proteins, instead of mutating away from binding to them, can it not mutate in such a way that it binds even more aggressively, to more proteins, in an effort to combat the "false positives"? So it would become even more virulent in individuals who have not received this treatment, while maintaining its current virulence in individuals receiving it?
The mutations themselves are random. Generally it's reasonable to think of mutations as movement across an energetic landscape. And like in physics, nearly anything is possible if you give it enough time/energy. However, if constrained, the shortest path is usually the most likely.
Back to your question - is there an indicator to predict whether a virus will evolve towards or away from its current effectiveness? Ideally, you'd put enough constraints on the virus in other ways (in addition to the concept/treatment above) that it wouldn't have the energy/time to reach the much more complicated state of binding more proteins. Every additional mechanism a virus carries is a significant penalty for something as compact and efficient as a virus. But in the end there is likely no way to make any treatment perfect and impossible to evade. The best we can do is defend.
What happens if you don't set keymap "gg", "G", etc? Would they not work, or...? (Currently not on a *nix machine to be able to test it out, or else I wouldn't ask such a simple question :P)
"G" works with a number for recalling shell history. For example, "4G" recalls the fourth command in the shell history. However, "G" alone without a number does not work as it does in vi (jump to last line). I suppose that was the desired functionality.
It makes sense for "gg" to be absent, as it is not a vi command. It is a "vimism". The typical vi command for moving to the first line is "1G".
My guess is that you redefine them to mean the same thing semantically in a new domain. What I mean is: gg and G let you go to the top and the bottom of the file, right? But on the command line, what is the top of your "file"? And what is the bottom?
I can only speak for the Emacs capabilities of readline: all commands which would usually change the line (previous/next-line, beginning/end-of-buffer) will use the history as the buffer. All of them preserve the line you are currently typing, which will be the end of the buffer.
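For what it's worth, if you're on bash/readline in vi editing mode, a couple of lines in ~/.inputrc should give "gg" and "G" exactly those history-as-buffer semantics (a sketch of the idea, not taken from the keymap being discussed):

    # ~/.inputrc
    set editing-mode vi
    $if mode=vi
    set keymap vi-command
    # treat the history as the "file": gg -> oldest entry,
    # G -> the line you're currently typing (the end of the "buffer")
    "gg": beginning-of-history
    "G": end-of-history
    $endif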
The abstract makes no mention of -130°C, but digging into the actual paper (can be read here: www.21cm.com/pdfs/cryopreservation_advances.pdf), it does mention:
"Fig. 13. Confinement of ice formation to the pelvis of a rabbit kidney that was perfused with M22 at 22°C for 25min, bisected, and allowed to passively cool in air in a CryoStar freezer at about 130°C... The vast majority of the kidney appears to have vitrified and is indistinguishable from the appearance of the kidney prior to cooling." (Emphasis mine)
If you just need occasional printing done locally (so you can pick it up, inspect it, or ask questions in person), you can use this website to find a shop that does 3D printing:
http://www.3dhubs.com/
Might be less hassle than renting an actual machine.
The Newsweek headline is pretty misleading, although the study lends itself to being misread.
It looks as though the "emitter" of information (the person who begins the communication) used two different motor movements -- one that represents a 1 and the other that represents a 0 -- to communicate a stream of binary information. An EEG picked up those motor-based signals from the subject's brain.
Then that binary information was sent to another location, where transcranial magnetic stimulation was used on the "receivers," who sensed the 1s as visual stimuli and the 0s as no visual stimuli. After each bit was communicated, they told the researchers whether they saw any visual stimuli.
In other words, the "emitter" didn't just think of a word and that word somehow magically popped into the heads of the receivers. We're a loooooong way off from that.
In fact, if I'm understanding this correctly, the entire gimmick about words like "hola" and "ciao" being transmitted is misleading, or at least irrelevant in terms of what was actually achieved.
Really, the best way to sum this up is:
Binary information is transmitted with significant accuracy using non-invasive neural sensors (for detection) and non-invasive neural stimuli (for reception).
And indeed, I have a sneaking suspicion that neither the accurate detection of binary signals using EEG nor the accurate communication of binary signals using TMS is particularly groundbreaking ... in which case, the only thing novel about this study is that the researchers decided to pair the two.
EDIT: Actually, the paper seems to imply that accurate communication of binary signals in a computer-to-brain interface is indeed groundbreaking (from the introduction: "the realization of non-invasive CBI in humans remains elusive"), but if that were so, one would think the EEG component of the study introduces needless complexity; the researchers could just as easily have begun with a predefined stream of 1s and 0s, rather than trying to extract them from someone else's brain.
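To make the "it's just bits" point concrete, here's a toy sketch (the study's actual encoding scheme is surely different; this only shows that once you can move 1s and 0s between two heads, any word can ride on top of them):

    // "Emitter" side: turn a word into the bit stream the EEG actually carries.
    function encode(word: string): number[] {
      // 8 bits per character, most significant bit first
      return [...word].flatMap((ch) =>
        Array.from({ length: 8 }, (_, i) => (ch.charCodeAt(0) >> (7 - i)) & 1)
      );
    }

    // "Receiver" side: flash = 1, no flash = 0; reassemble the characters.
    function decode(bits: number[]): string {
      let out = "";
      for (let i = 0; i < bits.length; i += 8) {
        out += String.fromCharCode(
          bits.slice(i, i + 8).reduce((acc, b) => (acc << 1) | b, 0)
        );
      }
      return out;
    }

    const bits = encode("hola");
    console.log(bits.length);   // 32 phosphene / no-phosphene events
    console.log(decode(bits));  // "hola" -- the word itself was never 'transmitted'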
>In fact, if I'm understanding this correctly, the entire gimmick about words like "hola" and "ciao" being transmitted is misleading, or at least irrelevant in terms of what was actually achieved.
You're absolutely right; this study is neither novel nor groundbreaking.
On the sending side, forget 90%, success rates of 99% are easy to achieve with some types of BCI systems such as SSVEP or P300. As for the information detected, you might have heard of the P300 speller[1] which demonstrated an easy way to accurately spell out words using a BCI headset.
On the receiving side we see why they opted for binary communication. The 'telepathy' was nothing more than a flash of light being visible for a value of '1', achieved by blasting a region of the brain associated with vision with magnetic fields about as strong as an MRI. That sounds cool, but it is a technique that has been in use for over a decade[2].
So tl;dr, what these researchers have achieved is essentially stringing together two or three decades-old technologies in a not terribly original way.
It's also ridiculous (outright intellectual fraud, even) that they located the subjects in different cities, as if the internet connection would make it more of a breakthrough.
> I have a sneaking suspicion that neither the accurate detection of binary signals using EEG nor the accurate communication of binary signals using TMS is particularly groundbreaking... in which case, the only thing novel about this study is that the researchers decided to pair the two.
Indeed: to put it charitably, the whole thing is a farce. The EEG part is nothing new (note it's an off-the-shelf component from one of their sponsors/employers); and neither are electrically-induced phosphenes [1], which, by the way, in this case (non-invasive TMS) consist of
little more than single, barely perceptible, indistinct flashes induced every time the device is turned on, one for each 'bit' --about the crudest possible way to 'send' information directly into a brain, if you ask me.
But then, once you read their conflict of interest statement, all that ridiculous overselling suddenly makes much more sense.
An old metaphor: conventional brain-to-brain communication is literally serialization. The thought is serialized as a sequence of letters or sounds, as writing or talking, and deserialized by the reader/listener. As with data structures and JSON/XML, the structures and serial representation might be similar or utterly different. But this mapping is taken care of at each end, so that people can communicate with a common tongue or lingua franca, even when their own internal representations differ greatly from it, and from each other.
My take-home is that without an intermediate serial representation (and mappings to/from it), direct brain-to-brain communication will have to tackle the mapping between these different internal representations in some other way (assuming such differences exist, which I believe they do).
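For readers who don't live in the data-structure world, here is a tiny sketch of the metaphor (the names are invented for illustration): two programs with very different internal structures can only talk because each maps to and from a shared serial format, just as two speakers with different internal representations share a common language:

    // The shared "lingua franca": a wire format both sides agree on.
    type Wire = { who: string; message: string };

    // Side A's internal representation: nested objects.
    interface ThoughtA { speaker: { name: string }; words: string[] }
    const serializeA = (t: ThoughtA): string =>
      JSON.stringify({ who: t.speaker.name, message: t.words.join(" ") } as Wire);

    // Side B's internal representation: a flat record, nothing like A's.
    interface ThoughtB { name: string; text: string }
    const deserializeB = (wire: string): ThoughtB => {
      const parsed: Wire = JSON.parse(wire);
      return { name: parsed.who, text: parsed.message };
    };

    // Each end only handles its own mapping to/from the shared format;
    // neither ever sees the other's internal structure.
    const received = deserializeB(
      serializeA({ speaker: { name: "Ana" }, words: ["hola", "mundo"] })
    );
    console.log(received); // { name: "Ana", text: "hola mundo" }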
>> In other words, the "emitter" didn't just think of a word and that word somehow magically popped into the heads of the receivers. We're a loooooong way off from that.
No we're not, LOL. We already have it - it's called spoken language.
Cool, nonetheless!