> “Although the study found inadequate sleep duration was not an issue in brain atrophy in this study, we cannot say there is no association,” she said, noting that a previous CARDIA study showed that shorter sleep was associated with worse white matter integrity, indicating lower cognitive functioning.
That quote seems to directly contradict the headline.
From the public results alone[1] (I don't have a copy of the full study), they looked at the following things for correlation with brain decline:
* short sleep duration
* sleep quality
* difficulty initiating sleep (DIS)
* difficulty maintaining sleep (DMS)
* early morning awakening (EMA)
* daytime sleepiness
They found that only the middle four were correlated. I don't know exactly what "sleep quality" means here, but the others are pretty easy to understand. And the point is that the duration of a person's sleep is not what mattered; it was the quality.
Also, worth saying: these things were based on self-reported data, which is basically crap.
>To estimate the effects of sleep quality on the brain, the researchers surveyed approximately 600 adults on how well they slept. The participants were asked the same questions five years later and underwent brain scans 10 years after this.
This is press-release science. Maybe you can remember the latter three things, but I have sensors and whatnot in a fancypants mattress (i.e. I'm highly motivated to know), and my subjective opinion of my prior night's sleep is pretty uncorrelated with what the sensors say. I couldn't begin to tell you the quality of my sleep from a week ago.
You're wrong. The only evidence we have for gravity is the correlation of mass and attraction. The theory of gravity does not contain any hint of a reason why; it simply describes what we can observe. Correlation is the only evidence we have.
And you have not ruled out that complaining about sleep in various ways is itself a direct side effect of brain shrinkage, so that hypothesis remains open.
Sleep quality, if it excludes DIS, DMS, and EMA, usually refers to things like apneas, nasal congestion, digestion, noise or light in the room, etc.: disturbances that don't wake the person but do tax the brain.
I would describe low sleep quality as "difficulty entering or maintaining the restorative phases of sleep." It's the thing a sleep clinic measures with an EEG.
There are tasks, which are implemented as part of the runtime, and they appear to plan to integrate libuv in the future. Parts of the runtime seem to be fairly easy to hack on, and there are somewhat nice ways of interoperating with C, C++, and Rust.
It cost 6 billion to build the current collider (in an existing tunnel), and CERN has a budget of 1-2 billion per year. In terms of data, they are easily producing 100x what previous colliders did.
I tried AMD exactly once for Linux graphics; it was so unstable that I bought an NVIDIA card within a month and have never looked back. Potentially NVIDIA's approach of staying as far away as possible from the Linux kernel abstractions is much saner than playing ball with them.
The vast majority of Linux users use these "abstractions". Even in the Steam survey, which is going to be extremely biased in favor of NVIDIA users, Intel/AMD GPUs are the majority, and if you include the Steam Deck then it's no contest.
Compare the number of issues here: https://wiki.archlinux.org/title/Vulkan. The only issue with the NVIDIA driver is that there might be another (open-source) driver installed. AMD has several drivers which fail in different situations. The same has been true for OpenGL implementations: the only truly good implementation is the one by NVIDIA, and this is well known in the PC game development industry.
What is this trying to prove? It's just one random page of a publicly editable wiki. AMD/Intel having more entries may just be because they are more popular (which they indeed are), or because (being open source) it is actually easier to debug and solve problems, and/or a million other reasons.
Frankly, I think the only argument you need is that despite the huge advantage NVIDIA has on other platforms, the situation is almost completely reversed (even on Steam) for Linux. That strongly suggests that NVIDIA has, at the very least, a very poor reputation on Linux desktops.
It's a similar situation to terminology for the atom.
In Greek, it originally meant "the smallest indivisible unit of matter".
Scientists then took the name and applied it to the units of the various elements (hydrogen, gold, etc.) as different kinds of atoms.
So, this is like when computing took the idea of a neuron as "the smallest indivisible unit of memory and calculation" and ran with it.
Fast forward to now, when we know that each "atom" has a bunch of smaller stuff internally, but by now it's too late to change the terminology.
And now we also know that a biological "neuron" is something more like an embedded CPU or FPGA in its own right, with a bunch of computing and storage capability and modes of its own.
It is a very fad-driven field. Everyone brands everything. It isn't enough to give things boring titles like "stacked open linear dynamical system with selective observations and learned timestep".
That's half of it; the other half is pure social linguistics.
Try saying "stacked open linear dynamical system" more than three times and you're bound to come up with a token that conveys the same thing but is quicker to produce.
It's turtles all the way down with LLMs and your comment. People are just trying to maximize their token conversations.
That you can't backpropagate is a common misconception. There is recent work (I am one of the co-authors) that derives a precise analog of the backpropagation algorithm in spiking neural networks (https://arxiv.org/abs/2009.08378). It computes exact gradients and requires only communication at spike times during the backward pass. The reason this works is that "x is not differentiable" lacks a statement "with respect to y". It turns out that, since gradient computation is local, the gradient is defined almost everywhere. The gradient computation is only ill-defined at places where a spike would get added or deleted. This is similar to how, in a ReLU network, an exact zero input is "non-differentiable".
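For intuition, here is my own toy sketch (not the construction from the paper): in a non-leaky integrate-and-fire neuron with constant drive, the spike time is a smooth function of the weight, so its gradient is perfectly well-defined as long as the spike neither appears nor disappears. All constants below are arbitrary.

```python
# Toy sketch (not the EventProp algorithm from the paper): a non-leaky
# integrate-and-fire neuron with constant input current I_IN and weight w.
# Membrane potential V(t) = w * I_IN * t, so the spike time t* = V_TH / (w * I_IN)
# is a smooth function of w wherever the spike exists.
import numpy as np

V_TH = 1.0   # firing threshold (arbitrary units)
I_IN = 2.0   # constant input current (arbitrary units)

def spike_time(w):
    return V_TH / (w * I_IN) if w * I_IN > 0 else np.inf   # no spike otherwise

def dspike_time_dw(w):
    return -V_TH / (w**2 * I_IN)   # analytic derivative of t* w.r.t. w

w, eps = 0.5, 1e-6
numeric = (spike_time(w + eps) - spike_time(w - eps)) / (2 * eps)
print(dspike_time_dw(w), numeric)   # both ~ -2.0: a well-defined gradient
# The derivative only blows up as w -> 0, i.e. exactly where the spike
# would be deleted -- the ill-defined case mentioned above.
```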
I've got a technique where I simply record every spike event (its tick, source, and target) in a SQL database. This allows for quick recursive queries to determine the contributors to an output. I feel that if we stop chasing biological emulation and superscalar hardware, we can start to get clever with more traditional data-wrangling methods.
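As a rough sketch of that idea (the schema, neuron ids, and the definition of "contributor" as any earlier spike into an ancestor are mine, not the commenter's actual setup):

```python
# Hedged sketch of the "spike log in SQL" approach: log every spike, then
# walk backwards from an output neuron with a recursive query.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE spikes (
    tick   INTEGER,   -- simulation step the spike occurred on
    source INTEGER,   -- presynaptic neuron id
    target INTEGER    -- postsynaptic neuron id
)""")
con.executemany("INSERT INTO spikes VALUES (?, ?, ?)",
                [(1, 1, 2), (2, 2, 3), (2, 5, 3), (3, 3, 7)])

# Recursive CTE: starting from spikes into output neuron 7, keep pulling in
# earlier spikes that fed into any contributor already found.
rows = con.execute("""
WITH RECURSIVE contributors(tick, source, target) AS (
    SELECT tick, source, target FROM spikes WHERE target = 7
    UNION
    SELECT s.tick, s.source, s.target
    FROM spikes AS s
    JOIN contributors AS c ON s.target = c.source AND s.tick < c.tick
)
SELECT * FROM contributors ORDER BY tick
""").fetchall()
print(rows)   # [(1, 1, 2), (2, 2, 3), (2, 5, 3), (3, 3, 7)]
```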
I don't think a high-quality SNN model is going to require purpose-built hardware or a supercomputer to run. I think you will see emergence even with extremely rudimentary single-core, CPU-bound models if everything else is done well.
Event-driven simulation is a superpower when architected correctly in software. It means you can pick clever algorithms and lazily evaluate the network. The implication is that networks which would ordinarily be impossible to simulate in real time can now be simulated this way. You could keep neuron state in offline storage and bring it online as relevant action potentials are enqueued for future execution.
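A minimal sketch of such a lazily-evaluated, event-driven loop (the threshold, delay, weights, and tiny three-neuron chain are invented for illustration; a real system would add leak, refractory periods, and the offline-storage layer mentioned above):

```python
# Minimal event-driven SNN sketch: nothing is computed per tick; neuron
# state is only touched when a spike event is popped from the queue.
import heapq

THRESHOLD, DELAY = 1.0, 1
weights = {0: [(1, 0.6)], 1: [(2, 0.7)], 2: []}   # neuron -> [(target, weight)]
potential = {}                                     # lazily created neuron state
events = [(0, 0, 1.5)]                             # (time, target_neuron, input)

while events:
    t, n, w_in = heapq.heappop(events)             # work only when a spike arrives
    v = potential.get(n, 0.0) + w_in               # state pulled in on demand
    if v >= THRESHOLD:
        potential[n] = 0.0                         # reset after firing
        for target, w in weights[n]:
            heapq.heappush(events, (t + DELAY, target, w))
        print(f"t={t}: neuron {n} fired")
    else:
        potential[n] = v
# Neuron 2 never receives enough input, so its state is never even created.
```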
> The gradient computation is only ill-defined at places where a spike would get added or deleted.
I feel like I'm missing something here. If you do it naively, the gradient is zero when a spike isn't added or deleted, and infinite when it is, which is completely unhelpful.
Now the "natural" solution is to invent some differentiable approximation of the spiking network, and compute derivates of that, and hope that the approximation is close enough that optimizing it leads to the spiking network learning something useful.
A more principled version might be to inject some noise into the network. This would mean that you have a probability of spiking in a certain pattern (or better, a class of patterns that all have the same semantics). You could differentiate the probability of correct output with respect to the weights and try to drive it towards 1.
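A toy illustration of that noise-injection idea (my own sketch, not a claim about how any cited work does it): make the spike Bernoulli with a weight-dependent probability, so the probability of the desired output is differentiable in the weight and can be pushed towards 1 with a score-function (REINFORCE-style) update.

```python
# Stochastic spiking unit: P(spike) = sigmoid(w * x). The log-likelihood of
# the observed spike is differentiable in w even though the spike itself is 0/1.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, x, lr = 0.1, 1.0, 0.5
target_spike = 1                          # "correct output" = the neuron should fire

for step in range(200):
    p = sigmoid(w * x)                    # probability of spiking under the noise model
    spike = rng.random() < p              # stochastic forward pass
    reward = 1.0 if spike == target_spike else 0.0
    # d/dw log P(spike) = (spike - p) * x for a Bernoulli-sigmoid unit
    w += lr * reward * (spike - p) * x    # push up the probability of rewarded patterns

print(sigmoid(w * x))                     # -> close to 1: P(correct output) driven up
```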
People have tried using probabilistic SNNs because the gradient is well defined there. The issue is that it's computationally intractable to calculate, so you still have to use an approximation, and you're thus no better off than the people using a surrogate gradient of the non-probabilistic SNN.
Trying to figure that out right now :). There is a tension between an algorithm like this and the biophysical reality of actual brains. My intuition is that whatever the brain does is not so much an approximation of any known algorithm, but rather an analog of such an algorithm. Now that we know what gradient computation looks like in a spiking neural network, we can ask which assumptions are violated in the brain, just as people have done before for the vanilla backpropagation algorithm. There is some recent concrete experimental evidence that points towards a solution.
There is nothing theoretical that stops anyone from trying that; it is just a matter of engineering and architecture. It is (in principle) possible to define "spiking transformers", and there is more than one way to do so, but that does not necessarily mean you will immediately get good performance. Also, current-generation hardware doesn't simulate SNNs as efficiently, so that is another constraint.