
Is it necessary to simulate the quantum chemistry of a biological neural network in order to functionally approximate a BNN with an ANN?

A biological systems and fields model for cognition:

Spreading activation in a dynamic graph with cycles and magnitudes ("activation potentials") that change as neurally regulated, heart-generated electrical potentials reverberate fluidically along intersecting paths, plus a partially extra-cerebral induced field which nonlinearly affects the original signal source through local feedback (representational shift).

Representational shift: "Neurons Are Fickle. Electric Fields Are More Reliable for Information" (2022) https://neurosciencenews.com/electric-field-neuroscience-201...

Spreading activation: https://en.wikipedia.org/wiki/Spreading_activation
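For concreteness, a minimal sketch of spreading activation over a weighted directed graph with cycles; the node names, decay, and threshold below are illustrative assumptions, not anything from the linked article:

    # Minimal spreading-activation sketch over a weighted directed graph with cycles.
    # Node names, decay, and threshold are illustrative, not from the thread.

    def spread_activation(graph, activation, decay=0.8, threshold=0.01, steps=10):
        """graph: {node: {neighbor: weight}}; activation: {node: initial activation}."""
        act = dict(activation)
        for _ in range(steps):
            new_act = {node: 0.0 for node in graph}
            for node, a in act.items():
                if a < threshold:
                    continue
                for neighbor, weight in graph.get(node, {}).items():
                    # Each step, activation leaks to neighbors, attenuated by edge weight and decay.
                    new_act[neighbor] = new_act.get(neighbor, 0.0) + a * weight * decay
            act = new_act
        return act

    # A tiny cyclic graph: A -> B -> C -> A
    graph = {
        "A": {"B": 0.9},
        "B": {"C": 0.5},
        "C": {"A": 0.7},
    }
    print(spread_activation(graph, {"A": 1.0, "B": 0.0, "C": 0.0}))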

Re: 11D (11-dimensional) biological network hyperparameters, ripples in (hippocampal, prefrontal) association networks: https://news.ycombinator.com/item?id=18218504

M-theory (string theory) is also 11D, but IIUC they're not the same dimensions.

Diffusion suggests fluids, which in physics and chaos theory suggest Bernoulli's fluid models (and other compact, non-differentiable descriptions like Navier-Stokes), which are part of SQG (Superfluid Quantum Gravity) postulates.
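As a toy illustration of the diffusion part only (not the SQG claims), a 1D heat-equation step via explicit finite differences; the grid size and coefficients are arbitrary:

    import numpy as np

    # Explicit finite-difference step for 1D diffusion, u_t = D * u_xx, with periodic boundaries.
    # Grid size, D, dx, dt are illustrative; dt must satisfy dt <= dx**2 / (2*D) for stability.
    def diffuse(u, D=0.1, dx=1.0, dt=1.0, steps=100):
        u = u.astype(float).copy()
        for _ in range(steps):
            lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)  # discrete Laplacian
            u += D * dt / dx**2 * lap
        return u

    u0 = np.zeros(64)
    u0[32] = 1.0  # a point source spreads out over time
    print(diffuse(u0).round(3))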

Can e.g. ONNX or RDF (with or without bnodes) represent a complete connectome image/map?

Connectome: https://en.wikipedia.org/wiki/Connectome
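As a sketch of the RDF-with-bnodes option, here is one synapse encoded as a blank node with rdflib; the vocabulary URIs are made up for illustration and are not a standard connectome ontology:

    from rdflib import Graph, BNode, Literal, Namespace
    from rdflib.namespace import RDF

    # Hypothetical vocabulary for illustration only; not a standard connectome ontology.
    EX = Namespace("http://example.org/connectome#")

    g = Graph()
    neuron_a = EX["neuron/1"]
    neuron_b = EX["neuron/2"]

    # A blank node (bnode) stands in for the synapse connecting the two neurons.
    synapse = BNode()
    g.add((synapse, RDF.type, EX.Synapse))
    g.add((synapse, EX.presynaptic, neuron_a))
    g.add((synapse, EX.postsynaptic, neuron_b))
    g.add((synapse, EX.weight, Literal(0.42)))

    print(g.serialize(format="turtle"))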




Wave Field recordings are probably the most complete known descriptions of the brain and its nonlinear fields?

How such fields relate to one or more quantum wave functions might entail the near-necessity of the QFT: the Quantum Fourier Transform.
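For reference, the Quantum Fourier Transform on n qubits is just the unitary DFT matrix F[j,k] = omega^(j*k)/sqrt(N), with omega = exp(2*pi*i/N) and N = 2^n. A small numpy construction (a classical simulation of the matrix, not a quantum circuit):

    import numpy as np

    def qft_matrix(n_qubits):
        """Unitary matrix of the Quantum Fourier Transform on n_qubits qubits.
        F[j, k] = omega**(j*k) / sqrt(N), omega = exp(2*pi*i / N), N = 2**n_qubits."""
        N = 2 ** n_qubits
        omega = np.exp(2j * np.pi / N)
        j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        return omega ** (j * k) / np.sqrt(N)

    F = qft_matrix(2)
    # Unitarity check: F @ F.conj().T should be the identity.
    print(np.allclose(F @ F.conj().T, np.eye(4)))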

When you replace the self-attention part of a Transformer with a classical Fast Fourier Transform (FFT): ... (a sketch of that mixing step follows the quote below). From https://medium.com/syncedreview/google-replaces-bert-self-at... :

> > New research from a Google team proposes replacing the self-attention sublayers with simple linear transformations that “mix” input tokens to significantly speed up the transformer encoder with limited accuracy cost. Even more surprisingly, the team discovers that replacing the self-attention sublayer with a standard, unparameterized Fourier Transform achieves 92 percent of the accuracy of BERT on the GLUE benchmark, with training times that are seven times faster on GPUs and twice as fast on TPUs.

> > Would Transformers (with self-attention) make which things better? Maybe QFT? There are quantum chemical interactions in the brain. Are they necessary or relevant, and at what fidelity of emulation of a non-discrete brain?

> Quantum Fourier Transform: https://en.wikipedia.org/wiki/Quantum_Fourier_transform
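A minimal sketch of the Fourier mixing sublayer described in the quote above (in the spirit of FNet): an unparameterized 2D DFT over the sequence and hidden dimensions, keeping only the real part. Shapes are illustrative:

    import numpy as np

    def fourier_mixing(x):
        """FNet-style token mixing: 2D DFT over sequence and hidden dims, keep the real part.
        x: array of shape (seq_len, hidden_dim). No learned parameters are involved."""
        return np.fft.fft2(x).real

    # Illustrative shapes: 8 tokens, 16-dimensional embeddings.
    tokens = np.random.randn(8, 16)
    mixed = fourier_mixing(tokens)
    print(mixed.shape)  # (8, 16)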


The QFT acronym annoyingly reminds me of Quantum Field Theory rather than the Quantum Fourier Transform ...


Yeah. And resolve QFT + { QG || SQG }






