Karl Friston: a neuroscientist who might hold the key to true AI (wired.com)
190 points by MKais on Nov 20, 2018 | hide | past | favorite | 98 comments



It's worth noting that 'free energy' is just the 'evidence lower bound' that is optimized by a large portion of today's machine learning algorithms (i.e. variational auto-encoders).

It's also worth noting that 'predictive coding' - a dominant paradigm in neuroscience - is a form of free energy minimization.

Moreover, free energy minimization (as predictive coding) approximates the backpropagation algorithm [1], but in a biologically plausible fashion. In fact, most biologically plausible deep learning approaches use some form of prediction error signal, and are therefore functionally akin to predictive coding.

Which is all just to say that the notion of free energy minimization is somewhat commonplace in both neuroscience and machine learning.

[1] https://www.ncbi.nlm.nih.gov/pubmed/28333583
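
To make the correspondence concrete, here is a toy sketch of that quantity (the ELBO, i.e. the negative variational free energy) for a one-dimensional Gaussian model. This is purely my own illustrative example, not code from the cited paper; the model and numbers are made up.

```python
# Minimal sketch: ELBO = E_q[ log p(x|z) ] - KL( q(z) || p(z) )
# for p(z) = N(0, 1), p(x|z) = N(z, obs_var), and a Gaussian q(z).
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) )."""
    return 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def elbo(x, mu_q, var_q, obs_var=1.0, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.normal(mu_q, np.sqrt(var_q), size=n_samples)              # samples from q(z)
    log_lik = -0.5 * (np.log(2 * np.pi * obs_var) + (x - z) ** 2 / obs_var)
    accuracy = log_lik.mean()                                          # E_q[ log p(x|z) ]
    complexity = gaussian_kl(mu_q, var_q, 0.0, 1.0)                    # KL from the prior
    return accuracy - complexity        # variational free energy is the negative of this

print(elbo(x=1.5, mu_q=1.0, var_q=0.5))
```

Maximising this over the parameters of q (and of the generative model) is exactly what a VAE does; "free energy minimisation" is the same objective with the sign flipped.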


Clickbait article.

It is noteworthy that Friston has, as of November 2018, neither (1) formalised free energy minimisation (FIM) with sufficient precision that it goes beyond a vague research heuristic that can be (and is) adapted in ad-hoc ways; nor (2) come up with sufficient empirical evidence for his claim that FIM is how human or animal brains work -- despite the recent revolution in our ability to measure live neurons, and despite having been asked (in private) by working neuroscientists, including at his university.


Agreed. No technical detail means quackery, something the field has always had in abundance.


Oh boy, yes. Neuroscience is all about serving the egos of the scientists, and it can often be anything but scientific.

(Although, as a failed vision scientist myself, I may be credibly accused of some disqualifying bias in this regard).


The title is definitely clickbait-y, but the article reads like a fairly honest (maybe too positive) profile of Friston.

For what it's worth, I agree that FIM not being formalised is a point against it, but I wouldn't say there's anything ad-hoc about how it applies when fitting it to e.g. schizophrenia.

For what it's worth, this quote sums up how I look at it[1]:

>Friston mentions many times that free energy is “almost tautological”, and one of the neuroscientists I talked to who claimed to half-understand it said it should be viewed more as an elegant way of looking at things than as a scientific theory per se.

1. http://slatestarcodex.com/2018/03/04/god-help-us-lets-try-to...


Well, a significant portion of empirical neuroscience works under the assumption that parts of the brain operate according to a predictive coding scheme, and there are countless studies that support this notion.

As predictive coding is a form of free energy minimization (under Gaussian assumptions), this implicitly provides empirical evidence.

As for the request to test the idea on live neurons, "In vitro neural networks minimise variational free energy" [1]

https://www.biorxiv.org/content/early/2018/05/16/323550


Notes from the last time I tried to understand this - https://www.lesswrong.com/posts/wpZJvgQ4HvJE2bysy/god-help-u...


Thanks - I found this very useful.

> From the Alius interview:

"The free energy principle stands in stark distinction to things like predictive coding and the Bayesian brain hypothesis. This is because the free energy principle is what it is — a principle. Like Hamilton’s Principle of Stationary Action, it cannot be falsified. It cannot be disproven. In fact, there’s not much you can do with it, unless you ask whether measurable systems conform to the principle."

This is a big kahuna burger of a bullet to bite!


That was great, thanks.

This all reminds me of Socrates' claim in the Theaetetus that "Philosophy begins in wonder," wonder being roughly equivalent to the desire to reduce uncertainty, and the Platonic idea that philosophy is central to the good life.

Which leads to the ubiquitous Whitehead quote about western philosophy consisting of footnotes to Plato...


Thanks, it was a great read!

From what I get, this whole thing is more like an abstract ruleset describing how decision making in the brain works, rather than a brain model. Or am I wrong - is there anyone who has built a network model based on this theory?


In terms of the free energy 'principle', it makes no predictions about how free energy is minimized. But there have been multiple process theories suggested, most notably predictive coding (which is a dominant paradigm in neuroscience) [1] and variational message passing [2].

[1] https://en.wikipedia.org/wiki/Predictive_coding
[2] http://www.jmlr.org/papers/volume6/winn05a/winn05a.pdf
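
For anyone wondering what a predictive coding "process theory" looks like operationally, here is a toy two-level sketch (my own illustration, loosely in the spirit of Rao & Ballard, not Friston's exact equations): predictions flow down, prediction errors flow up, and both the latent estimate and the weights descend on the squared error.

```python
# Toy two-level linear predictive coding update (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
W = 0.1 * rng.normal(size=(4, 2))   # generative weights: latent state -> predicted input
x = rng.normal(size=4)              # sensory input
mu = np.zeros(2)                    # latent estimate at the higher level
lr_mu, lr_W = 0.1, 0.01

for _ in range(200):
    eps = x - W @ mu                    # prediction error at the lower level
    mu += lr_mu * (W.T @ eps - mu)      # update belief: error feedback minus pull toward a zero-mean prior
    W  += lr_W * np.outer(eps, mu)      # Hebbian-like learning driven by the same error signal

print("residual prediction error:", np.round(eps, 3))
```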


Isn't variational message-passing the algorithmic-level theory about where predictive coding comes from?


I think you might be right; here's a quote from Friston on the relationship (in reference to belief propagation):

"We turn to the equivalent message passing for continuous variables, which transpires to be predictive coding [...]"

It could be that belief propagation applies in the context of discrete variables, whereas predictive coding applies in the context of continuous variables, both being forms of (variational) message passing.


I endorse the responses labeled with my name.


I’ve seen Friston speak a few times. My favorite quote along these lines is that “your arm moves because you predict it will, and your motor system seeks to minimize prediction error.”

He’s been a huge figure in human neuroscience, bringing statistics to all those psychologists with fMRI scanners.


Most grad-level Deep Learning classes have a week or so devoted to "Approximate Bayes" methods. And it's conceivable that future updates to all popular probabilistic programming languages will include "programmable" rather than "fixed-function" inference methods.

"Inference Metaprogramming" paper

https://people.csail.mit.edu/rinard/paper/pldi18.pdf

The latest state-of-the-art research will be presented at the upcoming NeurIPS conference:

Symposium on Advances in Approximate Bayesian Inference

http://approximateinference.org/accepted/

I think the most fascinating aspect is that Friston and his team are working within the field of Computational and Algorithmic Psychiatry. I mean, this pre-print is really interesting: using video game play to diagnose disorders.

Active Inference in OpenAI Gym: A Paradigm for Computational Investigations Into Psychiatric Illness

https://www.biologicalpsychiatrycnni.org/article/S2451-9022(...


Since you mention Bayesian methods, I thought I may randomly ask you - have you come across any good work about applications of subjective Bayesian statistics in AI?

I was particularly interested in subjective Bayes theory due to the way it seems to interleave human input with mathematical theory.

I first learned about it from a non-fiction book in which these techniques were used by scientists in the US to locate Russian ICBMs that were test-fired during the Cold War and landed in the ocean. The wisdom of experts was quantified and fed into a simple Bayesian subjective probability calculation, which led to a prioritization of target areas to investigate, and the US located them on either the first or second try - I can't recall. I've seen a few other interesting applications of this as well.

I'm not an expert in this area, but you sound like you might be - so I thought I'd take the chance to ask :)


> Since you mention Bayesian methods, I thought I may randomly ask you - have you come across any good work about applications of subjective Bayesian statistics in AI?

https://www.youtube.com/watch?v=O0MF-r9PsvE

https://arxiv.org/abs/1809.10756 (by my adviser)

https://probprog.cc/ (chaired by my adviser, new)


I'm not an expert either, but you might be interested in the book 'Superforecasting' by Tetlock and Gardner - they have done some (IMHO) very interesting research on prediction markets. It might be the kind of thing you're looking to research more of!


Thanks for the recommendation - that does look very interesting indeed! I will find myself a copy and have a read. Hacker News book club comes through again!! :)


For anyone who is interested in a tutorial and actual implementation of active inference (an idea based on the Free Energy Principle), here's one in Python: https://kaiu.me/2017/07/11/introducing-the-deep-active-infer...

I have been trying to understand FEP, and so far my understanding is that the agent essentially tries to learn the generative model that most closely explains its observations, and then tries to act in ways that are more likely to cause the environment to generate its preferred observations (say, pH and temperature in the right range).

The problem with this approach is the scalability of inference and of candidate model generation. By the time you provide a model for the agent, you as a designer have already coded much of your knowledge and hence constrain the agent. True AI will build a model from scratch, and not just learn model complexity.
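
To make the "act so the environment generates preferred observations" part concrete, here is a heavily simplified toy of action selection by expected free energy (my own example; the outcome distributions and the simplified ambiguity term are invented for illustration, not the full active-inference machinery):

```python
# Pick the action whose predicted outcomes best match the agent's prior preferences,
# penalising ambiguous (high-entropy) predictions.  Illustrative toy only.
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

preferred = np.array([0.8, 0.1, 0.1])            # prior preference over 3 outcomes ("stay in range")
predicted = np.array([[0.70, 0.20, 0.10],        # q(outcome | action), one row per candidate action
                      [0.20, 0.40, 0.40],
                      [0.34, 0.33, 0.33]])

risk = np.array([kl(q, preferred) for q in predicted])       # divergence from preferences
ambiguity = -np.sum(predicted * np.log(predicted), axis=1)   # entropy of each prediction (simplified)
G = risk + ambiguity                                         # "expected free energy" per action

print(np.round(G, 3), "-> chosen action:", int(np.argmin(G)))
```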


>True AI will build a model from scratch, and not just learn model complexity.

There's no such thing as truly learning "from scratch" -- the No Free Lunch Theorem holds no matter what. What you can do is find a sufficiently large (ex: Turing-complete) hypothesis class, and make simplifying assumptions to allow it to be feasibly learnable (such as regularization or priors).


The No Free Lunch Theorem is irrelevant to the real world [0][1]. It assumes all functions, even those with infinite algorithmic complexity, are equally likely.

You should look into algorithmic probability for a better foundation.

[0] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.540....

[1] https://arxiv.org/abs/1111.3846.pdf


On the one hand, yes, the No Free Lunch Theorem seems to intuitively rely on the set-theoretic definition of functions, rather than building on a constructive foundation to hypothesize that functions which are "physically harder", in some sense, are less likely.

On the other hand, algorithmic probability requires first defining a Turing machine, rendering the Solomonoff Measure defined only up to a specific programming language, which can bias it some arbitrary amount. That's on top of the Solomonoff Measure itself being incomputable, and so utterly useless as a foundation for real-world machine learning and computational cognitive science.

I agree that positing a Bayesian prior on functions/programs/causal structures gets you around the No Free Lunch Theorem. The question just then ends up being: what sort of hypothesis space, and what sort of prior, sufficiently resemble the real world (the data-generating process) to allow for learning from a given data set? That's a matter of science.


I'm not sure I follow you.

I didn't mean intelligence in any abstract problem-space. I meant intelligence in the world and the type of problems we humans deal with (in fact, I'm unsure what process we should call intelligence in a non-human context).

In the context I'm talking about, we have at least one algorithm that has built models from scratch: evolution by natural selection.


The idea is that you learn a model by calculating the derivative of free energy with respect to your model parameters.
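
In the simplest case that is just gradient descent on the free energy; a toy sketch with a single model parameter (my own example, nothing Friston-specific):

```python
# Toy sketch: fit one model parameter (a predicted mean) by gradient descent
# on the free energy of a trivial Gaussian model.  Entirely illustrative.
import numpy as np

data = np.array([1.9, 2.1, 2.3, 1.8])   # observations
theta = 0.0                              # model parameter: predicted mean
lr = 0.1

def free_energy(theta):
    # With a flat prior and unit observation noise, free energy here reduces
    # to the summed squared prediction error (up to a constant).
    return 0.5 * np.sum((data - theta) ** 2)

for _ in range(100):
    grad = -np.sum(data - theta)         # d(free energy)/d(theta)
    theta -= lr * grad

print(theta, free_energy(theta))         # theta converges to the mean of the data
```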


Yes, but you have to specify a generative model (or at least put boundaries on it). Then you learn the parameters of that model.

I was talking about learning the model structure also.


Some attempts have been made in the form of Bayesian model reduction [1].

The idea is to 'carve' out the structure of your model using free energy minimization.

[1] https://arxiv.org/abs/1805.07092


The complex systems people used to discuss the problem of agents with internal models making models of other agents [1].

Similarly, biologists are interested in how a living thing 'organises itself' in the world, maintains its structure, and how its sensing and action are coupled to the environment [2].

This sounds like a similar approach, however fuzzy. Isn't it just saying 'can we look for principles that define how living creatures should organise the effort (energy/information) it makes sense to put into "recognising/ predicting/ acting in / being in" the world?'

Makes sense there could be some shared mechanisms, though I'd personally be surprised if they are universal, as differing life-forms seem suited for differing levels of environmental change. This is something lots of people have looked at (it's fun), and I agree the Wired article doesn't give a clear answer.

1. Can't recall the paper, but I think it was Doyne Farmer (or Chris Langton?) arguing that if your agent has complexity N, then you should spend sqrt(N) complexity modelling another agent

2. e.g. Maturana & Varela; a summary of autopoiesis is here http://supergoodtech.com/tomquick/phd/autopoiesis.html but I'm sure lots of other biologists have good theories


By no means will I ever be able to grasp Friston's theory, but free energy minimisation vaguely reminds me of curiosity-driven reinforcement learning. Can anyone with more understanding than me confirm or deny this apparent resemblance?


There are similarities. The differences between the two approaches are:
- FEP is Bayesian in nature, while there's usually no notion of uncertainty in curiosity-driven RL
- In FEP, there's no explicit weighting of the explore/exploit tradeoff; it emerges automatically from the equations
- FEP, since it's Bayesian, allows for more complex reasoning (like counterfactuals)
- Curiosity-driven RL is scalable, while FEP is not feasible for anything other than simple models


Excuse me, another followup question (can't edit on mobile): can you ELI5 how exploitation and exploration "emerge" naturally instead of the tradeoff being explicitly coded, as in RL?


As a general answer, the theory suggests that organisms maximize a quantity known as model evidence, which is just a way of saying 'how much evidence does some data provide for my model of the world?'

There are two complementary ways to maximize this - change your model or change your world.

If we now grant that actions also maximize model evidence, then actions can either be conducted to sample data that make the model a better fit of the data (exploration), or they can be conducted to sample observations that are consistent with the current model (exploitation).


And the optimization process itself would determine whether updating the model or changing the world is optimal, I guess. Thanks.


The equation for free-energy/ELBO has two terms, an energy and an entropy. You can rewrite it as "log-likelihood minus KL from prior". If you write your model in a certain way, you can then read it as, "Fit to the data, minus cost" (second formulation) or "Accuracy + exploitation + exploration" (first formulation).
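
Written out, these are just the standard ELBO identities (in my own notation rather than anything from a specific paper; the variational free energy is the negative of this quantity):

```latex
\begin{aligned}
\mathrm{ELBO}
  &= \underbrace{\mathbb{E}_{q(z)}\big[\log p(x, z)\big]}_{\text{energy}}
   + \underbrace{\mathrm{H}\big[q(z)\big]}_{\text{entropy}} \\[4pt]
  &= \underbrace{\mathbb{E}_{q(z)}\big[\log p(x \mid z)\big]}_{\text{fit to the data (accuracy)}}
   - \underbrace{\mathrm{KL}\big[\,q(z)\,\Vert\,p(z)\,\big]}_{\text{cost (KL from the prior)}}
\end{aligned}
```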


In formulations of FEP, there are two terms: cost and ambiguity. Minimisation of this combined term happens in a Bayesian-optimal way, so you don't have to explicitly code weights for exploration and exploitation.

Although what you do have to code is prior preferences, and since they form a distribution, you implicitly encode the range of those preferences. But once you do that, the FEP algorithm figures out when to collect more data to build a better model and when to use the existing model to get near the prior preferences.


I see. Much more elegant than explicitly coding the trade-off, actually :)


Many thanks for your answer.


There is a large overlap; for instance, the popular VIME exploration algorithm [1] uses part of the free energy objective function.

However, free energy isn't a theory of curiosity per se; it's posed as a description of self-organisation. It just so happens that you can express the free energy functional in terms of epistemic (curiosity) and instrumental (reward) components.

[1] https://arxiv.org/abs/1605.09674


Excuse me, another followup question (can't edit on mobile): can you ELI5 how exploitation and exploration "emerge" naturally instead of the tradeoff being explicitly coded, as in RL?


I see, thanks. Is any AI research center/company trying to model this idea, instead of applying RL?


That has been my go-to comparison for a while as well, and it sounds like what they refer to in this article when they talk about using active inference-based AI vs classical reinforcement learning.


Yes, although looking at what other people have commented, "our" intuition is quite basic. I will try to read more about FEP.


If you liked this, you may also like "Am I autistic? An intellectual autobiography"[0] by Karl Friston. It doesn't go into his free energy idea at all, but is more about the person behind the idea. My favourite line "I remember being asked [by an educational psychologist] whether I thought the puppets in Thunderbirds ever got hungry".

[0] https://www.aliusresearch.org/uploads/9/1/6/0/91600416/frist...


The article is too long for the idea it tries to convey. I like to read to broaden my mind, not for reading's sake.


The article tells a story; it's not meant for people trying to grasp technical details. Furthermore, it can be argued that reading well-written texts like this for reading's sake also broadens your mind.


At least provide a summary. I mean, "After completing his medical studies, Friston moved to Oxford and spent two years as a resident trainee at a Victorian-era hospital called Littlemore. Founded under the 1845 Lunacy Act, Littlemore had originally been instituted to help transfer all “pauper lunatics” from workhouses to hospitals. By the mid-1980s, when Friston arrived, it was one of the last of the old asylums on the outskirts of England’s cities."

is a story.

But as a neuroscientist with an interest in machine learning, I want to know the idea, not the history of Littlemore, attended by this scientist whose tools and methods I have used (Friston motion parameters, I am looking at you).


In that case the wikipedia article is probably a decent starting point to see whether you are interested or not: https://en.wikipedia.org/wiki/Free_energy_principle


I'd agree but the "free energy principle" is first mentioned a good 1,000+ words into the article.


As far as I can tell the “free energy principle” is just asserting that the brain is approximately Bayesian and is doing some kind of variational inference, right? I’m not sure how revolutionary that is.

(I’m predisposed not to like Friston because his work in fMRI plays fast and loose with the idea of “causality”.)


The 'revolutionary' aspect is the suggestion that a single celled organism is also doing variational inference. Or, more accurately, can be described as such.


The trouble is hitting the right "happy medium" between (variational) inference as an explanation of the sensory and motor cortices, and variational inference as a universal theory of everything.


Man, I can barely understand any of this. Is there a more ELI5 type explanation anywhere?


Intelligence automatically emerging in nature is very likely, and an obvious prerequisite for humans rising to become the dominant species on Earth. FEP makes it seem like this is a new idea. How we think about the meta, and reconstruct ideas from our own perspective, has been embedded in human adaptation for as long as recorded history. To model "true" AI from a human perspective using FEP, you need to model AI from the initial frame of reference in which human intelligence emerged automatically. This could perhaps be done by manipulating fundamental components of our brains or by simulating scenarios where this could have happened.


I found this interview with Karl Friston helpful for understanding the free energy principle at a high level:

https://www.youtube.com/watch?v=NIu_dJGyIQI


I think true AI will not be a computer program that suddenly becomes human-like. It will be a human that becomes more and more cyborg-like https://techcrunch.com/2018/11/01/thomas-reardon-and-ctrl-la... Soon humans will have more and more brain surgery adding cyborg features to their natural (not artificial) intelligence, until at some point they will be so machine-like that, boom, AI.


Reminds me of the philosophical question of what happens to one's consciousness if you'd replace their neurons, one at a time, by electronic equivalents.


It may be a fallacy to assume a neuron is less complex than a brain. It depends on how one measures complexity and at what scale... but living systems -- unlike non-living systems -- strangely get more complex the closer in one goes. That is, it's fairly trivial to simulate an earthworm... it's trickier to simulate the components of the earthworm.


This is a completely minor point here... but "fallacy" means that there's something wrong with the argument, if you have a disagreement about facts or assumptions then the word "fallacy" doesn't really apply (you can just say "wrong" instead).


It suffers from the fallacy of petitio principii, in that it assumes arguendo that consciousness is composed of neurons (and, as mentioned above, that a neuron is less complex than consciousness). But it's not stated as an 'argument' in any case, so perhaps the term fallacy was out of place.


Petitio principii is when the premise assumes the truth of the conclusion, but since there is no argument and no conclusion it's impossible for the statement to suffer from that fallacy.


You're right. Shouldn't have posted that. I just wanted the last word. (Dammit... did it again!)


Who’s going first https://www.neuralink.com/ ? 50% off if you mention hacker news and free brain ointment for after surgery pain.


Interesting thought experiment, but it'd need to define "consciousness", which is always what makes these problems unsolvable.


What an interesting thought...


and more generally, the ship of Theseus


> Friston found time for other pursuits as well. At age 19, he spent an entire school vacation trying to squeeze all of physics on one page. He failed but did manage to fit all of quantum mechanics.

Is the page available to read?


https://www.aliusresearch.org/uploads/9/1/6/0/91600416/frist...

Page 6

Though not fully legible as captured in that pdf.


Thanks but "not fully legible" means absolutely unreadable in this case. I've tried zooming in yet couldn't recognize a single letter (those in the title don't count).


Thinking about this with my engineering hat on -- If I wanted to guide the behaviour of such a system, I would have to influence the prediction somehow - and then the system would act to change the state of the world to match that prediction and/or update the prediction with more information about what is actually going on (by actively making observations etc...). This seems like quite an elegant and neat little lever for high-level control/objective setting. A bit like a Picardian 'make it so' button...


This almost sounds like a special case of Jeremy England's dissipation-driven adaptation theory. Does anyone know the overlap/differences between these theories (other than specificity)?


Ctrl-F England and here you are. I’ve been searching for someone more informed than me that has compared and contrasted the two but haven’t found anything.


Second. Check out David Bohm's idea of wholeness and harmony in "On Creativity". Not what you are looking for, but another puzzle piece with the same scent.


Blog post about understanding Friston's ideas; also consider the number of researcher comments it has provoked: http://slatestarcodex.com/2018/03/04/god-help-us-lets-try-to...


Well, here is what I believe brains do:

https://news.ycombinator.com/item?id=9022206

It makes total sense for the brain's job to be minimizing surprise, because minimizing surprise is the best and most basic strategy for survival.


With all due respect, one sentence explaining how you think the mind works isn't really worth much. It doesn't amount to much more than "the brain tries to explain reality." Yes, ok, but how do you translate that into some algorithm? How does it relate to gradient descent methods on neural networks?


Surprise can be defined as an unexpected difference or anomaly. One-class classification or anomaly detection can be a very good trainer for a generic AI.
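
For instance, a minimal sketch of that idea (my own toy example using scikit-learn's IsolationForest; any one-class method would do), where "surprise" is just the anomaly score of a new observation under a model fitted to past experience:

```python
# "Surprise" as an anomaly score under a model of past observations.  Illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
past = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # what the agent has seen so far
model = IsolationForest(random_state=0).fit(past)

new_obs = np.array([[0.1, -0.2],     # ordinary
                    [6.0,  5.0]])    # surprising
print(model.score_samples(new_obs))  # lower score = more anomalous = more "surprising"
```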


>With all due respect, one sentence explaining how you think the mind works isn't really worth much.

With all due respect, one sentence can be worth a lot.

Some examples:

> F = MA

Another

> E = MC^2

And another

> G_{\mu\nu} = 8 \pi G (T_{\mu\nu} + \rho_{\Lambda} g_{\mu\nu})

Another example

>To be, or not to be; that is the question;

Et cetera, et cetera. The length of something does not necessarily imply that an idea is weak; maybe the idea is really deep. Dismissing an idea based on length is idiotic.

Sorry for the rant.


None of these mean anything without their respective context. In fact, they're all pretty pedestrian taken as a single sentence.


This is incomplete, but it sounds similar to the memory-prediction framework theory (https://en.wikipedia.org/wiki/Memory-prediction_framework) and similar theories. Although I might be biased / primed to interpret it this way.

Basically, according to such theories, we don't really "decide" anything; we carry out what we predict we're going to do, by modelling patterns of input and output together in a hierarchy. E.g. "I am eating an apple" -> "I see an apple" and "I'm bringing the apple closer to my mouth" -> "I see lines and colour" and "I tense my hand and move my arm"

Adding a biological perspective, my opinion is that motivation arises from attention modulation by neurotransmitters like dopamine and noradrenaline, and feeding this back into the abstract theory, the hierarchy of recognition favours converging on models with high stimulation weight ("I am eating tasty food" or "I am avoiding a car crash" rather than "I am completing boring paperwork")


I see no reference to the dead salmon in the MR scanner showing correlative activation via SPM mapping. Those results have been somewhat of a hindrance to quite a few PET and fMRI researchers' careers.


What I got out of this: it appears that, in essence, he's saying "those creatures with the most accurate picture of the world are the ones best prepared to succeed in the world"?


This is a brilliant portrait of Karl Friston. Thanks for sharing!


>> He has an h-­index—a metric used to measure the impact of a researcher’s publications—nearly twice the size of Albert Einstein’s.

That can only mean the h-index is a load of rubbish.


Not really. The academic field has grown tremendously in the meantime, so the comparison is rubbish, but the index isn't.

It is defined for an author as the largest number N such that they have N articles with at least N citations each.
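
In code, the definition is just (a quick sketch with made-up citation counts):

```python
# h-index: the largest N such that the author has N papers with >= N citations each.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(h_index([10, 8, 5, 4, 3]))   # -> 4
print(h_index([25, 8, 5, 3, 3]))   # -> 3
```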


It seems he's an ontologist at heart.

https://en.wikipedia.org/wiki/Ontology


Anyone have a reference relating the free energy minimisation principle / active inference to reinforcement learning type environments?


The particular study cited in the article is [1]; for a more general review of the links to reinforcement learning, see [2].

[1] https://www.biologicalpsychiatrycnni.org/article/S2451-9022(...
[2] https://journals.plos.org/plosone/article?id=10.1371/journal...


Cheers.


This article is a complete waste. The title implies that it's about AI, but it turns out to be a portrait of a man's life -- a PR piece. Not only that, but free energy minimization has nothing to do with intelligence other than vaguely describing one of its most obvious and superficial characteristics.

—-

AI is the most important issue in the world. True general AI is an existential threat to humankind. The economics of general AI lead to the extinction of humans no matter how you slice it. Killer robots are just icing on the cake -- the tip of the iceberg.

General AI can be thought of as the keystone in the gateway of automation. It allows the automation of the human mind itself. The AI we have now cannot do this. Better ML algorithms will most likely never threaten the human mind. So people have a very false and dangerous sense of security.

ML experts eagerly correct people like me with a vague notion and a wave of the hand: AI won't be a problem for a long time. As I said, ML is not a threat (as automation of human thought), and this is because ML has nothing to do with human thought. ML experts don't know anything about human thought, and therefore a complete layman is just as qualified to speculate about general AI as an ML expert is. Or a person with a physics degree, or what have you. You might say that laymen tend to be dumber, or some variation on that, but that's beside the point and irrelevant.

There are many reasons to be worried about the creation of general AI. First, general AI is much broader than it is given credit for -- sentience has many more forms than the human mind and is a broader attack surface than usually thought. People imagine it as finding the human mind like a needle in a haystack. It's a lot easier than that. The algorithm for the kernel of intelligence is probably relatively much simpler than one would initially imagine. We don't know when we might stumble on it. Or I could be wrong, but I'm still right, because even if it's relatively very complex, we will still discover it if we try -- and we are trying. As I said, ML isn't a huge threat for general AI, and I think it's very likely that brain research is the biggest threat currently. The resolution of MRI scanning and probing is increasing, as is the computational power to make sense of the readings and test the algorithms that we discover. I already see people commenting that computers won't be powerful enough to test algorithms: you won't need a silicon version of the brain to test them. I guarantee it.

If general AI were to come into existence, it would have the ability to do any task better than a human. Any group or organization that uses AI to perform any task will overtake anyone who does not. It will be a ratchet effect where each application of AI spreads across the world like a disease and never goes away. Soon, everything is done with AI. A market economy's decentralized nature makes it an absolute powder-keg for AI in this respect, because each node in the market is selfish and will implement AI to gain a short-term advantage -- and, as I've said, once one node does it, all nodes will do it. This behaviour has historically fueled the success of markets but, as we have seen with global warming, does not always work.

The key here is the fact that the only reason human life has value is because humans offer an extremely vital and valuable service that cannot be found anywhere else. Even though this is true, most humans on this planet do not enjoy a high quality of life. It is insane to imagine that, once our only bargaining chip is ripped from our collective hands, the number of people with a high standard of living will go up instead of down. There will be mass unemployment. Humans will be cast aside. And that's all assuming that robots are never made to maliciously target human life for any reason.

People say that automation leads people to better, new jobs. In reality, jobs are not an inexhaustible resource. They just seem to be.

The only solution, in one form or another, is the prohibition of AI. I hope that someone else reading this will agree with me or suggest another solution. I am interested in forming some kind of group to prevent all this from happening.


I agree with most of what you state, but the prohibition of AI is impossible. How could you stop nations from researching it secretly? How could you stop the Amazons and Baidus?


The only thing that is clear is that something must be thought of and attempted.


I hate this Wired style of "journalism": 99% of hype, hyperbole and anecdotes wrapped in 1% of evidence and substance.


It is a really annoying read. If there is really something to FEP, you certainly won't find it in this article. If you can't explain something so that a ten-year-old can understand it, you don't really know it. According to this article, no one really knows FEP. But the worst of it is the subjectivity of Friston's approach. He likes routine. He gets all out of sorts if his regular activity is disrupted. He doesn't like surprise, so he concludes that the answer to the ultimate question of life, the universe and everything is avoiding surprise. Very self-serving. Well, guess what: there are plenty of people (and other organisms too!) that like surprises! And they do quite well, thank you. Without an appreciation of, and an actual inclination to seek out, surprises, the drive to exploration would be snuffed out. Without that drive, new habitats and opportunities are left untapped and wasted. It's fine that he doesn't like surprises. That doesn't make it a good basis for AI consciousness, or for explaining living creatures in general.



Take for example this paragraph:

> “This is absolutely novel in history,” Ramstead told me as we sat on a bench in Queen Square, surrounded by patients and staff from the surrounding hospitals. Before Friston came along, “We were kind of condemned to forever wander in this multidisciplinary space without a common currency,” he continued. “The free energy principle gives you that currency.”

This is bloviation and crankery. I am not the target audience for this kind of reputation-building.


I agree the style is annoying, using many such paragraphs to meet the required word count, but Friston's reputation is well established in scientific circles (although I first heard of him yesterday).


I read the title, saw the domain, and didn't read it.



