> I guess this depends on how advanced your toolkit is.
I'm assuming you mean this to counter the "simplest of biological neural networks" part, rather than the "completely opaque black boxes" part. Nonetheless, I think it is fair to say that ANNs still fall far short of the complexity of even the simplest biological brains. For example, even the nervous system of a Hydra features neurotransmitters [1] (as opposed to the homogeneous signals of ANNs), and brains are affected by their own electric fields [2].
> You want to be able to discern some rule base from the system that you can understand? Even biological brains do not have that property.
More generally, transparency matters because it lets you understand the inductive bias you've built up, and alter or refine it in useful ways.
Also, the fact that our brains lack transparency doesn't justify leaving it out of an AI system, nor does it demonstrate that building a transparent AI is hard -- nature simply had no selective pressure towards transparency. Besides, we humans can introspect and explain our reasoning.
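To make the transparency point concrete, here is a minimal sketch of what a "discernible rule base" looks like, in Python and assuming scikit-learn is available (the iris dataset is just a stand-in): fit an interpretable model and print its learned rules directly, which is something you cannot do with an ANN's weight matrices.

    # Fit a small decision tree and read off its learned rule base.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    tree = DecisionTreeClassifier(max_depth=3).fit(iris.data, iris.target)

    # Unlike an ANN's weights, the fitted model is a readable set of
    # if/then thresholds over named features.
    print(export_text(tree, feature_names=list(iris.feature_names)))

The point isn't that decision trees are the answer, just that "inspect and refine the inductive bias" is a property you can design for.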
> See, but the problem is that we don't even have the knowledge of how real brains work to start forming such principles
> based on a technique which reflects the structure of the problem of learning, generalization, and hypothesis search and illuminates it.
This is only the case if we're speaking about biological brains specifically, and not using "brain" as a generic word for "intelligent system". In the latter case we do, in fact, have quite a bit of knowledge about such principles from reasoning about hypothesis spaces; that is where things like active learning and Solomonoff's work on Universal Induction [3] come from.
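As a toy illustration of what reasoning about hypothesis spaces buys you, here is a minimal Python sketch (my own construction, only loosely in the spirit of Universal Induction, not anything from Solomonoff's papers): the hypothesis class is all repeating bit patterns up to some length, the prior on a pattern is 2^-length (simpler hypotheses start out more plausible), and the prediction is the prior-weighted vote of every hypothesis consistent with the data seen so far.

    # Toy simplicity-prior predictor over a finite hypothesis class.
    from itertools import product

    def hypotheses(max_len=4):
        """All repeating bit patterns of length 1..max_len."""
        for n in range(1, max_len + 1):
            for pat in product("01", repeat=n):
                yield "".join(pat)

    def predict_next(observed, max_len=4):
        weights = {"0": 0.0, "1": 0.0}
        for pat in hypotheses(max_len):
            # A hypothesis survives only if it reproduces everything seen...
            if all(observed[i] == pat[i % len(pat)] for i in range(len(observed))):
                # ...and votes with weight 2^-length: shorter = simpler = heavier.
                weights[pat[len(observed) % len(pat)]] += 2.0 ** -len(pat)
        return max(weights, key=weights.get)

    print(predict_next("010101"))  # -> "0": the period-2 pattern dominates

None of this needed a brain to derive; it falls out of thinking about the hypothesis space directly.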
> It's not going to be a fully general, "human-level" AI, obviously, as that requires huge amounts of semantic knowledge about the world that was encoded through billions of years of evolution
By "semantic knowledge" do you mean inductive bias? Because otherwise I'm at a loss. I don't believe that the picture for Artificial General Intelligence is as bleak as you make it sound though.
As an aside, you mention "Lots of more biologically-inspired approaches have been explored". Do you know of any projects looking to mine the structure of various parts of brains to figure out the sorts of inductive bias those structures correspond to (as opposed to just copying the structure)? What I mean is that if a part of the brain heavily involved in recognizing faces has a unique structure/wiring, that structure is presumably optimized to perform well on face recognition -- and correspondingly poorly on something else, per the no-free-lunch theorem -- and that optimization should tell us something about the nature of recognizing faces. Sort of in the same way that choosing a naive Bayes classifier rather than a full Bayesian network tells you the model is optimized for cases where the variables are independent.
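To spell that analogy out, here is a minimal sketch in plain Python (my own toy example; the classes, features, and probability tables are made up for illustration) of how the independence assumption is baked into naive Bayes: the per-feature likelihoods are simply multiplied together, which is only justified when the features are conditionally independent given the class.

    # Naive Bayes in one function: multiplying per-feature likelihoods
    # *is* the independence assumption, P(x|c) = prod_i P(x_i|c).
    def naive_bayes_score(features, cls, prior, likelihood):
        """Unnormalized posterior: P(cls) * prod_i P(feature_i | cls)."""
        score = prior[cls]
        for i, value in enumerate(features):
            score *= likelihood[cls][i][value]  # each feature weighs in alone
        return score

    # Hypothetical tables for a two-class, two-binary-feature toy problem.
    prior = {"face": 0.5, "not_face": 0.5}
    likelihood = {
        "face":     [{0: 0.1, 1: 0.9}, {0: 0.2, 1: 0.8}],
        "not_face": [{0: 0.7, 1: 0.3}, {0: 0.6, 1: 0.4}],
    }

    x = (1, 1)
    scores = {c: naive_bayes_score(x, c, prior, likelihood) for c in prior}
    print(max(scores, key=scores.get))  # -> "face"

A full Bayesian network would instead have to model the features jointly, so picking the naive version is an explicit bet that the joint structure doesn't matter.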
[1] http://books.google.com/books?id=WWN_t498S5IC&pg=PA12...
[2] http://www.scientificamerican.com/article.cfm?id=brain-elect...
[3] http://world.std.com/~rjs/pubs.html