> 1.) definite improvements
> 2.) and engineering inefficiencies that are economically difficult to overcome.
Improvements can be had in directions we’re not even thinking about, possibly including that spark of artificiality that brings it to life in a truly autonomous way. I could see that becoming a bad path for us…
Engineering inefficiencies will arise when it’s used for the wrong thing, and that is happening a lot: a ton of money is being poured into new tools that are used without being properly understood.
It seems the value of ML and AI comes from approximating the brain, or from best-fitting conceptual models that nevertheless still contain high error today. As I understand it, dendrites and perceptrons are quite different, but holistically this is about emergent behavior from simple input-output networks. Structuralism is about to be put to the test: the better our conceptual models fit the biological behavior, the more "human-like" the behavior. We should expect the field to keep developing practical, better-fitting models.

The only question, then, is an "artificial ladder of consciousness": where on this spectrum does a being deserve to have rights and not suffer? We may want to start granting rights to our models today to avoid a history of enslaving conscious beings (serious perspective)
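To make the "emergent behavior from simple input-output networks" point concrete, here is a minimal sketch of a single perceptron. The weights, bias, and the AND-gate example are illustrative choices, not anything canonical, and a perceptron is a drastic simplification of a real neuron (it ignores the nonlinear signal integration that dendrites perform):

```python
def perceptron(inputs, weights, bias):
    # Weighted sum of inputs followed by a hard threshold (step activation).
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# With hand-picked weights, logical AND emerges from this trivial unit:
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # → 1
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # → 0
print(perceptron([0, 0], [0.5, 0.5], -0.7))  # → 0
```

Nothing in the unit itself "knows" about logic; the behavior emerges from the weights, which is the holistic point: stack enough of these and train the weights, and richer behavior appears that no single unit encodes.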