Assume for a minute that AGI is being developed, and that it neither functions nor is formed in any manner that mainstream AI efforts focus on...
That hypothetical could very well be the reality on the horizon.
What of safety/control research that has fundamentally nothing to do with such a system, or even its philosophy, yet is what the broad majority of these institutions and ventures are centered on? What of deep-learning-centric methodologies that are incompatible with it?
Safety/control software and systems development isn't a research topic. It's an engineering practice best suited to well-qualified, practiced engineers who design the safety-critical systems present all around you.
Safety/control engineering isn't a 'lab experiment'. If one were aiming to secure, control, and ensure the safety of a system, one would likely hire a grey-bearded team of engineers with proven careers doing exactly that. A particular system's design can be imparted to well-qualified engineers. This happens every day.
Without a systems design, or even a systems philosophy, these efforts are just intellectual shots in the dark.
Furthermore, has anyone stopped to consider that these problems might get worked out naturally during the development of such a technology?
Modern-day AI algorithms and solutions center on mathematical optimization.
AGI centers on far deeper and more elusive constructs. One can ignore this all-too-clear truth all one likes.
So...
If one's real concern is the development of AGI and the understanding therein, I think it's high time to admit that it might not come from the racehorses everybody's betting on. As such, it is far more worth one's penny to start funding a diverse range of people and groups pursuing it who have sound ideas and solid approaches.
This advice can continue to be ignored, as it currently is and has been for a number of years. It can persist alongside rather narrow hiring practices...
The closed/open door will or won't swing both ways.