> Having recently become a father has made me think a lot about general intelligence. [...] why don't we try modelling emotions as the basic building blocks that drive the AI forward?
There are multiple factions when it comes to AI, and neither position seems disprovable to me, i.e. whether AI will save us or be our doom. At the opposite end of the spectrum I'd put David Deutsch[1]. My position is that if such a singularity is possible, we probably can't avoid it, but we can probably nudge it in a good direction by being careful during research. According to Deutsch, the problem of keeping AI on a good track is the same as the problem of keeping humans on a good track, since modelling ourselves is the only way we know how to build a general intelligence. So if we can succeed in building a stable society (which we sort of have, at least locally), then we might also succeed in building a general AI that acts in our interests.
I'd argue that if it is possible, it has great potential for both: AI offers one of the most universal solutions to a wide range of problems humanity faces, while simultaneously posing an existential threat of its own if it goes badly.
Because, among many other reasons, an AI going through the "terrible two(minute)s" could decide to destroy the world, or simply do so by accident. We will have a hard enough time building AI that doesn't do that when we set that specifically as our goal, let alone trying to "raise" an AI like a child.
> Edit: Oh, and for the love of god, please airgap the thing at all times...
Not even close to sufficient. See https://en.wikipedia.org/wiki/AI_box for how humans would voluntarily let it out, and papers like https://www.usenix.org/conference/usenixsecurity15/technical... for how it would let itself out.