Even with the right instructions, building weapons of mass destruction is mostly about acquiring difficult-to-obtain materials -- the technology is nearly a century old. I imagine it's similar with manufacturing a virus. These AI models already have heavy censorship and filtering, and that will undoubtedly expand to include surveillance for suspicious queries once AI starts to be able to create new knowledge more effectively than smart humans can.
If you're arguing we should be wary, I agree with you, although I think it's still far too early for serious concern. But a blanket pause on AI development at this still-early stage is absurd to me. I feel like some of the prominent signatories are pretty clueless on the issue and/or have conflicts of interest (e.g. if Tesla ever made decent FSD, it would have to be more "intelligent" than GPT-4 by an order of magnitude, AND it would be hooked up to an extremely powerful moving machine, as well as the internet).