Three of the four options in the definition of an "artificial intelligence safety incident" presuppose that the weights are kept secret. One is quite explicit about this; the other two describe events that are impossible to prevent once the weights are available:
> (2) Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a covered model or covered model derivative.
> (3) The critical failure of technical or administrative controls, including controls limiting the ability to modify a covered model or covered model derivative.
> (4) Unauthorized use of a covered model or covered model derivative to cause or materially enable critical harm.
That appears nowhere in the bill, but the bill's opponents have confused plenty of people into thinking it does.