The secret message must be encrypted first (ideally with a one-time pad, to preserve the security guarantees), or anyone could simply read it -- the paper assumes the model is public.
The technique lets you encode an encrypted message into the sampled output of an auto-regressive model in a way that is impossible to distinguish from just running the model normally unless the observer has the private key, even if the attacker has the exact model you're using.
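Here is a toy sketch of the core idea (not the paper's actual construction): because the ciphertext bits look uniformly random without the key, you can use them as the randomness that drives ordinary sampling from the model, so the emitted tokens follow the model's distribution exactly; the receiver, holding the same model and the key, replays the sampling to recover the bits and decrypts. The `toy_model`, `PRECISION`, and the fixed-precision interval bookkeeping below are my own illustrative assumptions -- a real scheme does this against an actual language model and handles the edge cases far more carefully.

```python
import secrets

PRECISION = 16  # bits of sampling precision per token (arbitrary toy choice)

def toy_model(context):
    """Stand-in for an autoregressive model's next-token distribution."""
    vocab = ["the", "cat", "sat", "on", "a", "mat", "."]
    weights = [30, 20, 15, 12, 10, 8, 5]
    shift = len(context) % len(vocab)            # vary with context so steps differ
    weights = weights[shift:] + weights[:shift]
    total = sum(weights)
    return [(t, w / total) for t, w in zip(vocab, weights)]

def intervals(dist):
    """Partition [0, 2^PRECISION) into integer intervals proportional to probs."""
    total = 1 << PRECISION
    out, acc = [], 0
    for tok, p in dist:
        width = max(1, round(p * total))
        out.append((tok, acc, min(acc + width, total) - 1))  # inclusive bounds
        acc += width
    tok, lo, _ = out[-1]
    out[-1] = (tok, lo, total - 1)               # absorb rounding error
    return out

def common_prefix(lo, hi):
    """Bits of r that are determined by knowing lo <= r <= hi."""
    bits = []
    for i in range(PRECISION - 1, -1, -1):
        if (lo >> i) & 1 == (hi >> i) & 1:
            bits.append((lo >> i) & 1)
        else:
            break
    return bits

def embed(cipher_bits, max_tokens=500):
    """Sender: emit tokens whose sampling randomness is the ciphertext bits."""
    context, ptr = [], 0
    while ptr < len(cipher_bits) and len(context) < max_tokens:
        window = cipher_bits[ptr:ptr + PRECISION]
        window += [secrets.randbelow(2) for _ in range(PRECISION - len(window))]
        r = int("".join(map(str, window)), 2)
        for tok, lo, hi in intervals(toy_model(context)):
            if lo <= r <= hi:
                context.append(tok)
                ptr += len(common_prefix(lo, hi))  # bits the receiver can recover
                break
    return context

def extract(tokens, n_bits):
    """Receiver: replay the model to recover the ciphertext bits."""
    context, bits = [], []
    for tok in tokens:
        table = {t: (lo, hi) for t, lo, hi in intervals(toy_model(context))}
        lo, hi = table[tok]
        bits += common_prefix(lo, hi)
        context.append(tok)
    return bits[:n_bits]

def to_bits(data):
    return [(b >> i) & 1 for b in data for i in range(7, -1, -1)]

def from_bits(bits):
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))

if __name__ == "__main__":
    plaintext = b"meet at noon"
    key = secrets.token_bytes(len(plaintext))             # one-time pad
    cipher = bytes(p ^ k for p, k in zip(plaintext, key))
    stego_text = embed(to_bits(cipher))
    recovered = from_bits(extract(stego_text, 8 * len(cipher)))
    print(" ".join(stego_text))
    print(bytes(c ^ k for c, k in zip(recovered, key)))   # -> b"meet at noon"
```

The point the toy tries to make is the indistinguishability argument: since the ciphertext (and padding) bits are uniform to anyone without the key, each token is selected with probability proportional to its interval width, i.e. approximately its model probability, so the transcript looks like ordinary sampling even to an attacker who has the exact model.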