Since it's impossible to run inference on a model without access to the model and its weights, interpretable AI generally assumes you have access to all of that. Otherwise, why would you try to explain the inner workings of something you don't have and can't use?
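
To make the idea of "access to the model and its weights" concrete, here is a minimal sketch (assuming PyTorch and a toy model chosen purely for illustration) of the kind of white-box access interpretability work relies on: reading weights directly and capturing intermediate activations with a forward hook, neither of which a black-box prediction API gives you.

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for "the model whose weights you have".
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)

# 1. Weight access: inspect parameters directly.
first_layer_weights = model[0].weight.detach()
print("first-layer weight matrix shape:", first_layer_weights.shape)

# 2. Activation access: hook an intermediate layer to record what it computes.
captured = {}

def save_activation(module, inputs, output):
    captured["hidden"] = output.detach()

hook = model[1].register_forward_hook(save_activation)

x = torch.randn(4, 8)   # a small batch of dummy inputs
_ = model(x)            # forward pass populates `captured`
hook.remove()

print("hidden activations shape:", captured["hidden"].shape)  # (4, 16)
```

Both steps depend on holding the model object itself; with only an API endpoint, you could see inputs and outputs but never the weights or the activations in between.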