Those are the same quant, but this is a good example of why you shouldn't use Ollama. Either use llama.cpp directly, or use something like LM Studio if you want a GUI/easier user experience.
The Gemma 3 27B QAT GGUF should be taking up ~15 GB, not 22 GB.