I might imagine a pipeline where a full photograph blob is downloaded and decrypted on your device, normalized, run through something like image2vec + OCR + metadata extraction, and the result stored in an index. At that point, of course, you could garbage collect the original blob - at least until your app ships a major update that requires reindexing the blobs.
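For concreteness, here is a minimal sketch of that flow in Python. Everything in it is illustrative rather than any real app's API: normalize, image_embedding, extract_text, and extract_metadata are placeholder stubs standing in for a real vision model, OCR engine, and EXIF parser, and the photo_index table layout is just one plausible shape for the local index.

    import hashlib
    import json
    import sqlite3

    INDEX_SCHEMA_VERSION = 1  # bump on a major indexing change to force reindex

    # Placeholder stubs; a real app would call into a vision model, an
    # OCR engine, and an EXIF parser here.
    def normalize(blob: bytes) -> bytes:
        return blob  # e.g. decode, fix orientation, downscale

    def image_embedding(image: bytes) -> list[float]:
        digest = hashlib.sha256(image).digest()  # stand-in for image2vec
        return [b / 255 for b in digest[:8]]

    def extract_text(image: bytes) -> str:
        return ""  # OCR stub

    def extract_metadata(blob: bytes) -> dict:
        return {"bytes": len(blob)}  # EXIF/metadata stub

    def open_index(path: str = "photos.db") -> sqlite3.Connection:
        db = sqlite3.connect(path)
        db.execute("CREATE TABLE IF NOT EXISTS photo_index"
                   " (photo_id TEXT PRIMARY KEY, record TEXT)")
        return db

    def index_photo(blob: bytes, photo_id: str, db: sqlite3.Connection) -> None:
        # Decryption is assumed to have happened upstream of this call.
        image = normalize(blob)
        record = {
            "embedding": image_embedding(image),
            "ocr_text": extract_text(image),
            "metadata": extract_metadata(blob),
            "schema": INDEX_SCHEMA_VERSION,
        }
        db.execute("INSERT OR REPLACE INTO photo_index VALUES (?, ?)",
                   (photo_id, json.dumps(record)))
        db.commit()
        # The blob itself is now garbage-collectable; only the index row
        # needs to stay on the device.

Searches then run against photo_index alone; the original blob only has to come back down if INDEX_SCHEMA_VERSION changes.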
(I am leaving this comment to explain why I am downvoting your comment. While this is absolutely the correct answer for how to build this--and so in some sense deserves an upvote--it is itself the proof of why you were wrong: it is presented as the response to a Socratic question that should have led you to realize your error, yet you never acknowledged as much, even though you clearly appreciate that this answer is the opposite of the narrow question that was asked. I therefore feel this deserved both downvotes--on this answer and on the original question--as well as an explanation, which I usually avoid giving: I prefer to just hit downvote and move on with my life. But I want anyone merely skimming to see that this is in fact the reason the device can do that search locally without all 15GB synchronized at all times: work only ever has to be done to improve old indexes in the off chance you make a major improvement to your indexing, that work can be done incrementally, and it is often avoided by centralized players anyway, as it is so costly for them.)
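To make the "incremental" part concrete: a reindex after a schema bump is just a loop over rows written under the old version, a bounded batch at a time. This sketch reuses index_photo, photo_index, and INDEX_SCHEMA_VERSION from the comment above, with fetch_blob as a hypothetical stand-in for re-downloading and decrypting a blob; the json_extract call assumes an SQLite build with the JSON1 extension, which modern Python bundles include.

    import sqlite3

    def fetch_blob(photo_id: str) -> bytes:
        # Hypothetical: re-download and decrypt the blob from the server.
        return b""

    def reindex_stale(db: sqlite3.Connection, batch: int = 100) -> int:
        # Process a bounded batch of out-of-date rows, so the work can be
        # spread over time (or skipped entirely if it proves too costly).
        rows = db.execute(
            "SELECT photo_id FROM photo_index"
            " WHERE json_extract(record, '$.schema') < ?"
            " LIMIT ?",
            (INDEX_SCHEMA_VERSION, batch)).fetchall()
        for (photo_id,) in rows:
            index_photo(fetch_blob(photo_id), photo_id, db)
        return len(rows)  # call repeatedly until it returns 0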
So you're saying you DON'T need to keep the photograph on the device, and it can be retrieved as needed, like I was saying? Your scheme only requires keeping the index. I don't understand why you asked that question in the first place.