Instead of saying "I'm going to try this, maybe it will work," ask whether wavelet transforms are appropriate for the domain you're modeling. Don't transform data in the hope that it will magically work.
Do you know how we got penicillin? Alexander Fleming didn't keep a clean lab.
Do you know how we discovered radioactivity? Henri Becquerel realized his photographic plates had been darkened after being left in a drawer with uranium salts.
Do you know how electric guitar distortion was discovered? Willie Kizart's amplifier was damaged on the way to the "Rocket 88" session, and the band kept the fuzzy sound.
Worse things have happened than experimenting by throwing one more transform on your inputs before processing them.
Sure, and I understand the sentiment. But in practice, it's best not to build or deploy models you don't understand: edge cases in ML models that fully automate business decisions can be dangerous. The likelihood of a penicillin-level discovery happening while I'm training a model to make better marketing budget decisions for a company is quite low.
Hypothetically, if the computer could try both of these experiments at no cost to you, and tell you whether each one improved things, does asking whether wavelets are appropriate for the domain even matter?
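For what it's worth, the "try both and let the data decide" experiment really is cheap to run. Below is a minimal sketch in pure NumPy — the one-level Haar transform, the toy signals, and the nearest-centroid classifier are all illustrative assumptions, not anyone's production pipeline — comparing cross-validated accuracy with and without a wavelet feature transform:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_features(X):
    # One level of the Haar wavelet transform per row:
    # approximations (local averages) plus magnitudes of the
    # details (local differences), which capture high-frequency
    # energy regardless of its sign.
    a = (X[:, 0::2] + X[:, 1::2]) / np.sqrt(2)
    d = (X[:, 0::2] - X[:, 1::2]) / np.sqrt(2)
    return np.hstack([a, np.abs(d)])

def cv_accuracy(X, y, k=5):
    # k-fold cross-validated accuracy of a nearest-centroid classifier.
    folds = np.array_split(rng.permutation(len(y)), k)
    accs = []
    for fold in folds:
        train = np.ones(len(y), dtype=bool)
        train[fold] = False
        cents = np.stack([X[train & (y == c)].mean(axis=0) for c in (0, 1)])
        pred = np.argmin(((X[fold, None] - cents) ** 2).sum(axis=-1), axis=1)
        accs.append((pred == y[fold]).mean())
    return float(np.mean(accs))

# Toy 2-class signals: class 1 carries a high-frequency wiggle whose
# sign flips randomly per example, so the two class means are nearly
# identical in the raw space and a centroid classifier has little to go on.
n, length = 200, 32
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, length))
signs = rng.choice([-1.0, 1.0], size=(y == 1).sum())[:, None]
X[y == 1] += signs * np.tile([1.0, -1.0], length // 2) * 2.0

acc_raw = cv_accuracy(X, y)
acc_wav = cv_accuracy(haar_features(X), y)
print(f"raw: {acc_raw:.2f}  wavelet: {acc_wav:.2f}")
```

On data like this, where the discriminative signal lives in high-frequency energy rather than in the raw means, the wavelet features tend to win; on data without that structure the comparison would come out the other way — which is exactly the point of running it.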