Well, it really depends on the task. If it can be done with a regex, use a regex. We can't make categorical statements about LLMs being better.
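To make that concrete, here's a minimal sketch of a hypothetical task (pulling ISO dates out of log lines, made-up data) where a regex is reliable, deterministic, and essentially free compared to an LLM call:

```python
import re

# Hypothetical task: extract ISO 8601 dates from log lines.
# A regex handles this reliably and cheaply; no LLM needed.
log = "2023-04-01 INFO start; 2023-04-02 ERROR crash"
dates = re.findall(r"\d{4}-\d{2}-\d{2}", log)
print(dates)  # → ['2023-04-01', '2023-04-02']
```

For well-specified, pattern-shaped extraction like this, the regex also gives you something an LLM can't: the same output every single run.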
You can also probably distill a large model into a smaller one while keeping most of its performance. DistilBERT is almost as good as BERT at a fraction of the inference cost.
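The core idea behind that kind of distillation is training the student to match the teacher's temperature-softened output distribution rather than just hard labels. A toy sketch of the loss (the logits and temperature here are made-up illustration values, not from any real model):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_div(p, q):
    # KL(p || q): how far the student's distribution q is from the teacher's p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits over 3 classes for a single example.
teacher_logits = [4.0, 1.0, 0.5]
student_logits = [3.5, 1.2, 0.4]

T = 2.0  # distillation temperature (illustrative choice)
loss = kl_div(softmax(teacher_logits, T), softmax(student_logits, T))
print(f"distillation loss: {loss:.4f}")
```

In a real setup this term is minimized by gradient descent alongside the ordinary cross-entropy on ground-truth labels; the sketch just shows what the student is being pulled toward.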
GPT-3.5 and GPT-4 also aren't currently deterministic even at temperature zero, which is a nightmare for debugging.