Even then, threads are rarely the correct architectural abstraction for your code. Something like a work-stealing job scheduler that, under the hood, happens to have spun up as many worker threads as there are cores on the system generally works far better. That is, there should ideally be only one line in your code base that says "new Thread()" or the like.
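A minimal sketch of that "one place that spins up threads" idea, in Python for illustration. ThreadPoolExecutor isn't a work-stealing scheduler, but the shape is the same: one pool sized to the machine, and everything else just submits jobs and never touches threads directly.

```python
import os
from concurrent.futures import ThreadPoolExecutor

# The pool sizes itself to the machine; nothing else in the
# code base ever spawns a thread directly.
_POOL = ThreadPoolExecutor(max_workers=os.cpu_count())

def submit(job, *args):
    """The one place work gets handed to worker threads."""
    return _POOL.submit(job, *args)

# Callers just submit small jobs and wait on futures; they never
# know or care how many workers are behind the pool.
futures = [submit(pow, 2, n) for n in range(8)]
results = [f.result() for f in futures]
print(results)  # [1, 2, 4, 8, 16, 32, 64, 128]
```

The point isn't the pool implementation; it's that the rest of the code talks to `submit()` and would look identical over a real work-stealing scheduler.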
There's "using threads at all" and there's "using threads as a component of your exported architecture". The vast majority of your code should work the same whether there's one worker thread or a hundred. You shouldn't be using threads just to save state while some long-running I/O operation blocks, for instance. Even if you have computationally expensive work that screams "threads", you should split it into small jobs backed by a priority queue anyway, because that's how you most effectively distribute work whether you have one core or many.
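A sketch of that small-jobs-plus-priority-queue shape, again in Python for illustration. The worker count is a single knob; nothing else in the code changes whether it's 1 or 100. (The sequence counter is just a tie-breaker so equal-priority entries never compare the jobs themselves.)

```python
import itertools
import queue
import threading

jobs = queue.PriorityQueue()
seq = itertools.count()   # tie-breaker for equal priorities
results = []
lock = threading.Lock()

def worker():
    while True:
        _prio, _n, job = jobs.get()
        if job is None:       # sentinel: shut this worker down
            return
        out = job()
        with lock:
            results.append(out)

# The same code works with 1 worker or 100.
NUM_WORKERS = 4
threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

# Enqueue small jobs; the lowest priority number runs first
# whenever a worker is free.
for prio, n in [(2, 10), (0, 3), (1, 7)]:
    jobs.put((prio, next(seq), lambda n=n: n * n))
for _ in threads:
    jobs.put((99, next(seq), None))   # one shutdown sentinel per worker
for t in threads:
    t.join()

print(sorted(results))  # [9, 49, 100]
```

Because the jobs are small and queued, the scheduler decides where they run; long-running I/O would similarly become "enqueue a continuation when the I/O completes" rather than a parked thread.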
A thread pool can have two threads pick up tasks and run them concurrently. What you're describing is basically an event loop that runs one task at a time (on any core), giving you sequential execution.
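To make the contrast concrete, a Python sketch: a pool capped at one worker behaves like the event loop described here, running tasks strictly one at a time in submission order, so shared state needs no locking.

```python
from concurrent.futures import ThreadPoolExecutor

# A one-worker pool acts like an event loop: tasks run one at a
# time, in submission order.
loop = ThreadPoolExecutor(max_workers=1)

log = []  # touched only from the single worker thread, so no lock

def task(i):
    log.append(i)

futures = [loop.submit(task, i) for i in range(5)]
for f in futures:
    f.result()
loop.shutdown()

print(log)  # [0, 1, 2, 3, 4], strictly sequential

# By contrast, with max_workers=2 two of these tasks could run at
# the same time: `log` would need a lock, and the final ordering
# would no longer be guaranteed.
```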