There's the linguistic principle of "You shall know a word by the company it keeps": for any particular word you can identify which other words are most specifically related to it. The simplest measure for that is freq(both words together) / freq(the other word in general).
That would allow you to prioritize sentences containing "driving a car" over "getting a car": even if "getting a car" is more frequent, "driving" is more specific to "car" according to such a measure.
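In code that measure could look something like this toy sketch (sentence-level co-occurrence, naive whitespace tokenization, and a made-up corpus just to show the effect):

    from collections import Counter

    def association_scores(sentences, target):
        # Score how specifically each word is tied to `target`:
        # freq(word appears alongside target) / freq(word overall).
        word_freq = Counter()
        cooc_freq = Counter()
        for sent in sentences:
            tokens = sent.lower().split()
            word_freq.update(tokens)
            if target in tokens:
                cooc_freq.update(w for w in tokens if w != target)
        return {w: cooc_freq[w] / word_freq[w] for w in cooc_freq}

    # "driving" co-occurs with "car" every time it appears, "getting" only
    # some of the time, so "driving" ends up with the higher score.
    corpus = [
        "getting a car",
        "getting a job",
        "getting a cold",
        "driving a car",
        "driving a car fast",
    ]
    print(sorted(association_scores(corpus, "car").items(), key=lambda kv: -kv[1]))

On that toy corpus "driving" scores 1.0 against 0.33 for "getting".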
Hmm, maybe I've been overcomplicating the problem in my mind. You've given me some good ideas.
Bigrams, as your own example shows, are too simple: in both phrases, "car" would get related to "a" rather than to "getting" or "driving".
Maybe I should parse all sentences with a dependency parser, build dependency bigrams out of the head/child pairs, and score sentences by frequency/inverse frequency plus sentence length (shorter sentences are better).
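Something like this rough sketch is what I have in mind, assuming spaCy for the parse; the exact scoring (specificity of dependency bigrams that touch the target word, divided by a square-root length penalty) is just a first guess:

    # Needs: pip install spacy && python -m spacy download en_core_web_sm
    import math
    from collections import Counter

    import spacy

    nlp = spacy.load("en_core_web_sm")

    def dependency_bigrams(doc):
        # (head lemma, child lemma) pairs, skipping punctuation and the root's self-link.
        for tok in doc:
            if not tok.is_punct and not tok.head.is_punct and tok.head is not tok:
                yield (tok.head.lemma_, tok.lemma_)

    def score_sentences(texts, target):
        docs = list(nlp.pipe(texts))
        bigram_freq = Counter()
        head_freq = Counter()
        for doc in docs:
            for head, child in dependency_bigrams(doc):
                bigram_freq[(head, child)] += 1
                head_freq[head] += 1
        scored = []
        for doc, text in zip(docs, texts):
            # Specificity of dependency bigrams touching the target word,
            # divided by a length penalty so shorter sentences rank higher.
            specificity = sum(
                bigram_freq[(h, c)] / head_freq[h]
                for h, c in dependency_bigrams(doc)
                if target in (h, c)
            )
            scored.append((specificity / math.sqrt(len(doc)), text))
        return sorted(scored, reverse=True)

    for score, text in score_sentences(
        ["I am getting a new car tomorrow.", "Driving a car is fun."], "car"
    ):
        print(f"{score:.2f}  {text}")

On those two sentences the "Driving a car" one comes out on top, since "drive" heads "car" every time it appears.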