Remaking Google requires engineering talent and lots of clickstream data (i.e. hundreds of billions of search queries plus information about which search results were clicked by which users).
Nobody else has this, and until they do, they won't be able to recreate Google.
Isn't it more like statistical certainty: i.e. you need a million or so queries to cover the range of possible outcomes, and the hundreds of billions just take you from 0.999999 certainty to 0.9999999999999999999?
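That intuition holds if the problem is one fixed query. Here's a back-of-the-envelope sketch of it (my own assumption, not anyone's real numbers): treat how good a result is for a single query as a binomial proportion (its click-through rate), whose uncertainty shrinks like 1/sqrt(n), and see how little extra certainty you get past a million samples.

```python
import math

def ctr_standard_error(p: float, n: float) -> float:
    """Standard error of an observed CTR p after n impressions of one query."""
    return math.sqrt(p * (1 - p) / n)

p = 0.3  # hypothetical true CTR of one result -- made up for illustration
for n in (1e6, 1e9, 1e11):
    print(f"n = {n:.0e}: standard error ~ {ctr_standard_error(p, n):.1e}")

# Prints roughly 4.6e-04, 1.4e-05 and 1.4e-06: for any single query,
# going from a million to a hundred billion samples barely moves the needle.
```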
You do NOT get added benefit from big data by taking it from millions to hundreds of billions. It becomes just an exercise in scale, without bringing any added insight.
ML methods, however, do benefit from taking the data from millions to billions.
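A rough illustration of the difference (the constants here are invented, not real measurements): learning curves for ML models are often reported to follow an approximate power law, so error keeps falling meaningfully from millions to billions of examples, unlike the single-proportion case above.

```python
# Toy learning curve: err(n) = a * n**(-b) + c. The power-law shape is a
# common empirical finding; these constants are made up purely to show the shape.
def power_law_error(n: float, a: float = 5.0, b: float = 0.2, c: float = 0.02) -> float:
    return a * n ** (-b) + c

for n in (1e6, 1e9, 1e11):
    print(f"n = {n:.0e}: error ~ {power_law_error(n):.3f}")

# With these invented constants the error goes ~0.34 -> ~0.10 -> ~0.05,
# i.e. the jump from millions to billions of examples still buys a lot.
```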
You can see this yourself with Google by searching for something obscure (e.g. "how to unblock a drain jammed with cat fur") and getting three friends to click the 2nd search result. Come back in a week, and suddenly the 2nd result is now the first result! (Ignore the videos and the answer box.)
It turns out just three new data points are enough to influence ranking for that query. It even affects related queries: swap the word "jammed" for "gunked" and the ranking shifts there too.
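To make the mechanism concrete, here's a toy re-ranker (mine, not how Google actually ranks; every URL and count below is made up): with a CTR-style signal plus a small prior, a long-tail query with only a dozen impressions is easily flipped by three fresh clicks.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    impressions: int
    clicks: int

def smoothed_ctr(r: Result, prior_clicks: float = 1.0, prior_impressions: float = 10.0) -> float:
    """CTR estimate with a small prior so near-zero-traffic results aren't 0/0."""
    return (r.clicks + prior_clicks) / (r.impressions + prior_impressions)

results = [
    Result("drainfixers.example/cat-fur", impressions=12, clicks=2),   # currently 1st
    Result("plumbingforum.example/thread", impressions=10, clicks=1),  # currently 2nd
]

# Three friends search the obscure query and all click the 2nd result.
results[1].impressions += 3
results[1].clicks += 3

reranked = sorted(results, key=smoothed_ctr, reverse=True)
print([r.url for r in reranked])  # the old 2nd result now outranks the old 1st
```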
Nobody else has this, and until they do, they won't be able to recreate Google.