How we scrape 300k prices per day from Google Flights (medium.com/brisk-voyage)
47 points by gusgordon on June 13, 2020 | 67 comments



> This isn’t an astronomical number, but it’s large enough that we (at least, as a bootstrapped company) have to care about cost efficiency.

... by externalizing the costs to a third party.

In general, I'm really surprised that they published this article. It's like they described exactly the data that somebody working on preventing scraping would need to block this traffic, in totally unnecessary level of detail. (E.g. telling exactly which ASN this traffic would be arriving from, describing the very specific timing of their traffic spikes, the kind of multi-city searches that probably see almost no organic traffic).

I just don't get it. It's like they're intentionally trying to get blocked so that they can write a follow-up "how Google blocked our bootstrapped business" blog post.


Or they just don't understand that what they are doing is illegal.

I'm always surprised by the level of ignorance, but I've seen more than one startup burn because the founders didn't understand which taxes were due and, thus, failed to account for them in their pricing.


? https://news.ycombinator.com/item?id=22180559 I don't think it's unethical in the age of the ad-driven web.


Scraping public data is not illegal in the US.

> I'm always surprised by the level of ignorance

Such as the ignorance displayed in your comment?


> what they are doing is illegal

It's not illegal. Google can sue them and bury them in court fees and potentially win a civil suit, but it sure as hell isn't illegal.


I'm pretty sure this is legal wrt CFAA because they are not circumventing any access control mechanism.

However, 300k requests per day is surely enough that this could be considered a sort of denial-of-service attack and a violation of fair use.

If Google wanted to, they could scale down their servers for a day, wait for this traffic peak to hit, document how it made the service unavailable to others, and now you have a valid offense to sue for.


I am surprised people think this could affect the servers. If they scrape 300k prices from 25k pages a day, and the crawl runs every hour from 15-22 UTC, then the 25k pages are spread across ~3.5k pages per crawl. Even if a crawl is aggressive and completes within a minute, that is still only ~58 QPS.
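For reference, the arithmetic as a quick sketch (the figure of 7 hourly crawls is inferred from the stated 15-22 UTC window, not stated in the article):

    # Back-of-the-envelope QPS from the numbers above
    pages_per_day = 25_000
    crawls_per_day = 7                                # assumed: hourly in the 15-22 UTC window
    pages_per_crawl = pages_per_day / crawls_per_day  # ~3,571 pages (rounded to 3.5k above)
    worst_case_qps = pages_per_crawl / 60             # if a crawl finishes within a minute
    print(f"{pages_per_crawl:,.0f} pages/crawl, ~{worst_case_qps:.0f} QPS worst case")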


They are using residential proxies to evade rate-limiting by google. I don’t know if it’s enough to trigger CFAA, but it shows at a minimum that they know that what they are doing is not what google considers fair use.


> what they are doing is illegal

Illegal means it violates the criminal code.

> now you have a valid offense to sue for

That's civil, not criminal.

It's pedantic, but words matter.


The court decision in the hiQ v. LinkedIn case limits the CFAA's reach: scraping of publicly accessible data "likely does not violate the CFAA".

https://www.eff.org/deeplinks/2019/09/victory-ruling-hiq-v-l...


> now you have a valid offense to sue for.

Yes. That's also what I said above when I said "Google can sue them". It's not illegal is my entire point. You can sue anyone, for any reason, at any time, and even win. That doesn't make something illegal.

If Google sends them a C&D, and expressly forbids them from doing this activity, and implements technical measures to prevent them from doing so, and they continue doing so then they may start approaching the area of illegal (Craigslist v. 3Taps would agree, hiQ v. LinkedIn would disagree).


> denial of service attack

300k requests / day is a little over 3 per second. That's not much.


Unethical? Yes. Illegal? How?


It’s not even unethical. Google publishes the data to the public for all to see.


Unethical to build a business scraping data from a company that makes money scraping data?


The Google Flights data is not "scraped." They interface directly to the airline reservations systems.


I agree with you in principle, but having worked on both sides of this, I think there's very little chance they get blocked at their current traffic levels.

I do think that if they ever get traction they'll have a lot of problems - there's a reason GDS access to flight availability is slow, expensive, and difficult to implement well. Scraping definitely won't scale.


> E.g. telling exactly which ASN this traffic would be arriving from

The article mentions that they are using rotating residential proxies.


Ah, thanks! I missed that part in the writeup.


There's a very high chance this is monetization of botnets, which adds even more to the overall shadiness.


This is almost certainly either monetization of botnets or inadvertently installed adware.


Why would they need those proxies if what they are doing is fully legal?


You do know that "illegal" means "against the law", right? And when you say something is illegal, you need to produce the law it violates; a company trying to prevent you from doing something is not proof of that.


> The crawl function reads a URL from the SQS queue, then Pyppeteer tells Chrome to navigate to that page behind a rotating residential proxy. The residential proxy is necessary to prevent Google from blocking the IP Lambda makes requests from.

I am very interested in what a 'rotating residential proxy' is. Are they routing requests through random people's internet connections? Are these people willing participants? Where do they come from?
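For context, the mechanics in the quote look roughly like this in Pyppeteer (a minimal sketch, not their actual code; the proxy gateway address is a placeholder, and real providers also require authentication):

    import asyncio
    from pyppeteer import launch

    # Hypothetical rotating-proxy gateway: a single endpoint, behind which
    # the provider assigns a different residential exit IP per session.
    PROXY = "http://gateway.residential-proxy.example:7777"

    async def fetch(url):
        # Route all of Chrome's traffic through the proxy endpoint
        browser = await launch(args=[f"--proxy-server={PROXY}"])
        page = await browser.newPage()
        await page.goto(url)
        html = await page.content()
        await browser.close()
        return html

    html = asyncio.get_event_loop().run_until_complete(
        fetch("https://www.google.com/flights"))

The answers below cover where the residential IPs themselves come from.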


Check out Luminati for example. They have a huge network of true residential IPs to exit traffic from, and you have to pay a hefty premium per GB of traffic to do so ($12.50 per GB for rotating residential IPs, but requires a minimum $500 commitment per month). The reason they can offer this is because they're exiting traffic through the users of the free Hola VPN Chrome extension.
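To get a rough sense of what that pricing would mean at the article's volume (every number below is an assumption, especially the per-page transfer; the provider the article actually uses, packetstream.io per a comment downthread, is cheaper):

    # Rough proxy-cost estimate at $12.50/GB rotating-residential pricing
    pages_per_day = 25_000      # page count from the article
    mb_per_page = 1.0           # assumed average transfer per Flights page
    gb_per_day = pages_per_day * mb_per_page / 1024   # ~24.4 GB/day
    cost_per_day = gb_per_day * 12.50                 # ~$305/day
    print(f"~{gb_per_day:.1f} GB/day -> ~${cost_per_day:.0f}/day")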


It looks like that's basically what they are (https://smartproxy.com/blog/what-is-a-residential-proxies-ne...).

A residential proxy is listed as an "IP address provided by an Internet Service Provider", but I still don't really understand how they get access to them. ISPs have to be selling them access, right?


My guess is free VPN services and browser extensions which resell your residential network in tiny chunks.


Yes, to all your questions.

https://luminati.io/

Providers of the 'free' Hola VPN.


How awful.

"80M+ Monthly devices hosting Luminati's SDK" & "100% Peers chose to opt-in to Luminati's network" (https://luminati.io/network-details)

There is a 0% chance that 80M+ users are agreeing to "I am OK with Luminati selling access to my home internet connection to any party able to pay", which seems like an honest description of their business model. More likely Luminati is paying unscrupulous app developers to include this SDK in their apps, and some bury the legalese in 10,000-word install-time agreements that no one reads.


People who use the network also participate in the network themselves.


I think you can make a reasonable argument that Hola VPN is largely exploiting users who don't actually understand and consent to having their IP address and connection used as a proxy.


To those lamenting that they're scraping... Google is the biggest scraper of them all. Facebook, Amazon, Google, Microsoft. All the big boys scrape voraciously, yet try their best to block themselves from being scraped. Scraping is vital for the functionality of the internet. The narrative that scraping is evil is what big companies want you to think.

When you block small scrapers from your site but permit giants like Googlebot and Bing, all you're doing is locking in a monopoly that's bad for everyone.


Google has the (often implicit) permission of the website owner to scrape. OTOH, Google Flights explicitly disallows scraping results.


No, Google's scraping is opt-out only, which they offer to be friendly.

Google does not need anyone's permission to scrape publicly accessible data, and they are not required to follow any opt-out requests.
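The opt-out being referred to is robots.txt. You can check what a given crawler is nominally allowed to fetch with Python's standard library (an illustration of the mechanism only, not a claim about what Google's actual robots.txt permits):

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://www.google.com/robots.txt")
    rp.read()
    # True/False: whether the named user agent may fetch the given URL
    print(rp.can_fetch("Googlebot", "https://www.google.com/travel/flights"))

As the parent says, honoring it is a convention, not a legal obligation.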


It's ironic to write an article like that while their own ToS states:

> As a user of the Site, you agree not to:

> 1. Systematically retrieve data or other content from the Site to create or compile, directly or indirectly, a collection, compilation, database, or directory without written permission from us.


The irony is even deeper when you look at the other side: Google made most of its money off scraping data from people in different forms.

It's data scraping/middlemen all the way down... I wonder if Google indexes their scrape results to throw some loops in the mix.


Google has respected no-scrape headers for decades, no?


Google also curates and republishes data from a lot of sites, including news sites and informational sites that significantly reduces traffic to other sites, etc. There's a lot of data Google scraped that wasn't necessarily explicitly given permission to outside their page crawler. They chose "beg for forgiveness" over "ask for permission" in many cases.

The point being that there's irony in every direction, the proverbial "pot calling the kettle black." Lots of irony in both directions.


Just because it's in the ToS doesn't mean it's enforceable. That line is not enforceable in the US.


It's strange that they write about this so openly. Aren't they wary that someone at Google Flights will read it and try to block them? (E.g. by scrambling the page's code.)


Google doesn't need to block them on a technical level; they just need to send a simple C&D. If Brisk keeps scraping without permission after that, they can look forward to a financially ruinous legal battle [1]. Or they could just not blog about what they're doing and fly under the radar for years and years without any concern.

[1] https://www.eff.org/deeplinks/2019/09/victory-ruling-hiq-v-l...


This isn't really anything new that Google isn't aware of.


I don't think it's possible to "scramble" a site to be unscrapeable - it has to render at some point.


You're supposed to break rules and laws in the early days; it's part of the startup lore.

Blogging about it publicly, as they're doing it: that may appear newish, but I'm sure some other startup did that 15 years ago.


Interesting. A scraper scraping a scraper. I don't get what the value add is over clients just searching Google Flights directly. Not trying to be mean, just trying to understand.


Google Flights isn’t a scraper; it’s an evolution of ITA Matrix from what I remember, connected directly to the GDS. They aren’t piggybacking on someone else’s servers.

Which is what this guy could have done, instead of behaving like pond scum. It’s not like it’s particularly complicated to get programmatic access to a GDS API, that’s what they’re there for.


Pond scum? He's scraping some data from a company that got rich scraping data and that will probably just tell him to stop doing it. I mean, if he's pond scum, what level of scum are those guys with upshot sites? What level of scum is Mark Zuckerberg? Pond scum is generally supposed to be pretty scummy; I can think of thousands of people scummier than someone scraping from Google.


It’s expensive to get access to a GDS API and, from what I’ve heard, the data they provide is quite difficult to work with. There’s a reason Google bought ITA for $700m, right? If this project ever grows, it could make sense to pull from a GDS.


> It’s expensive to get access to a GDS API and, from what I’ve heard, the data they provide is quite difficult to work with.

Well, it’s expensive to provide live answers for flight search queries across hundreds of airlines and thousands of airports... Some of the old booking interfaces are ugly, but for simple searching most of them provide relatively sane REST/JSON.

I don’t understand your attitude: steal it until you make it?


That's the attitude of just about every successful company in history. Once large enough, some of them (e.g. YouTube) even force industry-wide changes to accommodate all the theft that made them successful.

Meanwhile, on the topic of attitudes: referring to a startup as 'pond scum' simply because they scrape an extremely expensive data set, especially in an industry with a long and controversial history of strategies designed to avoid price transparency... hmm.


Cool. I did not know that. Thanks for the clarification.


Well, Google Flights is probably the best publicly available data on flight prices.

> Brisk Voyage finds cheap, last-minute weekend trips for our members. The basic idea is that we continuously check a bunch of flight and hotel prices, and when we find a trip that’s a low-priced outlier, we send an email with booking instructions.

Edit: Ok, this could actually be interesting. At least for a short while. :)


Flights isn't really the best way of getting cheap flights. They pepper the results, especially if they think you're scraping (which they probably do). Matrix is more accurate. Using a GDS is even more accurate, but that costs money.


Hey Gus, you might be interested in https://pricelinepartnernetwork.com/ (take a look at the API part for example)

(Disclaimer: I work for priceline).


The way I read it, they scrape 25k pages per day?

I wonder if that could already bring them onto Google's radar. If so, Google would probably send a cease-and-desist letter and this startup would simply give up.

I wonder if Google would also demand their legal expenses? Probably a couple thousand dollars?

I know, nobody would go to court against Google - but what would happen if this did go to court? Which laws would Google cite to deem this illegal?


Reader mode, in case you'd rather not read it on Medium: https://baitblock.app/read/medium.com/brisk-voyage/how-we-sc...


All the (AWS) technologies used are totally unnecessary: SQS/DynamoDB/Lambda. I could buy a $500 laptop at Walmart and do all the scraping on Starbucks wifi.


Right, it seems like they overbuilt this hacky solution. You are scraping; eventually you just need to subscribe to the data. Why invest that much effort in a temporary solution?


Lambda is needed to get rotating IPs and scale while avoiding browser fingerprinting. SQS takes the results of those scrapes and puts them into a database, DynamoDB. It's a straightforward web scraping pipeline.
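In outline, the Lambda side of such a pipeline might look like this (a sketch under assumed names: the table name, message format, and the scrape_price stub are all placeholders, not the article's code):

    import json
    import boto3

    table = boto3.resource("dynamodb").Table("prices")  # assumed table name

    def scrape_price(url):
        # Stand-in for the Pyppeteer crawl described in the article.
        # Returned as a string because DynamoDB rejects Python floats.
        return "123.45"

    def handler(event, context):
        # With an SQS trigger, Lambda receives queue messages in batches
        # via event["Records"]; each message body is assumed to be JSON.
        for record in event["Records"]:
            url = json.loads(record["body"])["url"]
            table.put_item(Item={"url": url, "price": scrape_price(url)})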


Lambda isn't enough. You'll get blocked in a heartbeat. You still need a proxy service.


No, they access Google over residential proxy servers provided by packetstream.io.


Of course it's unnecessary. The point is that you can do it in the cloud, instead of on a laptop at Starbucks...


You state that you care about costs, but you end up using some of the most expensive cloud offerings out there?


I'm torn about their account. It's true that you could easily scrape 25k pages per day on a small VPS that costs less than the $50 Lambda costs they mentioned. And in order to scrape from that VPS you wouldn't have to engineer this much with getting Chrome to run in Lambda, batching URLs, and you wouldn't worry about Lambda timeouts because you could run the whole scrape in one session more or less. So you could say that the engineering effort they spent building this was a waste of money.

On the other hand, if they ever do need to scale up for whatever reason (information spread across more pages, or they need to scrape more services, or need multiple attempts per URL), all they have to do is push a button, at which point the upfront engineering effort will have paid off. Either way, their current Lambda costs are definitely eclipsed by the costs of paying for the residential proxy IPs. My two cents.


Seriously, Lambda does not make sense for their use case.


This is awesome


The Internet is not a series of tubes. It's a series of leeches...



