> This isn’t an astronomical number, but it’s large enough that we (at least, as a bootstrapped company) have to care about cost efficiency.
... by externalizing the costs to a third party.
In general, I'm really surprised that they published this article. It's like they described exactly the data that somebody working on preventing scraping would need to block this traffic, in a totally unnecessary level of detail: exactly which ASN this traffic arrives from, the very specific timing of their traffic spikes, the kind of multi-city searches that probably see almost no organic traffic. (A toy blocking rule along those lines is sketched below.)
I just don't get it. It's like they're intentionally trying to get blocked so that they can write a follow-up "how Google blocked our bootstrapped business" blog post.
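To make that concrete, here's a toy sketch (entirely my invention, with a placeholder ASN, not anything Google actually runs) of the kind of rule the post practically writes for an anti-abuse engineer:

```python
from dataclasses import dataclass
from datetime import datetime

# Toy illustration only -- not Google's actual logic. The point is that the
# post hands an anti-abuse engineer every feature a simple rule would need.
SUSPECT_ASNS = {"AS64496"}       # placeholder ASN; the post names the real one
CRAWL_HOURS_UTC = range(15, 23)  # the post says crawls run 15-22 UTC

@dataclass
class Request:
    asn: str
    ts: datetime
    is_multi_city: bool          # the odd multi-city searches with ~no organic traffic

def looks_like_this_scraper(r: Request) -> bool:
    return (r.asn in SUSPECT_ASNS
            and r.ts.hour in CRAWL_HOURS_UTC
            and r.is_multi_city)
```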
Or they just don't understand that what they are doing is illegal.
I'm always surprised by the level of ignorance, but I've seen more than one startup burn because the founders didn't understand which taxes were due and, thus, failed to account for them in their pricing.
I'm pretty sure this is legal wrt CFAA because they are not circumventing any access control mechanism.
However, 300k requests per day is surely enough that this could be considered a sort of denial-of-service attack, which goes well beyond any fair-use argument.
If Google wanted to, they could scale down their servers for a day, wait for this traffic peak to hit, document how it made the service unavailable to others, and now they'd have a valid offense to sue over.
I am surprised people think this could affect the servers. If they scrape 300k prices from 25k pages a day, and the crawl runs every hour from 15-22 UTC, that works out to roughly 3.5k pages per crawl. Even if a crawl is aggressive and completes within a minute, that is still only ~58 QPS.
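Spelled out, using the numbers from the post:

```python
pages_per_day = 25_000
crawls_per_day = 7                                # hourly, 15-22 UTC
pages_per_crawl = pages_per_day / crawls_per_day  # ~3,570, i.e. "~3.5k"
worst_case_qps = pages_per_crawl / 60             # ~58-60 QPS if a crawl finishes in a single minute
```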
They are using residential proxies to evade rate-limiting by Google. I don't know if it's enough to trigger the CFAA, but it shows at a minimum that they know that what they are doing is not what Google considers fair use.
Yes. That's also what I said above when I said "Google can sue them". My entire point is that it's not illegal. You can sue anyone, for any reason, at any time, and even win. That doesn't make something illegal.
If Google sends them a C&D, expressly forbids them from this activity, implements technical measures to prevent it, and they continue anyway, then they may start approaching illegal territory (Craigslist v. 3Taps would agree; hiQ v. LinkedIn would disagree).
I agree with you in principle, but having worked on both sides of this, I think there's very little chance they get blocked at their current traffic levels.
I do think that if they ever get traction they'll have a lot of problems - there's a reason GDS access to flight availability is slow, expensive, and difficult to implement well. Scraping definitely won't scale.
You do know that "illegal" means "against the law", right? When you say something is illegal, you need to produce the law it violates; a company trying to prevent you from doing something is not proof of anything.
> The crawl function reads a URL from the SQS queue, then Pyppeteer tells Chrome to navigate to that page behind a rotating residential proxy. The residential proxy is necessary to prevent Google from blocking the IP Lambda makes requests from.
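In other words, presumably something like this (my sketch, not their code; the proxy endpoint and search URL are invented):

```python
import asyncio
from pyppeteer import launch

PROXY = "proxy.example.com:8000"   # hypothetical rotating-residential endpoint

async def crawl(url: str) -> str:
    # Route Chrome's traffic through the proxy so each request exits from a
    # residential IP instead of an AWS address Google could blanket-block.
    browser = await launch(args=[f"--proxy-server=http://{PROXY}"])
    try:
        page = await browser.newPage()
        await page.authenticate({"username": "user", "password": "pass"})  # proxy creds
        await page.goto(url, waitUntil="networkidle2")
        return await page.content()   # raw HTML, prices parsed out downstream
    finally:
        await browser.close()

html = asyncio.get_event_loop().run_until_complete(
    crawl("https://www.google.com/flights")   # placeholder search URL
)
```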
I am very interested in what a 'rotating residential proxy' is. Are they routing requests through random people's internet connections? Are these people willing participants? Where do they come from?
Check out Luminati for example. They have a huge network of true residential IPs to exit traffic from, and you have to pay a hefty premium per GB of traffic to do so ($12.50 per GB for rotating residential IPs, but requires a minimum $500 commitment per month). The reason they can offer this is because they're exiting traffic through the users of the free Hola VPN Chrome extension.
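For scale: at 25k pages/day, the minimum commitment doesn't go far (the page-weight budget below is my arithmetic, not a number from the post):

```python
price_per_gb = 12.50                    # rotating residential, per above
included_gb = 500 / price_per_gb        # the $500 minimum buys 40 GB/month
pages_per_month = 25_000 * 30           # 750k page loads
kb_budget_per_page = included_gb * 1024 * 1024 / pages_per_month  # ~56 KB/page
# A full headless-Chrome page load is far heavier than 56 KB unless you block
# images/scripts, so the real bill likely lands well above the $500 floor.
```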
A residential proxy is described as an "IP address provided by an Internet Service Provider", but I still don't really understand how they get access to them. ISPs have to be selling them access, right?
There is a 0% chance that 80M+ users are agreeing to "I am OK with Luminati selling access to my home internet connection to any party able to pay", which would be an honest description of their business model. More likely, Luminati is paying unscrupulous app developers to include their SDK in their apps, with some legalese buried in 10,000-word install-time agreements that no one reads.
I think you can make a reasonable argument that Hola VPN is largely exploiting users who don't actually understand and consent to having their IP address and connection used as a proxy.
To those lamenting that they're scraping: Google is the biggest scraper of them all. Facebook, Amazon, Google, Microsoft: all the big boys scrape voraciously, yet try their best to avoid being scraped themselves. Scraping is vital to the functioning of the internet. The narrative that scraping is evil is what big companies want you to think.
When you block small scrapers from your site but permit giants like Googlebot and Bing, all you're doing is locking in a monopoly that's bad for everyone.
It's ironic to write an article like that while their own ToS states:
> As a user of the Site, you agree not to:
> 1. Systematically retrieve data or other content from the Site to create or compile, directly or indirectly, a collection, compilation, database, or directory without written permission from us.
Google also curates and republishes data from a lot of sites, including news and informational sites, in a way that significantly reduces traffic to those sites. There's a lot of data Google scraped without ever being explicitly given permission beyond what their page crawler picks up. They chose "beg for forgiveness" over "ask for permission" in many cases.
The point being that there's irony in every direction; the proverbial pot calling the kettle black.
It's strange they write about this so openly. Aren't they wary that someone at Google Flights will read it and try to block them? (E.g. by scrambling the page's code.)
Google doesn't need to block them on a technical level; they just need to send a simple C&D. If Brisk keeps scraping without permission after that, they can look forward to a financially ruinous legal battle [1]. Or they could just not blog about what they're doing and fly under the radar for years without any concern.
Interesting. A scraper scraping a scraper. I don't get what the value add is over clients just searching Google Flights directly. Not trying to be mean, just trying to understand.
Google Flights isn't a scraper; it's an evolution of the ITA Matrix from what I remember, directly connected to a GDS. They aren't piggybacking on someone else's servers.
Which is what this guy could have done, instead of behaving like pond scum. It’s not like it’s particularly complicated to get programmatic access to a GDS API, that’s what they’re there for.
Pond scum? He's scraping some data from a company that got rich scraping data, and that will probably just tell him to stop doing it. I mean, if he's pond scum, what level of scum are those guys with upshot sites? What level of scum is Mark Zuckerberg? Pond scum is generally supposed to be pretty scummy; I can think of thousands of people scummier than someone scraping from Google.
It’s expensive to get access to a GDS API and, from what I’ve heard, the data they provide is quite difficult to work with. There’s a reason Google bought ITA for $700m, right? If this project ever grows, it could make sense to pull from a GDS.
> It’s expensive to get access to a GDS API and, from what I’ve heard, the data they provide is quite difficult to work with.
Well, it's expensive to provide live answers to flight search queries across hundreds of airlines and thousands of airports... Some of the old booking interfaces are ugly, but for simple searching most of them provide relatively sane REST/JSON.
I don't understand your attitude: steal it until you make it?
That's the attitude of just about every successful company in history. Once large enough, some of them (e.g. YouTube) even force industry-wide changes to accommodate all the theft that made them successful.
Meanwhile, on the topic of attitudes: referring to a startup as "pond scum" simply because they scrape an extremely expensive data set, in an industry with a long and controversial history of strategies designed to avoid price transparency... hmm.
Well, Google Flights is probably the best publicly available data on flight prices.
> Brisk Voyage finds cheap, last-minute weekend trips for our members. The basic idea is that we continuously check a bunch of flight and hotel prices, and when we find a trip that’s a low-priced outlier, we send an email with booking instructions.
Edit: OK, this could actually be interesting, at least for a short while. :)
Google Flights isn't really the best way of getting cheap flights. They pepper the results, especially if they think you're scraping (which they probably do). The ITA Matrix is more accurate. Using a GDS is even more accurate, but that costs money.
I wonder if that could already put them on Google's radar. If so, Google would probably send a cease-and-desist letter and this startup would simply give up.
I wonder if Google would also demand their legal expenses? Probably a couple thousand dollars?
I know, nobody would go to court against Google - but what would happen if this did go to court? Which laws would Google cite to deem this illegal?
All the (AWS) technologies used are totally unnecessary: SQS/DynamoDB/Lambda. I could buy a $500 laptop at Walmart and do all the scraping on Starbucks wifi.
Right, it seems like they overbuilt this hacky solution. You are scraping; eventually you'll just need to subscribe to the data. Why invest that much effort in a temporary solution?
Lambda is needed to get rotating IPs and scale while avoiding browser fingerprinting. SQS takes the results of those scrapes and puts them into a database, DynamoDB. It's a straightforward web scraping pipeline.
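Presumably the general shape is something like this (my sketch; the queue payload and table name are invented):

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("flight-prices")   # hypothetical table name

def scrape(url: str) -> str:
    ...  # the Pyppeteer-behind-a-proxy step sketched earlier in the thread

def parse_price(html: str) -> int:
    ...  # pull the lowest fare (whole dollars) out of the page HTML

def handler(event, context):
    # SQS-triggered Lambda: one record per queued search URL.
    for record in event["Records"]:
        job = json.loads(record["body"])
        html = scrape(job["url"])
        table.put_item(Item={
            "route": job["route"],        # e.g. "BOS-MIA"
            "date": job["date"],
            "price": parse_price(html),
        })
```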
I'm torn about their account. It's true that you could easily scrape 25k pages per day on a small VPS that costs less than the $50 in Lambda costs they mentioned. And scraping from that VPS wouldn't require nearly as much engineering: no getting Chrome to run in Lambda, no batching URLs, and no worrying about Lambda timeouts, because you could run the whole scrape in more or less one session. So you could say the engineering effort they spent building this was a waste of money.

On the other hand, if they ever do need to scale up for whatever reason (information spread across more pages, more services to scrape, multiple attempts per URL), all they have to do is push a button, at which point the upfront engineering effort will have paid off.

Either way, their current Lambda costs are definitely eclipsed by what they're paying for the residential proxy IPs. My two cents.
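For comparison, the single-VPS version is almost trivial; 25k pages/day is only ~0.3 pages per second sustained. A sketch, assuming the same Pyppeteer stack:

```python
import asyncio
from pyppeteer import launch

async def crawl_all(urls):
    # One long-lived browser for the whole crawl: no Lambda cold starts,
    # no URL batching, no 15-minute timeout to design around.
    browser = await launch()
    page = await browser.newPage()
    results = {}
    for url in urls:                 # sequential is plenty at ~0.3 pages/sec
        await page.goto(url, waitUntil="networkidle2")
        results[url] = await page.content()
    await browser.close()
    return results
```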