> Since the official announcement of Chrome Headless, many of the industry-standard libraries for automated testing have been discontinued by their maintainers. The most prominent of these are PhantomJS and Selenium IDE for Firefox.
Correct me if I'm wrong, but if I'm not mistaken Selenium IDE was discontinued due to a lack of maintainers, which has little if any relation to Chrome Headless.
The IDE is just a more convenient way of programming test behavior; the Selenium WebDriver is still up and working with straight code (as is the case in this tutorial).
The HospitalRun team is great and very welcoming. You can join their Slack channel here: https://hospitalrun.slack.com/ . The project is expected to be taken on by the JS Foundation in the near future.
And yeah, I am a full-stack developer.
Just to note, there are other uses for browser automation beyond testing (this article is about web scraping). Selenium WebDriver has its own limitations, and the maintainers aren't willing to add features to cover other use cases.
Do you know if it is possible to render a page without serving it from a web server? For example, I have the HTML of one page of my domain, generated by a test. I would like to use Puppeteer to render it, but I don't want to set up an HTTP server for this. I would like to give a string with the HTML plus a URL to page.goto and let it render the page as if it came from the real server.
I guess I can cheat by intercepting the request and responding with the HTML I already have, but I wonder if something like this already exists.
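For what it's worth, here's a rough sketch of that interception "cheat" with Puppeteer's request-interception API (the URL and HTML string are placeholders, and the exact method names have shifted a bit between Puppeteer versions, so treat this as an outline rather than copy-paste):

    const puppeteer = require('puppeteer');

    (async () => {
      const html = '<html><body><h1>Hello from a string</h1></body></html>'; // placeholder: your generated HTML
      const fakeUrl = 'https://example.com/my-page';                         // placeholder: the URL you want it to "come from"

      const browser = await puppeteer.launch();
      const page = await browser.newPage();

      await page.setRequestInterception(true);
      page.on('request', request => {
        if (request.url() === fakeUrl) {
          // Serve the in-memory HTML as the main document
          request.respond({ status: 200, contentType: 'text/html', body: html });
        } else {
          request.continue(); // let sub-resources load normally
        }
      });

      await page.goto(fakeUrl);
      console.log(await page.title());
      await browser.close();
    })();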
My initial assumption when reading the thread was that navigating to a data URI would be handled like entering a data URI into the omnibox and would still be allowed.
A small test case confirms that assumption - it works.
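In case it helps anyone else, the test case is roughly this (the HTML string is just a placeholder):

    const puppeteer = require('puppeteer');

    (async () => {
      const html = '<h1>Rendered from a string</h1>'; // placeholder HTML
      const browser = await puppeteer.launch();
      const page = await browser.newPage();

      // Navigate straight to a data URI, no web server involved
      await page.goto('data:text/html;charset=utf-8,' + encodeURIComponent(html));

      console.log(await page.content());
      await browser.close();
    })();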
Tried Puppeteer; it's pretty awesome. I'm a newbie in terms of scraping, but thus far it's been a pleasant experience with this tool. Anyone used artoo.js with Puppeteer successfully?
For example with Puppeteer you can do page.injectFile("jquery-3.2.1.min.js"). I think that would simplify your evaluate() calls.
It would also be easy to speed up the whole process by doing a single evaluate() call per page with all your scraping code in it.
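Something like this (rough sketch; the target URL and the local jQuery file path are placeholders, and I believe newer Puppeteer versions replaced injectFile() with addScriptTag({ path })):

    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto('https://example.com'); // placeholder target page

      // Load jQuery into the page (addScriptTag is the newer equivalent of injectFile)
      await page.addScriptTag({ path: 'jquery-3.2.1.min.js' });

      // One evaluate() call that does all the scraping inside the page context
      const links = await page.evaluate(() => {
        return $('a').map((i, el) => ({
          text: $(el).text().trim(),
          href: el.href,
        })).get();
      });

      console.log(links);
      await browser.close();
    })();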
BTW we just released an article with tips & tricks for Headless Chrome: https://blog.phantombuster.com/web-scraping-in-2017-headless... What do you think?