Python scraper taking forever using Splinter

I just completed a scraper that works on my local machine. After checking it into GitHub I tried running the scraper, but it still hasn't finished after 18+ minutes.

The scraper installs all the required modules successfully, but once it starts it just seems to hang.

In the log, all I see is the following pair of debug messages repeated over and over (I enabled debug logging on the scraper):

DEBUG:selenium.webdriver.remote.remote_connection:Finished Request
DEBUG:selenium.webdriver.remote.remote_connection:POST http://127.0.0.1:41825/wd/hub/session/437d3ab0-807c-11e7-879b-39c563003a5b/url {"url": "https://eservices.lithgow.nsw.gov.au/ePropertyProd/P1/eTrack/eTrackApplicationSearchResults.aspx?Field=S&Period=L7&r=P1.WEBGUEST&f=%24P1.ETR.SEARCH.SL7", "sessionId": "437d3ab0-807c-11e7-879b-39c563003a5b"}

This is the scraper:

https://morph.io/adamclayton/lithgow_city_council_development_applications

Could someone take a look at the issue? I am going to stop the scraper for now.

I've just stopped the scraper, though it took some time to return. Here is the error message:

Last run failed 2 minutes ago with status code 137.
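For context, status code 137 follows the general Unix convention that a process killed by signal N exits with status 128 + N: 137 = 128 + 9, i.e. SIGKILL. On hosted runners that usually means a watchdog or the out-of-memory killer forcibly terminated a hung run; whether that is what morph.io did here is an assumption, not something the error message confirms. A quick sanity check of the arithmetic (on Linux, where SIGKILL is signal 9):

```python
import signal

# Unix convention: a process killed by signal N exits with status 128 + N.
# SIGKILL is signal 9 on Linux, so exit status 137 indicates the process
# was forcibly killed (often by the OOM killer or a supervisor timeout).
print(128 + signal.SIGKILL)  # prints 137
```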

I just did another run and it looks like it worked.

I’ll close for now.


Thanks for sharing this, @adamclayton. Post up here if it happens again :slight_smile: