I’m sure this one might be my scraper’s fault, as there was (and still is) a lot of crazy session resetting… I just realised I wasn’t quitting all of the sessions, so I’m pushing an update to fix that now.
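For context, the fix is basically making sure every session gets closed no matter how the scrape ends. A minimal sketch of the idea, assuming the scraper uses Python’s requests library (the real scraper’s code may look different):

```python
import requests

def fetch_pages(urls):
    """Fetch each URL, guaranteeing the session is closed afterwards."""
    results = {}
    # Using the session as a context manager closes its underlying
    # connections even if a request raises part-way through.
    with requests.Session() as session:
        for url in urls:
            response = session.get(url, timeout=30)
            response.raise_for_status()
            results[url] = response.text
    return results  # session is closed by this point
```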
Hi. I haven’t wanted to raise this over the holidays, but I’ve got a lot of stuck scrapers under the wdiv-scrapers org: https://morph.io/wdiv-scrapers
Some of them have been queued for several weeks now. Any chance of getting them cleared?
Thanks
I’ve been away on holidays, but @equivalentideas has done a fantastic job of responding to queue and disk space issues on morph.io while I’ve been gone. Thank you, Luke!
After another round of issues over the last couple of days, we both sat down and purged a bunch of Docker cruft, which should have resolved all these issues.
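For anyone wanting to do something similar, cleanup along these lines can be scripted with the Docker SDK for Python. This is a sketch of the general idea, not the exact commands we ran:

```python
import docker

client = docker.from_env()

# Remove stopped containers, dangling images, and unused volumes.
# Each prune call returns a dict summarising what was reclaimed.
print(client.containers.prune())
print(client.images.prune(filters={"dangling": True}))
print(client.volumes.prune())
```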
We’ve still got a stack of problems that will recur, including the most mundane one of all: disk space. But we’ll try to keep on top of it.
Thanks for killing those ones, @equivalentideas, but it looks like there are about 40 others in the tmtmtmtm and everypolitician-scrapers orgs that have been stuck for almost a week (e.g. tmtmtmtm/moldova_parlament, tmtmtmtm/montenegro-parldata, everypolitician-scrapers/samoa-parliament, https://morph.io/everypolitician-scrapers/croatian-parliament-wikidata). Any clue why those aren’t coming up in your search, and how to detect and auto-fix things like this?
When we were having all the disk space issues last week, Redis probably dropped a whole bunch of jobs. That left these Runs stuck in a queued-but-not-running state, which doesn’t show up in any of our lists.
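As a rough illustration of how runs like that could be detected, here’s a hypothetical sketch in Python. The attribute names are made up, and morph.io’s actual Run model (it’s a Rails app) will differ:

```python
from datetime import datetime, timedelta

STUCK_THRESHOLD = timedelta(hours=6)  # arbitrary cut-off for illustration

def find_stuck_runs(runs, now=None):
    """Return runs that were queued but never picked up by a worker.

    `runs` is any iterable of objects with hypothetical `queued_at`
    and `started_at` attributes.
    """
    now = now or datetime.utcnow()
    return [
        run
        for run in runs
        if run.started_at is None          # never actually started
        and run.queued_at is not None      # but was put on the queue
        and now - run.queued_at > STUCK_THRESHOLD
    ]
```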
The prune is removing the container, which may be causing the scraper to start from scratch. That would explain why it appears to re-run again and again when it hits the front of the queue.
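If the prune really is the culprit, one possible mitigation (again just a sketch, not something morph.io currently does) would be to prune only containers that stopped a while ago, so recently-queued scrapers keep their state:

```python
import docker

client = docker.from_env()

# The "until" filter restricts the prune to containers that stopped
# more than 24 hours ago, leaving recently-stopped containers alone.
client.containers.prune(filters={"until": "24h"})
```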
When someone gets a chance to look at this, if there’s anything else in the https://morph.io/wdiv-scrapers org that has become stuck by then (I’ve got a few others that have been in the queue for 10+ hours now, though I guess they may still run or fail), could you give them all a spring clean?
Thanks