Is there a recommended way for storing binary files that are downloaded by a scraper? For example: a site with a bunch of .pdfs.
The most obvious approach seems to be storing each file in the sqlite database as a BLOB alongside its associated metadata, but using sqlite to store blobs feels a bit clumsy, and I wonder whether there's a better way to surface those files for an API to retrieve.
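For concreteness, here's roughly what I mean by the blob approach — a minimal sketch using Python's built-in `sqlite3`, with a made-up `documents` table and column names (nothing here is prescribed by any platform; it's just the obvious schema):

```python
import sqlite3

# In-memory database for illustration; a scraper would use a file path instead.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS documents (
           url        TEXT PRIMARY KEY,  -- source URL of the PDF
           filename   TEXT,              -- original filename
           fetched_at TEXT,              -- when it was downloaded
           content    BLOB               -- the raw PDF bytes
       )"""
)

def save_pdf(url, filename, fetched_at, pdf_bytes):
    # INSERT OR REPLACE so re-running the scraper updates existing rows.
    conn.execute(
        "INSERT OR REPLACE INTO documents (url, filename, fetched_at, content) "
        "VALUES (?, ?, ?, ?)",
        (url, filename, fetched_at, sqlite3.Binary(pdf_bytes)),
    )
    conn.commit()

save_pdf("https://example.com/report.pdf", "report.pdf",
         "2024-01-01T00:00:00Z", b"%PDF-1.4 ...")

# BLOB columns come back as bytes, so round-tripping the file is easy.
row = conn.execute(
    "SELECT content FROM documents WHERE url = ?",
    ("https://example.com/report.pdf",),
).fetchone()
```

This works, but every download bloats the database file itself, which is what makes it feel clumsy compared to storing files on disk (or object storage) and keeping only paths/URLs plus metadata in sqlite.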
Additionally, is it possible/allowed for scrapers to POST or upload scraped files to external servers? I understand there's the webhook integration, but that doesn't transmit any of the scraped data.