Scrapy Gives URLError:
Solution 1:
There's an open Scrapy issue for this problem: https://github.com/scrapy/scrapy/issues/1054, though it seems to be just a warning on other platforms.
You can disable the S3DownloadHandler (which is causing this error) by adding the following to your Scrapy settings:
DOWNLOAD_HANDLERS = {
    's3': None,
}
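If you would rather keep the handler enabled project-wide and disable it for a single spider only, Scrapy also lets a spider override settings through custom_settings; a minimal sketch, assuming a Scrapy version that supports that attribute (the spider name and URL here are hypothetical):

import scrapy

class ExampleSpider(scrapy.Spider):
    # Hypothetical spider, shown only to illustrate the override.
    name = 'example'
    start_urls = ['http://example.com']

    # Per-spider settings override: disables the s3 download handler
    # for this spider only, instead of project-wide.
    custom_settings = {
        'DOWNLOAD_HANDLERS': {'s3': None},
    }

    def parse(self, response):
        self.logger.info('Fetched %s', response.url)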
Solution 2:
You can also remove boto from the optional packages by adding:
from scrapy import optional_features
optional_features.remove('boto')
as suggested in this issue.
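Note that scrapy.optional_features only exists in older Scrapy releases, so if this line lives in a shared settings.py it is safer to guard it; a minimal sketch, where the try/except guard is my assumption rather than part of the original answer:

# Place near the top of settings.py.
try:
    from scrapy import optional_features
    # Stop Scrapy from treating boto as an available feature.
    optional_features.remove('boto')
except (ImportError, KeyError):
    # Newer Scrapy versions dropped optional_features entirely,
    # and 'boto' may already be absent from the set.
    pass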
Solution 3:
This is very annoying. What is happening is that you have null credentials, and boto decides to populate them for you from a metadata server (if one exists) using _populate_keys_from_metadata_server(). See here and here. If you don't run on an EC2 instance, or something else that runs a metadata server (listening on the auto-magic IP 169.254.169.254), the attempt times out. This was fine and quiet since Scrapy handles the exception here, but unfortunately boto started logging it here, hence the annoying message.
Apart from disabling the s3 handler as described above, which looks a bit scary, you can achieve a similar result by just setting the credentials to empty strings:
AWS_ACCESS_KEY_ID = ""
AWS_SECRET_ACCESS_KEY = ""
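These two lines go in the project's settings.py; the answer's reasoning is that empty strings, unlike the default credentials of None, keep boto from ever attempting the metadata-server lookup, so the s3 handler can stay enabled.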