This happens on Raspberry Pi OS with Python 3.7.3, after a successful installation (plus installing some dependencies not listed as prerequisites). It occurs both when running a full scrapy command directly and when using start.sh and following the instructions.
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/pi/.local/lib/python3.7/site-packages/scrapy/__main__.py", line 4, in <module>
execute()
File "/home/pi/.local/lib/python3.7/site-packages/scrapy/cmdline.py", line 145, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/home/pi/.local/lib/python3.7/site-packages/scrapy/cmdline.py", line 100, in _run_print_help
func(*a, **kw)
File "/home/pi/.local/lib/python3.7/site-packages/scrapy/cmdline.py", line 153, in _run_command
cmd.run(args, opts)
File "/home/pi/.local/lib/python3.7/site-packages/scrapy/commands/crawl.py", line 22, in run
crawl_defer = self.crawler_process.crawl(spname, **opts.spargs)
File "/home/pi/.local/lib/python3.7/site-packages/scrapy/crawler.py", line 191, in crawl
crawler = self.create_crawler(crawler_or_spidercls)
File "/home/pi/.local/lib/python3.7/site-packages/scrapy/crawler.py", line 224, in create_crawler
return self._create_crawler(crawler_or_spidercls)
File "/home/pi/.local/lib/python3.7/site-packages/scrapy/crawler.py", line 229, in _create_crawler
return Crawler(spidercls, self.settings)
File "/home/pi/.local/lib/python3.7/site-packages/scrapy/crawler.py", line 72, in __init__
self.extensions = ExtensionManager.from_crawler(self)
File "/home/pi/.local/lib/python3.7/site-packages/scrapy/middleware.py", line 53, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "/home/pi/.local/lib/python3.7/site-packages/scrapy/middleware.py", line 35, in from_settings
mw = create_instance(mwcls, settings, crawler)
File "/home/pi/.local/lib/python3.7/site-packages/scrapy/utils/misc.py", line 156, in create_instance
instance = objcls.from_crawler(crawler, *args, **kwargs)
File "/home/pi/feedly-link-aggregator/aggregator/extensions.py", line 184, in from_crawler
instance = cls(crawler)
File "/home/pi/feedly-link-aggregator/aggregator/extensions.py", line 217, in __init__
self.state = self.load_state()
File "/home/pi/feedly-link-aggregator/aggregator/extensions.py", line 237, in load_state
with suppress(FileNotFoundError, EOFError, gzip.BadGzipFile):
AttributeError: module 'gzip' has no attribute 'BadGzipFile'
gzip.BadGzipFile is an exception class introduced in Python 3.8 (on earlier versions, gzip raises OSError instead). The README has been updated to reflect this requirement.
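For anyone stuck on Python 3.7, a minimal backward-compatible sketch: fall back to OSError when gzip.BadGzipFile is missing (BadGzipFile is a subclass of OSError in 3.8+, so catching the fallback is a superset of the 3.8 behavior). The `state.gz` path below is just an illustrative placeholder, not the project's actual state file.

```python
import gzip
from contextlib import suppress

# Python < 3.8 has no gzip.BadGzipFile; corrupt archives raise plain OSError
# there, so use OSError as the fallback exception class.
BadGzipFile = getattr(gzip, "BadGzipFile", OSError)

# Mirrors the suppress() call in load_state(); "state.gz" is a hypothetical path.
with suppress(FileNotFoundError, EOFError, BadGzipFile):
    with gzip.open("state.gz") as f:
        data = f.read()
```

Since FileNotFoundError is already in the suppress list, the broader OSError fallback on 3.7 does not change which missing-file errors are swallowed.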