Signals¶
Scrapy uses signals extensively to notify when certain events occur. You can catch some of those signals in your Scrapy project (using an extension, for example) to perform additional tasks or extend Scrapy to add functionality not provided out of the box.
Even though signals provide several arguments, the handlers that catch them don’t need to accept all of them; the signal dispatching mechanism only delivers the arguments that each handler accepts.
You can connect to signals (or send your own) through the Signals API.
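For example, a minimal extension might register a handler for the spider_opened signal through the crawler’s signals attribute (the extension and handler names below are illustrative, not part of Scrapy):

    import logging

    from scrapy import signals


    class SpiderOpenedLogger(object):
        """Hypothetical extension that reacts to the spider_opened signal."""

        @classmethod
        def from_crawler(cls, crawler):
            ext = cls()
            # crawler.signals is the Signals API entry point; connect()
            # registers a handler to be called whenever that signal is sent.
            crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
            return ext

        def spider_opened(self, spider):
            logging.info("Spider opened: %s", spider.name)

Enable such an extension by adding it to the EXTENSIONS setting of your project; the same connect() call works from any component that has access to the crawler.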
Deferred signal handlers¶
Some signals support returning Twisted deferreds from their handlers; see the Built-in signals reference below to find out which ones.
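As a rough sketch, a handler for a deferred-aware signal such as spider_closed could push its blocking work to a thread pool and return the resulting deferred, so the engine waits for the work to finish (expensive_cleanup is a hypothetical blocking function, not part of Scrapy):

    from twisted.internet import threads


    def spider_closed_handler(spider, reason):
        # Returning a Deferred makes the engine wait until the cleanup has
        # finished before it considers the spider fully closed.
        return threads.deferToThread(expensive_cleanup, spider, reason)


    def expensive_cleanup(spider, reason):
        # Placeholder for blocking work, e.g. flushing a database connection.
        pass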
Built-in signals reference¶
Here’s the list of Scrapy built-in signals and their meaning.
engine_started¶
- scrapy.signals.engine_started()¶
Sent when the Scrapy engine has started crawling.
This signal supports returning deferreds from its handlers.
Note
This signal may be fired after the spider_opened signal, depending on how the spider was started. So don’t rely on this signal getting fired before spider_opened.
engine_stopped¶
- scrapy.signals.engine_stopped()¶
Sent when the Scrapy engine is stopped (for example, when a crawling process has finished).
This signal supports returning deferreds from its handlers.
item_scraped¶
- scrapy.signals.item_scraped(item, response, spider)¶
Sent when an item has been scraped, after it has passed all the Item Pipeline stages (without being dropped).
This signal supports returning deferreds from its handlers.
Parameters:
- item (Item object) – the item scraped
- response (Response object) – the response from where the item was scraped
- spider (BaseSpider object) – the spider which scraped the item
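For example, a handler connected to this signal (with crawler.signals.connect, as shown earlier) could count the items that survive all pipeline stages; item_counts here is an illustrative module-level dict, not a Scrapy object:

    import collections

    item_counts = collections.defaultdict(int)


    def on_item_scraped(item, response, spider):
        # Called only for items that were not dropped by any pipeline stage.
        item_counts[spider.name] += 1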
item_dropped¶
- scrapy.signals.item_dropped(item, spider, exception)¶
Sent after an item has been dropped from the Item Pipeline when some stage raised a DropItem exception.
This signal supports returning deferreds from its handlers.
Parameters:
- item (Item object) – the item dropped from the Item Pipeline
- spider (BaseSpider object) – the spider which scraped the item
- exception (DropItem exception) – the exception (which must be a DropItem subclass) which caused the item to be dropped
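A handler for this signal might simply record why items were dropped (a sketch; how the message is routed depends on your logging configuration):

    import logging


    def on_item_dropped(item, spider, exception):
        # exception is the DropItem raised by the pipeline stage that
        # discarded the item.
        logging.warning("Dropped item in %s: %s", spider.name, exception)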
spider_closed¶
- scrapy.signals.spider_closed(spider, reason)¶
Sent after a spider has been closed. This can be used to release per-spider resources reserved on spider_opened.
This signal supports returning deferreds from its handlers.
Parameters:
- spider (BaseSpider object) – the spider which has been closed
- reason (str) – a string which describes the reason why the spider was closed. If it was closed because the spider has completed scraping, the reason is 'finished'. Otherwise, if the spider was manually closed by calling the close_spider engine method, the reason is the one passed in the reason argument of that method (which defaults to 'cancelled'). If the engine was shut down (for example, by hitting Ctrl-C to stop it), the reason will be 'shutdown'.
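For example, a handler could release a per-spider resource and branch on the close reason (the files dict is illustrative and would be filled by a matching spider_opened handler, as sketched in the next section):

    import logging

    files = {}  # spider name -> open file, shared with a spider_opened handler


    def on_spider_closed(spider, reason):
        # Release the per-spider file reserved when the spider was opened.
        f = files.pop(spider.name, None)
        if f is not None:
            f.close()
        if reason != 'finished':
            # 'cancelled' or 'shutdown': the crawl did not run to completion.
            logging.warning("%s closed early: %s", spider.name, reason)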
spider_opened¶
- scrapy.signals.spider_opened(spider)¶
Sent after a spider has been opened for crawling. This is typically used to reserve per-spider resources, but can be used for any task that needs to be performed when a spider is opened.
This signal supports returning deferreds from its handlers.
Parameters: spider (BaseSpider object) – the spider which has been opened
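A hedged sketch pairing this signal with spider_closed: reserve the resource here and release it in the spider_closed handler shown above (files is the same illustrative dict):

    files = {}  # spider name -> open file, shared with the spider_closed sketch


    def on_spider_opened(spider):
        # Reserve a per-spider resource; the spider_closed handler closes it.
        files[spider.name] = open('%s.log' % spider.name, 'w')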
spider_idle¶
- scrapy.signals.spider_idle(spider)¶
Sent when a spider has gone idle, which means the spider has no further:
- requests waiting to be downloaded
- requests scheduled
- items being processed in the item pipeline
If the idle state persists after all handlers of this signal have finished, the engine starts closing the spider. After the spider has finished closing, the spider_closed signal is sent.
You can, for example, schedule some requests in your spider_idle handler to prevent the spider from being closed.
This signal does not support returning deferreds from its handlers.
Parameters: spider (BaseSpider object) – the spider which has gone idle
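As a hedged sketch, an idle handler can feed the engine more requests to keep the crawl alive; the pending list is illustrative, and both the engine.crawl signature and whether raising DontCloseSpider is required vary across Scrapy versions:

    from scrapy import signals
    from scrapy.exceptions import DontCloseSpider
    from scrapy.http import Request


    class KeepAlive(object):
        """Hypothetical extension that queues extra work when the spider idles."""

        def __init__(self, crawler):
            self.crawler = crawler
            self.pending = ["http://example.com/later"]  # illustrative backlog
            crawler.signals.connect(self.spider_idle, signal=signals.spider_idle)

        @classmethod
        def from_crawler(cls, crawler):
            return cls(crawler)

        def spider_idle(self, spider):
            if self.pending:
                # Scheduling a fresh request prevents the spider from closing;
                # the exact engine.crawl signature differs between versions.
                self.crawler.engine.crawl(Request(self.pending.pop()), spider)
                raise DontCloseSpider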
spider_error¶
- scrapy.signals.spider_error(failure, response, spider)¶
Sent when a spider callback generates an error (i.e. raises an exception).
Parameters:
- failure (Failure object) – the exception raised as a Twisted Failure object
- response (Response object) – the response being processed when the exception was raised
- spider (BaseSpider object) – the spider which raised the exception
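A handler here receives the error wrapped in a Twisted Failure, so it can log the full traceback (a sketch, connected with crawler.signals.connect as shown earlier):

    import logging


    def on_spider_error(failure, response, spider):
        # failure.value holds the original exception; getTraceback() renders
        # the full traceback as a string.
        logging.error("Error in %s while processing %s:\n%s",
                      spider.name, response.url, failure.getTraceback())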
response_received¶
- scrapy.signals.response_received(response, request, spider)¶
Sent when the engine receives a new Response from the downloader.
This signal does not support returning deferreds from its handlers.
Parameters:
- response (Response object) – the response received
- request (Request object) – the request that generated the response
- spider (BaseSpider object) – the spider for which the response is intended
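For example, a handler could tally response status codes as they come back from the downloader (status_counts is an illustrative dict, connected with crawler.signals.connect as shown earlier):

    import collections

    status_counts = collections.defaultdict(int)


    def on_response_received(response, request, spider):
        # Count HTTP status codes for responses handed to the engine.
        status_counts[response.status] += 1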
response_downloaded¶
- scrapy.signals.response_downloaded(response, request, spider)¶
Sent by the downloader right after an HTTPResponse is downloaded.
This signal does not support returning deferreds from its handlers.
Parameters:
- response (Response object) – the response downloaded
- request (Request object) – the request that generated the response
- spider (BaseSpider object) – the spider for which the response is intended