.. _faq:

Frequently Asked Questions
==========================

How does Scrapy compare to BeautifulSoup or lxml?
-------------------------------------------------

`BeautifulSoup`_ and `lxml`_ are libraries for parsing HTML and XML. Scrapy is
an application framework for writing web spiders that crawl web sites and
extract data from them.

Scrapy provides a built-in mechanism for extracting data (called
:ref:`selectors <topics-selectors>`) but you can easily use `BeautifulSoup`_
(or `lxml`_) instead, if you feel more comfortable working with them. After
all, they're just parsing libraries which can be imported and used from any
Python code.

In other words, comparing `BeautifulSoup`_ (or `lxml`_) to Scrapy is like
comparing `jinja2`_ to `Django`_.
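
For example, here's a minimal sketch of a spider that parses its responses
with `BeautifulSoup`_ instead of Scrapy selectors (the spider name, URL and
item field are made up for illustration)::

    from bs4 import BeautifulSoup
    from scrapy.item import Item, Field
    from scrapy.spider import Spider

    class PageItem(Item):
        title = Field()

    class SoupSpider(Spider):
        name = 'soupspider'
        start_urls = ['http://www.example.com/']

        def parse(self, response):
            # Hand the raw page body to BeautifulSoup instead of selectors
            soup = BeautifulSoup(response.body)
            yield PageItem(title=soup.title.string)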

.. _BeautifulSoup: http://www.crummy.com/software/BeautifulSoup/
.. _lxml: http://lxml.de/
.. _jinja2: http://jinja.pocoo.org/2/
.. _Django: http://www.djangoproject.com

.. _faq-python-versions:

What Python versions does Scrapy support?
-----------------------------------------

Scrapy is supported under Python 2.7 only.
Python 2.6 support was dropped starting at Scrapy 0.20.

Does Scrapy work with Python 3?
-------------------------------

No, but there are plans to support Python 3.3+.
At the moment, Scrapy works with Python 2.7.

.. seealso:: :ref:`faq-python-versions`.

Did Scrapy "steal" X from Django?
---------------------------------

Probably, but we don't like that word. We think Django_ is a great open source
project and an example to follow, so we've used it as an inspiration for
Scrapy.

We believe that, if something is already done well, there's no need to reinvent
it. This concept, besides being one of the foundations for open source and free
software, not only applies to software but also to documentation, procedures,
policies, etc. So, instead of going through each problem ourselves, we choose
to copy ideas from those projects that have already solved them properly, and
focus on the real problems we need to solve.

We'd be proud if Scrapy serves as an inspiration for other projects. Feel free
to steal from us!

Does Scrapy work with HTTP proxies?
-----------------------------------

Yes. Support for HTTP proxies is provided (since Scrapy 0.8) through the HTTP
Proxy downloader middleware. See
:class:`~scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware`.

How can I scrape an item with attributes in different pages?
------------------------------------------------------------

See :ref:`topics-request-response-ref-request-callback-arguments`.
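
In short, you can carry a partially-populated item from one callback to the
next through the request's ``meta`` dict. A minimal sketch (the item fields,
URLs and XPaths are hypothetical)::

    from scrapy.http import Request
    from scrapy.item import Item, Field
    from scrapy.selector import Selector
    from scrapy.spider import Spider

    class ProductItem(Item):
        name = Field()
        description = Field()

    class ProductSpider(Spider):
        name = 'products'
        start_urls = ['http://www.example.com/product/1']

        def parse(self, response):
            item = ProductItem()
            item['name'] = Selector(response).xpath('//h1/text()').extract()
            # Pass the half-populated item along with the next request
            return Request('http://www.example.com/product/1/details',
                           meta={'item': item}, callback=self.parse_details)

        def parse_details(self, response):
            # Recover the item and fill in the remaining fields
            item = response.meta['item']
            item['description'] = Selector(response).xpath('//p/text()').extract()
            return item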

Scrapy crashes with: ImportError: No module named win32api
----------------------------------------------------------

You need to install `pywin32`_ because of `this Twisted bug`_.

.. _pywin32: http://sourceforge.net/projects/pywin32/
.. _this Twisted bug: http://twistedmatrix.com/trac/ticket/3707

How can I simulate a user login in my spider?
---------------------------------------------

See :ref:`topics-request-response-ref-request-userlogin`.
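
The usual approach is to submit the site's login form with
``FormRequest.from_response`` and continue crawling from the authenticated
callback. A rough sketch (the form field names and the failure check are
site-specific assumptions)::

    from scrapy.http import FormRequest
    from scrapy.spider import Spider

    class LoginSpider(Spider):
        name = 'loginspider'
        start_urls = ['http://www.example.com/users/login']

        def parse(self, response):
            # Fill and submit the login form found in the page
            return FormRequest.from_response(
                response,
                formdata={'username': 'john', 'password': 'secret'},
                callback=self.after_login)

        def after_login(self, response):
            if 'authentication failed' in response.body:
                self.log("Login failed")
                return
            # ... continue crawling as a logged-in user ...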

Does Scrapy crawl in breadth-first or depth-first order?
--------------------------------------------------------

By default, Scrapy uses a `LIFO`_ queue for storing pending requests, which
basically means that it crawls in `DFO order`_. This order is more convenient
in most cases. If you do want to crawl in true `BFO order`_, you can do it by
setting the following settings::

    DEPTH_PRIORITY = 1
    SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
    SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

My Scrapy crawler has memory leaks. What can I do?
--------------------------------------------------

See :ref:`topics-leaks`.

Also, Python has a built-in memory leak issue which is described in
:ref:`topics-leaks-without-leaks`.

How can I make Scrapy consume less memory?
------------------------------------------

See previous question.

Can I use Basic HTTP Authentication in my spiders?
--------------------------------------------------

Yes, see :class:`~scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware`.
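
With that middleware enabled (it is, by default), it's enough to set the
``http_user`` and ``http_pass`` attributes on your spider; for example::

    from scrapy.spider import Spider

    class AuthSpider(Spider):
        # HttpAuthMiddleware reads these attributes and adds the
        # Authorization header to every request from this spider
        name = 'authspider'
        http_user = 'someuser'
        http_pass = 'somepass'
        start_urls = ['http://www.example.com/protected']

        def parse(self, response):
            pass  # parse the authenticated response here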

Why does Scrapy download pages in English instead of my native language?
------------------------------------------------------------------------

Try changing the default `Accept-Language`_ request header by overriding the
:setting:`DEFAULT_REQUEST_HEADERS` setting.

.. _Accept-Language: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.4
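
For example, to ask for Spanish content you could put something like this in
your project's ``settings.py`` (the language tag is just an example)::

    DEFAULT_REQUEST_HEADERS = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'es',
    }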

Where can I find some example Scrapy projects?
----------------------------------------------

See :ref:`intro-examples`.

Can I run a spider without creating a project?
----------------------------------------------

Yes. You can use the :command:`runspider` command. For example, if you have a
spider written in a ``my_spider.py`` file you can run it with::

    scrapy runspider my_spider.py

See the :command:`runspider` command for more info.

I get "Filtered offsite request" messages. How can I fix them?
--------------------------------------------------------------

Those messages (logged with ``DEBUG`` level) don't necessarily mean there is a
problem, so you may not need to fix them.

Those messages are logged by the Offsite Spider Middleware, which is a spider
middleware (enabled by default) whose purpose is to filter out requests to
domains outside the ones covered by the spider.

For more info see:
:class:`~scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware`.

What is the recommended way to deploy a Scrapy crawler in production?
---------------------------------------------------------------------

See :ref:`topics-scrapyd`.

Can I use JSON for large exports?
---------------------------------

It'll depend on how large your output is. See :ref:`this warning
<json-with-large-data>` in the :class:`~scrapy.contrib.exporter.JsonItemExporter`
documentation.

Can I return (Twisted) deferreds from signal handlers?
------------------------------------------------------

Some signals support returning deferreds from their handlers, others don't. See
the :ref:`topics-signals-ref` to know which ones.

What does the response status code 999 mean?
---------------------------------------------

999 is a custom response status code used by Yahoo sites to throttle requests.
Try slowing down the crawling speed by using a download delay of ``2`` (or
higher) in your spider::

    class MySpider(CrawlSpider):

        name = 'myspider'
        download_delay = 2

        # [ ... rest of the spider code ... ]

Or by setting a global download delay in your project with the
:setting:`DOWNLOAD_DELAY` setting.
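
The global version is a one-line change in your project's ``settings.py``::

    DOWNLOAD_DELAY = 2  # seconds to wait between requests to the same website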

Can I call ``pdb.set_trace()`` from my spiders to debug them?
-------------------------------------------------------------

Yes, but you can also use the Scrapy shell which allows you to quickly analyze
(and even modify) the response being processed by your spider, which is, quite
often, more useful than plain old ``pdb.set_trace()``.

For more info see :ref:`topics-shell-inspect-response`.
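
For instance, you can drop into the shell from any callback when some
condition is met (the exact ``inspect_response`` signature may vary slightly
between Scrapy versions)::

    from scrapy.shell import inspect_response
    from scrapy.spider import Spider

    class DebugSpider(Spider):
        name = 'debugspider'
        start_urls = ['http://www.example.com/']

        def parse(self, response):
            if not response.body:
                # Open an interactive shell with this response loaded
                inspect_response(response, self)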

Simplest way to dump all my scraped items into a JSON/CSV/XML file?
-------------------------------------------------------------------

To dump into a JSON file::

    scrapy crawl myspider -o items.json

To dump into a CSV file::

    scrapy crawl myspider -o items.csv

To dump into an XML file::

    scrapy crawl myspider -o items.xml

For more information see :ref:`topics-feed-exports`.

What's this huge cryptic ``__VIEWSTATE`` parameter used in some forms?
----------------------------------------------------------------------

The ``__VIEWSTATE`` parameter is used in sites built with ASP.NET/VB.NET. For
more info on how it works see `this page`_. Also, here's an `example spider`_
which scrapes one of these sites.

.. _this page: http://search.cpan.org/~ecarroll/HTML-TreeBuilderX-ASP_NET-0.09/lib/HTML/TreeBuilderX/ASP_NET.pm
.. _example spider: http://github.com/AmbientLighter/rpn-fas/blob/master/fas/spiders/rnp.py

What's the best way to parse big XML/CSV data feeds?
----------------------------------------------------

Parsing big feeds with XPath selectors can be problematic since they need to
build the DOM of the entire feed in memory, and this can be quite slow and
consume a lot of memory.

In order to avoid parsing the entire feed at once in memory, you can use the
``xmliter`` and ``csviter`` functions from the ``scrapy.utils.iterators``
module. In fact, this is what the feed spiders (see :ref:`topics-spiders`) use
under the covers.
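
For example, here's a rough sketch that iterates over a big XML feed one node
at a time (the ``product`` node name, URL and item field are hypothetical)::

    from scrapy.item import Item, Field
    from scrapy.spider import Spider
    from scrapy.utils.iterators import xmliter

    class ProductItem(Item):
        name = Field()

    class FeedSpider(Spider):
        name = 'feedspider'
        start_urls = ['http://www.example.com/feed.xml']

        def parse(self, response):
            # xmliter yields one selector per <product> node, so the
            # whole feed is never loaded into a single DOM tree
            for node in xmliter(response, 'product'):
                item = ProductItem()
                item['name'] = node.xpath('./name/text()').extract()
                yield item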

Does Scrapy manage cookies automatically?
-----------------------------------------

Yes, Scrapy receives and keeps track of cookies sent by servers, and sends them
back on subsequent requests, like any regular web browser does.

For more info see :ref:`topics-request-response` and :ref:`cookies-mw`.

How can I see the cookies being sent and received from Scrapy?
--------------------------------------------------------------

Enable the :setting:`COOKIES_DEBUG` setting.
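
That is, add this line to your project's ``settings.py``::

    COOKIES_DEBUG = True  # log every Cookie / Set-Cookie header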

How can I instruct a spider to stop itself?
-------------------------------------------

Raise the :exc:`~scrapy.exceptions.CloseSpider` exception from a callback. For
more info see: :exc:`~scrapy.exceptions.CloseSpider`.
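
For example (the stop condition here is just an illustration)::

    from scrapy.exceptions import CloseSpider
    from scrapy.spider import Spider

    class LimitedSpider(Spider):
        name = 'limitedspider'
        start_urls = ['http://www.example.com/']

        def parse(self, response):
            if 'item' not in response.body:
                # Stop the whole crawl gracefully, with a reason for the logs
                raise CloseSpider('no items found on the page')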

How can I prevent my Scrapy bot from getting banned?
----------------------------------------------------

See :ref:`bans`.

Should I use spider arguments or settings to configure my spider?
-----------------------------------------------------------------

Both :ref:`spider arguments <spiderargs>` and :ref:`settings <topics-settings>`
can be used to configure your spider. There is no strict rule that mandates
using one or the other, but settings are better suited for parameters that,
once set, don't change much, while spider arguments are meant to change more
often, even on each spider run, and sometimes are required for the spider to
run at all (for example, to set the start url of a spider).

To illustrate with an example, suppose you have a spider that needs to log
into a site to scrape data, and you only want to scrape data from a certain
section of the site (which varies each time). In that case, the credentials to
log in would be settings, while the url of the section to scrape would be a
spider argument.
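
A sketch of that pattern (the argument and attribute names are made up)::

    from scrapy.spider import Spider

    class SectionSpider(Spider):
        name = 'sectionspider'

        def __init__(self, section_url=None, *args, **kwargs):
            super(SectionSpider, self).__init__(*args, **kwargs)
            # The section changes on every run, so it arrives as a
            # spider argument rather than a project setting
            self.start_urls = [section_url]

Which you would run with::

    scrapy crawl sectionspider -a section_url=http://www.example.com/some-section/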

I'm scraping an XML document and my XPath selector doesn't return any items
----------------------------------------------------------------------------

You may need to remove namespaces. See :ref:`removing-namespaces`.
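
In short, calling ``remove_namespaces()`` on a selector lets you use plain,
namespace-free XPaths (``xml_body`` below is a placeholder for your document)::

    from scrapy.selector import Selector

    sel = Selector(text=xml_body, type='xml')
    sel.remove_namespaces()
    sel.xpath('//link')  # now matches <link> nodes in any namespace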

.. _user agents: http://en.wikipedia.org/wiki/User_agent
.. _LIFO: http://en.wikipedia.org/wiki/LIFO
.. _DFO order: http://en.wikipedia.org/wiki/Depth-first_search
.. _BFO order: http://en.wikipedia.org/wiki/Breadth-first_search