
Scraping a site and creating a .csv file with Scrapy

I'm fairly new to this and have worked through the basic tutorials on the Scrapy website. I'm trying to run a spider that grabs certain pieces of information from http://www.hltv.org/?pageid=188&eventid=0&gameid=2 and writes the data to a CSV file. I'd like the spider to go through each date and scrape the key pieces of information for each match listed, e.g. http://www.hltv.org/?pageid=188&matchid=19029&eventid=0&gameid=2

This is what I have so far:

import scrapy


class hltvspider(scrapy.Spider):
    name = "hltvspider"
    allowed_domains = ["hltv.org"]
    start_urls = ["http://www.hltv.org/?pageid=188&eventid=0&gameid=2"]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            title = sel.xpath('a/text()').extract()
            link = sel.xpath('a/@href').extract()
            desc = sel.xpath('text()').extract()
            print title, link, desc

Here is the output I get:

C:\Users\Michael\PycharmProjects\HLTV\HLTV\HLTV\spiders\hltv.py:5: ScrapyDeprecationWarning: HLTV.spiders.hltv.MySpider inherits from deprecated class scrapy.spider.BaseSpider, please inherit from scrapy.spider.Spider. (warning only on first subclass, there may be others)
2015-01-21 16:20:22-0600 [scrapy] INFO: Scrapy 0.24.4 started (bot: HLTV)
2015-01-21 16:20:22-0600 [scrapy] INFO: Optional features available: ssl, http11
2015-01-21 16:20:22-0600 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'HLTV.spiders', 'SPIDER_MODULES': ['HLTV.spiders'], 'BOT_NAME': 'HLTV'}
2015-01-21 16:20:22-0600 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-01-21 16:20:22-0600 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-01-21 16:20:22-0600 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-01-21 16:20:22-0600 [scrapy] INFO: Enabled item pipelines:
2015-01-21 16:20:22-0600 [hltvspider] INFO: Spider opened
2015-01-21 16:20:22-0600 [hltvspider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-01-21 16:20:22-0600 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-01-21 16:20:22-0600 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-01-21 16:20:23-0600 [hltvspider] DEBUG: Crawled (200) <GET http://www.hltv.org/?pageid=188&eventid=0&gameid=2> (referer: None)
[] [] [u'\n\t\t\t\t', u'\n\t\t\t\t', u'\n\t\t\t']
[] [] [u'\n\t\t\t\t', u'\n\t\t\t\t', u'\n\t\t\t']
[] [] [u'\n\t\t\t\t', u'\n\t\t\t\t', u'\n\t\t\t']
[] [] [u'\n\t\t\t\t', u'\n\t\t\t\t', u'\n\t\t\t']
[] [] [u'\n\t\t\t\t', u'\n\t\t\t\t', u'\n\t\t\t']
[] [] [u'\n\t\t\t\t', u'\n\t\t\t\t', u'\n\t\t\t\t']
[] [] [u'\n\t\t\t\t', u'\n\t\t\t\t', u'\n\t\t\t\t']
[] [] [u'\n  ', u'\n  ', u'\n  ']
[] [] [u'\n\t\t\t\t\t', u'\n\t\t\t\t\t', u'\n\t\t\t\t']
[] [] [u'\n\t\t\t\t\t', u'\n\t\t\t\t\t', u'\n\t\t\t\t']
2015-01-21 16:20:23-0600 [hltvspider] INFO: Closing spider (finished)
2015-01-21 16:20:23-0600 [hltvspider] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 241,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 13544,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2015, 1, 21, 22, 20, 23, 432000),
     'log_count/DEBUG': 3,
     'log_count/INFO': 7,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/memory': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/memory': 1,
     'start_time': datetime.datetime(2015, 1, 21, 22, 20, 22, 775000)}
2015-01-21 16:20:23-0600 [hltvspider] INFO: Spider closed (finished)


Possible duplicate of http://stackoverflow.com/questions/20719263/write-to-a-csv-file-scrapy – aberna
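On the CSV side, Scrapy's built-in feed exports can write the file for you: define an Item, yield populated instances from parse(), and pass -o on the command line. A minimal sketch, assuming a hypothetical MatchItem whose field names are taken from the question (the //ul/li XPath is the question's own and still needs fixing):

import scrapy

class MatchItem(scrapy.Item):
    # Hypothetical item; these field names are assumptions based on the question
    date = scrapy.Field()
    title = scrapy.Field()
    link = scrapy.Field()

class CsvDemoSpider(scrapy.Spider):
    name = "csvdemo"
    allowed_domains = ["hltv.org"]
    start_urls = ["http://www.hltv.org/?pageid=188&eventid=0&gameid=2"]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            item = MatchItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            yield item

Running it as scrapy crawl csvdemo -o matches.csv -t csv writes one row per yielded item, with one column per field; no manual file handling is needed.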

Answer


Check if this works for you:

import scrapy
from scrapy.selector import Selector

from megacritics.items import MegacriticsItem

class testspider(scrapy.Spider):
    name = "pupu"
    allowed_domains = ["hltv.org"]
    start_urls = ["http://www.hltv.org/?pageid=188&eventid=0&gameid=2"]

    def parse(self, response):
        hxs = Selector(response)
        # Each match row on the listing page sits in a fixed-width white div
        sites = hxs.xpath('//div[@style="width:606px;height:22px;background-color:white"]')
        items = []
        for site in sites:
            item = MegacriticsItem()
            item['date'] = site.xpath('.//div[@style="padding-left:5px;padding-top:5px;"]/a/div/text()').extract()
            # item['team1'] = site.xpath('.//div[@class="covSmallHeadline"]/text()').extract()
            # item['team2'] = site.xpath('.//div[@class="covSmallHeadline"]/text()').extract()
            # item['map'] = site.xpath('.//div[@class="covSmallHeadline"]/text()').extract()
            # item['event'] = site.xpath('.//div[@class="covSmallHeadline"]/text()').extract()
            items.append(item)
        return items
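The snippet above only scrapes the listing page; the question also asks to follow each match and scrape its details. A minimal sketch of that second step, assuming the same row divs and a placeholder parse_match callback (urlparse.urljoin is used because Scrapy 0.24 predates response.urljoin):

import urlparse

import scrapy

from megacritics.items import MegacriticsItem

class MatchFollowSpider(scrapy.Spider):
    name = "matchfollow"
    allowed_domains = ["hltv.org"]
    start_urls = ["http://www.hltv.org/?pageid=188&eventid=0&gameid=2"]

    def parse(self, response):
        # Follow every link found inside the listing rows
        rows = response.xpath('//div[@style="width:606px;height:22px;background-color:white"]')
        for href in rows.xpath('.//a/@href').extract():
            url = urlparse.urljoin(response.url, href)
            yield scrapy.Request(url, callback=self.parse_match)

    def parse_match(self, response):
        # Placeholder extraction: swap in real selectors for the match page
        item = MegacriticsItem()
        item['date'] = response.xpath('//title/text()').extract()
        yield item

Items yielded from parse_match flow through the same feed exporter, so the -o matches.csv invocation works unchanged.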

Ignore the items import; I edited my file. –
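For completeness, a minimal sketch of what that edited megacritics/items.py might look like, with the field names taken from the answer's spider (the module name is the answerer's own):

import scrapy

class MegacriticsItem(scrapy.Item):
    # One field per value the spider extracts
    date = scrapy.Field()
    team1 = scrapy.Field()
    team2 = scrapy.Field()
    map = scrapy.Field()
    event = scrapy.Field()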
