Tuesday, March 10, 2015

Unit Testing Scrapy - part 1 - Integrating with DjangoItem

In my previous post I showed how to scrape a page that requires multiple form submissions, along with one way to save the scraped data to the database: the Django integration through DjangoItem.

In this post I want to show how we can unit test scrapers using just the standard Python unittest framework, and how we need to configure our testing environment when referencing a Django model from our Items.


Basic unit testing


Continuing with the example in my previous post, let's recall our project layout

├── mappingsite
│   ├── mappingsite
│   └── storemapapp
└── storedirectoryscraper
    └── storedirectoryscraper
        └── spiders

We had built a scraper in the storedirectoryscraper project, but we haven't written any unit or integration tests for it yet (you may want to try a little TDD afterwards instead of testing last, but it certainly helps to have an idea of where we are heading when learning a new tool).

So go ahead and create a tests.py file inside the storedirectoryscraper top-level folder, and add the following code to it:

import unittest


class TestSpider(unittest.TestCase):

    def test_1(self):
        pass


if __name__ == '__main__':
    unittest.main()


Running

python -m unittest storedirectoryscraper.tests

from the top-level folder will display the standard unittest success message.

Now, let's see what happens when we try to import our spider to test it. Add the following line to the top of the tests.py file:

from storedirectoryscraper.spiders import rapipago


and run the test again. You should see an error message like the one below.

Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/unittest/__main__.py", line 12, in <module>
    main(module=None)
  File "/usr/lib/python2.7/unittest/main.py", line 94, in __init__
    self.parseArgs(argv)
  File "/usr/lib/python2.7/unittest/main.py", line 149, in parseArgs
    self.createTests()
  File "/usr/lib/python2.7/unittest/main.py", line 158, in createTests
    self.module)
  File "/usr/lib/python2.7/unittest/loader.py", line 130, in loadTestsFromNames
    suites = [self.loadTestsFromName(name, module) for name in names]
  File "/usr/lib/python2.7/unittest/loader.py", line 100, in loadTestsFromName
    parent, obj = obj, getattr(obj, part)
AttributeError: 'module' object has no attribute 'tests'



What happened here is that the unittest framework is not aware of our Scrapy project configuration: it is not Scrapy running our tests, it is Python directly, so the configuration in our settings file has no effect.
One way to solve this is to add the Django application to our Python path so the test runner can find it when invoked. Since we are going to need this for every test, but we certainly don't want to add it permanently to our path (only for testing the scraper), we can create a tests package and alter the path in its __init__.py file.

So let's do that. Create a tests folder at the same level as our tests.py file, add an __init__.py file to it, and move the tests.py file into that directory. After the changes, the project should look like this:

storedirectoryscraper
    ├── scrapy.cfg
    └── storedirectoryscraper
        ├── __init__.py
        ├── items.py
        ├── pipelines.py
        ├── settings.py
        ├── spiders
        │   ├── __init__.py
        │   └── rapipago.py
        └── tests
            ├── __init__.py
            └── tests.py


Now add the following lines to the __init__.py file you just created


import sys
import os

# BASE_DIR is the inner storedirectoryscraper package; the Django project
# lives two directories above it, in BloggerWorkspace/mappingsite.
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
sys.path.append(os.path.join(BASE_DIR, '../../mappingsite'))

# Tell Django which settings module to load when our models get imported.
os.environ['DJANGO_SETTINGS_MODULE'] = 'mappingsite.settings'
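If you are on Django 1.7 or later, you will also need to call django.setup() here, before any model is imported (the same requirement noted in the previous post):

import django
django.setup()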

Now if you run

python -m unittest storedirectoryscraper.tests.tests

or just

python -m unittest discover

from the top-level Scrapy folder, you should get a success message again.
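At this point you can also turn the placeholder test into something slightly more meaningful. A minimal smoke test could look like the sketch below; it only checks attributes that the spider from the previous post defines.

storedirectoryscraper/storedirectoryscraper/tests/tests.py

import unittest

from storedirectoryscraper.spiders import rapipago


class TestSpider(unittest.TestCase):

    def test_spider_basics(self):
        # basic sanity checks on the spider definition
        spider = rapipago.RapiPagoSpider()
        self.assertEqual(spider.name, 'rapipago')
        self.assertIn('rapipago.com.ar', spider.allowed_domains)


if __name__ == '__main__':
    unittest.main()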

Setting up the test database


Now that we have made our tests work with the Django model, we need to be careful about which database we run our tests against: we wouldn't want our tests modifying our production database.

Looking at how we integrated our Django app into the testing environment, it turns out to be very easy to configure a testing database separate from our development one. We just need to create different settings for the dev, test and prod environments in our Django application. Let's create a settings file for testing for now.
Inside our mappingsite module, create a folder called settings. Add an empty __init__.py file to tell Python this is a package. Move our settings file inside that folder and rename it to base.py.


├── manage.py
├── mappingsite
│   ├── __init__.py
│   ├── settings
│   │   ├── base.py
│   │   └── __init__.py
│   ├── urls.py
│   └── wsgi.py


Now create two new modules, dev.py and test.py, and cut and paste the DATABASES declaration into both files. Change the database name to something that makes sense for each environment (you can also change the engine if desired) so that they won't collide.

Now you can take several approaches to resolve the correct environment. For this case, we will just add

from base import *

at the top of each file, and point the settings module referenced in wsgi.py, the scraper's settings.py and tests/__init__.py to the correct one. This is not the recommended solution though, as we would need to change those settings when deploying to a different environment (say production, or a staging server). You can read more on this in the Django docs.
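For illustration, a minimal test settings module could look like the sketch below; the SQLite engine and database name are assumptions, so use whatever suits your testing setup.

mappingsite/mappingsite/settings/test.py

from base import *

# Throwaway database for tests only; engine and name are placeholders.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'test_db.sqlite3',
    }
}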

Summary


In this post we have seen how to set up our environment for unit testing when using Django models. With these changes we can now start unit testing our scraper, and even add some integration tests to verify we are actually able to populate our database. In future posts we will go deeper into how to unit test our scraper, and later on we will look into Scrapy's alternative, contracts.

Thursday, March 5, 2015

Scraping a website using Scrapy and Django

I've been playing around with Scrapy lately and I found it extremely easy to use. 

The steps to build a simple project are well described in the Scrapy tutorial; here I am going to expand on what's explained there to include submitting forms, Django integration and testing.
If you worked through the tutorial project, you already have an understanding of the three key concepts you need to get started:

  • Spiders: This is where we navigate the pages and look for the information we want to acquire. You will need some basic knowledge of CSS selectors and/or XPath to get to the data you want. There are easy ways to submit forms (for login, search, etc.), follow links and so on. In the end, when you get to the data you want to keep, you store it in Items.
  • Items: Items can be serialized in many formats, saved directly to a database, linked to a Django model to store via the Django ORM, etc. Prior to that, they can be sent through one or more pipelines for processing.
  • Pipelines: Here is where you would do all the validation, data cleanup, etc.

Scraping a complex page


Let's say we want to scrape the page here.

It lists the locations of service and tax payment offices around my country.
You can search either by keyword or by province and city using the search form on the right of the page. The form does have the provinces loaded by default, but it is not until you select a province that you are able to select a city. As we cannot execute JavaScript with Scrapy, we need to split the process into 4 steps inside the spider:

  1. Go to the main page http://www.rapipago.com.ar/rapipagoWeb/index.htm and parse the response, looking for the list of provinces.
  2. For each province in the select element, submit the form simulating the selection.
  3. Parse the response to each request in step 2 to find the list of cities associated with each province, and submit the search form again for each (province, city) pair.
  4. Parse each response in step 3 to obtain the items. In this case, the information we want is the name, address, city and province of each location. Yield each item for further processing through the pipeline.

Django integration

 

You will have to build both a Scrapy project and a Django project. The out-of-the-box integration uses DjangoItem to store data through the Django ORM.

Let's say you are scraping an ATM directory to later build a Django application that displays the store locations on a map.


- BloggerWorkspace
       - storedirectoryscraper (Scrapy project)
       - mappingsite (Django Project)
             - mappingsite
             - storemapapp

Above you can see the workspace structure, with the Scrapy project and the Django project side by side.

This was easily achieved by doing:

$ mkvirtualenv BloggerWorkspace
$ mkproject BloggerWorkspace

When creating the environments, bear in mind that Scrapy currently does not support Python 3, so you'll need to use the latest 2.7 version. You could use different environments and Python versions for the Scrapy and Django projects; I am using the same one here in favour of simplicity.


Update: Since May 2016, Scrapy 1.1 supports Python 3 on non-Windows environments with some limitations; see the release notes.
 

$ pip install django
$ django-admin.py startproject mappingsite
$ cd mappingsite
$ django-admin.py startapp storemapapp
$ cd ..
$ pip install Scrapy
$ scrapy startproject storedirectoryscraper

Doing the actual work


So, now that we have our projects set up, let's see what the code would look like.

Following the Scrapy tutorial, we need to create our item. This is going to be a DjangoItem, so we will first go to our Django application and add a model to models.py inside our brand new app, storemapapp. We also need to add the app to INSTALLED_APPS in our settings.py module.


mappingsite/storemapapp/models.py

from django.db import models

class Office(models.Model):
    city = models.CharField(max_length=100)
    province = models.CharField(max_length=100)
    address = models.CharField(max_length=100)
    name = models.CharField(max_length=100)



mappingsite/mappingsite/settings.py

....

INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'storemapapp',
)
....
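With the model defined and the app registered, remember to create its database table before the scraper tries to save anything. On Django 1.7 or later that means running the migration commands (on older versions, syncdb):

$ python manage.py makemigrations storemapapp
$ python manage.py migrate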


We can then go back to our scraper project and create our Scrapy item:

from scrapy.contrib.djangoitem import DjangoItem
from storemapapp.models import Office

class OfficeItem(DjangoItem):
    django_model = Office


Update: As of Scrapy 1.0.0, DjangoItem has been relocated to its own package. You need to pip install scrapy-djangoitem and import DjangoItem from scrapy_djangoitem instead.
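Once the Django app is reachable from the Scrapy project (configured just below), the item gets its fields from the Django model and can be persisted through the ORM. A quick illustrative session, with made-up field values and assuming the database table already exists:

>>> item = OfficeItem(name='Sucursal Centro', province='Santa Fe',
...                   city='Rosario', address='San Martin 1500')
>>> office = item.save()  # returns the saved Office model instance
>>> office.pk is not None
True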

We will also need to tell Scrapy how to find our Django application so it can import our Office model. At the top of our scraper's settings.py file, add these lines to make our Django app available:

storedirectoryscraper/storedirectoryscraper/settings.py

import sys
import os


sys.path.append('<abs path to BloggerWorkspace/mappingsite>')
os.environ['DJANGO_SETTINGS_MODULE'] = 'mappingsite.settings'

Update: As of Django 1.7, app loading changed and you need to explicitly call the setup method. You can do this by also adding the following (thanks to Romerito Campos):


import django
django.setup()


With those settings in place, we should now be able to start the Scrapy shell and import our Django model with

from storemapapp.models import Office

to check it works.
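For example, from the top folder of the Scrapy project (the repr line shown is just illustrative):

$ scrapy shell
>>> from storemapapp.models import Office
>>> Office
<class 'storemapapp.models.Office'>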


Once we have our Item set up, we can continue to create our first spider. Create a file rapipago.py under the spiders directory.

storedirectoryscraper/storedirectoryscraper/spiders/rapipago.py


import scrapy
from storedirectoryscraper.items import OfficeItem


class RapiPagoSpider(scrapy.Spider):
    name = "rapipago"
    allowed_domains = ["rapipago.com.ar"]
    start_urls = [
        "http://www.rapipago.com.ar/rapipagoWeb/index.htm",  # (1)
    ]

    def parse(self, response):
        # find form and fill in
        # call inner parse to parse real results.
        for idx, province in enumerate(response.xpath("//*[@id='provinciaSuc']/option")):  # (2)
            if idx > 0: # avoid select prompt
                code = province.xpath('@value').extract()
                request = scrapy.FormRequest("http://www.rapipago.com.ar/rapipagoWeb/suc-buscar.htm",
                                             formdata={'palabraSuc': 'Por palabra', 'provinciaSuc': code},
                                             callback=self.parse_province)  # (3)

                request.meta['province'] = province.xpath('text()').extract()[0]  # (4)
                request.meta['province_code'] = code
                yield request  # (5)

    def parse_province(self, response):
        for idx, city in enumerate(response.xpath("//*[@id='ciudadSuc']/option")):
            if idx > 0: 
                code = city.xpath('@value').extract()[0]

                request = scrapy.FormRequest("http://www.rapipago.com.ar/rapipagoWeb/suc-buscar.htm",
                                             formdata={'palabraSuc': 'Por palabra',
                                                       'provinciaSuc': response.meta['province_code'],
                                                       'ciudadSuc': code},
                                             callback=self.parse_city)

                request.meta['province'] = response.meta['province']
                request.meta['province_code'] = response.meta['province_code']
                request.meta['city'] = city.xpath('text()').extract()[0]
                request.meta['city_code'] = code
                yield request

    def parse_city(self, response):
        for link in response.xpath("//a[contains(@href,'index?pageNum')]/@href").extract():
            request = scrapy.FormRequest('http://www.rapipago.com.ar/rapipagoWeb/suc-buscar.htm?' + link.split('?')[1],
                                         formdata={'palabraSuc': 'Por palabra',
                                                   'provinciaSuc': response.meta['province_code'],
                                                   'ciudadSuc': response.meta['city_code']},
                                         callback=self.parse_city_data)

            request.meta['province'] = response.meta['province']
            request.meta['city'] = response.meta['city']

            yield request

    def parse_city_data(self, response):
        # TODO: follow page links (7)
        for office in response.xpath("//*[@class='resultadosNumeroSuc']"):  # (6)
            officeItem = OfficeItem()
            officeItem['province'] = response.meta['province']
            officeItem['city'] = response.meta['city']
            officeItem['name'] = office.xpath("../*[@class='resultadosTextWhite']/text()").extract()[0]
            officeItem['address'] = office.xpath("../..//*[@class='resultadosText']/text()").extract()[0]
            yield officeItem




That is a lot of code for our scraper and it deserves some explanation.

In (1) we are telling the spider where to start navigating. In this example, I've decided to start from the index page, where the search form first appears.

Our spider extends Scrapy's Spider class. As such, it will fetch the initial page and call the parse method, passing in the response.

In (2) we are fulfilling the first step described in Scraping a complex page: we find the select element that lists the provinces and iterate over its values. For each value listed, we create a FormRequest (3), telling Scrapy how to populate the form using the value obtained, and passing in a callback to process the response of the form submission.

If we look at our OfficeItem, we see that we want to store the province, city and address of each location. For that, we need to pass that data all the way through to the item creation in the final callback, parse_city_data. The way to accomplish this with Scrapy is to add the data to the meta dictionary of the request object in each call, as shown in (4).

In (5) we yield the request, which will be scheduled, and its callback called once the request completes. We have now spawned a request for each province, and each of them is going to call parse_province when it completes. In that method we repeat the same procedure, this time filling in both province and city in the FormRequest and passing parse_city as the callback. We also copy the province and city values into the meta dictionary to make them available to the next callback.

Finally, in (6) we have the response with locations for a particular province and city, so we can proceed to parse the response and create our OfficeItem objects.

As per the comment in (7), we would need to repeat this call for each pagination link we find; there is more than one way to accomplish this in Scrapy. One way to implement it is to add an intermediate callback before the one extracting the data, which iterates over the pagination links and yields a new request for each of them, as sketched below.
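To give an idea of that shape, here is a minimal sketch under explicit assumptions: the callback name parse_results_page is hypothetical, the XPath and the plain GET request are guesses about the site's markup, and the real pages may need the same FormRequest that parse_city builds. The urlparse import (Python 2 stdlib) would sit with the other imports at the top of the module.

    def parse_results_page(self, response):
        # hypothetical intermediate callback: extract the items from the
        # current results page right away...
        for item in self.parse_city_data(response):
            yield item

        # ...then queue one request per pagination link found on the page,
        # re-using parse_city_data and forwarding the meta values it needs
        for href in response.xpath("//a[contains(@href, 'pageNum')]/@href").extract():
            request = scrapy.Request(urlparse.urljoin(response.url, href),
                                     callback=self.parse_city_data)
            request.meta['province'] = response.meta['province']
            request.meta['city'] = response.meta['city']
            yield request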

 

 Saving the items


Up to now we have been able to create our Django item, but we have not written it anywhere, neither to a database nor to a file.

The most suitable place to do this is the pipelines module. We can define as many pipelines as we want and simply register them in our settings.py module so that Scrapy executes them.

For this example, I have written a short pipeline which performs a basic cleanup of the address we extracted from the HTML, and saves the item to the database.

Here is the code for our pipelines.py module:

# -*- coding: utf-8 -*-
import re


class ScrapRapiPagoPipeline(object):

    def process_item(self, item, spider):
        # clean up the scraped address, then persist the item via the Django ORM
        item['address'] = self.cleanup_address(item['address'])
        item.save()
        return item

    def cleanup_address(self, address):
        # if the address contains a doubled number (e.g. "1500 1500"),
        # keep everything up to the first occurrence only
        m = re.search('(?P<numb>(\d+))\s(?P=numb)', address)
        if m:
            return address[0:m.end(1)]
        return address
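To illustrate what the cleanup does, with a made-up address where the street number appears twice:

>>> ScrapRapiPagoPipeline().cleanup_address('San Martin 1500 1500 Rosario')
'San Martin 1500'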


We need to tell Scrapy which pipelines to run. To do that, open the settings.py file in our Scrapy project and add these lines:

ITEM_PIPELINES = {
    'storedirectoryscraper.pipelines.ScrapRapiPagoPipeline': 300,
}

Running the spider

So now that we have built all the pieces, you can try running the spider from the command line (if you haven't been doing so already).

Just go to the top folder of the Scrapy project and type:

scrapy crawl rapipago

Summary


In this post I've examined how to scrape a site that requires multiple form submissions, passing data from request to request, and some basic data cleanup. I've shown a way to save to the database using Scrapy's Django integration, though you might prefer to write directly to the database, or dump to a file instead. More information about each piece can be found in the Scrapy docs.

In later posts I'll cover the steps to unit test this scraper the usual way, and also explore Scrapy's newer alternative, contracts.