
Spider

This package provides a simple, yet extensible, way to scrape HTML and JSON pages. It uses spiders that crawl the web on configurable schedules to fetch data. It is written in Go and is MIT licensed.

You can see an example app using this package here: https://github.com/celrenheit/trending-machine

Installation

$ go get -u github.com/celrenheit/spider

Usage

package main

import (
	"fmt"
	"time"

	"github.com/celrenheit/spider"
	"github.com/celrenheit/spider/schedule"
)

// LionelMessiSpider scrapes Wikipedia's page for Lionel Messi.
// It is defined below in the init function.
var LionelMessiSpider spider.Spider

func main() {
	// Create a new scheduler
	scheduler := spider.NewScheduler()

	// Register the spider to be scheduled every 15 seconds
	scheduler.Add(schedule.Every(15*time.Second), LionelMessiSpider)
	// Alternatively, you can choose a cron schedule
	// This will run every minute of every day
	scheduler.Add(schedule.Cron("* * * * *"), LionelMessiSpider)

	// Start the scheduler
	scheduler.Start()

	// Wait 65 seconds before exiting so that the requests have time
	// to complete (the cron schedule above fires once per minute).
	<-time.After(65 * time.Second)
}

func init() {
	LionelMessiSpider = spider.Get("https://en.wikipedia.org/wiki/Lionel_Messi", func(ctx *spider.Context) error {
		fmt.Println(time.Now())
		// Execute the request
		if _, err := ctx.DoRequest(); err != nil {
			return err
		}

		// Get goquery's html parser
		htmlparser, err := ctx.HTMLParser()
		if err != nil {
			return err
		}
		// Get the first paragraph of the wikipedia page
		summary := htmlparser.Find("#mw-content-text > p").First().Text()

		fmt.Println(summary)
		return nil
	})
}

In order to create your own spiders, you have to implement the spider.Spider interface. It has two methods, Setup and Spin.

Setup receives a Context and returns a new Context, along with an error if something went wrong. Usually, this is the function where you create a new http client and http request.

Spin receives a Context, does its work, and returns an error if necessary. It is in this function that you do your actual work: execute the request, handle the response, parse HTML or JSON, and so on. A sketch of a custom spider is shown below.
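For illustration, here is a minimal sketch of a custom spider. The method signatures follow the description above. TitleSpider is a made-up example, and spider.NewHTTPContext is an assumption about how a Context is built from a client and a request; check the GoDoc for the exact constructor before copying this.

package main

import (
	"fmt"
	"net/http"

	"github.com/celrenheit/spider"
)

// TitleSpider is a hypothetical spider that fetches a page
// and prints the contents of its <title> tag.
type TitleSpider struct {
	URL string
}

// Setup creates the http client and request that the spider will use.
// NOTE: spider.NewHTTPContext is an assumed constructor name;
// consult the GoDoc for the exact way to build a *spider.Context.
func (s *TitleSpider) Setup(ctx *spider.Context) (*spider.Context, error) {
	req, err := http.NewRequest("GET", s.URL, nil)
	if err != nil {
		return nil, err
	}
	return spider.NewHTTPContext(&http.Client{}, req), nil
}

// Spin executes the request, parses the HTML and prints the page title.
func (s *TitleSpider) Spin(ctx *spider.Context) error {
	// Execute the request
	if _, err := ctx.DoRequest(); err != nil {
		return err
	}

	// Get goquery's html parser
	htmlparser, err := ctx.HTMLParser()
	if err != nil {
		return err
	}

	fmt.Println(htmlparser.Find("title").First().Text())
	return nil
}

Such a spider can then be registered with the scheduler just like the example above, e.g. scheduler.Add(schedule.Every(time.Hour), &TitleSpider{URL: "https://example.com"}).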

Documentation

The documentation is hosted on GoDoc.

Examples

$ cd $GOPATH/src/github.com/celrenheit/spider/examples
$ go run wiki.go

Contributing

Contributions are welcome! Feel free to submit a pull request. Improving the documentation and examples is a good place to start. You can also contribute new spiders and better schedulers.

If you have developed your own spiders or schedulers, I will be glad to review your code and possibly merge it into the project.

License

MIT License

Inspiration

Dkron, for the new in-memory scheduler (as of v0.3)
