Batch-deploying Scrapy spiders

1. First, install scrapyd.
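
scrapyd is available from PyPI, so pip install scrapyd installs it; running the scrapyd command then starts the daemon, which listens on port 6800 by default (matching the URLs in the next step).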
2. In the project's scrapy.cfg file, add deploy sections like the following:

# the section name is "deploy" plus the target name, separated by a colon
[deploy:s158]
url = http://192.168.1.158:6800/

[deploy:s161]
url = http://192.168.2.161:6800/

[deploy:s88]
url = http://10.199.3.88:6800/
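
If you would rather not hardcode the target list in step 3, the targets can be read back out of scrapy.cfg. A minimal sketch, assuming Python 3 (not part of the original post):

# -*- coding: utf-8 -*-
# Sketch: collect the deploy target names from scrapy.cfg so the
# deploy script in step 3 does not need a hardcoded hosts list.
from configparser import ConfigParser

cfg = ConfigParser()
cfg.read('scrapy.cfg')

# Each "[deploy:<name>]" section defines one deploy target.
hosts = [s.split(':', 1)[1] for s in cfg.sections() if s.startswith('deploy:')]
print(hosts)  # e.g. ['s158', 's161', 's88']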

3. In the same directory as scrapy.cfg, create a Python file with any name, e.g. deploy.py, and add the following code to it.

# -*- coding: utf-8 -*-
import os

# The project name is taken from the name of the directory this
# script lives in (the same directory that holds scrapy.cfg).
# abspath() is needed so dirname() is not empty when the script
# is run as "python deploy.py".
project = os.path.dirname(os.path.abspath(__file__))
project = os.path.basename(project)

# Target names as defined in scrapy.cfg.
hosts = ['s158', 's161', 's88']

for it in hosts:
    command = 'scrapy deploy %s -p %s' % (it, project)
    os.system(command)
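
To confirm that each box actually received the project, scrapyd's JSON API can be polled. A small sketch, assuming Python 3 and the three hosts from scrapy.cfg above (listprojects.json is the scrapyd endpoint that lists deployed projects):

# -*- coding: utf-8 -*-
# Sketch: ask every scrapyd instance which projects it holds.
import json
from urllib.request import urlopen

urls = [
    'http://192.168.1.158:6800',
    'http://192.168.2.161:6800',
    'http://10.199.3.88:6800',
]
for base in urls:
    with urlopen(base + '/listprojects.json', timeout=5) as resp:
        data = json.loads(resp.read().decode('utf-8'))
    # scrapyd answers like {"status": "ok", "projects": ["webproxy"]}
    print(base, data.get('projects', []))

Note that newer Scrapy releases removed the built-in deploy command; it now ships as scrapyd-deploy in the separate scrapyd-client package, so substitute that command if scrapy deploy is not available.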

----------------------------------------

Shell-script version:
#!/bin/bash

# You should modify the parameters below.

project='webproxy'

# The host names come from the scrapy.cfg file in the same directory.
hosts="s2 s254 s102"

# ---------------------------------------------
# code body.
path=$(dirname "$(readlink -f "$0")")
prjdir="$path/$project"

if [ ! -d "$prjdir" ]; then
    echo ""
    echo "can't find the directory ${prjdir}."
    echo "make sure the spider project lives in this path."
    echo ""
    exit 1
fi

echo ""
echo "----"
echo "ready to deploy '$project' to [$hosts]"
echo "----"
echo ""

read -p "Do you want to continue [Y/N]? " answer
case $answer in
    Y | y)
        echo ""
        for it in $hosts
        do
            scrapy deploy "$it" -p "$project"
        done
        ;;
    *)
        echo ""
        echo "nothing to do, good bye!!"
        ;;
esac
echo ""
echo ""