Live demo: scrapydweb.herokuapp.com
Visit heroku.com and sign up for a free account (the signup page uses Google reCAPTCHA for human verification, and the login page may also require a proxy to reach from some regions; visiting the deployed app pages has no such problem). A free account can create and run up to 5 apps.
Visit redislabs.com and sign up for a free account, which provides 30 MB of storage; it will be used below to run a distributed crawler via scrapy-redis.
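For the distributed crawl at the end of this article, the Scrapy project needs to be pointed at this Redis instance. A minimal settings sketch using the standard scrapy-redis settings names (the placeholder values are assumptions, not the actual config shipped in scrapyd-cluster-on-heroku):

```python
# settings.py fragment for scrapy-redis (a sketch; values are placeholders)
SCHEDULER = "scrapy_redis.scheduler.Scheduler"              # schedule requests through Redis
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"  # dedup shared across all nodes
SCHEDULER_PERSIST = True                                    # keep the queue between runs
REDIS_URL = "redis://:your-redis-password@your-redis-host:your-redis-port"
```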
The four Scrapyd apps will be named svr-1, svr-2, svr-3 and svr-4, and the ScrapydWeb app will be named myscrapydweb.
More Scrapyd servers can be registered later by adding environment variables to the myscrapydweb app, e.g. KEY SCRAPYD_SERVER_2 with VALUE svr-2.herokuapp.com:80#group2 (the suffix after # assigns the server to a group).
To install the Redis client library used below, running the pip install redis command is enough. Open a new command prompt:
git clone https://github.com/my8100/scrapyd-cluster-on-heroku
cd scrapyd-cluster-on-heroku
heroku login
# outputs:
# heroku: Press any key to open up the browser to login or q to exit:
# Opening browser to https://cli-auth.heroku.com/auth/browser/12345-abcde
# Logging in... done
# Logged in as username@gmail.com
Create a new Git repo
cd scrapyd
git init
# explore and update the files if needed
git status
git add .
git commit -a -m "first commit"
git status
Deploy the Scrapyd app
heroku apps:create svr-1
heroku git:remote -a svr-1
git remote -v
git push heroku master
heroku logs --tail
# Press ctrl+c to stop tailing the logs
# Visit https://svr-1.herokuapp.com
Set environment variables
# python -c "import tzlocal; print(tzlocal.get_localzone())"
heroku config:set TZ=Asia/Shanghai
# heroku config:get TZ
heroku config:set REDIS_HOST=your-redis-host
heroku config:set REDIS_PORT=your-redis-port
heroku config:set REDIS_PASSWORD=your-redis-password
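On the dyno these values arrive as plain environment variables. A sketch of how an app might read them back, with the same defaults Redis itself uses (a hypothetical helper, not the actual scrapyd-cluster-on-heroku code):

```python
import os

def redis_config(environ=os.environ):
    """Collect the Redis settings set via `heroku config:set` into one dict."""
    return {
        "host": environ.get("REDIS_HOST", "127.0.0.1"),
        "port": int(environ.get("REDIS_PORT", 6379)),
        "password": environ.get("REDIS_PASSWORD") or None,
    }
```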
Repeat the steps above to deploy the remaining Scrapyd apps svr-2, svr-3 and svr-4.
Create a new Git repo
cd ..
cd scrapydweb
git init
# explore and update the files if needed
git status
git add .
git commit -a -m "first commit"
git status
Deploy the ScrapydWeb app
heroku apps:create myscrapydweb
heroku git:remote -a myscrapydweb
git remote -v
git push heroku master
Set environment variables
heroku config:set TZ=Asia/Shanghai
heroku config:set SCRAPYD_SERVER_1=svr-1.herokuapp.com:80
heroku config:set SCRAPYD_SERVER_2=svr-2.herokuapp.com:80#group1
heroku config:set SCRAPYD_SERVER_3=svr-3.herokuapp.com:80#group1
heroku config:set SCRAPYD_SERVER_4=svr-4.herokuapp.com:80#group2
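Each SCRAPYD_SERVER_* value is a host:port pair with an optional #group suffix. A small sketch of how such a value can be split (a hypothetical helper illustrating the format, not ScrapydWeb's actual parser):

```python
def parse_scrapyd_server(value):
    """Split a 'host:port#group' string; the '#group' part is optional."""
    server, _, group = value.partition("#")
    host, _, port = server.partition(":")
    return host, int(port or 80), group or None
```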
The spider reads its seed URLs from the Redis key mycrawler:start_urls.
Trigger the spider and check the results
In [1]: import redis  # pip install redis
In [2]: r = redis.Redis(host='your-redis-host', port=your-redis-port, password='your-redis-password')
In [3]: r.delete('mycrawler_redis:requests', 'mycrawler_redis:dupefilter', 'mycrawler_redis:items')
Out[3]: 0
In [4]: r.lpush('mycrawler:start_urls', 'http://books.toscrape.com', 'http://quotes.toscrape.com')
Out[4]: 2
# wait for a minute
In [5]: r.lrange('mycrawler_redis:items', 0, 1)
Out[5]:
[b'{"url": "http://quotes.toscrape.com/", "title": "Quotes to Scrape", "hostname": "d6cf94d5-324e-4def-a1ab-e7ee2aaca45a", "crawled": "2019-04-02 03:42:37", "spider": "mycrawler_redis"}',
b'{"url": "http://books.toscrape.com/index.html", "title": "All products | Books to Scrape - Sandbox", "hostname": "d6cf94d5-324e-4def-a1ab-e7ee2aaca45a", "crawled": "2019-04-02 03:42:37", "spider": "mycrawler_redis"}']
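As Out[5] shows, the crawled items land in Redis as JSON-encoded byte strings. A small helper (not part of the original article) to turn an lrange result into Python dicts:

```python
import json

def decode_items(raw_items):
    """Decode the JSON byte strings returned by r.lrange('...:items', ...)."""
    return [json.loads(raw) for raw in raw_items]
```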
Source code: my8100/scrapyd-cluster-on-heroku