We are currently building a targeted crawler on top of a Celery queue. The target pages themselves are simple, but when storing them we have to convert and decompose the various pieces of information on each page into structured data, which requires a large number of database queries along the way. The overall flow is "dispatch task -> queue -> execute in Celery".
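A minimal sketch of that flow, for context only: the broker URL, the requests fetch, and the parse/store step below are placeholders, not the actual project code.

```python
# Minimal sketch of the "dispatch task -> queue -> execute in Celery" flow.
# Broker URL and task body are placeholders, not the real crawler code.
from celery import Celery
import requests

app = Celery('celeryd', broker='amqp://guest@localhost//')  # placeholder broker

@app.task
def crawl_page(url):
    """Fetch one target page; parsing it into structured rows (and the many
    DB queries that go with it) happens where the comment is."""
    html = requests.get(url, timeout=10).text
    # ... convert/decompose `html` into structured fields and write them
    #     to the database here ...
    return len(html)

# Producer side: push a task onto the queue; a worker started with
# `celery -A <module> worker ...` picks it up and executes it.
crawl_page.delay('http://example.com/item/1')
```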
The problems we are running into at the moment are as follows.
We use supervisor to manage the Celery workers, but every so often the several Celery processes we started all exit together (at roughly the same time).
supervisor.log (excerpt):
2015-05-09 03:02:56,462 INFO stopped: celerya (exit status 0)
2015-05-09 03:02:57,457 INFO stopped: celeryb (exit status 0)
2015-05-09 03:04:38,275 INFO spawned: 'celerya' with pid 3547
2015-05-09 03:04:48,529 INFO success: celerya entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2015-05-09 03:07:55,995 INFO spawned: 'celeryb' with pid 3926
2015-05-09 03:08:06,337 INFO success: celeryb entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2015-05-09 03:09:01,861 INFO stopped: celerya (exit status 0)
2015-05-09 03:11:29,792 INFO stopped: celeryb (exit status 0)
2015-05-09 03:12:02,037 INFO spawned: 'celerya' with pid 4706
2015-05-09 03:12:03,044 INFO spawned: 'celeryb' with pid 4710
The Celery exception at exit:
Traceback (most recent call last):
  File "/usr/local/opt/pyenv/versions/calvino/lib/python2.7/site-packages/celery/worker/__init__.py", line 206, in start
    self.blueprint.start(self)
  File "/usr/local/opt/pyenv/versions/calvino/lib/python2.7/site-packages/celery/bootsteps.py", line 123, in start
    step.start(parent)
  File "/usr/local/opt/pyenv/versions/calvino/lib/python2.7/site-packages/celery/bootsteps.py", line 374, in start
    return self.obj.start()
  File "/usr/local/opt/pyenv/versions/calvino/lib/python2.7/site-packages/celery/worker/consumer.py", line 278, in start
    blueprint.start(self)
  File "/usr/local/opt/pyenv/versions/calvino/lib/python2.7/site-packages/celery/bootsteps.py", line 123, in start
    step.start(parent)
  File "/usr/local/opt/pyenv/versions/calvino/lib/python2.7/site-packages/celery/worker/consumer.py", line 821, in start
    c.loop(*c.loop_args())
  File "/usr/local/opt/pyenv/versions/calvino/lib/python2.7/site-packages/celery/worker/loops.py", line 97, in synloop
    connection.drain_events(timeout=2.0)
  File "/usr/local/opt/pyenv/versions/calvino/lib/python2.7/site-packages/kombu/connection.py", line 275, in drain_events
    return self.transport.drain_events(self.connection, **kwargs)
  File "/usr/local/opt/pyenv/versions/calvino/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 840, in drain_events
    message, queue = item
TypeError: 'NoneType' object is not iterable
After about 4 hours of running, the celery processes are still alive but stop doing any work, with no log output. After a manual restart they pick up work again, but roughly 80% of the tasks then fail with a database error: IntegrityError: (IntegrityError) (1062, u"Duplicate entry '201173000000006801' for key 'PRIMARY'"). Checking the table by hand, the supposedly duplicated primary key does not actually exist.
After about 2 hours of running, duplicate-primary-key errors start to appear one after another. Checking by hand, this time the row really does exist, yet it should not, because that row was supposed to be created by the current task itself.
Attached, the Celery launch command: /usr/local/opt/pyenv/versions/calvino/bin/celery -A celeryd worker -P eventlet -c 30 -n spider02
I have only just started writing production code for the company, and fighting these bugs feels like firefighting. Fellow V2EX users, please save me!
1
est 2015-05-09 09:27:14 +08:00
IntegrityError: (IntegrityError) (1062, u"Duplicate entry '201173000000006801' for key 'PRIMARY'")
I have run into this before: two threads insert the same value at the same time, both report the error, then there is no record in the DB and the auto-increment jumps to the next value.
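If concurrent inserts (or re-run copies of the same task) racing on the same key turn out to be the trigger, one way to keep the storage step from blowing up is to catch the error and re-check the row. A minimal sketch, assuming SQLAlchemy is used for storage; the `Article` model, its columns, and the connection string are placeholders, not the OP's schema.

```python
# Sketch only: treat a duplicate-key error as "someone else (or a re-run of
# this task) already inserted it" and verify before failing.
from sqlalchemy import Column, String, create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Article(Base):
    __tablename__ = 'article'
    id = Column(String(32), primary_key=True)   # e.g. '201173000000006801'
    title = Column(String(255))

engine = create_engine('mysql://user:password@localhost/spider')  # placeholder DSN
Session = sessionmaker(bind=engine)

def save_item(item):
    session = Session()
    try:
        session.add(Article(**item))
        session.commit()
    except IntegrityError:
        session.rollback()
        # If the row genuinely exists, another insert simply won the race.
        # If it does not (the OP's stranger case), surface the error.
        if session.query(Article).get(item['id']) is None:
            raise
    finally:
        session.close()
```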
2
ultimate010 2015-05-09 09:58:25 +08:00
I can't spot the problem from this, but celery + supervisord for a crawler is perfectly workable; I used to run a crawler with celery as the task queue on 20+ machines and it was very stable. Make sure the crawling code catches exceptions (try/except).
I'd suggest using redis as the broker to save yourself the trouble.
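For reference, pointing the existing app at Redis is a small change. The URL below is a placeholder for your own Redis host/port/db, and the `redis` Python client has to be installed for kombu's Redis transport.

```python
# Sketch: switch the broker to Redis (`pip install redis` first).
from celery import Celery

app = Celery('celeryd', broker='redis://localhost:6379/0')
```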
3
binux 2015-05-09 10:41:33 +08:00
Celery retries a task when it fails or isn't acked, right? Could an incomplete transaction on a re-run task be what's causing the Duplicate?
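One way to probe this hypothesis (a sketch with illustrative names, not the OP's code): log the task id and retry count on every write and make the acknowledgement behaviour explicit, so duplicate-key errors can be matched against re-deliveries in the worker log.

```python
# Sketch: make ack behaviour explicit and log enough to correlate duplicates
# with re-delivered/re-run tasks. Broker URL is a placeholder.
from celery import Celery
from celery.utils.log import get_task_logger

app = Celery('celeryd', broker='redis://localhost:6379/0')
app.conf.CELERY_ACKS_LATE = False   # ack on receipt (the Celery 3.x default)

logger = get_task_logger(__name__)

@app.task(bind=True)
def store_page(self, page_id):
    logger.info('task %s retries=%s page=%s',
                self.request.id, self.request.retries, page_id)
    # ... the actual parse-and-save step goes here; it should be idempotent
    #     (e.g. the save_item() sketch above) so a re-run cannot insert twice ...
```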
4
binux 2015-05-09 10:41:45 +08:00
Also, give pyspider a try.
12
lilydjwg 2015-05-09 13:39:02 +08:00
What broker is this using? Surely not MySQL?
Switch to Redis.
13
fy 2015-05-09 14:22:04 +08:00
Celery with Redis works just fine.
As for a multi-process setup, OP could also try zeromq: a few dozen lines of code are enough for a prototype, and it works across data centres too, as long as a socket connection can be established. A sketch of such a prototype follows.
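To give a rough sense of what that prototype looks like, here is a sketch using pyzmq: one PUSH producer fanning jobs out to any number of PULL workers. The hostnames and the `handle()` step are placeholders.

```python
# Sketch of a bare-bones zeromq task queue with pyzmq (PUSH/PULL pattern).
import zmq

def producer(urls):
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PUSH)
    sock.bind('tcp://*:5557')
    for url in urls:
        sock.send_json({'url': url})   # fan jobs out to connected workers

def worker():
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PULL)
    sock.connect('tcp://producer-host:5557')  # any reachable socket, even cross-IDC
    while True:
        job = sock.recv_json()
        handle(job['url'])                    # placeholder for fetch-parse-store
```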
14
Daniel65536 2015-05-09 23:56:30 +08:00
@Moker pyspider is binux's own work... just ask him directly =_=
15
Moker 2015-05-10 09:55:26 +08:00
@Daniel65536 Haha... I found that out while reading up on it... That said, my questions are pretty beginner-level, so it would be a bit awkward to ask too many.