21. class FeedImporter(Task):
        name = "feed.import"
        routing_key = "feed.import"
        ignore_result = True
        default_retry_delay = 5 * 60  # retry in 5 minutes
        max_retries = 72              # 6 hours, to cover major outages

        def run(self, podcast_id, **kwargs):
            logger = self.get_logger(**kwargs)
            # The cache key consists of the task name and the feed id.
            lock_id = "%s-lock-%s" % (self.name, podcast_id)
            is_locked = lambda: str(cache.get(lock_id)) == "true"
            acquire_lock = lambda: cache.set(lock_id, "true", 300)
            # memcached delete is very slow, so we'd rather set a false
            # value with a very low expiry time.
            release_lock = lambda: cache.set(lock_id, "nil", 1)
            logger.debug("Trying to import feed: %s" % podcast_id)
            if is_locked():
                logger.debug("Feed %s is already being imported "
                             "by another worker" % podcast_id)
                return
            acquire_lock()
            try:
                import_feed(logger, podcast_id)
            except Exception as exc:
                logger.error(exc)
            finally:
                release_lock()
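The `is_locked()` / `acquire_lock()` pair on the slide has a small race: two workers can both see the key absent and both "acquire" the lock. Memcached's `add` is atomic (it fails if the key already exists), which closes that gap. A minimal, self-contained sketch of the idea — `CacheStub` is a hypothetical in-memory stand-in for the real cache client, not part of the slide's code:

```python
import time

class CacheStub(object):
    """Tiny in-memory stand-in for a memcached-style client."""
    def __init__(self):
        self._store = {}

    def add(self, key, value, timeout):
        # Atomic on real memcached: only succeeds if the key is
        # absent (or its previous value has expired).
        now = time.time()
        current = self._store.get(key)
        if current is not None and current[1] > now:
            return False
        self._store[key] = (value, now + timeout)
        return True

    def delete(self, key):
        self._store.pop(key, None)

cache = CacheStub()
LOCK_EXPIRE = 300  # same 5-minute expiry as the slide

def import_once(podcast_id):
    lock_id = "feed.import-lock-%s" % podcast_id
    # add() replaces the separate is_locked()/acquire_lock() steps
    # with a single atomic operation.
    if not cache.add(lock_id, "true", LOCK_EXPIRE):
        return "already running"
    try:
        return "imported %s" % podcast_id  # import_feed() would go here
    finally:
        cache.delete(lock_id)
```

With a real memcached backend the same pattern works unchanged; the expiry on the lock key also guarantees the lock frees itself if a worker crashes mid-import.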
22. typical problems
• running out of disk space == RabbitMQ failure
• queue priorities are difficult
• non-picklable errors
• crashing consumers
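Since real queue priorities are difficult, the usual workaround in this era of Celery was to route important tasks to their own queue and give that queue dedicated workers. A hedged sketch, assuming Celery 2.x-style settings (names as they existed then; the queue name `feeds` is an example):

```python
# Route the feed.import task to its own "feeds" queue so it cannot
# be starved by (or starve) other tasks.
CELERY_ROUTES = {
    "feed.import": {"queue": "feeds", "routing_key": "feed.import"},
}

# A dedicated worker then consumes only that queue:
#   $ celeryd -Q feeds --concurrency=4
```

This gives coarse-grained prioritization: the `feeds` workers always have capacity for feed imports regardless of backlog elsewhere.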
23. other cool features
• tasksets / callbacks
• remote control tasks
• abortable tasks
• eta – run tasks at a set time
• HttpDispatchTask
• expiring tasks
• celerymon
• celeryev
• ajax views
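As a small illustration of `eta` from the list above: `apply_async` accepts an `eta` datetime, and the worker holds the task until that time. A minimal sketch — the `FeedImporter.apply_async` call is left commented because it refers to the task class from the earlier slide and needs a running broker:

```python
from datetime import datetime, timedelta

# Run the import ten minutes from now; "expires" (also from the list
# above) would discard the task if no worker picks it up in time.
eta = datetime.utcnow() + timedelta(minutes=10)

# FeedImporter.apply_async(args=[some_podcast_id], eta=eta, expires=3600)
```

For simple relative delays, `countdown=600` is the shorthand for the same ten-minute `eta`.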