Copyright notice: this is an original post by the author; do not repost without permission. https://blog.csdn.net/qq_17612199/article/details/80393734
Problem background:
Problem code:
from redis import StrictRedis

r = StrictRedis('10.20.23.45', 3901)
print(r.get('7551A63C3F2C0EA261AAE2B509ABC782172FE56DF64F64B6CB0B355E5A9D9FB7:u_feed_lt_ad_type_show:0'))
print(r.get('1803D8B45F7371F0DE0F98D392B07814B09E9601F2DDE57A0EA154685E85E16B:u_lt_search_showclk:0'))
p = r.pipeline()
p.get('7551A63C3F2C0EA261AAE2B509ABC782172FE56DF64F64B6CB0B355E5A9D9FB7:u_feed_lt_ad_type_show:0')
p.get('1803D8B45F7371F0DE0F98D392B07814B09E9601F2DDE57A0EA154685E85E16B:u_lt_search_showclk:0')
print(p.execute())
The error raised: redis.exceptions.ResponseError: CROSSSLOT Keys in request don't hash to the same slot
Observations: the two keys live in different slots of the same Redis instance.
- GETting each key individually against that instance returns its value
- Yet both keys are on the same instance, just in different slots
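The "different slots" claim can be checked client-side. Below is a minimal re-implementation of Redis Cluster's key-to-slot mapping (CRC16/XMODEM of the key, or of the hash tag inside `{...}` if one is present, modulo 16384); the helper names `crc16` and `keyslot` are mine, not from the article:

```python
def crc16(data: bytes) -> int:
    # CRC16-CCITT (XMODEM) variant used by Redis Cluster:
    # polynomial 0x1021, initial value 0, no bit reflection
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    # Hash tag rule: if the key contains a non-empty "{...}" section,
    # only the content between the first "{" and the next "}" is hashed
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

slot_a = keyslot('7551A63C3F2C0EA261AAE2B509ABC782172FE56DF64F64B6CB0B355E5A9D9FB7:u_feed_lt_ad_type_show:0')
slot_b = keyslot('1803D8B45F7371F0DE0F98D392B07814B09E9601F2DDE57A0EA154685E85E16B:u_lt_search_showclk:0')
print(slot_a, slot_b)  # two different slot numbers, matching the CROSSSLOT error
```

The hash tag rule also explains the standard escape hatch: keys that share the same `{tag}` always map to the same slot.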
Troubleshooting:
- A quick Google search: most results describe how, in a cluster, batch GETs (e.g. MGET) are restricted to keys in the same slot.
- A restriction on MGET in cluster mode is understandable, but a pipeline, as I understand it, merely packs multiple commands into one request to Redis, which then executes each command independently. Given the problem background above, it should execute without errors.
- Based on that analysis, I suspected the pipeline's two GETs were being combined into a single command somewhere, probably on the client side.
- Source analysis: the Python client's pipeline has a transaction parameter that defaults to True; on execute(), the queued commands are wrapped in a MULTI transaction.
- A second, more careful Google search turned up http://weizijun.cn/2015/12/30/redis3.0%20cluster%E5%8A%9F%E8%83%BD%E4%BB%8B%E7%BB%8D/ — in cluster mode not only MGET but also MULTI has restricted support, which explains the problem completely.
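The effect of transaction=True can be made concrete by looking at what the client puts on the wire. The sketch below (the `encode_command` helper is my own, following the RESP protocol's array-of-bulk-strings framing; the key names are placeholders) shows how the two GETs end up wrapped in a MULTI/EXEC block, which is the transaction a cluster node rejects when the keys span slots:

```python
def encode_command(*args):
    # RESP framing: each command is an array of bulk strings,
    # one bulk string per command argument
    parts = [b"*%d\r\n" % len(args)]
    for arg in args:
        data = arg.encode()
        parts.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(parts)

# With transaction=True, execute() sends the queued commands inside a
# MULTI/EXEC block, so the server sees a single transaction whose keys
# hash to two different slots
wire = b"".join([
    encode_command("MULTI"),
    encode_command("GET", "key_in_slot_a"),
    encode_command("GET", "key_in_slot_b"),
    encode_command("EXEC"),
])
print(wire)
```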
The relevant redis-py Pipeline.execute() source:

def execute(self, raise_on_error=True):
    "Execute all the commands in the current pipeline"
    stack = self.command_stack
    if not stack:
        return []
    if self.scripts:
        self.load_scripts()
    if self.transaction or self.explicit_transaction:
        # the transaction flag: wrap the stack in MULTI/EXEC
        execute = self._execute_transaction
    else:
        execute = self._execute_pipeline
    conn = self.connection
    if not conn:
        conn = self.connection_pool.get_connection('MULTI',
                                                   self.shard_hint)
        # assign to self.connection so reset() releases the connection
        # back to the pool after we're done
        self.connection = conn
    try:
        # send the request to redis
        return execute(conn, stack, raise_on_error)
    except (ConnectionError, TimeoutError) as e:
        conn.disconnect()
        if not conn.retry_on_timeout and isinstance(e, TimeoutError):
            raise
        # if we were watching a variable, the watch is no longer valid
        # since this connection has died. raise a WatchError, which
        # indicates the user should retry his transaction. If this is more
        # than a temporary failure, the WATCH that the user next issues
        # will fail, propagating the real ConnectionError
        if self.watching:
            raise WatchError("A ConnectionError occurred while watching "
                             "one or more keys")
        # otherwise, it's safe to retry since the transaction isn't
        # predicated on any state
        return execute(conn, stack, raise_on_error)
    finally:
        self.reset()
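Given that root cause, one client-side fix is to disable the transactional wrapper so the pipeline sends plain packed commands instead of MULTI/EXEC. A sketch against the same instance (redis-py's transaction parameter is real; whether dropping the transaction is acceptable depends on whether atomicity is actually needed):

```python
from redis import StrictRedis

r = StrictRedis('10.20.23.45', 3901)

# transaction=False skips the MULTI/EXEC wrapper: the GETs are still
# packed into one round trip, but each executes as a standalone command
p = r.pipeline(transaction=False)
p.get('7551A63C3F2C0EA261AAE2B509ABC782172FE56DF64F64B6CB0B355E5A9D9FB7:u_feed_lt_ad_type_show:0')
p.get('1803D8B45F7371F0DE0F98D392B07814B09E9601F2DDE57A0EA154685E85E16B:u_lt_search_showclk:0')
print(p.execute())
```

If the transaction is genuinely required, Redis Cluster only allows it when all keys hash to the same slot, which can be forced by giving the keys a common hash tag (a shared `{tag}` segment in the key names).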