Copyright notice: This is an original article by the blogger and may not be reproduced without permission. https://blog.csdn.net/jacke121/article/details/83831302
When the data is very large, the redis store and fetch themselves take negligible time,
but json.dumps and json.loads are what cost time,
and pickle serialization and deserialization also take a long time.

Below is the test code: both storing and fetching take over 1 second each.
#coding=utf-8
import json
import time

import numpy as np
import redis

pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
red = redis.StrictRedis(connection_pool=pool)

# Store: serialize a 1000 x 4000 float array with json.dumps, then SET it in redis
for i in range(2):
    data = np.arange(1000 * 4000, dtype='float').reshape(1000, 4000)
    t1 = time.time()
    user = {"Name": "Pradeep", "Company": "SCTL", "Address": data.tolist()}
    # red.hmset("dict" + str(i), user)
    red.set(i, json.dumps(user))
    print('store time', time.time() - t1)

# Fetch: GET the bytes back from redis and rebuild the dict with json.loads
for i in range(2):
    t2 = time.time()
    list2 = red.get(i)
    # list2 = red.hgetall("dict" + str(i))
    list2 = json.loads(list2.decode())
    print('fetch time', time.time() - t2, len(list2))
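To see where the time actually goes, a minimal sketch below times the json and pickle round-trips on the same payload, with redis taken out of the picture entirely (the array shape matches the test above; using `pickle.HIGHEST_PROTOCOL` is an assumption, not part of the original test):

```python
# Compare pure serialization cost of json vs pickle for the same payload.
# No redis involved: this isolates the dumps/loads time the article measures.
import json
import pickle
import time

import numpy as np

data = np.arange(1000 * 4000, dtype='float').reshape(1000, 4000)
user = {"Name": "Pradeep", "Company": "SCTL", "Address": data.tolist()}

# json round-trip: dict -> str -> dict
t1 = time.time()
blob_json = json.dumps(user)
user_json = json.loads(blob_json)
json_time = time.time() - t1

# pickle round-trip: dict -> bytes -> dict
# (HIGHEST_PROTOCOL is an assumption; the default protocol also works)
t2 = time.time()
blob_pickle = pickle.dumps(user, protocol=pickle.HIGHEST_PROTOCOL)
user_pickle = pickle.loads(blob_pickle)
pickle_time = time.time() - t2

print('json round-trip time ', json_time)
print('pickle round-trip time', pickle_time)
```

Whichever serializer wins on your machine, this shows the dominant cost is converting the 4-million-element list, not the redis I/O.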