Copyright notice: this is the author's original article; reproduction without permission is prohibited. https://blog.csdn.net/u013818374/article/details/83070203
/**
 * Maximum rows per batch insert; tune according to measured performance.
 */
private static final int BATCH_INSERT_MAX_SIZE = 100;

/**
 * Inserts the given records in batches.
 *
 * @param fileHandleList records to insert
 */
protected void insertBatch(List<FileHandle> fileHandleList) {
    int pageSize = BATCH_INSERT_MAX_SIZE;
    int fullPages = fileHandleList.size() / pageSize;
    if (fileHandleList.size() <= pageSize) {
        fileHandleMapper.batchInsert(fileHandleList);
    } else {
        for (int i = 0; i < fullPages; i++) {
            List<FileHandle> subList = fileHandleList.subList(0, pageSize);
            fileHandleMapper.batchInsert(subList);
            // drop the rows just inserted from the head of the list
            fileHandleList.subList(0, pageSize).clear();
        }
        // insert the remaining rows, if any
        if (!fileHandleList.isEmpty()) {
            fileHandleMapper.batchInsert(fileHandleList);
        }
    }
}
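Note that the method above repeatedly clears the head of the caller's list, so the argument is consumed as a side effect. A side-effect-free sketch of the same splitting, using index-based `subList` views, looks like this (class and method names are illustrative, not from the original):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplitter {

    /**
     * Splits {@code rows} into consecutive chunks of at most {@code size}
     * elements each, without modifying the input list.
     */
    public static <T> List<List<T>> partition(List<T> rows, int size) {
        List<List<T>> chunks = new ArrayList<>();
        for (int from = 0; from < rows.size(); from += size) {
            int to = Math.min(from + size, rows.size());
            // copy the view so later changes to rows cannot affect the chunk
            chunks.add(new ArrayList<>(rows.subList(from, to)));
        }
        return chunks;
    }
}
```

Each chunk would then be passed to `fileHandleMapper.batchInsert(...)` in turn, with the remainder handled by the last (possibly shorter) chunk instead of a separate branch.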
<insert id="batchInsert">
    INSERT INTO t_file_handle (id, fileId, data, tag, `status`, logicType, creTime, updTime)
    VALUES
    <foreach collection="list" item="item" index="index" separator=",">
        (#{item.id}, #{item.fileId}, #{item.data}, #{item.tag}, #{item.status}, #{item.logicType}, #{item.creTime}, #{item.updTime})
    </foreach>
</insert>
Saving 1,000,000+ rows to MySQL. Because MySQL limits the size of a single INSERT statement to about 4 MB (version-dependent and configurable), the SQL had to be split into batches; recorded here for reference.
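The size limit mentioned above is governed by MySQL's `max_allowed_packet` server variable, whose default varies by version. Assuming sufficient privileges, it can be inspected and raised at runtime (or set persistently under `[mysqld]` in `my.cnf`); this is a sketch, not a tuning recommendation:

```sql
-- Check the current limit (value is in bytes)
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Raise it for new connections, e.g. to 16 MB
SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;
```

Even with a larger packet size, keeping batches to a modest fixed size (as `BATCH_INSERT_MAX_SIZE` does) is still worthwhile, since very long statements also cost parser and memory overhead on the server.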