Error when building a TFRecord: [[node IteratorGetNext (defined at XXX)= IteratorGetNext[output_shapes= [[?,?,?,3], [?]

The requirement for image_data is: string, JPEG encoding of RGB image.

example = tf.train.Example(features=tf.train.Features(feature={
    'img_name': _bytes_feature(img_name.encode('ascii')),
    'img_height': _int64_feature(height),
    'img_width': _int64_feature(width),
    # 'img': bytes_feature(image_data),
    'img': _bytes_feature(image_data),
    'gtboxes_and_label': _bytes_feature(gtboxes_and_label.tostring())
}))

tfrecord_writer.write(example.SerializeToString())
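The `_bytes_feature` / `_int64_feature` helpers and the writer itself are not shown above; a minimal sketch of what they usually look like in TF 1.x (the output path `demo.tfrecord` is just a placeholder):

import tensorflow as tf

def _bytes_feature(value):
    # Wrap raw bytes in a tf.train.Feature
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    # Wrap an integer in a tf.train.Feature
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

# Placeholder output path
tfrecord_writer = tf.python_io.TFRecordWriter('demo.tfrecord')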

v1

image_data = tf.gfile.FastGFile(filename, 'rb').read()  # raw file bytes (JPEG-encoded)

v2

image_data1 = cv2.imread(filename)      # uint8 ndarray, BGR order
image_data = image_data1.tostring()     # raw pixel bytes

v3

image_data2 = skimage.io.imread(filename)  # uint8 ndarray, RGB order
image_data = image_data2.tostring()        # raw pixel bytes

v4

image1 = Image.open(filename)
image2 = image1.convert('RGB')      # PIL image, RGB order
image_data = image2.tobytes()       # raw pixel bytes
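Note that the four variants do not all produce the same kind of bytes: v1 stores the compressed JPEG file stream, while v2/v3/v4 store already-decoded pixel buffers (v2 in BGR order). A quick sketch to compare them, assuming `test.jpg` is a placeholder path to a typical 8-bit RGB image:

import cv2
import skimage.io
import tensorflow as tf
from PIL import Image

filename = 'test.jpg'  # placeholder path

jpeg_bytes = tf.gfile.FastGFile(filename, 'rb').read()       # v1: compressed JPEG stream
raw_bgr = cv2.imread(filename).tostring()                     # v2: H*W*3 raw pixels, BGR
raw_rgb = skimage.io.imread(filename).tostring()              # v3: H*W*3 raw pixels, RGB
raw_pil = Image.open(filename).convert('RGB').tobytes()       # v4: H*W*3 raw pixels, RGB

# The JPEG stream is much shorter than the raw pixel buffers and does not
# contain pixel values directly, so it is a different kind of payload.
print(len(jpeg_bytes), len(raw_bgr), len(raw_rgb), len(raw_pil))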

The above shows four ways of reading the image data, but when I read the TFRecord back out, only v2 and v4 give me my image data correctly. My understanding was that reading the image into binary should be enough, but reading it as base64 doesn't work either.

The part that reads the image back out is written like this:
img = tf.decode_raw(features['img'], tf.uint8)
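For context, a minimal TF 1.x parsing sketch, assuming the same feature keys as the writing code above and that the raw buffer should be reshaped to [height, width, 3] (both are assumptions about the rest of the pipeline, not shown in the original):

def parse_example(serialized):
    # Feature keys mirror the writing code above
    features = tf.parse_single_example(serialized, features={
        'img_name': tf.FixedLenFeature([], tf.string),
        'img_height': tf.FixedLenFeature([], tf.int64),
        'img_width': tf.FixedLenFeature([], tf.int64),
        'img': tf.FixedLenFeature([], tf.string),
        'gtboxes_and_label': tf.FixedLenFeature([], tf.string),
    })
    height = tf.cast(features['img_height'], tf.int32)
    width = tf.cast(features['img_width'], tf.int32)

    # For raw pixel bytes (as in v2/v4): reinterpret and reshape
    img = tf.decode_raw(features['img'], tf.uint8)
    img = tf.reshape(img, [height, width, 3])

    # For v1-style JPEG bytes, tf.image.decode_jpeg(features['img'], channels=3)
    # would be the matching decoder instead of tf.decode_raw; decoding of
    # gtboxes_and_label is omitted here.
    return img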

I'll leave this as an open question for now and come back to fill it in later. If any expert can point me in the right direction, even better!!!

Reposted from blog.csdn.net/weixin_43868576/article/details/107062628