Copyright notice: Copyright reserved to Hazekiah Wang ([email protected]) https://blog.csdn.net/u010909964/article/details/84068565
Reproducing an experiment requires restoring its random seeds.
There are two types of seeding in tensorflow:
- graph-level
typically by calling tf.set_random_seed(seed)
- op-level
by setting the seed= param in the tf.random functions.
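The two mechanisms can be sketched side by side. This is a minimal example using the TF 1.x-style API through tf.compat.v1, so it also runs under TF 2; the shapes and seed values are arbitrary.

```python
import tensorflow as tf

# TF 1.x-style graph/session API, accessed via tf.compat.v1.
tf.compat.v1.disable_eager_execution()

# Graph-level seed: one call that seeds every random op in the default graph.
tf.compat.v1.set_random_seed(1234)

# Op-level seed: the seed= param pins only this particular op.
a = tf.compat.v1.random_uniform([2])           # covered by the graph-level seed
b = tf.compat.v1.random_uniform([2], seed=42)  # additionally pinned by its own seed

with tf.compat.v1.Session() as sess:
    print(sess.run(a), sess.run(b))
```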
What are the differences?
- the graph-level seed has priority over the op-level seed
when the graph-level seed is set, different sessions running identical graphs with the same graph-level seed produce the same random behavior.
Note: the graphs do not need to be the same object; they only need the same structure and the same seed, i.e., the one set by tf.set_random_seed(seed)
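To illustrate the note above, the sketch below builds two distinct Graph objects with identical structure and the same graph-level seed, then runs each in its own session; under these assumptions the drawn values should coincide (TF 1.x-style API via tf.compat.v1).

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

def build_graph():
    # A fresh graph object, but with the same structure and graph-level seed.
    g = tf.Graph()
    with g.as_default():
        tf.compat.v1.set_random_seed(1234)
        x = tf.compat.v1.random_uniform([3])  # no op-level seed
    return g, x

g1, x1 = build_graph()
g2, x2 = build_graph()  # different object, same structure + same graph seed

with tf.compat.v1.Session(graph=g1) as sess:
    v1 = sess.run(x1)
with tf.compat.v1.Session(graph=g2) as sess:
    v2 = sess.run(x2)

# v1 and v2 come from different graphs and sessions, yet match,
# because the graph-level seed and the op's position in the graph agree.
```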
- the op-level seed pins a specific op so that its random behavior stays identical across different sessions.
Note: the graphs do not need to be the same object or even have the same structure; it suffices that the two ops are identical and carry the same seed.
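The note above can be checked with two structurally different graphs, neither of which sets a graph-level seed; only the matching op-level seed is shared. This is a sketch with an arbitrary seed value, using the TF 1.x-style API via tf.compat.v1.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

g1 = tf.Graph()
with g1.as_default():
    # No graph-level seed; only this op is seeded.
    a = tf.compat.v1.random_uniform([3], seed=42)

g2 = tf.Graph()
with g2.as_default():
    # Structurally different graph: an extra unrelated op.
    _ = tf.constant([1.0, 2.0])
    # The same op with the same op-level seed.
    b = tf.compat.v1.random_uniform([3], seed=42)

with tf.compat.v1.Session(graph=g1) as sess:
    v1 = sess.run(a)
with tf.compat.v1.Session(graph=g2) as sess:
    v2 = sess.run(b)

# v1 matches v2 despite the different graph structures,
# because the op-level seed alone determines the op's behavior here.
```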
Warning
When tf.random functions are used inside nested functions or lambda functions, the enclosing graph does not seem to be inherited, which means the graph-level seed is not inherited either. It follows that in such cases we have to set the op-level seed manually if we want to reproduce the experiment.
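One common case where this bites is a lambda passed to tf.data.Dataset.map, which is traced into its own function graph. The sketch below follows the workaround described above: passing an op-level seed explicitly inside the lambda. The dataset contents and seed values are hypothetical, and the API is the TF 1.x style via tf.compat.v1.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

def build():
    g = tf.Graph()
    with g.as_default():
        tf.compat.v1.set_random_seed(1234)  # graph-level seed
        ds = tf.data.Dataset.from_tensor_slices(tf.zeros([4, 2]))
        # The lambda runs in its own function graph, so the graph-level
        # seed may not apply inside it; set the op-level seed manually.
        ds = ds.map(lambda x: x + tf.compat.v1.random_uniform([2], seed=42))
        nxt = tf.compat.v1.data.make_one_shot_iterator(ds).get_next()
    return g, nxt

g1, n1 = build()
g2, n2 = build()

with tf.compat.v1.Session(graph=g1) as sess:
    run1 = [sess.run(n1) for _ in range(4)]
with tf.compat.v1.Session(graph=g2) as sess:
    run2 = [sess.run(n2) for _ in range(4)]

# With the manual op-level seed, both runs draw the same augmented elements.
```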