Hey everybody,
Last week I asked this question on Stack Overflow: https://stackoverflow.com/questions/40547198/saving-the-state-of-the-adagrad-algorithm-in-tensorflow
My problem is that I want to save the state of the optimizer (in my case the Adagrad accumulators) so I can stop training and resume it whenever I want.
Unless I'm mistaken, the state of the optimizer can't be saved directly (you can't pass an optimizer to a tf.train.Saver, right?). A quick (hacky?) workaround might be to call Optimizer.get_slot_names() and save the op behind each slot. The next problem would then be putting those ops back into the slots, as I don't think there is a set_slot(name, op) at the moment.
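Something like the sketch below is what I have in mind (TF 1.x graph API; the toy model and names are just for illustration, not my real code). I'm assuming get_slot() hands back the slot's underlying variable, so the Saver can be given those variables directly:

    import tensorflow as tf

    # Toy model; the variable name and loss are illustrative only.
    x = tf.Variable(tf.zeros([10]), name="x")
    loss = tf.reduce_sum(tf.square(x - 1.0))

    optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
    train_op = optimizer.minimize(loss)

    # The slots appear to be ordinary variables (for Adagrad, the
    # accumulators), so a Saver can checkpoint them alongside the
    # model variables.
    slot_vars = []
    for slot_name in optimizer.get_slot_names():  # ["accumulator"] for Adagrad
        for var in tf.trainable_variables():
            slot = optimizer.get_slot(var, slot_name)
            if slot is not None:
                slot_vars.append(slot)

    saver = tf.train.Saver(tf.trainable_variables() + slot_vars)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(train_op)
        saver.save(sess, "/tmp/model.ckpt")

(If the slots really are plain variables, restoring would presumably just assign into them via saver.restore(), so maybe no set_slot() is needed at all?)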
So my questions are:
1. Is there a built-in way to save and restore the optimizer state, or am I right that it can't be done directly?
2. If not, is the get_slot_names() route reasonable, and how would I put the saved slots back?
Thanks for asking the question on Stack Overflow; that is a better place for it. The optimizer state is saved by default: it is only missing in your case because you are explicitly telling the Saver which variables to save.
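Concretely, something like this (TF 1.x; the toy model is just for illustration, not your code). A Saver constructed with no var_list checkpoints all saveable variables, which includes the slot variables the optimizer creates:

    import tensorflow as tf

    x = tf.Variable(tf.zeros([10]), name="x")
    loss = tf.reduce_sum(tf.square(x - 1.0))
    train_op = tf.train.AdagradOptimizer(0.1).minimize(loss)

    # No var_list: the Saver picks up every saveable variable,
    # including the Adagrad accumulator slots.
    saver = tf.train.Saver()

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(train_op)
        path = saver.save(sess, "/tmp/model.ckpt")

    # Restoring later brings the accumulators back too, so training
    # resumes with the optimizer state intact.
    with tf.Session() as sess:
        saver.restore(sess, path)
        sess.run(train_op)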