Keras: Extracting embeddings from layers

Created on 1 Sep 2015  ·  3 comments  ·  Source: keras-team/keras

Embeddings obtained from training a discriminative NN on a specific task can be extremely useful for related tasks (e.g. transfer learning). We can extract a lot of potentially useful embeddings by looking at the weights of a layer of the model. Judging by the documentation (http://keras.io/models/), Keras doesn't seem to support the abstraction of extracting weight values from individual layers. It seems like it would be relatively easy to implement.
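To illustrate why such extracted weights are useful for transfer learning, here is a minimal numpy sketch: it assumes a toy `weights` matrix standing in for the real array you would pull out of a trained layer, and finds the most similar rows by cosine similarity (all names here are hypothetical, not part of the Keras API):

```python
import numpy as np

# Hypothetical stand-in for the weights extracted from a trained layer,
# e.g. weights = model.layers[0].get_weights()[0] in Keras.
vocab_size, dim = 5, 4
rng = np.random.RandomState(0)
weights = rng.randn(vocab_size, dim)

def nearest_neighbors(weights, word_index, k=2):
    """Return indices of the k rows most similar to row word_index
    by cosine similarity (excluding the row itself)."""
    normed = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    sims = normed @ normed[word_index]
    order = np.argsort(-sims)
    return [i for i in order if i != word_index][:k]

print(nearest_neighbors(weights, 0))
```

Once you can index into the weight matrix like this, the same rows can seed an embedding layer in a new model for a related task.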


All 3 comments

From the skipgram word embeddings example:

```python
# recover the embedding weights trained with skipgram:
weights = model.layers[0].get_weights()[0]
```

If instead you're looking to extract the hidden layer representation of a given input, refer to https://github.com/fchollet/keras/issues/41
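As a concrete follow-up (a sketch using a toy matrix in place of the real `model.layers[0].get_weights()[0]`): for an Embedding layer the returned array has shape `(vocab_size, embedding_dim)`, so each row is the learned vector for one vocabulary index:

```python
import numpy as np

# Toy stand-in for weights = model.layers[0].get_weights()[0]:
# shape is (vocab_size, embedding_dim) for an Embedding layer.
weights = np.arange(12, dtype=np.float32).reshape(4, 3)  # vocab of 4, dim 3

word_index = 2            # e.g. looked up from your vocabulary mapping
vector = weights[word_index]
print(vector)             # the 3-dimensional embedding for index 2
```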

Hi @Smerity, I'm using the Graph model (see #41), like this:

```python
model = Graph()
model.add_input(name='input0', input_shape=())
model.add_node(Convolution2D(), name='c1', input='input0')
...
```

I want to see the output of c1, so I do:

```python
getFeatureMap = theano.function(model.inputs['input0'].input,
                                model.nodes['c1'].get_output(train=False),
                                allow_input_downcast=True)
```

But it gives me:

```
TypeError: list indices must be integers, not str
```

Could you give me some advice? Thanks.
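For what it's worth, the traceback itself hints at the cause: whatever is being indexed with the string `'input0'` is a plain Python list, not a dict. Here is a minimal reproduction of that error in pure Python (no Keras or Theano assumed); if `model.inputs` is a list in this Keras version, then `model.inputs['input0']` would fail in exactly this way, and the fix would presumably be to index by position or to look the input up by name wherever the Graph stores it (an assumption, since the Graph API is version-dependent):

```python
# Indexing a list with a string raises a TypeError like the one above.
inputs = ["some_input_tensor"]  # hypothetical: a list, not a dict

try:
    inputs["input0"]            # mirrors model.inputs['input0'] on a list
except TypeError as e:
    print(e)
```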

More generally you can visualise the output/activations of every layer of your model. I wrote an example with MNIST to show how here:

https://github.com/philipperemy/keras-visualize-activations

So far it's the least painful approach I've seen.
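The idea behind this kind of activation visualisation can be sketched without Keras at all: run a forward pass and record each layer's output along the way. A toy two-layer numpy network (all weights and names hypothetical, purely for illustration):

```python
import numpy as np

rng = np.random.RandomState(0)

# Toy two-layer network; weights chosen at random for illustration.
W1, b1 = rng.randn(4, 3), np.zeros(3)
W2, b2 = rng.randn(3, 2), np.zeros(2)

def forward_with_activations(x):
    """Forward pass that records every intermediate activation."""
    activations = {}
    h = np.tanh(x @ W1 + b1)
    activations["hidden"] = h       # first layer's output
    y = h @ W2 + b2
    activations["output"] = y       # final layer's output
    return activations

acts = forward_with_activations(rng.randn(1, 4))
for name, a in acts.items():
    print(name, a.shape)
```

A visualisation tool like the one linked above does essentially this, then plots each recorded array.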
