redundancy in TT-rank #7

Open
ShHsLin opened this issue Sep 1, 2017 · 0 comments

ShHsLin commented Sep 1, 2017

I might be wrong here, but isn't it the case that the TT-rank does not need to grow beyond the smaller of the two dimensions of the corresponding matricization?

However, in the implementation of ultimate tensorization,

    layers.append(tensornet.layers.tt(tf.reshape(layers[-1], [-1, sz]),
                                      np.array([8, 8, 8, 16], dtype=np.int32),       # input modes (prod = 8192)
                                      np.array([4, 6, 8, 8], dtype=np.int32),        # output modes (prod = 1536)
                                      np.array([1, 40, 40, 40, 1], dtype=np.int32),  # TT-ranks
                                      biases_initializer=None,
                                      cpu_variables=cpu_variables, scope='tt4.1'))

I think it would actually be enough to set the TT-rank to [1, 32, 40, 40, 1] instead, since the first input/output mode pair contributes only 8 × 4 = 32 rows to the first unfolding; see the sketch below.
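To make the bound concrete, here is a minimal sketch (plain NumPy, not part of tensornet; cap_tt_ranks is a hypothetical helper) that caps each interior TT-rank by the size of the corresponding unfolding, r_k <= min(prod_{i<=k} m_i*n_i, prod_{i>k} m_i*n_i):

    import numpy as np

    # Sketch: cap requested TT-matrix ranks by the unfolding sizes.
    def cap_tt_ranks(inp_modes, out_modes, ranks):
        mn = np.asarray(inp_modes) * np.asarray(out_modes)  # size of each mode pair
        capped = list(ranks)
        for k in range(1, len(mn)):  # interior boundary ranks r_1 .. r_{d-1}
            capped[k] = int(min(capped[k], np.prod(mn[:k]), np.prod(mn[k:])))
        return capped

    print(cap_tt_ranks([8, 8, 8, 16], [4, 6, 8, 8], [1, 40, 40, 40, 1]))
    # -> [1, 32, 40, 40, 1]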
The same holds in the conv layer, for example,

    layers.append(tensornet.layers.tt_conv_full(layers[-1],
                                                [3, 3],                                     # convolution window
                                                np.array([4, 8, 4], dtype=np.int32),        # input channel modes
                                                np.array([4, 8, 4], dtype=np.int32),        # output channel modes
                                                np.array([16, 16, 16, 1], dtype=np.int32),  # TT-ranks
                                                [1, 1],                                     # strides
                                                cpu_variables=cpu_variables,
                                                biases_initializer=None, scope='tt_conv3.2'))

Since the window size is 3x3, the TT-rank [9, 16, 16, 1] would be enough.
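The same bound computed explicitly (again a sketch; I am treating the 3x3 window as the first TT mode of the filter, with the leading rank 1 that is implicit in tt_conv_full made explicit):

    from math import prod

    # Sketch: the full filter (l*l) x C x S as a TT-matrix whose first core
    # covers the 3x3 window, so the rank right after it is bounded by l*l = 9.
    modes = [3 * 3, 4 * 4, 8 * 8, 4 * 4]  # window, then in_ch * out_ch per mode
    ranks = [1, 16, 16, 16, 1]            # leading rank 1 made explicit
    for k in range(1, len(modes)):
        ranks[k] = min(ranks[k], prod(modes[:k]), prod(modes[k:]))
    print(ranks)  # -> [1, 9, 16, 16, 1]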

As a result, the compression ratio could be a little better, assuming this code matches the original implementation used for the paper?
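To quantify the saving for the tt4.1 layer above: a TT-matrix stores sum_k r_{k-1} * m_k * n_k * r_k parameters (biases aside), so capping the first rank at 32 saves roughly 8% of that layer's parameters:

    # Parameter count of a TT-matrix layer (sketch, biases excluded).
    def tt_params(inp_modes, out_modes, ranks):
        return sum(ranks[k] * m * n * ranks[k + 1]
                   for k, (m, n) in enumerate(zip(inp_modes, out_modes)))

    m, n = [8, 8, 8, 16], [4, 6, 8, 8]
    print(tt_params(m, n, [1, 40, 40, 40, 1]))  # 185600
    print(tt_params(m, n, [1, 32, 40, 40, 1]))  # 169984, ~8% fewer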
