Commands to MQ Training with VSGN #1
Hello Junwei, thanks for your interest in our work, and thank you for your patience!
Hi Junwei, I have uploaded the video features for the MQ tasks to Google Drive (train & val / test), so you can download them directly. Please try them out and let us know if you get new results.
I have downloaded the features, but they seem to be a single file. Are they a single pickle binary with dictionary keys? How do I read them and map them to the videos? (For example, slowfast8x8_r101_k400/ has 9645 *.pt files, each corresponding to a video.) Thanks.
There is a .gz file; after unzipping it (I unzipped it on my Mac), you will see a directory that contains multiple clip-level feature files. The clip information is provided by the MQ metadata, i.e., clip xxx comes from video yyy with start time t1 and end time t2.
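To make the mapping above concrete, here is a minimal sketch that indexes clip IDs back to their source video and time span. The field names (`clip_uid`, `video_uid`, `clip_start_sec`, `clip_end_sec`) are illustrative assumptions, not necessarily the exact Ego4D MQ metadata schema:

```python
def build_clip_index(annotations):
    """Map clip_uid -> (video_uid, start_sec, end_sec) from MQ-style metadata."""
    index = {}
    for ann in annotations:
        index[ann["clip_uid"]] = (
            ann["video_uid"],
            ann["clip_start_sec"],
            ann["clip_end_sec"],
        )
    return index

# Toy example: clip "clip_xxx" comes from video "video_yyy", t1=10.0, t2=480.0.
annotations = [
    {"clip_uid": "clip_xxx", "video_uid": "video_yyy",
     "clip_start_sec": 10.0, "clip_end_sec": 480.0},
]
index = build_clip_index(annotations)
print(index["clip_xxx"])  # ('video_yyy', 10.0, 480.0)
```

With such an index, a clip-level feature file named by its clip ID can be traced back to the video segment it covers.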
I see. The file you provided on Google Drive is a .tar.gz file, and I extracted it with
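For reference, a .tar.gz archive can be unpacked with Python's standard `tarfile` module (the shell equivalent is `tar -xzf`). The archive and member names below are placeholders, not the actual Google Drive file:

```python
import os
import tarfile
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    archive = os.path.join(tmp, "features.tar.gz")
    member = os.path.join(tmp, "clip_xxx.pt")

    # Build a tiny placeholder archive so the extraction step is runnable.
    with open(member, "wb") as f:
        f.write(b"dummy feature bytes")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(member, arcname="clip_xxx.pt")

    # Extract it, mirroring `tar -xzf features.tar.gz -C extracted/`.
    out = os.path.join(tmp, "extracted")
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(out)
    extracted_names = os.listdir(out)

print(extracted_names)  # ['clip_xxx.pt']
```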
So the file names are clip IDs? Thanks.
Yes, it is the clip ID. And sorry, I am currently unable to provide video-level features; a solution is to rewrite the data loader so that it supports clip features as input.
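The rewrite suggested above amounts to indexing one feature file per clip ID rather than per video. Here is a hedged, framework-agnostic sketch of such a loader; `load_feature` stands in for something like `torch.load` on a clip's `.pt` file, and none of this is the actual VSGN data loader:

```python
class ClipFeatureDataset:
    """Minimal dataset that serves one feature tensor per clip ID."""

    def __init__(self, clip_ids, feature_dir, load_feature):
        self.clip_ids = clip_ids
        self.feature_dir = feature_dir
        self.load_feature = load_feature  # e.g. torch.load in practice

    def __len__(self):
        return len(self.clip_ids)

    def __getitem__(self, i):
        clip_id = self.clip_ids[i]
        # Assumption: one feature file per clip, named by its clip ID.
        path = f"{self.feature_dir}/{clip_id}.pt"
        return clip_id, self.load_feature(path)

# Usage with a stubbed loader (real code would read the .pt file at `path`):
ds = ClipFeatureDataset(["clip_xxx"], "features", lambda path: [0.0] * 4)
print(len(ds), ds[0][0])  # 1 clip_xxx
```

The same `__len__`/`__getitem__` protocol is what PyTorch's `DataLoader` expects, so a class like this can slot into an existing training loop.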
@QinghongLin - Thanks for providing the clip features. I tried training the VSGN model following the Ego4D episodic-memory codebase instructions, but I'm not able to reproduce the val results from the paper. The numbers are quite a bit lower than the paper results (2nd row vs. 3rd row in the figure below). Here is the training command I used. Note: I modified the data loader to use clip features instead of video features.
Hi, @srama2512,
@QinghongLin - Thanks for sharing your code and the hyperparameters. I was able to obtain similar performance. It turns out that there was a bug in the code at lines 77 to 87 of commit dc4a60f.
The calculation of
Happy to send a PR if you'd like this bug-fix to be part of the EgoVLP repo. This affects most of the
Hi, thanks for releasing the code!
Could you provide some instructions on how to run VSGN training with EgoVLP features (hyper-parameters, learning rate, etc.)? Thanks!
Junwei