Inconsistent SoX speed behaviour compared to WavAugment #1019

Closed

pzelasko opened this issue Nov 11, 2020 · 4 comments

@pzelasko
🐛 Bug

With the following effect chain:

from typing import List

def speed(sampling_rate: int) -> List[List[str]]:
    return [
        # Random speed perturbation factor between 0.9x and 1.1x the original speed
        # (RandomValue is a helper on the Lhotse side that supplies the random factor)
        ['speed', RandomValue(0.9, 1.1)],
        # Resample back to the original sampling rate (speed changes it)
        ['rate', sampling_rate],
    ]

being applied to a tensor of 16000 samples (with a sampling rate of 16000), I receive an output with a different number of samples than the input. This behavior is inconsistent with WavAugment, where the input and output shapes are always identical.
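
For concreteness, a minimal sketch of applying such a chain directly with torchaudio's apply_effects_tensor. The fixed factor of 0.9 (standing in for RandomValue) and the dummy sinusoid are just for illustration:

import math

import torch
from torchaudio import sox_effects

sample_rate = 16000
# 1 second of dummy audio, shape (1, 16000)
audio = torch.sin(2 * math.pi * 440 * torch.linspace(0, 1, sample_rate)).unsqueeze(0)

effects = [
    ['speed', '0.9'],            # fixed factor standing in for RandomValue(0.9, 1.1)
    ['rate', str(sample_rate)],  # resample back to the original sampling rate
]
out, out_sr = sox_effects.apply_effects_tensor(audio, sample_rate, effects)
print(audio.shape, out.shape, out_sr)  # the output has a different number of samples than the input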

This is tested in Lhotse for both WavAugment and torchaudio in PR lhotse-speech/lhotse#124.

To Reproduce

Run Lhotse's test suite with torchaudio 0.7 installed. See e.g. the error in Lhotse's CI here: https://github.com/lhotse-speech/lhotse/pull/124/checks?check_run_id=1386141728

Expected behavior

Equal input and output tensor shapes.

Environment

This is my local macOS env; the other environment is the GitHub Actions CI in Lhotse.

PyTorch version: 1.7.0
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Mac OSX 10.15.7 (x86_64)
GCC version: Could not collect
Clang version: 12.0.0 (clang-1200.0.32.21)
CMake version: version 3.18.4

Python version: 3.7 (64-bit runtime)
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.18.1
[pip3] torch==1.7.0
[pip3] torchaudio==0.7.0a0+ac17b64
[conda] blas 1.0 mkl
[conda] mkl 2019.4 233
[conda] mkl-service 2.3.0 py37hfbe908c_0
[conda] mkl_fft 1.2.0 py37hc64f4ea_0
[conda] mkl_random 1.1.1 py37h959d312_0
[conda] numpy 1.18.1 py37h7241aed_0
[conda] numpy-base 1.18.1 py37h3304bdc_1
[conda] pytorch 1.7.0 py3.7_0 pytorch
[conda] torchaudio 0.5.1 pypi_0 pypi

(ignore that last conda entry - it's a conda+pip quirk... the torchaudio version is actually 0.7.0 😅)

Additional context

@mthrok (Collaborator) commented Nov 11, 2020

Hi @pzelasko

How far off are they from each other?

My initial thought is that this is not a bug, but rather that WavAugment handles speed differently.
My second thought is that, depending on the libsox being used, the results can differ slightly. I noticed this while writing tests for the sox effects (the sox command installed via the OS package manager vs. the one CMake builds as part of the torchaudio build process).

apply_effects_tensor and apply_effects_file are tested and confirmed to return results identical to the sox command for speed, with the following parameters:

{"effects": [["speed", "1.3"]], "input_sample_rate": 4000, "output_sample_rate": 5200}
{"effects": [["speed", "0.7"]], "input_sample_rate": 4000, "output_sample_rate": 2800}

@parameterized.expand(
    load_params("sox_effect_test_args.json"),
    name_func=lambda f, i, p: f'{f.__name__}_{i}_{p.args[0]["effects"][0][0]}',
)
def test_apply_effects(self, args):
    """`apply_effects_tensor` should return identical data as sox command"""
    effects = args['effects']
    num_channels = args.get("num_channels", 2)
    input_sr = args.get("input_sample_rate", 8000)
    output_sr = args.get("output_sample_rate")
    input_path = self.get_temp_path('input.wav')
    reference_path = self.get_temp_path('reference.wav')
    original = get_sinusoid(
        frequency=800, sample_rate=input_sr,
        n_channels=num_channels, dtype='float32')
    save_wav(input_path, original, input_sr)
    sox_utils.run_sox_effect(
        input_path, reference_path, effects, output_sample_rate=output_sr)
    expected, expected_sr = load_wav(reference_path)
    found, sr = sox_effects.apply_effects_tensor(original, input_sr, effects)
    assert sr == expected_sr
    self.assertEqual(expected, found)

@parameterized.expand(
    load_params("sox_effect_test_args.json"),
    name_func=lambda f, i, p: f'{f.__name__}_{i}_{p.args[0]["effects"][0][0]}',
)
def test_apply_effects(self, args):
    """`apply_effects_file` should return identical data as sox command"""
    dtype = 'int32'
    channels_first = True
    effects = args['effects']
    num_channels = args.get("num_channels", 2)
    input_sr = args.get("input_sample_rate", 8000)
    output_sr = args.get("output_sample_rate")
    input_path = self.get_temp_path('input.wav')
    reference_path = self.get_temp_path('reference.wav')
    data = get_wav_data(dtype, num_channels, channels_first=channels_first)
    save_wav(input_path, data, input_sr, channels_first=channels_first)
    sox_utils.run_sox_effect(
        input_path, reference_path, effects, output_sample_rate=output_sr)
    expected, expected_sr = load_wav(reference_path)
    found, sr = sox_effects.apply_effects_file(
        input_path, effects, normalize=False, channels_first=channels_first)
    assert sr == expected_sr
    self.assertEqual(found, expected)

@pzelasko (Author)

Yeah, "bug" might not be the best description - sorry.

I don't think it's actually the speed behaviour; it seems the difference is in how the rate effect is being applied. Consider the following:

import torch
from torchaudio import sox_effects as se
audio = torch.sin(2 * 3.14 * torch.linspace(0, 1, 16000)).unsqueeze(0)

se.apply_effects_tensor(audio, 16000, [['speed', '0.9']])
Out[10]: (tensor([[ 0.0000,  0.0004,  0.0008,  ..., -0.0040, -0.0036, -0.0032]]), 14400)

se.apply_effects_tensor(audio, 16000, [['speed', '0.9']])[0].shape
Out[11]: torch.Size([1, 16000])

se.apply_effects_tensor(audio, 16000, [['speed', '0.9'], ['rate', '16000']])[0].shape
Out[12]: torch.Size([1, 17778])

se.apply_effects_tensor(audio, 16000, [['speed', '0.9'], ['rate', '-q', '16000']])[0].shape
Out[13]: torch.Size([1, 17778])

That last effect chain in WavAugment would have returned a tensor of shape [1, 16000]. Now that I think of it, it's reasonable that the number of samples changes (16000 / 0.9 ≈ 17778)... I'll need to check what they are doing (maybe truncating/padding the output?) - if you happen to know, please share :)

@pzelasko (Author)

OK, I think I understand it now. When the expected output length is specified in their effect chain invocation, they truncate or zero-pad the signal (I see that here: https://github.com/facebookresearch/WavAugment/blob/master/augment/speech_augment.h#L120). Sorry for the false alarm! 😉
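
For anyone hitting the same discrepancy, a minimal sketch of reproducing that fix on the torchaudio side (my own workaround, not a torchaudio API; the function name is made up): truncate or zero-pad the perturbed output back to the original number of samples:

import torch
from torchaudio import sox_effects


def speed_perturb_fixed_length(audio: torch.Tensor, sample_rate: int, factor: float) -> torch.Tensor:
    """Apply speed perturbation and force the output back to the input length,
    mimicking what WavAugment does when an output length is requested."""
    out, _ = sox_effects.apply_effects_tensor(
        audio, sample_rate,
        [['speed', str(factor)], ['rate', str(sample_rate)]],
    )
    target_len = audio.shape[-1]
    if out.shape[-1] >= target_len:
        return out[..., :target_len]               # truncate
    pad = target_len - out.shape[-1]
    return torch.nn.functional.pad(out, (0, pad))  # zero-pad at the end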

@mthrok (Collaborator) commented Nov 11, 2020

Glad you figured it out. :)
