Check out this build #61
-
I was unable to download anything from that link. What kind of performance increase are you seeing relative to the current LV2 plugin build with USE_NATIVE_ARCH on?
-
This is what my Didthis post says: an O3, LTO, PGO, OpenACC and x86-64-v3 optimized Linux build of neural-amp-modeler-lv2 for glibc 2.39 systems. IMPROVEMENT! On my Ryzen 5600G with the CPU frequency fixed at 4.4GHz, a Standard WaveNet NAM model used 0.40% CPU; yesterday's build used 0.5%.

The following is not a system benchmark, just an anecdote. I ran this build of neural-amp-modeler-lv2 on my i3-7100U, first on Void Linux and then on Windows 10. The Windows 10 plugin was compiled with Clang 16 (O3, LTO and x86-64-v3). The DAW on both systems was REAPER, and the model was an LSTM trained with num_layers = 2 and hidden_size = 14. The Linux system managed up to 73 instances without any crackling or xruns, while the Windows system became unresponsive with only 26 instances. If I had used the official Neural Amp Modeler VST3, I am certain it would have been even fewer instances before becoming unresponsive.
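For anyone curious, here is a minimal sketch of what that flag set could look like with the project's CMake build and Clang. The exact options, profiling workload and any OpenACC-related settings used for the build above are not shown in this thread, so treat every value here as an assumption rather than the actual recipe:

```sh
# Sketch only. Generic Clang/CMake equivalents of the flags named above
# (O3, LTO, x86-64-v3, PGO). Paths and the profiling workload are placeholders.
export CC=clang CXX=clang++
FLAGS="-O3 -flto -march=x86-64-v3"

# 1) Instrumented build that writes profile data when the plugin runs.
cmake -B build-pgo -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_CXX_FLAGS="$FLAGS -fprofile-generate"
cmake --build build-pgo

# 2) Load the instrumented plugin in a DAW and play audio through a NAM model
#    so .profraw files are produced, then merge them.
llvm-profdata merge -output=nam.profdata *.profraw

# 3) Final build that uses the merged profile.
cmake -B build -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_CXX_FLAGS="$FLAGS -fprofile-use=nam.profdata"
cmake --build build
```

The profiling step is the part that matters for PGO: the profile should come from a realistic audio workload, otherwise the profile-guided build may not reflect the plugin's real hot paths.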
-
If you can outline the changes to the build procedure, I can try to replicate it on my RPi4.
-
I have typed up a guide:
-
What are your thoughts?
-
Makes no difference (which isn't surprising, since running NAM is more than enough to kick the core being used into full power).
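For context on the frequency point: pinning the clock, as in the fixed 4.4GHz figure quoted earlier, is normally done through the cpufreq governor. A minimal sketch, assuming the cpupower utility is available (the thread does not say how the frequency was actually fixed):

```sh
# Illustration only: pin the CPU to a fixed frequency so benchmark results
# aren't skewed by frequency scaling. Requires root, the cpupower utility,
# and a cpufreq driver that accepts fixed limits.
sudo cpupower frequency-set -g performance
sudo cpupower frequency-set -d 4.4GHz -u 4.4GHz
```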
-
https://didthis.app/user/4xvkqmpk/project/nvq5x/post/33yj6