ValueError: Unable to restore custom object of type _tf_keras_metric. Please make sure that any custom layers are included in the `custom_objects` arg when calling `load_model()` and make sure that all layers implement `get_config` and `from_config` #224
Describe the bug
When I use the tf2 branch suggested in this issue, I can't run the convert step.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
A pb file is created.
Log files
Command output
(.venv) root@0170575a25bb:/precise# precise-convert mywakeword.net
Converting mywakeword.net to mywakeword.tflite ...
2022-02-13 14:03:25.311901: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.331090: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.331215: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.331436: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-02-13 14:03:25.332303: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.332468: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.332591: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.815796: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.815980: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.816121: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:939] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-02-13 14:03:25.816246: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 6505 MB memory: -> device: 0, name: NVIDIA GeForce RTX 2060 SUPER, pci bus id: 0000:01:00.0, compute capability: 7.5
WARNING:tensorflow:Layer net will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Traceback (most recent call last):
File "/root/mycroft-precise/.venv/bin/precise-convert", line 33, in
sys.exit(load_entry_point('mycroft-precise', 'console_scripts', 'precise-convert')())
File "/root/mycroft-precise/precise/scripts/base_script.py", line 49, in run_main
script.run()
File "/root/mycroft-precise/precise/scripts/convert.py", line 38, in run
self.convert(args.model, args.out.format(model=model_name))
File "/root/mycroft-precise/precise/scripts/convert.py", line 60, in convert
model = K.models.load_model(model_path, custom_objects={'weighted_log_loss': weighted_log_loss})
File "/root/mycroft-precise/.venv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/root/mycroft-precise/.venv/lib/python3.8/site-packages/keras/saving/saved_model/load.py", line 1008, in revive_custom_object
raise ValueError(
ValueError: Unable to restore custom object of type _tf_keras_metric. Please make sure that any custom layers are included in the `custom_objects` arg when calling `load_model()` and make sure that all layers implement `get_config` and `from_config`.
(.venv) root@0170575a25bb:/precise# ./test.sh
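Possibly related: the error is about a custom metric, while convert.py only passes the `weighted_log_loss` loss in `custom_objects`. Below is a minimal sketch of a workaround I could try (not the project's code; the import path, optimizer, and file names are assumptions): load the model with `compile=False` so Keras skips restoring the saved training config, then re-compile before converting.

```python
import tensorflow as tf

from precise.functions import weighted_log_loss  # assumed location of the custom loss

# compile=False skips restoring the saved loss/metrics, which is where the
# _tf_keras_metric deserialization fails.
model = tf.keras.models.load_model(
    'mywakeword.net',
    custom_objects={'weighted_log_loss': weighted_log_loss},
    compile=False,
)

# Re-attach a training config only if a compiled model is needed downstream.
model.compile(optimizer='rmsprop', loss=weighted_log_loss)

# Conversion to TFLite should then be possible on the loaded model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('mywakeword.tflite', 'wb') as f:
    f.write(tflite_model)
```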
Environment:
Additional context
Running everything inside a Docker container with all of the host machine's GPUs exposed to it.