Hi, I'm trying to run SLEAP in Bonsai and I get the following error:
2024-11-15 10:26:28.130378: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-11-15 10:26:28.580069: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 9403 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4070 SUPER, pci bus id: 0000:01:00.0, compute capability: 8.9
2024-11-15 10:26:28.762068: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:354] MLIR V1 optimization pass is not enabled
2024-11-15 10:26:29.185317: W tensorflow/core/grappler/costs/op_level_cost_estimator.cc:690] Error in PredictCost() for the op: op: "CropAndResize" attr { key: "T" value { type: DT_FLOAT } } attr { key: "extrapolation_value" value { f: 0 } } attr { key: "method" value { s: "bilinear" } } inputs { dtype: DT_FLOAT shape { dim { size: -41 } dim { size: -42 } dim { size: -43 } dim { size: 1 } } } inputs { dtype: DT_FLOAT shape { dim { size: -9 } dim { size: 4 } } } inputs { dtype: DT_INT32 shape { dim { size: -9 } } } inputs { dtype: DT_INT32 shape { dim { size: 2 } } } device { type: "GPU" vendor: "NVIDIA" model: "NVIDIA GeForce RTX 4070 SUPER" frequency: 2640 num_cores: 56 environment { key: "architecture" value: "8.9" } environment { key: "cuda" value: "11020" } environment { key: "cudnn" value: "8100" } num_registers: 65536 l1_cache_size: 24576 l2_cache_size: 50331648 shared_memory_size_per_multiprocessor: 102400 memory_size: 9859760128 bandwidth: 504048000 } outputs { dtype: DT_FLOAT shape { dim { size: -9 } dim { size: -44 } dim { size: -45 } dim { size: 1 } } }
2024-11-15 10:26:29.186523: W tensorflow/core/grappler/costs/op_level_cost_estimator.cc:690] Error in PredictCost() for the op: op: "CropAndResize" attr { key: "T" value { type: DT_UINT8 } } attr { key: "extrapolation_value" value { f: 0 } } attr { key: "method" value { s: "bilinear" } } inputs { dtype: DT_UINT8 shape { dim { size: -15 } dim { size: 1440 } dim { size: 1440 } dim { size: 1 } } } inputs { dtype: DT_FLOAT shape { dim { size: -9 } dim { size: 4 } } } inputs { dtype: DT_INT32 shape { dim { size: -9 } } } inputs { dtype: DT_INT32 shape { dim { size: 2 } } } device { type: "CPU" vendor: "GenuineIntel" model: "103" frequency: 3187 num_cores: 32 environment { key: "cpu_instruction_set" value: "AVX SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2" } environment { key: "eigen" value: "3.4.90" } l1_cache_size: 49152 l2_cache_size: 2097152 l3_cache_size: 37748736 memory_size: 268435456 } outputs { dtype: DT_FLOAT shape { dim { size: -9 } dim { size: -52 } dim { size: -53 } dim { size: 1 } } }
2024-11-15 10:26:29.194721: W tensorflow/core/grappler/costs/op_level_cost_estimator.cc:690] Error in PredictCost() for the op: op: "CropAndResize" attr { key: "T" value { type: DT_FLOAT } } attr { key: "extrapolation_value" value { f: 0 } } attr { key: "method" value { s: "bilinear" } } inputs { dtype: DT_FLOAT shape { dim { size: -86 } dim { size: -87 } dim { size: -88 } dim { size: 1 } } } inputs { dtype: DT_FLOAT shape { dim { size: -14 } dim { size: 4 } } } inputs { dtype: DT_INT32 shape { dim { size: -14 } } } inputs { dtype: DT_INT32 shape { dim { size: 2 } } } device { type: "GPU" vendor: "NVIDIA" model: "NVIDIA GeForce RTX 4070 SUPER" frequency: 2640 num_cores: 56 environment { key: "architecture" value: "8.9" } environment { key: "cuda" value: "11020" } environment { key: "cudnn" value: "8100" } num_registers: 65536 l1_cache_size: 24576 l2_cache_size: 50331648 shared_memory_size_per_multiprocessor: 102400 memory_size: 9859760128 bandwidth: 504048000 } outputs { dtype: DT_FLOAT shape { dim { size: -14 } dim { size: -90 } dim { size: -91 } dim { size: 1 } } }
2024-11-15 10:26:29.961998: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8200
2024-11-15 10:26:31.417458: I tensorflow/stream_executor/cuda/cuda_blas.cc:1614] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
What could be the cause of this?