### Commands Run:

```bash
/usr/bin/salloc -n 20 -J interactive -p ihub -A mobility_arfs --gres=gpu:2 -t 12:00:00 /usr/bin/srun --pty --cpu_bind=no /bin/bash -l

python3 infer.py --cfg /home2/jaskirat.singh/Drone-based-building-assessment/win_det_heatmaps/experiments/shufflenet/lr1e-3_120x90-110_center_b2.yaml --model /home2/jaskirat.singh/shufflenet_zjufacade+iiitH.pth --infer /home2/jaskirat.singh/002
```
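Before resubmitting the job, the two problems reported in the error below (a CUDA-capability mismatch warning and an empty test dataset) can be checked up front from a login shell. This is a minimal sketch, not part of the repository; the image extensions it looks for are an assumption, since the extensions actually accepted by the dataset code are not shown in this gist.

```python
import glob
import os

import torch

# 1) Does this PyTorch build ship kernels for the GPU's compute capability?
#    The warning below reports sm_86 (RTX 3080 Ti) against a build that only
#    supports sm_37/sm_50/sm_60/sm_70.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"device capability: sm_{major}{minor}")
    print("capabilities in this build:", torch.cuda.get_arch_list())

# 2) Does the --infer directory contain any images at all?
#    ("Test DB size: 0" in the log below suggests it does not, or that the
#    extensions differ from the ones assumed here.)
infer_dir = "/home2/jaskirat.singh/002"
images = [p for ext in ("*.jpg", "*.jpeg", "*.png")
          for p in glob.glob(os.path.join(infer_dir, ext))]
print(f"{len(images)} candidate images found in {infer_dir}")
```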


### Error:

```
=================SYS PATH================
/home2/jaskirat.singh/Drone-based-building-assessment/win_det_heatmaps/common_pytorch
/home2/jaskirat.singh/Drone-based-building-assessment/win_det_heatmaps/common
/home2/jaskirat.singh/Drone-based-building-assessment/win_det_heatmaps
/home2/jaskirat.singh/miniconda3/envs/robo/lib/python38.zip
/home2/jaskirat.singh/miniconda3/envs/robo/lib/python3.8
/home2/jaskirat.singh/miniconda3/envs/robo/lib/python3.8/lib-dynload
/home2/jaskirat.singh/miniconda3/envs/robo/lib/python3.8/site-packages
=================SYS PATH================
Using Devices: [0]
Defining result & flip func
Creating dataset
Loading model from /home2/jaskirat.singh/shufflenet_zjufacade+iiitH.pth
/home2/jaskirat.singh/miniconda3/envs/robo/lib/python3.8/site-packages/torch/cuda/__init__.py:143: UserWarning:
NVIDIA GeForce RTX 3080 Ti with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3080 Ti GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Net total params: 13.84M
Test DB size: 0.
in infer
Traceback (most recent call last):
  File "infer.py", line 90, in <module>
    main()
  File "infer.py", line 83, in main
    inferNet(infer_data_loader, net, merge_hm_flip_func, merge_tag_flip_func, flip_pairs,
  File "/home2/jaskirat.singh/Drone-based-building-assessment/win_det_heatmaps/common_pytorch/net_modules.py", line 191, in inferNet
    heatmaps = torch.cat(heatmaps_list)
NotImplementedError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors, or that you (the operator writer) forgot to register a fallback function. Available functions are [CPU, CUDA, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].

CPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]
CUDA: registered at aten/src/ATen/RegisterCUDA.cpp:26496 [kernel]
QuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel]
BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]
Negative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]
AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]
AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]
AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]
AutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]
AutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]
AutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]
AutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]
AutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]
AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]
AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]
AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]
UNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]
Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
```
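The immediate trigger is `torch.cat` being called on an empty `heatmaps_list` in `inferNet` (net_modules.py, line 191), which follows from `Test DB size: 0` above: the loader found no images under the `--infer` directory, so no heatmaps were ever produced. The sketch below reproduces the failure mode and adds a defensive guard; the guard is an assumption about how one might fail earlier with a clearer message, not code from the repository.

```python
import torch

# torch.cat() on an empty list raises the same error as the traceback above
# (NotImplementedError for aten::_cat on this PyTorch version; newer versions
# may raise RuntimeError instead).
heatmaps_list = []  # what inferNet ends up with when "Test DB size: 0"
try:
    heatmaps = torch.cat(heatmaps_list)
except (NotImplementedError, RuntimeError) as err:
    print("empty heatmaps_list ->", err)


def safe_cat(tensors):
    """Hypothetical guard: fail with a readable message instead of aten::_cat."""
    if len(tensors) == 0:
        raise RuntimeError(
            "No heatmaps were produced; the --infer directory probably "
            "contains no images the dataset loader recognises."
        )
    return torch.cat(tensors)
```

The `sm_86` UserWarning is a separate issue: this PyTorch build has no kernels for the RTX 3080 Ti, so even with a non-empty dataset, GPU inference would need a build that includes sm_86 support (see the link in the warning).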
