Deploying Mini-InternVL-Chat-4B-V1-5 on a 2080 Ti with lmdeploy
Main problem encountered:
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/triton/backends/nvidia/compiler.py", line 292, in make_cubin
subprocess.run(cmd, shell=True, check=True)
File "/opt/conda/lib/python3.10/subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.
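For context, a minimal sketch of the kind of launch command that can surface this error. The model path and flags below are assumptions, not taken from the log. The traceback shows Triton's NVIDIA backend failing while invoking an external compiler (`make_cubin` runs a subprocess), which on a 2080 Ti (compute capability 7.5, no native bfloat16 support) commonly points to the PyTorch engine's Triton-compiled kernels rather than to lmdeploy itself:

```shell
# Hypothetical invocation (model id and port are illustrative).
# The PyTorch engine JIT-compiles kernels with Triton; on sm_75 GPUs such as
# the 2080 Ti, that compilation step can fail with the CalledProcessError above.
lmdeploy serve api_server OpenGVLab/Mini-InternVL-Chat-4B-V1-5 \
    --backend pytorch \
    --server-port 23333
```

One avenue worth checking under these assumptions is whether the TurboMind backend (`--backend turbomind`) or an fp16 weight format avoids the Triton compilation path entirely; this is a diagnostic sketch, not a confirmed fix.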