Manually installing Ollama 0.9.6 on Linux
1. Download the Linux release tarball of Ollama from https://github.com/ollama/ollama/releases
2. Upload it to the Linux machine.
3. Extract it:
   tar zxvf ollama-linux-amd64-0.9.6.tgz -C /usr/local/
   From 0.14 onward the releases use a new (zstd) archive format, extracted with:
   tar -I zstd -xvf ollama-linux-amd64.0.20.2.tar.zst -C /usr/local
   Models that have worked well:
   ollama run fredrezones55/Qwopus3.5:27b   # distilled version of Opus
   ollama run shmily_006/Qw3:latest         # Qwen3 4B, non-thinking version
4. If you only need to run it by hand, this is already enough:
   ollama serve
5. Create the model storage directory and open up its permissions:
   mkdir -p /app/ollama/data
   chmod -R 777 /app/ollama/data
6. Add a dedicated user:
   sudo useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
   sudo usermod -a -G ollama $(whoami)
7. Create the service file /etc/systemd/system/ollama.service:

   [Unit]
   Description=Ollama Service
   After=network-online.target

   [Service]
   ExecStart=/usr/local/bin/ollama serve
   User=ollama
   Group=ollama
   Restart=always
   RestartSec=3
   Environment="PATH=/root/anaconda3/bin:/root/anaconda3/condabin:/usr/local/cuda/bin:/root/anaconda3/bin:/usr/local/cuda/bin:/usr/share/Modules/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bi$
   Environment="OLLAMA_MODELS=/app/ollama/data"
   Environment="OLLAMA_HOST=0.0.0.0"

   [Install]
   WantedBy=default.target

8. Reload systemd and start the service:
   systemctl daemon-reload
   systemctl start ollama
9. Check its status:
   systemctl status ollama
10. API endpoints.
   OpenAI-compatible endpoints:
   http://localhost:11434/v1/                    base URL
   http://localhost:11434/v1/models              list all models
   http://localhost:11434/v1/chat/completions    chat endpoint
   http://localhost:11434/v1/responses           responses endpoint
   http://localhost:11434/v1/embeddings          embeddings endpoint
   http://localhost:11434/v1/images/generations  image generation endpoint
   Ollama's native web API:
   http://localhost:11434/api/generate
   http://localhost:11434/api/chat
   http://localhost:11434/api/embed
   http://localhost:11434/api/tags
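The OpenAI-compatible chat endpoint listed in step 10 can be exercised with a short stdlib-only Python sketch. This is a minimal example, not part of the install itself: it assumes the service from step 8 is listening on localhost:11434, and the model name is the `shmily_006/Qw3:latest` one pulled in step 3.

```python
import json
import urllib.request

# OpenAI-compatible chat endpoint exposed by `ollama serve` (step 10).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_payload(model, prompt):
    """Build an OpenAI-style chat request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model, prompt, url=OLLAMA_URL):
    """POST a chat request to a running Ollama server and return the reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Response follows the OpenAI chat-completions shape.
    return body["choices"][0]["message"]["content"]

# Usage (with the service running):
#   print(chat("shmily_006/Qw3:latest", "Hello"))
```

Because `OLLAMA_HOST=0.0.0.0` is set in the unit file, the same call also works from another machine by replacing `localhost` with the server's address.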