The model used is a quantized version of `Llama-3-Taiwan-8B-Instruct-DPO`. More details can be found on its model page (https://hugging-face.cn/yentinglin/Llama-3-Taiwan-8B-Instruct-DPO).
Key metadata for the quantized build (digest `7bec3b5db58c`, 8.5GB total):

- **model** (8.5GB): architecture `llama`, 8.03B parameters, `Q8_0` quantization
- **template** (257B): `{{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|>`
- **params** (171B): `{"num_ctx":8192,"stop":["<|start_header_id|>","<|end_header_id|>","<|end_of_text|>","<|eot_id|>","<|reserved_special_token"]}`
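The template above uses Ollama's Modelfile prompt format (the `{{ .System }}` / `{{ .Prompt }}` / `{{ .Response }}` placeholders), and the params set an 8,192-token context window plus the Llama 3 stop tokens. As a minimal sketch, assuming the quantized model has been pulled into a local Ollama instance, it can be queried through Ollama's REST API with the same options. The model tag below is a placeholder; substitute whatever tag you use locally.

```python
# Sketch: query the locally served quantized model via Ollama's /api/chat
# endpoint, mirroring the num_ctx and stop values listed in the params block.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"   # default local Ollama endpoint
MODEL_TAG = "llama-3-taiwan-8b-instruct-dpo"     # placeholder tag; use your local tag

payload = {
    "model": MODEL_TAG,
    "messages": [
        {"role": "system", "content": "你是一個說繁體中文的助理。"},
        {"role": "user", "content": "請簡單介紹台灣的夜市文化。"},
    ],
    "stream": False,
    # Same values as the params layer above: 8k context and Llama 3 stop tokens.
    "options": {
        "num_ctx": 8192,
        "stop": [
            "<|start_header_id|>",
            "<|end_header_id|>",
            "<|end_of_text|>",
            "<|eot_id|>",
            "<|reserved_special_token",
        ],
    },
}

response = requests.post(OLLAMA_URL, json=payload, timeout=300)
response.raise_for_status()
print(response.json()["message"]["content"])
```

Passing `num_ctx` and `stop` in the request is redundant when the model's params layer already sets them; they are repeated here only to show how the listed values map onto the API's `options` field.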