Unified Multimodal Understanding and Generation Models: Advances, Challenges, and Opportunities.pdf 1.6 MB
The Power of Scale for Parameter-Efficient Prompt Tuning.pdf 535.0 KB
QLoRA: Efficient Finetuning of Quantized LLMs.pdf 1.0 MB
P-Tuning v2 Prompt Tuning Can Be.pdf 679.0 KB
Prefix-Tuning Optimizing Continuous Prompts for Generation.pdf 1.5 MB
LORA.pdf 1.5 MB
GPT Understands, Too.pdf 1.5 MB
BriLLM: Brain-inspired Large Language Model.pdf 1.0 MB
Large-Model Fine-tuning and Pretraining Papers.zip
(7.94 MB, requires: RMB 10)