Chinese LLaMA & Alpaca Multimodal Language Model.zip
This archive is based on VisualCLA, a multimodal model derived from Chinese-LLaMA-Alpaca. It combines multimodal comprehension and dialogue capabilities, and provides inference code and deployment scripts for Gradio and Text-Generation-WebUI. The demo showcases the model's performance on multimodal instruction-understanding tasks and includes an open-access translation test set. Current open-source version: VisualCLA-7B-v0.1 (beta).
File size: 8.25MB