From 50ff243abf52ab9b0c3d42a3853e001cd4158a36 Mon Sep 17 00:00:00 2001
From: Zhanwen Chen
Date: Wed, 20 Dec 2023 14:24:24 -0500
Subject: [PATCH] Fix LLaMA2 dead link in README.md

The old decapoda link is dead. The official LLaMA2 HF repo is working:
https://huggingface.co/meta-llama/Llama-2-13b-hf/tree/main?clone=true
---
 video_chat/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/video_chat/README.md b/video_chat/README.md
index 9f31d07..0887279 100644
--- a/video_chat/README.md
+++ b/video_chat/README.md
@@ -102,7 +102,7 @@ In this study, we initiate an exploration into video understanding by introducin
 - Change the `vit_model_path` and `q_former_model_path` in [config.json](./configs/config.json) or [config_7b.json](./configs/config_7b.json).
 - Download [StableVicuna](https://huggingface.co/CarperAI/stable-vicuna-13b-delta) model:
-  - LLAMA: Download it from the [original repo](https://github.com/facebookresearch/llama) or [hugging face](https://huggingface.co/decapoda-research/llama-13b-hf).
+  - LLAMA: Download it from the [original repo](https://github.com/facebookresearch/llama) or [hugging face](https://huggingface.co/meta-llama/Llama-2-13b-hf/tree/main?clone=true).
   - If you download LLAMA from the original repo, please process it via the following command:
     ```shell
     # convert_llama_weights_to_hf is copied from transformers