[linux] How to download a huggingface transformers model to a local directory & fixing the "git lfs install" error

Model page: bert-base-uncased at main (https://huggingface.co/bert-base-uncased)

1. The official commands:

# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/bert-base-uncased



# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/bert-base-uncased

But `git lfs install` failed with an error.

After looking it up... it turns out git-lfs has to be installed like this:

2. Installing git-lfs

You can't directly run

git lfs install

because the git-lfs package isn't installed yet. Download and install it first, then initialize it:

sudo apt-get install git-lfs
git-lfs install

3. Download again with the official commands.
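
Once the clone succeeds, from_pretrained can simply be pointed at the cloned folder instead of the hub name. A minimal sketch, assuming the repo was cloned into ./bert-base-uncased next to the script (adjust the path to wherever you actually cloned it):

from transformers import AutoModel, AutoTokenizer

local_dir = "./bert-base-uncased"   # assumed clone location
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModel.from_pretrained(local_dir)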

If the clone still fails... the network connection is probably just too poor.

In the end, I wrapped from_pretrained in a retry loop to work around it.

Without the loop, it fails with "requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))".

import time

import torch

# MODEL_CLASSES, get_intent_labels and get_slot_labels are defined elsewhere in the
# project (its own utils module), not in transformers itself.


class Trainer(object):
    def __init__(self, args, train_dataset=None, dev_dataset=None, test_dataset=None):
        self.args = args
        self.train_dataset = train_dataset
        self.dev_dataset = dev_dataset
        self.test_dataset = test_dataset

        self.intent_label_lst = get_intent_labels(args)
        self.slot_label_lst = get_slot_labels(args)
        # Use cross entropy ignore index as padding label id so that only real label ids contribute to the loss later
        self.pad_token_label_id = args.ignore_index

        self.config_class, self.model_class, _ = MODEL_CLASSES[args.model_type]
        #self.config = self.config_class.from_pretrained(args.model_name_or_path, finetuning_task=args.task, output_hidden_states=args.output_hidden_states)
        self.config = self.config_class.from_pretrained(args.model_name_or_path, finetuning_task=args.task)
        ################## [O.O] Retry loop to work around the model download failing #################
        nb_tries = 20
        while nb_tries > 0:
            nb_tries -= 1
            try:
                self.model = self.model_class.from_pretrained(args.model_name_or_path,
                                                      config=self.config,
                                                      args=args,
                                                      intent_label_lst=self.intent_label_lst,
                                                      slot_label_lst=self.slot_label_lst)
                break
            except Exception:  # typically a ConnectionError / ConnectionResetError from the download
                time.sleep(0.1)
        #########################################################################
        # GPU or CPU
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        # self.device = "cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu"
        self.model.to(self.device)
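
For comparison, here is a minimal self-contained sketch of the same retry idea outside the Trainer class, assuming a plain bert-base-uncased download with transformers; load_with_retry and the 20-try / 1-second numbers are only illustrative:

import time

from transformers import AutoModel, AutoTokenizer

def load_with_retry(loader, name, max_tries=20, wait=1.0):
    """Keep calling loader(name) until the download succeeds or the tries run out."""
    last_err = None
    for _ in range(max_tries):
        try:
            return loader(name)
        except Exception as err:  # e.g. ConnectionResetError(104) on a flaky network
            last_err = err
            time.sleep(wait)
    raise RuntimeError(f"failed to load {name} after {max_tries} tries") from last_err

tokenizer = load_with_retry(AutoTokenizer.from_pretrained, "bert-base-uncased")
model = load_with_retry(AutoModel.from_pretrained, "bert-base-uncased")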

Reposted from blog.csdn.net/Trance95/article/details/131326825