diff --git a/README.md b/README.md
index 70c77486a6a064113b5ccb248156712e99b0409c..f83a0b531637cac73e5dec3d722ccefc6424a28f 100644
--- a/README.md
+++ b/README.md
@@ -15,10 +15,16 @@ docker pull 123mutouren/cv:1.0.0
 ## Local Material Dataset
 Please download the original dataset from https://vision.ist.i.kyoto-u.ac.jp/codeanddata/localmatdb/ into the folder datasets/localmatdb. Then zip the localmatdb folder; our dataloader assumes the images are provided as a zip archive.
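+
+The zipping step could look like the sketch below (run from the repository root; the archive name localmatdb.zip is an assumption, so check the path your dataloader configuration expects):
+```
+cd datasets
+# zip the extracted image folder; localmatdb.zip is an assumed archive name
+zip -r localmatdb.zip localmatdb
+cd ..
+```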
 
+## Pre-trained DBAT checkpoint
+Please download the pre-trained checkpoint from https://drive.google.com/file/d/1DCyF1FUJPlEm0Mb5QTz2afnlbzYmPhMY/view?usp=sharing and place it in the `checkpoints` folder. Create the expected directory structure first:
+```
+mkdir -p checkpoints/dpglt_mode95/accuracy
+```
+
 ## Train DBAT
 To train our DBAT, you can use the code below:
 ```
-mkdir checkpoints
 python train_sota.py --data-root "./datasets" --batch-size 4 --tag dpglt --gpus 1 --num-nodes 1 --epochs 200 --mode 95 --seed 42
 ```
 To test the trained model, you can specify the checkpoint path with the --test option