References
Lewis, Patrick, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir
Karpukhin, Naman Goyal, Heinrich Küttler, et al. 2020.
“Retrieval-Augmented Generation for Knowledge-Intensive NLP
Tasks.” Advances in Neural Information Processing
Systems 33: 9459–74.
Liu, Nelson F., Kevin Lin, John Hewitt, Ashwin Paranjape, Michele
Bevilacqua, Fabio Petroni, and Percy Liang. 2023. “Lost in the
Middle: How Language Models Use Long Contexts.” arXiv
Preprint arXiv:2307.03172.
Liu, Ye, Kazuma Hashimoto, Yingbo Zhou, Semih Yavuz, Caiming Xiong, and
Philip S. Yu. 2021. “Dense Hierarchical Retrieval for Open-Domain
Question Answering.” arXiv Preprint arXiv:2110.15439.
Ma, Xinbei, Yeyun Gong, Pengcheng He, Hai Zhao, and Nan Duan. 2023.
“Query Rewriting for Retrieval-Augmented Large Language
Models.” arXiv Preprint arXiv:2305.14283.
Roeder, Geoffrey, Luke Metz, and Durk Kingma. 2021. “On Linear
Identifiability of Learned Representations.” arXiv Preprint
arXiv:2007.00810.
Zhao, Wayne Xin, Jing Liu, Ruiyang Ren, and Ji-Rong Wen. 2022.
“Dense Text Retrieval Based on Pretrained Language Models: A
Survey.” arXiv Preprint arXiv:2211.14876.
Zheng, Lianmin, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu,
Yonghao Zhuang, Zi Lin, et al. 2023. “Judging LLM-as-a-Judge with
MT-Bench and Chatbot Arena.” arXiv Preprint
arXiv:2306.05685.