1. Five years of GPT progress
source: https://finbarr.ca/five-years-of-gpt-progress/
2. Large language models aren't trained enough.
source: https://finbarr.ca/llms-not-trained-enough/
3. Transformer Inference Arithmetic
source: https://kipp.ly/blog/transformer-inference-arithmetic/
4. Transformer Taxonomy (the last lit review)
source: https://kipp.ly/blog/transformer-taxonomy/
5. What can RL do?
source: https://ropiens.tistory.com/215
6. Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback
source: https://arxiv.org/pdf/2302.12813.pdf
source: https://www.microsoft.com/en-us/research/group/deep-learning-group/articles/check-your-facts-and-try-again-improving-large-language-models-with-external-knowledge-and-automated-feedback/