LLM and Transformers Series:
- Part 1 — Are LLMs Just a Memory Trick?
- Part 2 — LLMs: Beyond Memorization
- Part 3 — Mathematically Assessing Closed-LLMs for Generalization
- Part 4 — Enhancing Safety in LLMs: A Rigorous Mathematical Examination of Jailbreaking
- Part 5 — In-Depth Analysis of Red Teaming in LLMs: A Mathematical and Empirical Approach
- Part 6 — Adversarial Attacks on LLMs: A Mathematical and Strategic Analysis
- Part 7 — Strategies for Enhancing LLM Safety: Mathematical and Ethical Frameworks
- Part 8 — Mathematical Explanation of Why It’s Hard for LLMs to Memorize
- Part 9 — Memory-Augmented Transformer Networks: A Mathematical Insight