
2023/04/05: Transformer inference arithmetic, Check your facts and try again, etc.

1. Five years of GPT progress
   source: https://finbarr.ca/five-years-of-gpt-progress/?
2. Large language models aren't trained enough.
   source: https://finbarr.ca/llms-not-trained-enough/
3. Transformer Inference Arithmetic
   source: https://kipp.ly/blog/transformer-inference-arithmetic/
4. Transformer Taxonomy (the last lit review)
   source: https://kipp.ly/blog/transformer-taxonomy/
5. What can ..

Daily-Trend-Review 2023.04.05

2023/04/03: ChatGPT plugin papers, Stable Diffusion, evaluating the scale of LLMs

1. TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs
   source: https://arxiv.org/pdf/2303.16434.pdf
2. HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace
   source: https://arxiv.org/pdf/2303.17580.pdf
3. The Illustrated Stable Diffusion (Korean translation)
   source: https://medium.com/@aldente0630/%EA%B7%B8%EB%A6%BC%EC%9C%BC%EB%A1%9C-%EC%9D%B4%ED%95%B4%ED%95%98%EB%8A%94-%EC..

Daily-Trend-Review 2023.04.03

2023/03/28: Yann LeCun's views on auto-regressive models, etc.

1. Observations on Microsoft's Experiments with GPT-4
   source: https://machine-learning-made-simple.medium.com/observations-on-microsofts-experiments-with-gpt-4-3d189647556d
2. Yann LeCun's views on auto-regressive models
   source: https://www.linkedin.com/posts/yann-lecun_i-have-claimed-that-auto-regressive-llms-activity-7045908925660950528-hJGk/?utm_source=share&utm_medium=member_desktop
3. Sparks of Artifici..

Daily-Trend-Review 2023.03.28

2023/03/24: RL, cost optimization for NLP models

1. RL Widens the ChatGPT Moat
   source: https://dmccreary.medium.com/rl-widens-the-chatgpt-moat-7543041cda54
2. How does ChatGPT learn? An introduction to the ChatGPT conversational language model (feat. chatbots)
   source: https://youtu.be/vziygFrRlZ4
3. How to Run Stable Diffusion Locally to Generate Images
   source: https://www.assemblyai.com/blog/how-to-run-stable-diffusion-locally-to-generate-images/#prompt-engineering
4. ExplAIning AI: The fundam..

Daily-Trend-Review 2023.03.24