      An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale


- Vision Transformer (ViT) Method

• Applies the Transformer model, previously used in natural language processing, to vision
• Image patches -> Transformer Encoder: the image is split into fixed-size patches, each patch is linearly embedded, and the resulting token sequence is fed to a standard Transformer encoder (see the sketch below)
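
To make the patch-to-encoder step concrete, here is a minimal PyTorch sketch. The name TinyViT, the embedding width, depth, and the pre-norm encoder settings are illustrative assumptions for this note, not the paper's exact configuration:

```python
# Minimal ViT-style sketch (hypothetical, illustrative sizes; not the authors' code).
# Splits the image into 16x16 patches, linearly embeds them, prepends a learnable
# [class] token, adds position embeddings, and runs a standard Transformer encoder.
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=192, depth=4,
                 heads=3, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2  # 14 * 14 = 196 patches
        # Patch embedding as a strided convolution: one 16x16 patch -> one token.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))            # learnable [class] token
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))  # learnable positions
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=dim * 4,
            batch_first=True, norm_first=True)  # pre-norm, as in ViT
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                      # x: (B, 3, 224, 224)
        x = self.patch_embed(x)                # (B, dim, 14, 14)
        x = x.flatten(2).transpose(1, 2)       # (B, 196, dim) patch tokens
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)                    # standard Transformer encoder
        return self.head(x[:, 0])              # classify from the [class] token

logits = TinyViT()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 1000])
```

With 224x224 inputs and 16x16 patches, this yields 14x14 = 196 patch tokens plus one [class] token, which is exactly the "16x16 words" framing in the title.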

      Tags: ViT

      Categories: Paper

      Updated: February 19, 2021
