
Hi there, I'm Dong Jun Kim 👋

I'm currently pursuing my Ph.D. at Korea University's NLP&AI Lab, specializing in Mechanistic Interpretability of Large Language Models (LLMs). My research focuses on understanding the internal workings of LLMs and on developing methods that make their decision-making processes more transparent and interpretable. I am passionate about advancing the field of AI by improving our ability to analyze and explain model behavior at a granular level.

Research Interests:

  • Mechanistic Interpretability of LLMs
  • Sparse Autoencoders & LLM Circuits
  • Retrieval-Augmented Generation (RAG)
  • LLM Architectures & Optimization
  • Efficient Fine-tuning Techniques

Open to:

  • Research collaborations in AI/ML
  • Industry partnerships for LLM development
  • Reviewer or PC member roles for AI/ML conferences or journals

Skills

Core Expertise:

  • Mechanistic Interpretability: Developing novel techniques to understand the internal structures and decision-making pathways in LLMs.
  • Sparse Autoencoders & LLM Circuits: Researching sparse representations and circuit-level analysis within LLMs to enhance interpretability.
  • LLM Architectures: Designing and optimizing large-scale architectures for efficiency and performance.
  • Retrieval-Augmented Generation (RAG): Integrating retrieval mechanisms with generative models to improve accuracy and relevance in language generation tasks.
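To illustrate the sparse-autoencoder idea above, here is a minimal sketch (illustrative only, not code from any of my repositories): a single-layer autoencoder with a ReLU bottleneck and an L1 sparsity penalty, the basic recipe for decomposing model activations into an overcomplete set of sparse, more interpretable features.

```python
import numpy as np

def sae_forward(x, W_enc, b_enc, W_dec, b_dec, l1_coeff=1e-3):
    """One forward pass of a toy sparse autoencoder over activations x."""
    # Encode: project activations into an overcomplete feature space.
    f = np.maximum(0.0, x @ W_enc + b_enc)           # ReLU feature activations
    # Decode: reconstruct the original activations from the sparse features.
    x_hat = f @ W_dec + b_dec
    # Loss = reconstruction error + L1 penalty encouraging feature sparsity.
    loss = np.mean((x - x_hat) ** 2) + l1_coeff * np.abs(f).mean()
    return f, x_hat, loss

rng = np.random.default_rng(0)
d_model, d_feat = 8, 32                              # overcomplete: d_feat > d_model
x = rng.normal(size=(4, d_model))                    # a batch of fake activations
W_enc = rng.normal(scale=0.1, size=(d_model, d_feat))
W_dec = rng.normal(scale=0.1, size=(d_feat, d_model))
f, x_hat, loss = sae_forward(x, W_enc, np.zeros(d_feat), W_dec, np.zeros(d_model))
```

In practice the encoder/decoder weights are trained with gradient descent on large batches of real LLM activations (as in SAELens below); this sketch only shows the shapes and the loss.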

Machine Learning & Deep Learning Frameworks:

PyTorch, DeepSpeed, Flax, Hugging Face, TensorFlow

Tools & Technologies:

Docker, Kubernetes, Streamlit, Weights & Biases, CodaLab, bitsandbytes

Optimization & Acceleration:

ONNX Runtime, TensorRT

Connect with me:

Pinned Repositories

  1. sil_evolving_dataset: Self-Improving Leaderboard Evolving Dataset (Python)

  2. model-explorer: Interactive TUI for Transformer Model Analysis (Python)

  3. SAELens (forked from jbloomAus/SAELens): Training Sparse Autoencoders on Language Models (Jupyter Notebook)

  4. makeMoE (forked from AviSoori1x/makeMoE): From-scratch implementation of a sparse mixture-of-experts language model, inspired by Andrej Karpathy's makemore :) (Jupyter Notebook)

  5. ARENA_3.0 (forked from callummcdougall/ARENA_3.0) (HTML)

  6. nixos-dotfiles (Nix)