Aaron Feng

Data Science & Statistics
@ UC San Diego

Building scalable AI systems, optimizing neural networks, and exploring algorithmic reasoning.

I'm an undergraduate at UC San Diego studying Data Science and Statistics.

My research centers on a fundamental question: how can AI systems keep learning after deployment? I build self-evolving agents capable of test-time continual learning—systems that accumulate concepts, abstract from experience, and know when to trust themselves.

My work spans memory-augmented LLM agents, abstract reasoning, AI for scientific discovery, and uncertainty quantification.

Latest Research

View All →
Current

Research Assistant @ Q-Lab

Investigating memory systems for compositional reasoning on ARC-AGI. Designed a feature extraction methodology for the Program Synthesis memory format.
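As a rough illustration of this kind of memory (a minimal sketch, not the Q-Lab codebase; the entry fields, feature tags, and overlap-based retrieval rule are all assumptions), a concept store can pair an abstracted program with its extracted features and retrieve entries by feature overlap with a new task:

```python
from dataclasses import dataclass, field

@dataclass
class ConceptEntry:
    name: str                                   # handle for the abstracted routine
    program: str                                # reusable snippet in a synthesis-style DSL
    features: set = field(default_factory=set)  # extracted features, e.g. {"symmetry"}

class ConceptMemory:
    """Store abstracted concepts; retrieve them by overlap with a task's extracted features."""
    def __init__(self):
        self.entries = []

    def add(self, entry: ConceptEntry) -> None:
        self.entries.append(entry)

    def retrieve(self, task_features: set, k: int = 3):
        # Rank stored concepts by how many extracted features they share with the new task.
        ranked = sorted(self.entries,
                        key=lambda e: len(e.features & task_features),
                        reverse=True)
        return ranked[:k]

memory = ConceptMemory()
memory.add(ConceptEntry("mirror_rows",
                        "def mirror(grid): return [row[::-1] for row in grid]",
                        {"symmetry", "reflection"}))
print([e.name for e in memory.retrieve({"symmetry", "recoloring"})])
```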

Current

Research Assistant @ Wang Lab

Architected a heterogeneous Mixture-of-Experts (MoE) system for chemical reaction prediction. Implemented GNN encoders with shortest-path positional encodings.
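As a sketch of the overall pattern (assuming a simple mean-pooling message-passing encoder, a 1-d shortest-path positional feature, and a softmax-gated set of MLP experts; the expert count and dimensions are illustrative, and none of this is the Wang Lab implementation), the pieces fit together roughly as follows:

```python
import torch
import torch.nn as nn

def shortest_path_feature(adj: torch.Tensor) -> torch.Tensor:
    """Floyd-Warshall distances; each node's mean distance serves as a 1-d positional feature."""
    n = adj.shape[0]
    dist = torch.where(adj > 0, torch.ones_like(adj), torch.full_like(adj, float("inf")))
    dist.fill_diagonal_(0)
    for k in range(n):
        dist = torch.minimum(dist, dist[:, k].unsqueeze(1) + dist[k].unsqueeze(0))
    dist[torch.isinf(dist)] = float(n)          # cap distances for disconnected pairs
    return dist.mean(dim=1, keepdim=True)       # shape: (num_nodes, 1)

class MeanPoolGNN(nn.Module):
    """One round of mean-neighbor message passing followed by graph-level mean pooling."""
    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, hidden_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.lin(adj @ x / deg))   # aggregate neighbors, then project
        return h.mean(dim=0)                      # graph-level embedding

class MoEHead(nn.Module):
    """Softmax-gated mixture of expert MLPs over a graph embedding."""
    def __init__(self, dim: int, num_experts: int, out_dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, out_dim))
            for _ in range(num_experts)
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.gate(z), dim=-1)                     # (num_experts,)
        out = torch.stack([expert(z) for expert in self.experts])   # (num_experts, out_dim)
        return (w.unsqueeze(-1) * out).sum(dim=0)                   # gated combination

# Toy molecule: 4 atoms with 8-d features on a chain-shaped bond graph.
x = torch.randn(4, 8)
adj = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
x = torch.cat([x, shortest_path_feature(adj)], dim=-1)              # append positional feature
z = MeanPoolGNN(in_dim=9, hidden_dim=32)(x, adj)
logits = MoEHead(dim=32, num_experts=4, out_dim=10)(z)
```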

Selected Publication

View All →

ArcMemo: Abstract Reasoning Composition with Lifelong LLM Memory

Matthew Ho, Chen Si, Zhaoxiang Feng, Fangxu Yu, Yichi Yang, Zhijian Liu, Zhiting Hu, Lianhui Qin.

ICLR 2025 (Under Review) · PDF · Code

Featured Projects

View All →

LLM Foundry: Scalable Training & Inference

Developing scalable pipelines for training and inference of open-source LLMs.

Docker · Hydra · PyTorch
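As an illustration of how such a pipeline can be parameterized (a minimal sketch assuming a hypothetical conf/train.yaml, a stand-in linear model, and an entry point saved as, say, train.py; this is not the project's actual layout), Hydra can drive a PyTorch training loop from a config plus command-line overrides:

```python
# Hypothetical conf/train.yaml:
#   model:
#     hidden_dim: 512
#   train:
#     lr: 0.0003
#     steps: 1000

import hydra
import torch
from omegaconf import DictConfig

@hydra.main(version_base=None, config_path="conf", config_name="train")
def main(cfg: DictConfig) -> None:
    # Build a stand-in model and optimizer from the Hydra config values.
    model = torch.nn.Linear(cfg.model.hidden_dim, cfg.model.hidden_dim)
    optim = torch.optim.AdamW(model.parameters(), lr=cfg.train.lr)
    for step in range(cfg.train.steps):
        # Dummy objective: drive the layer's output toward zero on random inputs.
        loss = model(torch.randn(8, cfg.model.hidden_dim)).pow(2).mean()
        loss.backward()
        optim.step()
        optim.zero_grad()

if __name__ == "__main__":
    main()  # e.g. `python train.py train.lr=1e-4` overrides the config from the CLI
```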

Quantify Credibility

A data-driven framework to assess and quantify the credibility of information sources.

NLP · Python