Papers
A running list of papers and books I’ve found interesting. Occasionally annotated, sometimes just worth reading.
Exploration of artificial intelligence, its future trajectory, and its implications for humanity.
An exploration of how attention residuals shape model behavior and internal representations.
Post-training quantization method that preserves accuracy while enabling faster and more efficient inference, especially for large language models.
A simple modification to self-attention that excludes information from a token’s own value vector, encouraging stronger context modeling and improved long-sequence language modeling.
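The core idea of that last modification can be sketched in a few lines: in standard scaled dot-product attention, mask the diagonal of the score matrix so each token's output is a mixture of every value vector except its own. This is a minimal illustrative sketch of diagonal masking, not the paper's exact formulation (which may differ in how the excluded value is handled).

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_no_self(q, k, v):
    """Scaled dot-product attention with the score-matrix diagonal
    masked, so a token cannot attend to its own value vector.
    Illustrative sketch only."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)       # (T, T) pairwise scores
    np.fill_diagonal(scores, -np.inf)   # exclude each token's own value
    weights = softmax(scores, axis=-1)  # rows renormalize over other tokens
    return weights @ v

rng = np.random.default_rng(0)
T, d = 4, 8
x = rng.normal(size=(T, d))
out = self_attention_no_self(x, x, x)
print(out.shape)
```

With the diagonal at negative infinity, the softmax assigns each token zero weight on itself, forcing its representation to come entirely from context.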