<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Machine Learning on Matthew Bonanni</title><link>https://matthewbonanni.github.io/tags/machine-learning/</link><description>Recent content in Machine Learning on Matthew Bonanni</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Tue, 12 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://matthewbonanni.github.io/tags/machine-learning/index.xml" rel="self" type="application/rss+xml"/><item><title>Attention Visualizer</title><link>https://matthewbonanni.github.io/posts/2026-05-12-attention-visualizer/</link><pubDate>Tue, 12 May 2026 00:00:00 +0000</pubDate><guid>https://matthewbonanni.github.io/posts/2026-05-12-attention-visualizer/</guid><description>&lt;div style="text-align: center; margin-bottom: 1.5rem;">
&lt;a href="https://matthewbonanni.github.io/attn-viz/" style="display: inline-flex; align-items: center; gap: 0.5rem; padding: 0.5rem 1.2rem; border: 1px solid var(--border); border-radius: 6px; text-decoration: none; color: var(--primary); font-size: 0.95rem;">Attention Visualizer →&lt;/a>
&lt;/div>
&lt;p>&lt;a href="https://matthewbonanni.github.io/attn-viz/">Attention Visualizer&lt;/a> is an interactive tool I built for exploring the self-attention mechanism at the heart of transformer models. It renders tensors and operations as isometric 3D blocks, making it easy to see how tensor shapes flow through each stage of attention, from the initial QKV projections through the softmax to the final output projection. You can adjust architecture parameters such as sequence length, number of heads, and head dimension, and watch the diagram update in real time. Clicking any tensor or operation opens a detail panel with a breakdown of FLOPs, memory transfer, arithmetic intensity, and roofline analysis for A100, H100, and B200 GPUs.&lt;/p>
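&lt;p>As a rough illustration of the numbers behind such a panel, here is a minimal back-of-envelope sketch, not the tool's exact accounting: it estimates FLOPs, memory traffic, and arithmetic intensity for just the QK^T matmul, assuming batch size 1, fp16 activations, 2 FLOPs per multiply-accumulate, and an unfused kernel that writes the full score matrix to HBM. The default peak-throughput and bandwidth figures are the published H100 SXM fp16 spec values.&lt;/p>
&lt;pre>&lt;code class="language-python"># Back-of-envelope roofline estimate for the QK^T matmul in self-attention.
# Assumptions (not the tool's exact accounting): batch size 1, fp16
# activations (2 bytes/element), 2 FLOPs per multiply-accumulate, and an
# unfused kernel that reads Q and K from HBM and writes the scores back.

def qkt_roofline(seq_len=2048, n_heads=32, head_dim=128,
                 peak_tflops=989.0, mem_bw_tbs=3.35):
    # Defaults for peak_tflops / mem_bw_tbs are H100 SXM fp16 spec values.
    d_model = n_heads * head_dim

    # Q @ K^T per head: (seq_len, head_dim) x (head_dim, seq_len)
    # -> scores of shape (n_heads, seq_len, seq_len)
    flops = 2 * n_heads * seq_len * seq_len * head_dim

    # Elements moved: read Q and K (each seq_len x d_model),
    # write scores (n_heads x seq_len x seq_len); 2 bytes each in fp16.
    bytes_moved = 2 * (2 * seq_len * d_model + n_heads * seq_len * seq_len)

    intensity = flops / bytes_moved      # FLOPs per byte
    ridge = peak_tflops / mem_bw_tbs     # roofline knee, FLOPs per byte
    bound = "compute-bound" if intensity > ridge else "memory-bound"
    print(f"{flops / 1e9:.1f} GFLOPs, {bytes_moved / 1e6:.1f} MB moved, "
          f"intensity {intensity:.1f} FLOP/B vs. ridge {ridge:.1f} FLOP/B "
          f"-> {bound}")

qkt_roofline()
&lt;/code>&lt;/pre>
&lt;p>With these defaults the sketch reports roughly 34 GFLOPs against about 302 MB of traffic, an arithmetic intensity near 114 FLOP/B, well below the H100 ridge point of about 295 FLOP/B, so this unfused score matmul comes out memory-bound. Surfacing exactly that kind of per-tensor, per-GPU conclusion is what the roofline panel is for.&lt;/p></description></item></channel></rss>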