
Publishing Cadence Summary

Looking for a single place to trace how the notebook has evolved? This reference page aggregates every post that landed on the site, sliced by month, quarter, half-year, and full year. Use it to spot clusters of research, follow multi-part series, or plan what to read next.

What You’ll Gain from This Collection

By reading through this complete collection, you’ll build a comprehensive technical foundation spanning multiple domains:

🎯 Core Technical Skills

Probabilistic Reasoning & State Estimation

  • Master Kalman filtering from Bayesian foundations to nonlinear extensions (EKF, UKF, particle filters); a minimal scalar sketch follows this list
  • Understand stochastic processes, sampling methods (importance, Gibbs, stratified), and why direct PDF sampling is hard
  • Apply recursive filtering to real-world tracking and control problems
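
To make the Kalman bullet concrete, here is a minimal sketch of the scalar predict/update cycle for a constant hidden state. The model and all names (q, r, x0, p0) are illustrative assumptions, not code from the posts.

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.1**2, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a constant hidden state.

    q: process noise variance, r: measurement noise variance,
    x0/p0: initial state estimate and its variance.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: the state is modeled as constant, so only uncertainty grows.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)          # Kalman gain in [0, 1]
        x = x + k * (z - x)      # correct with the innovation (z - x)
        p = (1 - k) * p          # uncertainty shrinks after measuring
        estimates.append(x)
    return np.array(estimates)

# Noisy readings of a true value of 1.0
rng = np.random.default_rng(0)
z = 1.0 + 0.1 * rng.standard_normal(50)
print(kalman_1d(z)[-1])  # converges near 1.0
```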

Computer Vision & Image Processing

  • Navigate the complete image contrast landscape: from grayscale fundamentals to color chromatic contrast, same-content comparison metrics, content-independent analysis, and SDR/HDR cross-domain comparison with tone mapping; a small contrast-metric sketch follows this list
  • Understand scene-referred workflows, ACES color pipelines, and real-time gamut precomputation
  • Explore logarithmic color spaces, PCA-based color analysis, and spectral imaging fundamentals
  • Study modern CV research: panoptic segmentation without inductive biases, knowledge distillation for video models
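
For a taste of the grayscale end of that landscape, here is a hedged sketch of two classic global contrast measures, Michelson and RMS. The formulas are standard; the function names and toy images are mine.

```python
import numpy as np

def michelson_contrast(img):
    """(Lmax - Lmin) / (Lmax + Lmin) for a grayscale image in [0, 1]."""
    lo, hi = float(img.min()), float(img.max())
    return (hi - lo) / (hi + lo) if (hi + lo) > 0 else 0.0

def rms_contrast(img):
    """Standard deviation of intensities: more robust to isolated
    outliers than the min/max-based Michelson measure."""
    return float(img.std())

# Toy example: a soft gradient scores far lower than a checkerboard.
gradient = np.linspace(0.4, 0.6, 64 * 64).reshape(64, 64)
checker = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)
print(michelson_contrast(gradient), rms_contrast(gradient))  # 0.2, ~0.058
print(michelson_contrast(checker), rms_contrast(checker))    # 1.0, 0.5
```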

High-Performance Computing

  • Write SIMD intrinsics from SSE to AVX2
  • Program GPU kernels with grids, blocks, and warps
  • Bridge ISA concepts to GPU programming mindsets
  • Master C++ concurrency: futures/promises, std::async, and reference types (lvalue, rvalue, universal); an analogous future pattern is sketched below
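
The concurrency trilogy itself is C++ (std::future, std::promise, std::async). As a language-neutral sketch of the same pattern, here is the analogous flow with Python's concurrent.futures, where submit() stands in for std::async and result() for std::future::get().

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(n: int) -> int:
    # Stand-in for an expensive computation run off the calling thread.
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    # submit() returns a future immediately, much like std::async;
    # result() then blocks until completion, the analogue of
    # std::future::get().
    futures = [pool.submit(slow_square, n) for n in range(8)]
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```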

Mathematical Foundations

  • Visualize optimization through gradients and Hessians
  • Understand the implicit function theorem and Lagrange multipliers (a worked example follows this list)
  • Explore inverse trig symmetries and functional analysis concepts
  • Bridge elementary mathematics to advanced vision algorithms through normalized power sums
  • Master Brownian motion, stochastic differential equations, and Itô calculus
  • Understand total variation and its role in stochastic processes
  • Connect Brownian motion to modern diffusion models and flow-based generative models
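
To give a flavor of the Lagrange-multiplier bullet, the textbook worked example: extremize f(x, y) = x + y on the unit circle. This is standard material, not an excerpt from the posts.

```latex
\begin{align*}
\text{maximize } f(x,y) &= x + y
  \quad \text{subject to } g(x,y) = x^2 + y^2 - 1 = 0,\\
\nabla f = \lambda \nabla g
  &\;\Longrightarrow\; (1,\,1) = \lambda\,(2x,\,2y)
  \;\Longrightarrow\; x = y = \tfrac{1}{2\lambda},\\
x^2 + y^2 = 1
  &\;\Longrightarrow\; x = y = \pm\tfrac{1}{\sqrt{2}},
  \quad f_{\max} = \sqrt{2}
  \text{ at } \bigl(\tfrac{1}{\sqrt{2}},\, \tfrac{1}{\sqrt{2}}\bigr).
\end{align*}
```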

Machine Learning Infrastructure

  • Master PyTorch tensor indexing from 1D slices to N-dimensional views (a short sketch follows this list)
  • Understand distribution shifts in the AI era (from funnels to loops)
  • Navigate OCR evolution from Tesseract to transformers
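
Illustrating the indexing bullet, a short sketch of the progression from 1D slices to N-dimensional views in PyTorch; the shapes are arbitrary examples, not taken from the posts.

```python
import torch

t = torch.arange(24)          # 1D: tensor([0, 1, ..., 23])
print(t[2:10:2])              # basic slice: tensor([2, 4, 6, 8])

m = t.reshape(2, 3, 4)        # 3D view: no data is copied
print(m[1, :, 0])             # one scalar index plus slices -> shape (3,)
print(m[..., -1].shape)       # ellipsis fills the leading dims: (2, 3)

mask = m % 2 == 0             # boolean mask indexing flattens the result
print(m[mask].shape)          # torch.Size([12])

idx = torch.tensor([0, 2])    # integer-array (fancy) indexing copies data
print(m[0, idx].shape)        # torch.Size([2, 4])
```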

Generative Models & Probabilistic Foundations

  • Understand the partition function problem in deep learning (why computing Z is intractable)
  • Learn how VAEs cleverly avoid the partition function through the ELBO (the key identity appears after this list)
  • Grasp the mathematical foundations of expectation and why it matters for variational inference
  • Explore the historical context: why discriminative learning dominated before generative models
  • Compare ML paradigms: modeling distributions vs learning functions
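
The identity behind that ELBO bullet fits on two lines. This is the standard decomposition in common notation (θ for the model, φ for the encoder), stated here as a refresher rather than quoted from the posts.

```latex
\begin{align*}
\log p_\theta(x)
  &= \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log \frac{p_\theta(x, z)}{q_\phi(z \mid x)} \right]
   + \mathrm{KL}\!\left( q_\phi(z \mid x) \,\|\, p_\theta(z \mid x) \right)\\
  &\ge \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]
   - \mathrm{KL}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)}_{\text{ELBO}}.
\end{align*}
```

Maximizing the lower bound needs only the joint p(x, z), never the intractable normalizer of the posterior, which is exactly how the VAE sidesteps the partition function.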

🔍 Multi-Part Deep Dives

You’ll encounter several comprehensive series that build knowledge systematically:

  1. Kalman Filtering Curriculum (8 posts) — From Bayesian foundations to advanced nonlinear extensions
  2. Image Contrast Masterclass (6 posts) — Grayscale → Color → Same-Content → Different-Content → SDR/HDR cross-domain → Unsupervised ML
  3. Generative Models & the Z Problem (5 posts) — Curse of dimensionality → Partition function problem → Historical context → ML paradigms → VAEs solution
  4. Stochastic Processes & Diffusion (5 posts) — Brownian motion → Mathematical properties → Total variation → Itô calculus → Diffusion models
  5. C++ Concurrency Trilogy (3 posts) — Futures, promises, and async programming patterns
  6. Sampling Theory Arc (3 posts) — Stochastic foundations, advanced techniques, and practical challenges

📚 Interdisciplinary Connections

Beyond pure technical content, you’ll develop:

  • Language precision through detailed word studies (culpable, resent, gripe vs. complaint vs. grievance)
  • Research communication by seeing how complex topics are broken down and explained
  • Cross-domain thinking by observing how concepts from math, physics, and perception converge in practical systems

🎓 Outcome: Production-Ready Knowledge

This isn’t just theoretical—every post is grounded in applied, production-grade understanding:

  • Code examples you can run and adapt
  • Mathematical rigor balanced with practical intuition
  • Awareness of pitfalls, edge cases, and “what NOT to do”
  • References to standards (SMPTE, ITU-R) and seminal papers

If you work through this entire collection, you’ll emerge with the technical depth to:

  • Design and implement computer vision pipelines from capture to display
  • Optimize performance-critical code with SIMD and GPU acceleration
  • Reason about probabilistic systems and uncertainty quantification
  • Make informed architectural decisions backed by mathematical foundations
  • Communicate complex technical concepts clearly and precisely

Time investment: ~35–45 hours of focused reading.

Payoff: a curated curriculum equivalent to multiple graduate-level courses in CV, HPC, applied mathematics, and modern generative models.

Table of Contents

Monthly Rollup

  • July 2013 — 1 post
  • August 2013 — 1 post
  • May 2024 — 2 posts
  • September 2024 — 9 posts
  • October 2024 — 1 post
  • December 2024 — 1 post
  • January 2025 — 3 posts
  • February 2025 — 12 posts
  • March 2025 — 9 posts
  • September 2025 — 4 posts
  • October 2025 — 3 posts
  • November 2025 — 2 posts
  • December 2025 — 17 posts
  • January 2026 — 11 posts
  • February 2026 — 5 posts

Quarterly Rollup

  • 2013 Q3 (Jul–Sep) — 2 early posts revisiting edge detection operators and the Canny detector workflow.
  • 2024 Q2 (Apr–Jun) — 2 posts launching the site and documenting the learning pipeline in May.
  • 2024 Q3 (Jul–Sep) — 9 posts that build the Kalman Filtering series end-to-end, capped by a MathJax rendering check.
  • 2024 Q4 (Oct–Dec) — 2 posts mixing affective semantics with a look at OCR’s leap from Tesseract to transformers.
  • 2025 Q1 (Jan–Mar) — 24 posts spanning optimization math, language studies, the C++ futures trilogy, SIMD/GPU programming, tensor indexing, template metaprogramming, KL divergence, frequency-domain intuition, linear algebra foundations, and a multi-part sampling primer.
  • 2025 Q3 (Jul–Sep) — 4 posts blending vision research write-ups with a language usage study.
  • 2025 Q4 (Oct–Dec) — 22 posts covering C++ reference semantics, knowledge distillation, distribution shifts, sampling theory, panoptic segmentation, the curse of dimensionality, generative models and the partition function problem, discriminative vs generative learning history, ML paradigms, differential equation primers, color balance, the comprehensive 6-part image contrast series, plus Brownian motion, diffusion models, stochastic calculus, and total variation concepts.
  • 2026 Q1 (Jan–Mar) — 16 posts (so far) on variational autoencoders, expected value foundations, depth estimation, image/video matting, determinants, symmetric groups, ML learning-rate tooling, Taylor-series intuition, differentiability edge cases, and production color pipelines.

Semiannual Rollup

  • 2013 H2 (Jul–Dec) — 2 posts capturing classic edge detection workflows.
  • 2024 H1 (Jan–Jun) — 2 foundational posts laying out the project mission and note-taking workflow.
  • 2024 H2 (Jul–Dec) — 11 posts focused on probabilistic state estimation, affective semantics, and the evolution of OCR tooling.
  • 2025 H1 (Jan–Jun) — 24 posts covering advanced calculus, optimization, C++ concurrency, SIMD/GPU programming, color science, tensor indexing, template metaprogramming, KL divergence, linear algebra foundations, and sampling methods.
  • 2025 H2 (Jul–Dec) — 26 posts summarizing vision research, C++ reference semantics, knowledge distillation, distribution shifts, probabilistic modeling, the curse of dimensionality, generative modeling and the partition function problem, discriminative vs generative learning history, ML paradigms, differential equation primers, color balance, Brownian motion, diffusion models, stochastic calculus, total variation, and a comprehensive 6-part image contrast series spanning grayscale, color, same-content comparison, different-content comparison, SDR/HDR cross-domain analysis, and unsupervised ML prediction.
  • 2026 H1 (Jan–Jun) — 16 posts (so far) covering variational autoencoders, expected value foundations, depth estimation, image/video matting, determinants, symmetric groups, ML learning-rate tooling, Taylor-series approximation, differentiability edge cases, and Hollywood color workflow fundamentals.

Annual Rollup

  • 2013 — 2 posts: Early explorations in edge detection operators and the Canny detector workflow.
  • 2024 — 13 posts: From the site introduction to a comprehensive Kalman Filtering curriculum, affective vocabulary studies, and a survey of OCR advances.
  • 2025 — 50 posts: Deep dives into color science, optimization, functional analysis, C++ concurrency (futures/promises, std::async, reference types), SIMD/GPU programming, tensor indexing, template metaprogramming, KL divergence, frequency-domain intuition, linear algebra foundations, sampling theory, computer vision research notes (knowledge distillation, panoptic segmentation, distribution shifts), the curse of dimensionality, generative models and the partition function problem, discriminative vs generative learning history, ML paradigms (distributions vs functions), differential equations, color balance, Brownian motion, diffusion models, stochastic calculus (Itô calculus, SDEs), total variation, and a comprehensive 6-part image contrast series covering grayscale contrast fundamentals, color contrast, same-content comparison metrics, content-independent comparison methods, SDR/HDR cross-domain analysis with tone mapping, and unsupervised ML for contrast prediction.
  • 2026 (Jan–) — 16 posts (ongoing): Variational autoencoders, expected value foundations, depth estimation, image/video matting, determinants, symmetric groups, learning-rate scheduler intuition, Taylor-series approximation, differentiability edge cases, and production color pipeline workflows.
