Publishing Cadence Summary
Looking for a single place to trace how the notebook has evolved? This reference page aggregates every post that landed on the site, sliced by month, quarter, half-year, and full year. Use it to spot clusters of research, follow multi-part series, or plan what to read next.
What You’ll Gain from This Collection
By reading through this complete collection, you’ll build a comprehensive technical foundation spanning multiple domains:
🎯 Core Technical Skills
Probabilistic Reasoning & State Estimation
- Master Kalman filtering from Bayesian foundations to nonlinear extensions (EKF, UKF, particle filters)
- Understand stochastic processes, sampling methods (importance, Gibbs, stratified), and why direct PDF sampling is hard
- Apply recursive filtering to real-world tracking and control problems
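The predict/update cycle the Kalman series derives in full can be sketched in a few lines. This is a minimal, hypothetical scalar example (the process-noise `q` and measurement-noise `r` values are illustrative assumptions, not values from any post):

```python
import random

def kalman_step(x, p, z, q=1e-3, r=0.1):
    """One predict/update cycle for a 1-D random-walk state."""
    # Predict: the state is modeled as constant, so only uncertainty grows.
    p = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p / (p + r)          # gain in [0, 1]
    x = x + k * (z - x)      # correct the estimate toward the measurement
    p = (1 - k) * p          # shrink the uncertainty
    return x, p

# Filter noisy readings of a constant true value of 5.0.
random.seed(0)
x, p = 0.0, 1.0
for _ in range(50):
    z = 5.0 + random.gauss(0, 0.3)   # simulated noisy sensor reading
    x, p = kalman_step(x, p, z)
print(round(x, 2))  # estimate converges near 5
```

The same two-step structure generalizes to the matrix form, and to the EKF/UKF variants, covered later in the series.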
Computer Vision & Image Processing
- Navigate the complete image contrast landscape: from grayscale fundamentals to color chromatic contrast, same-content comparison metrics, content-independent analysis, and SDR/HDR cross-domain comparison with tone mapping
- Understand scene-referred workflows, ACES color pipelines, and real-time gamut precomputation
- Explore logarithmic color spaces, PCA-based color analysis, and spectral imaging fundamentals
- Study modern CV research: panoptic segmentation without inductive biases, knowledge distillation for video models
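As a small taste of the contrast series, here is one standard grayscale metric (RMS contrast) in NumPy; this is a common textbook definition, not necessarily the exact formulation the posts use:

```python
import numpy as np

def rms_contrast(gray):
    """RMS contrast: standard deviation of intensities normalized to [0, 1]."""
    g = np.asarray(gray, dtype=np.float64) / 255.0
    return g.std()

# A flat image has zero contrast; a full-range checkerboard has high contrast.
flat = np.full((8, 8), 128)
checker = np.indices((8, 8)).sum(axis=0) % 2 * 255
print(rms_contrast(flat))     # 0.0
print(rms_contrast(checker))  # 0.5
```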
High-Performance Computing
- Write SIMD intrinsics from SSE to AVX2
- Program GPU kernels with grids, blocks, and warps
- Bridge ISA concepts to GPU programming mindsets
- Master C++ concurrency: futures/promises, std::async, and reference types (lvalue, rvalue, universal)
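The futures/promises pattern from the C++ posts has a direct analog in Python's standard library; this sketch uses `concurrent.futures` to mirror the submit-then-collect workflow of `std::async` and `std::future::get`:

```python
# Submit work asynchronously, keep a future as a handle, collect results later.
from concurrent.futures import ThreadPoolExecutor

def slow_square(n):
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(slow_square, n) for n in range(5)]  # like std::async
    results = [f.result() for f in futures]                    # like future::get()

print(results)  # [0, 1, 4, 9, 16]
```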
Mathematical Foundations
- Visualize optimization through gradients and Hessians
- Understand the implicit function theorem and Lagrange multipliers
- Explore inverse trig symmetries and functional analysis concepts
- Bridge elementary mathematics to advanced vision algorithms through normalized power sums
- Master Brownian motion, stochastic differential equations, and Itô calculus
- Understand total variation and its role in stochastic processes
- Connect Brownian motion to modern diffusion models and flow-based generative models
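The claim that Brownian paths have infinite total variation can be checked numerically. This sketch (assumed parameters, NumPy-simulated increments) samples a path at finer and finer resolutions: total variation grows roughly like √n, while quadratic variation stays near T, which is the key fact behind Itô calculus:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 1.0
for n in (100, 10_000, 1_000_000):
    dW = rng.normal(0.0, np.sqrt(T / n), size=n)  # Brownian increments over [0, T]
    tv = np.abs(dW).sum()      # total variation: diverges as n grows
    qv = (dW ** 2).sum()       # quadratic variation: concentrates near T
    print(n, round(tv, 1), round(qv, 3))
```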
Machine Learning Infrastructure
- Master PyTorch tensor indexing from 1D slices to N-dimensional views
- Understand distribution shifts in the AI era (from funnels to loops)
- Navigate OCR evolution from Tesseract to transformers
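A few of the indexing patterns the tensor-indexing post covers, shown here with NumPy since PyTorch follows the same basic slicing semantics:

```python
import numpy as np

t = np.arange(24).reshape(2, 3, 4)   # a 2x3x4 "tensor"

print(t[0, 1, 2])        # scalar element: 6
print(t[1].shape)        # integer index drops the first axis -> (3, 4)
print(t[:, -1, :].shape) # last row of every block -> (2, 4)
print(t[..., ::2].shape) # every other column, all leading axes -> (2, 3, 2)
```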
Generative Models & Probabilistic Foundations
- Understand the partition function problem in deep learning (why computing Z is intractable)
- Learn how VAEs cleverly avoid the partition function through the ELBO
- Grasp the mathematical foundations of expectation and why it matters for variational inference
- Explore the historical context: why discriminative learning dominated before generative models
- Compare ML paradigms: modeling distributions vs learning functions
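The partition-function bullets can be made concrete with a toy brute-force computation: for n binary variables, Z sums exp(−E(x)) over 2ⁿ states, which is exactly why it becomes intractable as n grows. The energy function here is a made-up illustration:

```python
import itertools
import math

def partition_function(n, energy):
    """Brute-force Z over all 2**n binary configurations (toy scale only)."""
    return sum(math.exp(-energy(x)) for x in itertools.product((0, 1), repeat=n))

energy = lambda x: sum(x)   # hypothetical energy: number of "on" bits
for n in (4, 8, 16):
    print(n, 2 ** n, round(partition_function(n, energy), 2))
```

At n = 16 this already sums 65,536 terms; real models with thousands or millions of variables make the sum hopeless, which is the problem the ELBO in VAEs sidesteps.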
🔍 Multi-Part Deep Dives
You’ll encounter several comprehensive series that build knowledge systematically:
- Kalman Filtering Curriculum (8 posts) — From Bayesian foundations to advanced nonlinear extensions
- Image Contrast Masterclass (6 posts) — Grayscale → Color → Same-Content → Different-Content → SDR/HDR cross-domain → Unsupervised ML
- Generative Models & the Z Problem (5 posts) — Curse of dimensionality → Partition function problem → Historical context → ML paradigms → VAEs solution
- Stochastic Processes & Diffusion (5 posts) — Brownian motion → Mathematical properties → Total variation → Itô calculus → Diffusion models
- C++ Concurrency Trilogy (3 posts) — Futures, promises, and async programming patterns
- Sampling Theory Arc (3 posts) — Stochastic foundations, advanced techniques, and practical challenges
📚 Interdisciplinary Connections
Beyond pure technical content, you’ll develop:
- Language precision through detailed word studies (culpable, resent, gripe vs. complaint vs. grievance)
- Research communication by seeing how complex topics are broken down and explained
- Cross-domain thinking by observing how concepts from math, physics, and perception converge in practical systems
🎓 Outcome: Production-Ready Knowledge
This isn’t just theoretical—every post is grounded in applied, production-grade understanding:
- Code examples you can run and adapt
- Mathematical rigor balanced with practical intuition
- Awareness of pitfalls, edge cases, and “what NOT to do”
- References to standards (SMPTE, ITU-R) and seminal papers
If you work through this entire collection, you’ll emerge with the technical depth to:
- Design and implement computer vision pipelines from capture to display
- Optimize performance-critical code with SIMD and GPU acceleration
- Reason about probabilistic systems and uncertainty quantification
- Make informed architectural decisions backed by mathematical foundations
- Communicate complex technical concepts clearly and precisely
Time investment: ~35–45 hours of focused reading. Payoff: a curated curriculum equivalent to multiple graduate-level courses in CV, HPC, applied mathematics, and modern generative models.
Table of Contents
- Monthly Rollup
- July 2013 — 1 post
- August 2013 — 1 post
- May 2024 — 2 posts
- September 2024 — 9 posts
- October 2024 — 1 post
- December 2024 — 1 post
- January 2025 — 3 posts
- February 2025 — 12 posts
- March 2025 — 9 posts
- September 2025 — 3 posts
- October 2025 — 3 posts
- November 2025 — 2 posts
- December 2025 — 17 posts
- January 2026 — 11 posts
- February 2026 — 5 posts
- Quarterly Rollup
- Semiannual Rollup
- Annual Rollup
Monthly Rollup
July 2013 — 1 post
August 2013 — 1 post
May 2024 — 2 posts
- May 01: Introducing the Atul Singh Notes Blog
- May 15: How the Learning Collection Gets Built Each Week
September 2024 — 9 posts
- Sep 20: Introduction to Kalman Filtering and State Estimation
- Sep 21: Fundamentals of Recursive Filtering
- Sep 22: Bayesian Foundations of Kalman Filtering
- Sep 23: Complete Mathematical Derivation of the Kalman Filter
- Sep 24: Implementing the Kalman Filter in Python
- Sep 25: Real-World Applications of Kalman Filtering
- Sep 26: Nonlinear Extensions: EKF, UKF, and Particle Filters
- Sep 27: Advanced Topics and Future Directions in Kalman Filtering
- Sep 30: MathJax Test Page
October 2024 — 1 post
December 2024 — 1 post
January 2025 — 3 posts
- Jan 10: Understanding ACES, Scene-Referred Workflows, and Color Space Conversions
- Jan 15: The Hidden Symmetry of Inverse Sine and Cosine
- Jan 27: Why Intersection Fails in Lagrange Multipliers: The Geometry of Optimization
February 2025 — 12 posts
- Feb 01: From Gradients to Hessians: How Optimization Shapes Vision & ML
- Feb 10: Understanding the Implicit Function Theorem
- Feb 15: Culpable: Word Family, Nuance, and Indian Newsroom Examples
- Feb 18: Understanding Futures and Promises in Modern C++
- Feb 19: Composing Futures in Modern C++
- Feb 20: Mastering std::async in Modern C++
- Feb 21: Stochastic Processes and the Art of Sampling Uncertainty
- Feb 22: Beyond Basics: Importance, Gibbs, and Stratified Sampling
- Feb 24: Logarithmic Color Spaces, PCA, and the lαβ Intuition
- Feb 26: SIMD Intrinsics: From SSE to AVX2 in Practice
- Feb 27: Template Programming Frontiers
- Feb 28: GPU Kernel Programming: Grids, Blocks, and Warps Explained
March 2025 — 9 posts
- Mar 05: Random vs Stochastic: Clarifying Variables, Processes, Sampling, and Optimization
- Mar 08: From ISA to GPU Kernels: Bridging SIMD Mindsets
- Mar 10: Publishing Cadence Summary
- Mar 12: PyTorch Tensor Indexing: From 1D Slices to N-Dimensional Views
- Mar 15: Evolution of Optimization: From Equations to Gradients
- Mar 18: Bitmap-Based Gamut Precomputation for Real-Time Color Management
- Mar 20: Understanding KL Divergence: Measuring One Distribution Against Another
- Mar 22: Frequency Domain vs Spatial Domain: How Images Reveal Different Stories
- Mar 23: Linear Algebra Journey: Vector Spaces
September 2025 — 3 posts
- Sep 22: From Elementary Mathematics to Vision Algorithms: The Hidden Life of Normalized Power Sums
- Sep 22: How Cameras and Eyes See Light: From Spectra to Illuminants
- Sep 30: Gripe, Complaint, or Grievance? Understanding Usage in Indian Newspapers (September 2025)
October 2025 — 3 posts
- Oct 04: Why Direct Sampling from PDFs or PMFs Is So Hard
- Oct 13: How Pix2Seq-D Generates Panoptic Masks Without Heavy Inductive Biases
- Oct 22: C++ Reference Types Explained: lvalue, rvalue, and Universal References
November 2025 — 2 posts
- Nov 14: Adaptive Dual-Teacher Distillation for Lightweight Video Models
- Nov 16: From Funnels to Loops: Distribution Shifts in the AI Era
December 2025 — 17 posts
- Dec 23: The Curse of Dimensionality: Why High-Dimensional Spaces Are So Strange
- Dec 24: The Normalization Constant Problem: Why Computing Z Is So Hard
- Dec 25: Why Discriminative Learning Dominated First: The Z Problem in Historical Context
- Dec 26: Machine Learning Paradigms: Distributions vs Functions
- Dec 27: Infinite Total Variation of Brownian Motion: Why the Path Length Diverges
- Dec 27: Understanding Contrast in Images: From Perception to Computation
- Dec 27: Understanding Contrast in Color Images: Beyond Luminance
- Dec 28: Itô Calculus: Why We Need New Rules for Stochastic Differential Equations
- Dec 28: Measuring Contrast Between Two Color Images: Comparison Metrics and Methods
- Dec 29: Comparing Contrast Across Different Images: Content-Independent Metrics
- Dec 29: The Landscape of Differential Equations: From ODEs to PDEs to SDEs
- Dec 30: Mathematical Properties of Brownian Motion: A Visual Guide
- Dec 30: Measuring Contrast Between SDR and HDR Images: Bridging Dynamic Range Domains
- Dec 30: Understanding Color Balance in Images and Video
- Dec 31: About Atul Singh: Technical Portfolio and Expertise
- Dec 31: From Brownian Motion to Modern Generative Models: The Stochastic Foundation of Diffusion and Flow Models
- Dec 31: Unsupervised Learning for Contrast Prediction Without Ground Truth
January 2026 — 11 posts
- Jan 01: How Variational Autoencoders Avoid Computing the Partition Function
- Jan 01: Expected Value & Expectation: Mathematical Foundations
- Jan 15: Interactive Depth vs Disparity Visualization
- Jan 15: Depth Maps in Computer Vision: From Stereo Geometry to Neural Networks
- Jan 16: Image Matting: Estimating Accurate Mask Edges for Professional Compositing
- Jan 17: Video Matting: Temporal Consistency and Real-Time Foreground Extraction
- Jan 18: Mask Refinement Workflow for Premiere Pro and DaVinci Resolve
- Jan 27: Matrix Determinants: The Leibniz Theorem from First Principles
- Jan 28: Signed Volume: Geometric Interpretation of Determinants
- Jan 29: Bijective Functions and Invertibility
- Jan 30: Symmetric Groups from First Principles
February 2026 — 5 posts
- Feb 02: Dijkstra’s Legacy: Modern ML Uses
- Feb 03: Learning Rate Schedulers: Intuitions and Use Cases
- Feb 13: Taylor Series Expansion: Intuition and Practical Approximation
- Feb 14: Zeroing Model Weights and Differentiability: What’s Really Happening?
- Feb 14: Hollywood Color Pipeline: Dailies, DI Show LUTs, and Deliverables
Quarterly Rollup
- 2013 Q3 (Jul–Sep) — 2 early posts revisiting edge detection operators and the Canny detector workflow.
- 2024 Q2 (Apr–Jun) — 2 posts launching the site and documenting the learning pipeline in May.
- 2024 Q3 (Jul–Sep) — 9 posts that build the Kalman Filtering series end-to-end, capped by a MathJax rendering check.
- 2024 Q4 (Oct–Dec) — 2 posts mixing affective semantics with a look at OCR’s leap from Tesseract to transformers.
- 2025 Q1 (Jan–Mar) — 24 posts spanning optimization math, language studies, the C++ futures trilogy, SIMD/GPU programming, tensor indexing, template metaprogramming, KL divergence, frequency-domain intuition, linear algebra foundations, and a multi-part sampling primer.
- 2025 Q3 (Jul–Sep) — 3 posts blending vision research write-ups with a language usage study.
- 2025 Q4 (Oct–Dec) — 22 posts covering C++ reference semantics, knowledge distillation, distribution shifts, sampling theory, panoptic segmentation, the curse of dimensionality, generative models and the partition function problem, discriminative vs generative learning history, ML paradigms, differential equation primers, color balance, the comprehensive 6-part image contrast series, plus Brownian motion, diffusion models, stochastic calculus, and total variation concepts.
- 2026 Q1 (Jan–Mar) — 16 posts (so far) on variational autoencoders, expected value foundations, depth estimation, image/video matting, determinants, symmetric groups, ML learning-rate tooling, Taylor-series intuition, differentiability edge cases, and production color pipelines.
Semiannual Rollup
- 2013 H2 (Jul–Dec) — 2 posts capturing classic edge detection workflows.
- 2024 H1 (Jan–Jun) — 2 foundational posts laying out the project mission and note-taking workflow.
- 2024 H2 (Jul–Dec) — 11 posts focused on probabilistic state estimation, affective semantics, and the evolution of OCR tooling.
- 2025 H1 (Jan–Jun) — 24 posts covering advanced calculus, optimization, C++ concurrency, SIMD/GPU programming, color science, tensor indexing, template metaprogramming, KL divergence, linear algebra foundations, and sampling methods.
- 2025 H2 (Jul–Dec) — 25 posts summarizing vision research, C++ reference semantics, knowledge distillation, distribution shifts, probabilistic modeling, the curse of dimensionality, generative modeling and the partition function problem, discriminative vs generative learning history, ML paradigms, differential equation primers, color balance, Brownian motion, diffusion models, stochastic calculus, total variation, and a comprehensive 6-part image contrast series spanning grayscale, color, same-content comparison, different-content comparison, SDR/HDR cross-domain analysis, and unsupervised ML prediction.
- 2026 H1 (Jan–Jun) — 16 posts (so far) covering variational autoencoders, expected value foundations, depth estimation, image/video matting, determinants, symmetric groups, ML learning-rate tooling, Taylor-series approximation, differentiability edge cases, and Hollywood color workflow fundamentals.
Annual Rollup
- 2013 — 2 posts: Early explorations in edge detection operators and the Canny detector workflow.
- 2024 — 13 posts: From the site introduction to a comprehensive Kalman Filtering curriculum, affective vocabulary studies, and a survey of OCR advances.
- 2025 — 49 posts: Deep dives into color science, optimization, functional analysis, C++ concurrency (futures/promises, std::async, reference types), SIMD/GPU programming, tensor indexing, template metaprogramming, KL divergence, frequency-domain intuition, linear algebra foundations, sampling theory, computer vision research notes (knowledge distillation, panoptic segmentation, distribution shifts), the curse of dimensionality, generative models and the partition function problem, discriminative vs generative learning history, ML paradigms (distributions vs functions), differential equations, color balance, Brownian motion, diffusion models, stochastic calculus (Itô calculus, SDEs), total variation, and a comprehensive 6-part image contrast series covering grayscale contrast fundamentals, color contrast, same-content comparison metrics, content-independent comparison methods, SDR/HDR cross-domain analysis with tone mapping, and unsupervised ML for contrast prediction.
- 2026 (Jan–) — 16 posts (ongoing): Variational autoencoders, expected value foundations, depth estimation, image/video matting, determinants, symmetric groups, learning-rate scheduler intuition, Taylor-series approximation, differentiability edge cases, and production color pipeline workflows.