<?xml version="1.0" encoding="UTF-8"?>
<rss  xmlns:atom="http://www.w3.org/2005/Atom" 
      xmlns:media="http://search.yahoo.com/mrss/" 
      xmlns:content="http://purl.org/rss/1.0/modules/content/" 
      xmlns:dc="http://purl.org/dc/elements/1.1/" 
      version="2.0">
<channel>
<title>Machine Learning Systems</title>
<link>https://mlsysbook.ai/newsletter/</link>
<atom:link href="https://mlsysbook.ai/newsletter/index.xml" rel="self" type="application/rss+xml"/>
<description>Two-volume textbook on ML systems: foundations and single-machine deployment (Vol I), distributed systems and scale (Vol II). Free to read online.</description>
<generator>quarto-1.9.37</generator>
<lastBuildDate>Tue, 17 Mar 2026 00:00:00 GMT</lastBuildDate>
<item>
  <title>Community Spotlight: TinyML Innovation Emerging from Colombian Universities</title>
  <dc:creator>Diego Méndez</dc:creator>
  <link>https://mlsysbook.ai/newsletter/posts/2026/2026-03-17_community-spotlight-tinyml-innovation-emerging-from-colombian-universities.html</link>
  <description><![CDATA[ 





<p class="MsoNormal" style="text-align: justify;">Our TinyML&nbsp;<a target="_blank" rel="noopener noreferrer nofollow" href="https://discuss.tinyml.seas.harvard.edu/t/the-tinyml4d-academic-network-18th-show-and-tell-will-be-thursday-feburary-26th-2026/1864"><strong>Show&amp;Tell</strong></a>&nbsp;sessions continue to highlight an encouraging trend: students around the world are not only learning about embedded AI, they are actively building real systems with it. A recent session featured projects from Colombia, where a growing community of students and researchers is exploring how TinyML and EdgeAI can support applications in healthcare, rehabilitation, and privacy-preserving sensing.</p><p class="MsoNormal" style="text-align: justify;">Many of these projects are connected to the work of&nbsp;<a target="_blank" rel="noopener noreferrer nofollow" href="https://perfilesycapacidades.javeriana.edu.co/en/persons/diego-mendez/"><strong>Diego Méndez</strong></a>, a full professor at&nbsp;<a target="_blank" rel="noopener noreferrer nofollow" href="https://www.javeriana.edu.co/inicio"><strong>Pontificia Universidad Javeriana</strong></a>&nbsp;in Bogotá, whose students are developing embedded AI systems through research and experimentation. 
Their work reflects the impact of engaging with the TinyML/EdgeAI ecosystem and collaborating within its global community.</p><p class="MsoNormal" style="text-align: justify;">This momentum can be traced in part to the&nbsp;<strong>SciTinyML 2025 Workshop</strong>&nbsp;held in&nbsp;Bogotá last year, organized by the&nbsp;<a target="_blank" rel="noopener noreferrer nofollow" href="https://tinyml.seas.harvard.edu/"><strong>TinyMLedu</strong></a>&nbsp;initiative and partially supported by the&nbsp;<a target="_blank" rel="noopener noreferrer nofollow" href="https://www.edgeaifoundation.org/"><strong>EdgeAI Foundation</strong></a>,&nbsp;<a target="_blank" rel="noopener noreferrer nofollow" href="https://www.edgeimpulse.com/"><strong>Edge Impulse</strong></a>&nbsp;and&nbsp;<a target="_blank" rel="noopener noreferrer nofollow" href="https://www.seeedstudio.com/"><strong>Seeed Studio</strong></a>. Workshops like this often act as catalysts, bringing together students, educators, and practitioners who continue collaborating and building long after the event ends.</p><p class="MsoNormal" style="text-align: justify;">Today, we highlight a few of the projects emerging from this growing TinyML/EdgeAI community in Colombia.</p><p class="MsoNormal" style="text-align: justify;"><strong><em>Postural Monitoring with TinyML and Federated Learning</em></strong><em>&nbsp;(Ángela Torres - Universidad del Rosario)</em></p><p class="MsoNormal" style="text-align: justify;">Ángela presented her work on the validation of an IoT system with federated learning and TinyML for postural monitoring in a simulated healthcare environment. 
The system combines embedded sensing with on-device machine learning to detect posture patterns while allowing models to improve collaboratively through federated learning.</p><p class="MsoNormal" style="text-align: justify;">By keeping sensitive health data on local devices and sharing only model updates, the approach enables privacy-preserving monitoring, an important requirement for many healthcare applications.</p><p class="MsoNormal" style="text-align: justify;"><strong><em>Embedded Movement Intention Detection for Post-Stroke Rehabilitation</em></strong><em>&nbsp;(Andrés Gómez - Pontificia Universidad Javeriana)</em></p><p class="MsoNormal" style="text-align: justify;">Andrés presented the results of his master’s thesis: an embedded movement intention detection system based on surface electromyography (sEMG) designed to support the smart rehabilitation of post-stroke patients.</p><p class="MsoNormal" style="text-align: justify;">The system processes muscle activity signals directly on an embedded device to infer a patient’s intended movement in real time. This capability could enable more responsive rehabilitation devices and assistive technologies that adapt to a patient’s intent. Andrés’ research has also resulted in peer-reviewed journal publications.</p><p class="MsoNormal" style="text-align: justify;"><strong><em>Albaricoque: A Privacy-Preserving Human Detection Node</em></strong><em>&nbsp;(Sergio Mesa - Pontificia Universidad Javeriana)</em></p><p class="MsoNormal" style="text-align: justify;">Sergio presented the preliminary results of his master’s thesis: Albaricoque, a TinyML-based human detection node designed with privacy in mind. 
By using ultrasonic and passive infrared (PIR) sensors and performing inference locally on the device, the system can detect human presence without relying on cameras, avoiding the transmission of sensitive visual data.</p><p class="MsoNormal" style="text-align: justify;">The project has already gained international recognition: Albaricoque was selected as one of the winning entries in the global&nbsp;<a target="_blank" rel="noopener noreferrer nofollow" href="https://www.edgeimpulse.com/blog/edge-impulse-contest-2025-winners/"><strong>Edge Impulse Hackathon</strong></a>&nbsp;in December 2025.</p><figure class="figure"><img src="https://assets.buttondown.email/images/687f3ef2-ce6a-4007-aad4-4b24359f1b36.png?w=960&amp;fit=max" draggable="false" class="figure-img"><figcaption></figcaption></figure><p class="MsoNormal" style="text-align: justify;">Together, these projects show how local initiatives can help grow a global ecosystem of TinyML and EdgeAI builders. The Bogotá workshop helped spark a community that is now producing research, prototypes, and student-led innovations.</p><p class="MsoNormal" style="text-align: justify;">This is exactly what we hope to see: students building real systems, sharing their work, and helping expand the global TinyML community.&nbsp;We welcome students to <a target="_blank" rel="noopener noreferrer nofollow" href="https://docs.google.com/forms/d/e/1FAIpQLSeuZJ7XZh7NO0zGfS1HUhDyviyEwZxZjs9Ebt4O4NhyKbXQIQ/viewform"><strong>register</strong></a><strong>&nbsp;to present at an upcoming Show&amp;Tell session</strong>!</p><p>Written by Professor Diego Méndez<br><a target="_blank" rel="noopener noreferrer nofollow" href="https://www.javeriana.edu.co/inicio"><strong>Pontificia Universidad Javeriana</strong></a></p><figure class="figure"><img src="https://assets.buttondown.email/images/7f8edf0c-3457-4386-8a2d-4a8d0efb24bd.png?w=960&amp;fit=max" draggable="false" class="figure-img"><figcaption></figcaption></figure>



 ]]></description>
  <category>community</category>
  <guid>https://mlsysbook.ai/newsletter/posts/2026/2026-03-17_community-spotlight-tinyml-innovation-emerging-from-colombian-universities.html</guid>
  <pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate>
  <media:content url="https://assets.buttondown.email/images/b9eb5bda-71c9-4e95-bc2f-ca08c72337c5.png" medium="image" type="image/png"/>
</item>
<item>
  <title>The Model Is Not the Product</title>
  <dc:creator>Vijay Janapa Reddi</dc:creator>
  <link>https://mlsysbook.ai/newsletter/posts/2026/2026-03-10_the-model-is-not-the-product.html</link>
  <description><![CDATA[ 





<p>Everyone talks about models. But in production AI systems, the model is only a small piece of the story.</p><p><strong>The model is not the product.</strong></p><p>Last month I argued that <a target="_blank" rel="noopener noreferrer nofollow" href="https://buttondown.com/mlsysbook/archive/the-shift-toward-ai-engineering/">AI Engineering is the missing discipline</a>. The response confirmed what I suspected: the gap is real, and people feel it. But defining a field is only step one. The harder question is: what does an AI engineer actually need to <em>see</em> what others miss?</p><h2 class="anchored">The 5% Problem</h2><p>In 2015, a team of Google engineers <a target="_blank" rel="noopener noreferrer nofollow" href="https://proceedings.neurips.cc/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf">audited their production ML systems</a> and published a finding that should have changed how everyone thinks about AI: the model, the neural network, the thing everyone obsesses over, accounted for roughly 5% of the total code (see Figure 1). The other 95% was data pipelines, feature extraction, serving infrastructure, configuration management, and monitoring.</p><figure class="figure"><img src="https://assets.buttondown.email/images/26cca9a4-3641-4e18-bd0a-e2636385a614.png?w=960&amp;fit=max" alt="image.png" draggable="false" class="figure-img"><figcaption><strong><em>Figure 1: The relative hidden technical debt in machine learning systems by Sculley et al. 2015.</em></strong></figcaption></figure><p>That ratio has not changed. If anything, it has gotten worse. The models have gotten larger, but the systems around them have grown faster.</p><p>This is the gap that most AI education ignores. We teach people to build the 5% and hope they figure out the other 95% on the job.</p><h2 class="anchored">Data, Algorithm, Machine</h2><p>So how do you reason about the other 95%? 
In the <a target="_blank" rel="noopener noreferrer nofollow" href="https://mlsysbook.ai">Machine Learning Systems textbook</a>, I use a simple diagnostic framework I call the <strong>D-A-M taxonomy</strong>: every ML system is shaped by three forces. <strong>Data</strong> is the fuel, what you train on and how you move it. <strong>Algorithm</strong> is the blueprint, the math that turns data into predictions. <strong>Machine</strong> is the engine, the silicon, memory, and power budget you actually have to work with (see Figure 2).</p><p>The key insight is that these three forces are <em>interdependent</em>. Compressing a model to fit on a phone changes its accuracy. Doubling the training data demands more compute. Switching from a CPU to a GPU reshapes which algorithms are even practical. Change any one axis, and the others must adapt.</p><figure class="figure"><img src="https://assets.buttondown.email/images/bd332968-2f6c-4464-b905-dccb9abd9085.png?w=960&amp;fit=max" alt="The D-A-M Taxonomy: Data, Algorithm, and Machine are interdependent axes. The intersections (what to learn from, how to move information, how to execute efficiently) are where ML systems engineering lives." draggable="false" class="figure-img"><figcaption><strong><em>Figure 2: The D-A-M Taxonomy: Data, Algorithm, and Machine are interdependent axes. The intersections (what to learn from, how to move information, how to execute efficiently) are where ML systems engineering lives.</em></strong></figcaption></figure><p>This sounds abstract until you see it in the real world.</p><p>In 2012, Krizhevsky split <a target="_blank" rel="noopener noreferrer nofollow" href="https://papers.nips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html">AlexNet</a> across two GPUs because neither had enough memory to hold the whole network. A Machine constraint shaped the Algorithm that launched the deep learning revolution.</p><p>That was two GPUs and 6 GB of memory. 
What happens when the constraint is an entire country's hardware supply?</h2><h2 class="anchored">DeepSeek: When Constraints Become Advantages</h2><p>In late 2024, DeepSeek released <a target="_blank" rel="noopener noreferrer nofollow" href="https://arxiv.org/abs/2412.19437">V3</a>, a model that matched GPT-4 on most benchmarks. What stunned the field was not the performance. It was the cost: <strong>$5.576 million</strong> (see Figure 3) for the final training run, against an estimated $100 million or more for GPT-4.</p><p>How did they close that gap? Not by finding a shortcut, but by being forced into better engineering.</p><p>US export controls barred China from buying NVIDIA's top-tier H100 GPUs. DeepSeek trained on H800s, the same silicon but with NVLink bandwidth cut by 56%. That single Machine constraint cascaded across every axis of the D-A-M taxonomy.</p><figure class="figure"><img src="https://assets.buttondown.email/images/8d5fb20a-1f29-49ac-a647-9ed220754e00.png?w=960&amp;fit=max" draggable="false" class="figure-img"><figcaption><strong><em>Figure 3: How one hardware constraint cascaded into three architectural innovations that produced a frontier model at a fraction of the cost.</em></strong></figcaption></figure><p><strong>Machine → Algorithm: the hardware forced a new architecture.</strong> With half the interconnect bandwidth, the standard way of splitting a model across GPUs was prohibitively expensive. So DeepSeek engineered around it: they built a 671-billion-parameter <a target="_blank" rel="noopener noreferrer nofollow" href="https://arxiv.org/abs/1701.06538">Mixture-of-Experts</a> model where only 37 billion parameters activate per token. Think of it like a hospital with 100 specialists but only 5 in the room for any given patient. 
Less data moves through the system on every forward pass, by design.</p><p><strong>Machine → Data: the hardware forced a new number format.</strong> They trained in FP8, 8-bit precision instead of the standard 16, cutting memory usage in half and doubling effective bandwidth. The tradeoff is that lower precision can destabilize training, which required new data engineering to manage.</p><p><strong>Algorithm → Data: the algorithm forced a new memory strategy.</strong> Their Multi-head Latent Attention compresses the model's working memory by over 98%, fundamentally changing how much data moves during inference. Fewer bytes per token means faster serving and lower cost.</p><p>The result: a frontier model trained for a fraction of what competitors spent. Not because DeepSeek had better researchers (though they very well might). Because <strong>the constraint forced a better architecture.</strong></p><p>Export controls meant to slow China's AI progress may have produced the most hardware-efficient training system anyone has built.</p><p>DeepSeek's team published their <a target="_blank" rel="noopener noreferrer nofollow" href="https://arxiv.org/abs/2408.14158">infrastructure work at ISCA 2025</a>, a computer architecture conference, not an AI one, explicitly framing it as hardware-software co-design. That venue choice tells you everything about where the real innovation happened.</p><h2 class="anchored">It Is Not Just the Model</h2><p>DeepSeek shows how Machine constraints reshape Algorithms. But even when the architecture is right, the <em>infrastructure</em> can be the bottleneck.</p><p>When Meta trained <a target="_blank" rel="noopener noreferrer nofollow" href="https://arxiv.org/abs/2407.21783">Llama 3</a>, they used 16,384 H100 GPUs for 54 days. During that run, they recorded <strong>419 hardware failures</strong>, roughly one every three hours. GPUs crashed, network links dropped, storage nodes failed. 
The dominant engineering challenge was not the model architecture; it was keeping the cluster alive long enough to finish a checkpoint.</p><p>This is the other 95% in action. The model architecture worked fine. The engineering challenge was everything around it: detecting failures, routing around dead nodes, restarting from checkpoints, and coordinating 16,384 GPUs that would rather not cooperate.</p><p>This is exactly the kind of reasoning I want future AI engineers to develop. Not just "how do I train a model" but "where is my system actually breaking, and which axis do I fix first?"</p><p>Here is the practical takeaway: the next time you hit a wall in your ML system, before you reach for a bigger model or more data, ask which axis is actually the bottleneck. Is it Data (you are starving for quality inputs)? Algorithm (you are using the wrong architecture for your hardware)? Machine (you are memory-bound and no algorithm change will help)? That question is where DeepSeek started, and the constraint is what made the answer interesting.</p><h2 class="anchored">Your Turn</h2><p>The lesson is simple. Frontier AI is no longer just about better models. It is about better systems thinking.</p><p>When an ML system breaks, the cause is almost always on one of three axes. <strong>Data. Algorithm. Machine.</strong> AI engineering begins where those three forces collide.</p><p>DeepSeek turned a hardware handicap into an architectural advantage. Meta turned 419 failures into a resilience playbook.</p><p>What constraint shaped <em>your</em> system? And did you fight it, or did you let it guide you?</p><p><em>Next month: the full stack of AI engineering, from silicon to serving. What each layer does, where the bottlenecks hide, and why no one teaches it end-to-end.</em></p><h2 data-pm-slice="1 1 []" class="anchored">What the Community Is Building</h2><p>A discipline does not emerge from a single book or tool. It emerges from people building things together.  
</p><p>Visit <a target="_blank" rel="noopener noreferrer nofollow" href="http://MLSysBook.ai"><strong>MLSysBook.ai</strong></a> for curriculum updates, hardware recommendations, and TinyTorch learning materials.</p><p>If you would like to support our community outreach and global workshops, consider contributing at <a target="_blank" rel="noopener noreferrer nofollow" href="http://opencollective.com/mlsysbook"><strong>opencollective.com/mlsysbook</strong></a>.</p><h2 class="anchored">Further Reading</h2><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow" href="https://proceedings.neurips.cc/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf">Hidden Technical Debt in Machine Learning Systems</a> (Sculley et al., NeurIPS 2015). The original "5% problem" paper from Google.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow" href="https://arxiv.org/abs/2412.19437">DeepSeek-V3 Technical Report</a> (DeepSeek-AI, 2024). Full details on MoE, MLA, DualPipe, and FP8 training under hardware constraints.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow" href="https://arxiv.org/abs/2408.14158">Fire-Flyer AI-HPC: A Cost-Effective Software-Hardware Co-Design for Deep Learning</a> (An et al., ISCA 2025). DeepSeek's infrastructure paper on how they built around the H800's bandwidth limitations.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow" href="https://arxiv.org/abs/2407.21783">The Llama 3 Herd of Models</a> (Meta, 2024). 
Section 4 covers the 16,384-GPU training infrastructure and the 419 hardware failures in 54 days.</p></li></ul><hr><p><a target="_blank" rel="noopener noreferrer nofollow" href="https://mlsysbook.ai"><em>Machine Learning Systems</em></a><em> is a two-volume open textbook on the physics of AI engineering.</em></p><p><a target="_blank" rel="noopener noreferrer nofollow" href="https://buttondown.com/mlsysbook"><em>Subscribe</em></a><em> | </em><a target="_blank" rel="noopener noreferrer nofollow" href="https://github.com/harvard-edge/cs249r_book"><em>GitHub</em></a><em> | </em><a target="_blank" rel="noopener noreferrer nofollow" href="https://mlsysbook.ai/tinytorch/"><em>TinyTorch</em></a></p>
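<p><em>Appendix: a toy sketch of the sparse activation behind Mixture-of-Experts. This is illustrative only, not DeepSeek's implementation; the expert count, dimensions, and router here are made-up toy values. The point is structural: a router scores all experts, but only the top-k actually run per token, so only a fraction of the layer's parameters are active on any forward pass.</em></p>

```python
# Toy sketch of top-k expert routing in a Mixture-of-Experts layer.
# Hypothetical sizes for illustration; real MoE models use far more experts
# and learned weights rather than random ones.
import math
import random

random.seed(0)

N_EXPERTS = 8   # toy expert count
TOP_K = 2       # experts activated per token
DIM = 4         # toy hidden dimension

# Each expert is a toy DIM x DIM weight matrix; the router is DIM x N_EXPERTS.
experts = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(N_EXPERTS)]
router = [[random.gauss(0, 1) for _ in range(N_EXPERTS)] for _ in range(DIM)]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def matvec(w, x):
    # Multiply matrix w (rows of length len(x)) by vector x.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def moe_forward(x):
    """Route token x to its TOP_K highest-scoring experts and mix their outputs."""
    scores = matvec(list(zip(*router)), x)   # one gating score per expert
    top = sorted(range(N_EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    gates = softmax([scores[i] for i in top])  # renormalize over chosen experts
    out = [0.0] * DIM
    for g, i in zip(gates, top):
        for d, v in enumerate(matvec(experts[i], x)):
            out[d] += g * v                    # gate-weighted expert mixture
    return out, top

token = [0.5, -1.0, 0.25, 2.0]
y, active = moe_forward(token)
print(f"active experts: {sorted(active)}, "
      f"active fraction of expert params: {TOP_K / N_EXPERTS:.0%}")
```

<p><em>With 8 experts and top-2 routing, only a quarter of the expert parameters do work for any given token; that is the same mechanism, scaled down, by which DeepSeek-V3 activates 37B of its 671B parameters.</em></p>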



 ]]></description>
  <category>essay</category>
  <guid>https://mlsysbook.ai/newsletter/posts/2026/2026-03-10_the-model-is-not-the-product.html</guid>
  <pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Correction: Show &amp; Tell Time Update (Please Note)</title>
  <dc:creator>Vijay Janapa Reddi</dc:creator>
  <link>https://mlsysbook.ai/newsletter/posts/2026/2026-02-26_correction-show-tell-time-update-please-note.html</link>
  <description><![CDATA[ 





<p>Hello everyone,</p><p data-start="77" data-end="191">We made a mistake in the newsletter regarding the time for Thursday’s Student Show &amp; Tell — thank you for your patience!</p><p data-start="193" data-end="213">The correct time is:</p><p data-start="215" data-end="271">📅 <strong>Thursday, February 26 at 9:00am EST / 2:00pm GMT</strong></p><p data-start="273" data-end="331">Please tune in via the Google Meet link <a target="_blank" rel="noopener noreferrer nofollow" href="https://meet.google.com/rns-yyrx-ggw">here</a>.</p><p data-start="333" data-end="472">🇨🇴 Featuring presentations from students in Colombia<br>🛠️ Each presentation is 10–15 minutes and focused on real, applied systems work</p><p data-start="474" data-end="538">We’re looking forward to seeing you there and learning together.</p><p data-start="540" data-end="554" data-is-last-node="" data-is-only-node="">Warmly,<br>Kari</p>



 ]]></description>
  <category>community</category>
  <guid>https://mlsysbook.ai/newsletter/posts/2026/2026-02-26_correction-show-tell-time-update-please-note.html</guid>
  <pubDate>Thu, 26 Feb 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Community Update: Kits and This Month’s Student Show and Tell</title>
  <dc:creator>Vijay Janapa Reddi</dc:creator>
  <link>https://mlsysbook.ai/newsletter/posts/2026/2026-02-24_community-update-kits-and-this-months-student-show-and-tell.html</link>
  <description><![CDATA[ 





<p>Hello Everyone,</p><p data-start="17" data-end="314">Welcome to this week’s MLSysBook community update, where we share progress, tools, and opportunities to help build the discipline of AI engineering together. This week, we’re spotlighting the hardware kits that power hands-on learning and inviting you to see student systems in action at our upcoming TinyML Show &amp; Tell.</p><p data-start="1712" data-end="2024"><strong>Tune in to this month’s Show &amp; Tell via the </strong><a target="_blank" rel="noopener noreferrer nofollow" href="https://meet.google.com/rns-yyrx-ggw"><strong>Google Meet link</strong></a></p><p data-start="2" data-end="43">📅 Thursday, February 26 at 9am EST, 2pm GMT</p><p data-start="46" data-end="106">🇨🇴 Featuring presentations from students in Colombia</p><p data-start="109" data-end="191" data-is-last-node="">🛠️ Each presentation is 10-15 minutes and focused on real, applied systems work</p><p data-start="2356" data-end="2570"><strong>For students:</strong> If you want to share your TinyML project, we would love to see you at a future Show and Tell. These sessions are designed to give you a platform to present your work, explain your systems decisions, and engage with a global audience. (<a target="_blank" rel="noopener noreferrer nofollow" href="https://docs.google.com/forms/d/e/1FAIpQLSeuZJ7XZh7NO0zGfS1HUhDyviyEwZxZjs9Ebt4O4NhyKbXQIQ/viewform?usp=header">Register here</a>)</p><figure class="figure"><img src="https://assets.buttondown.email/images/6938009e-1966-4396-837b-36b3e6a04de9.png?w=960&amp;fit=max" draggable="false" class="figure-img"><figcaption></figcaption></figure><p>Many educators and students ask the same question: <strong>where do we begin if we want to build real TinyML and Edge AI systems?</strong></p><p>We have curated a small set of hardware kits that we recommend for hands-on coursework, research, and student projects. 
You can find them here:</p><figure class="figure"><img src="https://assets.buttondown.email/images/aa018a8a-208c-4f4e-9160-b1246409ce6d.jpg?w=960&amp;fit=max" draggable="false" class="figure-img"><figcaption>TinyML Kits: <a target="_blank" rel="noopener noreferrer nofollow" href="https://mlsysbook.ai/kits/">https://mlsysbook.ai/kits/</a></figcaption></figure><p data-start="17" data-end="314">These kits are designed to expose learners to the realities that define AI engineering in practice, including memory limits, latency constraints, power considerations, and deployment tradeoffs. They are well suited for TinyML and Edge AI courses, capstone projects, and research prototypes.</p><p><strong>For educators:</strong> If you are teaching and would like guidance on getting started, simply reply. For those in the Global South, you may apply for kits <a target="_blank" rel="noopener noreferrer nofollow" href="https://forms.gle/mUurCTnKZGSwwYpj6">here</a>.</p><p data-start="2572" data-end="2630" data-is-last-node="" data-is-only-node=""><strong>If you would like to </strong><a target="_blank" rel="noopener noreferrer nofollow" href="https://opencollective.com/mlsysbook"><strong>sponsor a kit</strong></a><strong> </strong>for a student or institution that cannot afford one, your support will fund under-resourced educators and learners and help expand access to practical AI engineering education worldwide.</p><figure class="figure"><img src="https://assets.buttondown.email/images/01fd8758-650a-42a1-a4ba-0704dca47100.jpeg?w=960&amp;fit=max" draggable="false" class="figure-img"><figcaption>Working with a SEEED XIAO ML Kit</figcaption></figure><p data-start="2572" data-end="2630" data-is-last-node="" data-is-only-node="">Thank you for being part of this growing global AI Engineering community.</p><p>Kari Janapareddi</p><p>Director of Partnerships and Strategic Operations<br><a target="_blank" rel="noopener noreferrer nofollow" href="http://MLSysbook.ai"><span style="color: rgb(34, 34, 34)">MLSysbook.ai</span></a><span style="color: rgb(34, 34, 34)"> Community</span></p>



 ]]></description>
  <category>community</category>
  <guid>https://mlsysbook.ai/newsletter/posts/2026/2026-02-24_community-update-kits-and-this-months-student-show-and-tell.html</guid>
  <pubDate>Tue, 24 Feb 2026 00:00:00 GMT</pubDate>
  <media:content url="https://assets.buttondown.email/images/5c932b1c-2ba2-4293-b7bf-ad000a54b1bb.png" medium="image" type="image/png"/>
</item>
<item>
  <title>🔧 Building TinyTorch Together — Live Kickoff</title>
  <dc:creator>Vijay Janapa Reddi</dc:creator>
  <link>https://mlsysbook.ai/newsletter/posts/2026/2026-02-04_building-tinytorch-together-live-kickoff.html</link>
  <description><![CDATA[ 





<p><span style="color: rgb(34, 34, 34)">Dear TinyTorch Community,</span></p><p><span style="color: rgb(34, 34, 34)">We’re excited to invite you to the&nbsp;<strong>first TinyTorch kickoff session</strong>—a live Zoom tutorial that marks the beginning of something we’ve been building toward for a while.</span></p><p><span style="color: rgb(34, 34, 34)">TinyTorch is meant to be learned&nbsp;<em>by doing</em>, refined through real use, and shaped by the people who care about making it genuinely teachable and useful. This first session brings together a small group of early community members who are excited to learn alongside us and help us make the material stronger as we go.</span></p><p><span style="color: rgb(34, 34, 34)">If you enjoy rolling up your sleeves, asking questions, trying things out, and sharing what you notice along the way, we’d love to have you join us.</span></p><p><span style="color: rgb(34, 34, 34)">Those who participate in this initial group will be recognized as&nbsp;<strong>Founding Contributors</strong>&nbsp;to TinyTorch. Your feedback, questions, and observations will directly influence how this material evolves—for future learners and educators around the world.</span></p><h3 class="anchored"><span style="color: rgb(34, 34, 34)">What we’ll do together in this first session</span></h3><ul><li><p><span style="color: rgb(34, 34, 34)">Walk through the TinyTorch setup and core tutorial, step by step</span></p></li><li><p><span style="color: rgb(34, 34, 34)">Highlight areas we’re actively refining and would love input on</span></p></li><li><p><span style="color: rgb(34, 34, 34)">Explain how this contributor group will collaborate with a TA and the core team over the coming weeks</span></p></li></ul><p><span style="color: rgb(34, 34, 34)">You don’t need to have everything figured out to participate. 
Curiosity, enthusiasm, and a willingness to engage are what matter most.</span></p><h3 class="anchored"><span style="color: rgb(34, 34, 34)">Details</span></h3><p>&nbsp;<strong>Date: Tuesday, February 10th</strong><br>&nbsp;<strong>Time:</strong>&nbsp;9am EST<span style="color: inherit"><br>&nbsp;<strong>Location:</strong>&nbsp;</span><a target="_blank" rel="noopener noreferrer nofollow" href="https://harvard.zoom.us/webinar/register/WN_W3TpZsfQQwaBn8tGFANfEw#/registration"><span style="color: inherit">Zoom Registration Link</span></a></p><p><span style="color: rgb(34, 34, 34)">If this sounds like something you’d enjoy being part of, please register using the link above. We’ll follow up with next steps and a bit more context before we meet.</span></p><p><span style="color: rgb(34, 34, 34)">I’m grateful for this community and for the energy so many of you bring to learning and building together. If this resonates with you, I’m really glad you’re here.</span></p><p><span style="color: rgb(34, 34, 34)">See you soon,<br>Vijay</span></p>



 ]]></description>
  <category>update</category>
  <guid>https://mlsysbook.ai/newsletter/posts/2026/2026-02-04_building-tinytorch-together-live-kickoff.html</guid>
  <pubDate>Wed, 04 Feb 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Calling for Kit Applications - XIAOML for Applied AI Engineering</title>
  <dc:creator>Vijay Janapa Reddi</dc:creator>
  <link>https://mlsysbook.ai/newsletter/posts/2026/2026-02-03_calling-for-kit-applications-xiaoml-for-applied-ai-engineering.html</link>
  <description><![CDATA[ 





<p>Dear Educator,</p><p>We’re reaching out to invite university educators teaching (or planning to teach) <strong>TinyML and edge AI</strong> to apply to receive <strong>TinyML hardware kits</strong> for use in their courses.</p><p class="isSelectedEnd">These kits are designed to support <strong>underfunded or resource-constrained institutions</strong> that want to offer students meaningful, hands-on experience building AI systems that run in the real world—not just in theory.</p><p class="isSelectedEnd">If this sounds relevant for your classroom, we invite you to apply using this short form:<br>👉 <a target="_blank" rel="noopener noreferrer nofollow" href="https://forms.gle/JFM4hBARfqCsUPN59"><strong>Apply here by March 15th</strong></a></p><p class="isSelectedEnd">The application helps us understand your teaching context, how the kits will be used, and the potential for long-term student impact. Selected educators will receive kits at no cost, along with access to open-source TinyML learning resources and a global educator community.</p><p class="isSelectedEnd">This initiative is made possible through a generous collaboration: <strong>Seeed Studio</strong> developed the kits and offered them at a reduced cost, the <strong>Edge AI Foundation</strong> funded them, and <strong>ICTP</strong> will support distribution to universities across the Global South. We’re deeply grateful for partners who share a commitment to expanding access to hands-on AI education.</p><figure class="figure"><img src="https://assets.buttondown.email/images/af5e7867-27b2-4fd2-be11-b3988d134d6e.png?w=960&amp;fit=max" draggable="false" class="figure-img"><figcaption>XIAOML Kit &amp; Components</figcaption></figure><p>Thank you for the work you do to support and inspire students. 
We’d love to learn more about what you’re building.</p><p><br>Kari Janapareddi</p><p>Director of Partnerships and Strategic Operations<br><a target="_blank" rel="noopener noreferrer nofollow" href="http://MLSysbook.ai"><span style="color: rgb(34, 34, 34)">MLSysbook.ai</span></a><span style="color: rgb(34, 34, 34)"> Community</span></p><p></p>



 ]]></description>
  <category>update</category>
  <guid>https://mlsysbook.ai/newsletter/posts/2026/2026-02-03_calling-for-kit-applications-xiaoml-for-applied-ai-engineering.html</guid>
  <pubDate>Tue, 03 Feb 2026 00:00:00 GMT</pubDate>
  <media:content url="https://assets.buttondown.email/images/895020d4-8862-4711-9677-c95289d59122.png" medium="image" type="image/png"/>
</item>
<item>
  <title>The Shift Toward AI Engineering</title>
  <dc:creator>Vijay Janapa Reddi</dc:creator>
  <link>https://mlsysbook.ai/newsletter/posts/2026/2026-01-27_the-shift-toward-ai-engineering.html</link>
  <description><![CDATA[ 





<!-- buttondown-editor-mode: plaintext -->
<p>My dad told me if I didn’t study computer science in college, he’d throw me out of the house. So I did. In hindsight, he was right. CS gave me the foundation for everything I’ve done over the past 25 years, the way of thinking and problem-solving instincts that still matter every day.</p>
<p>My career didn’t stop at CS. I studied computer engineering and electrical engineering, and over time worked across the stack, from hardware and systems to software and machine learning. Seeing the same constraints resurface at every layer is what made the systems gap impossible to ignore.</p>
<p>That perspective is what makes the current moment in AI stand out to me. Signals from education and industry suggest the landscape has expanded more than most people realize.</p>
<section id="the-shift-you-can-see-in-the-data" class="level2">
<h2 class="anchored" data-anchor-id="the-shift-you-can-see-in-the-data">The Shift You Can See in the Data</h2>
<p>Computer science enrollment is down at <a href="https://cra.org/crn/2025/10/cerp-pulse-survey-a-snapshot-of-2025-undergraduate-computing-enrollment-patterns/">62% of computing programs</a> this year. CS, software engineering, and information systems are all declining. Meanwhile, <a href="https://cra.org/crn/2025/10/cerp-pulse-survey-a-snapshot-of-2025-undergraduate-computing-enrollment-patterns/">66% of programs report</a> that recent graduates are struggling to find jobs. The “learn to code” promise that defined a generation of career advice isn’t landing the way it used to. That makes sense. Coding was never the destination; it was the foundation.</p>
<p>But this isn’t a retreat from technology, and it’s not a sign that CS doesn’t matter. It’s a redirection. Students are asking a more nuanced question: what kind of tech career will last?</p>
<p>The fields that are <em>growing</em>? Computer engineering. Cybersecurity. AI. <a href="https://www.insidehighered.com/news/admissions/traditional-age/2025/11/11/short-term-credentials-bolster-enrollment-boom">Engineering overall is up 7.5%</a>. Students aren’t abandoning tech. They’re repositioning within it.</p>
<p>The <a href="https://cra.org/crn/2025/10/cerp-pulse-survey-a-snapshot-of-2025-undergraduate-computing-enrollment-patterns/">CRA report</a> captures it well: students are gravitating toward majors that feel “more physical and less susceptible to the impact of AI.” They want to build real-world devices, not just write code that might get automated. They’re looking for skills that feel durable.</p>
<p>I think this signals something important. And at the same time, there’s a bigger picture that’s easy to miss.</p>
</section>
<section id="the-gap-thats-quietly-widening" class="level2">
<h2 class="anchored" data-anchor-id="the-gap-thats-quietly-widening">The Gap That’s Quietly Widening</h2>
<p>Here’s what I’ve observed after years of teaching computer and machine learning systems, and working with industry partners: everyone wants to use AI, but very few people understand how to <em>build</em> the systems that make it work at scale.</p>
<p>Who optimizes models to run on phones? Who designs the chips, builds the frameworks, creates the deployment pipelines? Who debugs why a model that worked perfectly in a notebook crashes in production? Who figures out how to serve millions of users without the infrastructure costs exploding?</p>
<p>These questions don’t fit neatly into computer science alone, though CS fundamentals remain essential. They’re not pure software engineering either. And they’re not just electrical engineering.</p>
<p>They require a different kind of thinking—one that spans the full stack from silicon to cloud, from training to deployment, from algorithms to real-world constraints.</p>
</section>
<section id="what-is-ai-engineering" class="level2">
<h2 class="anchored" data-anchor-id="what-is-ai-engineering">What Is AI Engineering?</h2>
<p>I’ve started calling this AI Engineering. Not as a replacement for existing fields. CS, EE, and software engineering remain foundational. But building production AI systems requires a synthesis of skills that don’t live neatly in any one discipline.</p>
<p>Think about what it actually takes to deploy an AI system that works:</p>
<p><strong>The ML theory:</strong> Optimization, statistical learning, how models generalize and when they fail. This is the ML research foundation—understanding <em>why</em> things work, not just <em>that</em> they work.</p>
<p><strong>The ML systems:</strong> Distributed training, hardware acceleration, memory hierarchies, serving infrastructure. This is what makes AI run efficiently at scale, the difference between a demo and a product.</p>
<p><strong>The ML applications:</strong> Problem framing, data collection, domain constraints, what “good enough” actually means in a specific context. This is what connects AI to real-world impact.</p>
<p>Most education gives you one of these. Maybe two if you’re lucky. Almost no one gets all three.</p>
<p>AI Engineering sits at the intersection:</p>
<div class="quarto-figure quarto-figure-center">
<figure class="figure">
<p><img src="https://assets.buttondown.email/images/ef8698a9-e990-4df9-ad5e-15fc09213911.png?w=960&amp;fit=max" class="img-fluid figure-img"></p>
<figcaption>AI Engineering at the intersection of ML theory, ML systems, and ML applications</figcaption>
</figure>
</div>
<p>The ML Systems Researcher lives in the overlap of theory and systems. The Data Scientist bridges theory and application. The MLOps Engineer connects systems and application. The AI Engineer needs to move fluidly across all three. Not as an expert in everything, but with enough literacy across the stack to see how the pieces connect, and enough depth in at least one area to actually build things.</p>
</section>
<section id="why-this-matters-right-now" class="level2">
<h2 class="anchored" data-anchor-id="why-this-matters-right-now">Why This Matters Right Now</h2>
<p>Andrew Ng famously said that AI is the new electricity. He was right. But here’s what’s easy to miss: if AI is electricity, we’re training everyone to use the appliances while almost no one learns how to build the power plants.</p>
<p>Look at what’s actually happening. <a href="https://tomtunguz.com/openai-hardware-spending-2025-2035/">OpenAI has committed to spending $1.15 trillion on infrastructure</a> over the next decade. Microsoft just opened <a href="https://blogs.microsoft.com/blog/2025/09/18/inside-the-worlds-most-powerful-ai-datacenter/">Fairwater, the world’s most powerful AI datacenter</a>. It covers 315 acres, required 26.5 million pounds of structural steel, and runs 120 miles of underground cable. Analysts predict that <a href="https://www.datacenters.com/news/openai-and-the-trillion-dollar-ai-infrastructure-race">building AI infrastructure globally will exceed $1 trillion by 2030</a>, including data centers, power generation, cooling systems, and semiconductor supply chains.</p>
<p>These aren’t software problems. They’re engineering problems. Physical problems. Power, cooling, hardware, optimization, efficiency.</p>
<p>And that’s only half the picture. AI isn’t just running in massive data centers. It’s also moving to the edge.</p>
</section>
<section id="ai-is-going-everywhere" class="level2">
<h2 class="anchored" data-anchor-id="ai-is-going-everywhere">AI Is Going Everywhere</h2>
<p>The <a href="https://finance.yahoo.com/news/tinyml-market-analysis-report-2025-152200302.html">TinyML market is growing at 34% annually</a>, driven by the need to run AI on devices with severe constraints: phones, wearables, medical devices, agricultural sensors, industrial equipment. These systems can’t send data to the cloud and wait for a response. They need to process locally, in real time, with minimal power.</p>
<p>NVIDIA calls this <a href="https://nvidianews.nvidia.com/news/nvidia-releases-new-physical-ai-models-as-global-partners-unveil-next-generation-robots">“Physical AI”</a>. Jensen Huang recently said that “the ChatGPT moment for robotics is here.” Companies like Boston Dynamics, Caterpillar, and LG are building robots that can reason, plan, and act in the real world. <a href="https://techcrunch.com/2026/01/05/nvidia-wants-to-be-the-android-of-generalist-robotics/">Robotics is now the fastest-growing category on Hugging Face</a>.</p>
<p>This is where the “physical computing” instinct that students have makes sense. AI that interacts with the real world requires understanding hardware constraints, power budgets, latency requirements, and sensor systems. You can’t just train a model and throw it over the wall. You have to engineer the entire system.</p>
<p>The students shifting toward computer engineering and hardware? They’re sensing this. The companies scrambling to hire people who understand deployment and optimization? They’re responding to it.</p>
<p>But we do not yet have the educational infrastructure to train AI Engineers at scale. The knowledge exists. It is just scattered across research groups, companies, and a handful of courses. Most people never see the full picture.</p>
<p>That is the gap the <a href="https://mlsysbook.ai">MLSys Book</a> exists to fill. Not by replacing existing disciplines, but by making the connective tissue visible. And that is what this community is here to build together.</p>
<p>And if you want to get hands-on, <a href="https://mlsysbook.ai/tinytorch/intro.html">TinyTorch</a> lets you build your own ML framework from scratch. Because you cannot debug what you did not build.</p>
<p>One last thing that matters to me. The reason this book and these materials are freely available is simple. The ability to build and reason about real AI systems should not depend on access to a specific lab, company, or country.</p>
<p>A lot of the most important engineering knowledge today is still learned through proximity. This project is an attempt to make that knowledge explicit, so more people can participate in building AI systems that are reliable, accountable, and actually work in the real world.</p>
</section>
<section id="whats-coming" class="level2">
<h2 class="anchored" data-anchor-id="whats-coming">What’s Coming</h2>
<p>Over the coming months, I will share what we are learning as this work evolves. New chapters as they are ready. Updates in the field. Hands-on labs that surface real system tradeoffs. Insights from practitioners building and deploying production AI. Resources for educators teaching this emerging field.</p>
<p>This is a work in progress, shaped by the people who engage with it. If there is a topic you want to go deep on (efficient inference, on-device learning, ML operations, or something else entirely), tell me. I read the messages that come in, and they shape what we prioritize next.</p>
<p>If this resonated, forward it to someone who should see it. A student trying to make sense of their path. A colleague watching how the work is changing. Someone who wants to understand what it actually takes to build AI systems in the real world.</p>
<p>Vijay</p>
<hr>
<p><strong>Supporting the work:</strong></p>
<p>The MLSys Book, TinyTorch, and the TinyML Kit materials are freely available. Their ongoing development is supported by Harvard, <a href="https://opencollective.com/mlsysbook">community donations</a>, industry sponsorship, and partnerships.</p>
<p>Many people support the project simply by giving it <a href="https://github.com/harvard-edge/cs249r_book">GitHub ⭐ stars</a> and providing feedback.</p>
<p><strong>Sources:</strong></p>
<ul>
<li><a href="https://cra.org/crn/2025/10/cerp-pulse-survey-a-snapshot-of-2025-undergraduate-computing-enrollment-patterns/">CRA CERP Pulse Survey: 2025 Undergraduate Computing Enrollment Patterns</a></li>
<li><a href="https://www.insidehighered.com/news/admissions/traditional-age/2025/11/11/short-term-credentials-bolster-enrollment-boom">Inside Higher Ed: Short-Term Credentials Bolster Enrollment Boom</a></li>
<li><a href="https://tomtunguz.com/openai-hardware-spending-2025-2035/">OpenAI’s $1 Trillion Infrastructure Spend</a></li>
<li><a href="https://blogs.microsoft.com/blog/2025/09/18/inside-the-worlds-most-powerful-ai-datacenter/">Inside the World’s Most Powerful AI Datacenter</a></li>
<li><a href="https://www.datacenters.com/news/openai-and-the-trillion-dollar-ai-infrastructure-race">OpenAI and the Trillion Dollar AI Infrastructure Race</a></li>
<li><a href="https://finance.yahoo.com/news/tinyml-market-analysis-report-2025-152200302.html">TinyML Market Analysis Report 2025-2029</a></li>
<li><a href="https://nvidianews.nvidia.com/news/nvidia-releases-new-physical-ai-models-as-global-partners-unveil-next-generation-robots">NVIDIA Releases New Physical AI Models</a></li>
<li><a href="https://techcrunch.com/2026/01/05/nvidia-wants-to-be-the-android-of-generalist-robotics/">Nvidia Wants to Be the Android of Generalist Robotics</a></li>
</ul>


</section>

 ]]></description>
  <category>essay</category>
  <guid>https://mlsysbook.ai/newsletter/posts/2026/2026-01-27_the-shift-toward-ai-engineering.html</guid>
  <pubDate>Tue, 27 Jan 2026 00:00:00 GMT</pubDate>
  <media:content url="https://assets.buttondown.email/images/7c78f49d-9110-4132-88e9-a049e4176c49.png?w=960&amp;fit=max" medium="image"/>
</item>
<item>
  <title>MLSysBook.ai Livestream Resources + 2 Days Left for the SEEED Discount</title>
  <dc:creator>Vijay Janapa Reddi</dc:creator>
  <link>https://mlsysbook.ai/newsletter/posts/2025/2025-12-23_mlsysbookai-livestream-resources-2-days-left-for-the-seeed-discount.html</link>
  <description><![CDATA[ 





<p>Hello Everyone,</p><p>We wanted to share a quick follow-up and make sure you have easy access to the resources from our recent MLSys community livestream hosted by Edge AI Foundation.</p><p class="isSelectedEnd"><strong>Here are the key links:</strong></p><ul><li><p class="isSelectedEnd">📄 <a target="_blank" rel="noopener noreferrer nofollow" href="https://docs.google.com/presentation/d/1TnKdHFEuXOt9-Np9xwK-Q7i7_Z5opJLMiC7j0SOy7PA/edit?usp=sharing"><strong>Slides (PPT)</strong></a></p></li><li><p class="isSelectedEnd">▶️ <a target="_blank" rel="noopener noreferrer nofollow" href="https://www.youtube.com/live/ySPnPq5BxjU?si=JxEnKQhsfuttGyKp"><strong>YouTube recording</strong></a></p></li></ul><p class="isSelectedEnd">We’re also excited to share that <strong>TinyTorch.ai has officially launched</strong>. TinyTorch is a lightweight, educational framework designed to help learners understand how deep learning systems work <em>from the ground up</em>, without the abstraction of large production libraries. If you haven’t explored it yet, the holiday break can be a great time to tinker and experiment at your own pace.</p><p class="isSelectedEnd">As a reminder, there are <strong>just two days left</strong> to take advantage of the <strong>SEEED 20% discount code</strong> (VWP6YDXH)<span style="color: rgb(250, 250, 250)"> </span>mentioned during the livestream. If you’ve been considering a kit, now’s a great time as the code expires on <strong>12/24</strong>.</p><p class="isSelectedEnd">As we wrap up the year, we want to thank you again for being part of this growing community. We’re excited about what’s ahead in the new year and look forward to sharing more workshops, resources, and opportunities to learn together.</p><p class="isSelectedEnd">Wishing you a happy, healthy start to the new year.</p><p>All my best,<br>Kari Janapareddi</p><p>Director of Partnerships and Strategic Operations<br><span style="color: rgb(34, 34, 34)">Machine Learning Systems Education</span></p>



 ]]></description>
  <category>update</category>
  <guid>https://mlsysbook.ai/newsletter/posts/2025/2025-12-23_mlsysbookai-livestream-resources-2-days-left-for-the-seeed-discount.html</guid>
  <pubDate>Tue, 23 Dec 2025 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Thank You for Joining the MLSysBook.ai Community Celebration</title>
  <dc:creator>Vijay Janapa Reddi</dc:creator>
  <link>https://mlsysbook.ai/newsletter/posts/2025/2025-12-17_thank-you-for-joining-the-mlsysbookai-community-celebration.html</link>
  <description><![CDATA[ 





<p>Hello everyone,</p><p data-start="264" data-end="510">Thanks to those of you who joined us today for the <a target="_blank" rel="noopener noreferrer nofollow" href="http://MLSysBook.ai">MLSysBook.ai</a> 10K Stars Community Celebration—and a very warm welcome to those of you who subscribed today. We’re grateful you’re here and excited to have you as part of this growing global community.</p><p>If you missed it, here are the links…</p><p><a target="_blank" rel="noopener noreferrer nofollow" href="https://www.youtube.com/live/ySPnPq5BxjU"><strong>To the livestream</strong></a></p><p><a target="_blank" rel="noopener noreferrer nofollow" href="https://docs.google.com/presentation/d/1TnKdHFEuXOt9-Np9xwK-Q7i7_Z5opJLMiC7j0SOy7PA/edit?usp=sharing"><strong>To the slides</strong></a></p><p data-start="512" data-end="1117"><strong>RECAP:</strong> During the livestream, we shared how <a target="_blank" rel="noopener noreferrer nofollow" href="http://MLSysBook.ai">MLSysBook.ai</a> is evolving into an open, living resource for machine learning systems education and how this work is being amplified worldwide through community-led outreach. We highlighted global workshops, online show &amp; tell spaces, and the open sharing of student projects and real-world deployments. All of this is made possible by a remarkable group of volunteers, educators, and partners who believe in making ML systems education accessible everywhere.</p><p data-start="1401" data-end="1585"><strong>If you’d like to support this work </strong><a target="_blank" rel="noopener noreferrer nofollow" href="https://opencollective.com/mlsysbook"><strong>(click here!)</strong></a><strong>,</strong> donations directly fund workshops, kit donations, and the continued development of open educational materials. 
As a small thank-you:</p><ul><li><p data-start="1588" data-end="1656"><strong>Every donation of $100 or more today receives a free Edge AI mug</strong></p></li><li><p data-start="1659" data-end="1731"><strong>All new subscribers &amp; donors today are entered to win a free SEEED XIAOML kit</strong></p></li></ul><p data-start="1733" data-end="1868">And a reminder—<strong>this week only</strong>, you can get <strong>20% off SEEED </strong><a target="_blank" rel="noopener noreferrer nofollow" href="https://www.seeedstudio.com/The-XIAOML-Kit.html"><strong>XIAOML kits</strong></a> using our special community discount code <span style="color: rgb(5, 5, 5)">(VWP6YDXH) </span>during checkout.</p><p data-start="1870" data-end="2038"><em>We’d also love to hear from you.</em> If there are topics, workshops, tools, or community activities you’d like to see in the future, please reply to this email and let us know! Your feedback helps shape where we go next.</p><p data-start="1870" data-end="2038">Thank you again for being part of this milestone moment. Your curiosity, participation, and support help us continue building an inclusive, global ML systems community.</p><p data-start="2040" data-end="2105" data-is-last-node="" data-is-only-node="">All my best,<br>Kari Janapareddi<br>On behalf of the MLSysBook Community</p>



 ]]></description>
  <category>community</category>
  <guid>https://mlsysbook.ai/newsletter/posts/2025/2025-12-17_thank-you-for-joining-the-mlsysbookai-community-celebration.html</guid>
  <pubDate>Wed, 17 Dec 2025 00:00:00 GMT</pubDate>
</item>
<item>
  <title>MLSysBook.ai Live Event: Global Community Updates + AMA</title>
  <dc:creator>Vijay Janapa Reddi</dc:creator>
  <link>https://mlsysbook.ai/newsletter/posts/2025/2025-12-16_mlsysbookai-live-event-global-community-updates-ama.html</link>
  <description><![CDATA[ 





<p>Hello and welcome to the MLSysBook community!</p><p data-start="47" data-end="292">We’re excited to invite you to an upcoming live <a target="_blank" rel="noopener noreferrer nofollow" href="http://MLSysBook.ai">MLSysBook.ai</a> community event tomorrow <strong>(Wednesday, Dec 17th, 10:00 EST)</strong>. You can find the full event details and RSVP via our LinkedIn event page here:<br><a target="_blank" rel="noopener noreferrer nofollow" href="https://www.linkedin.com/events/7402387002232520704/">https://www.linkedin.com/events/7402387002232520704/</a></p><p data-start="294" data-end="770">This event is a snapshot of what our community has been building together. We’ll share highlights from our global outreach efforts, including how educators and volunteers around the world are coordinating hands-on workshops, running online community spaces, and openly sharing learning materials, tools, and real-world ML systems projects. At the heart of this work is a simple goal: making machine learning systems education accessible, practical, and truly global.</p><p data-start="772" data-end="991">We’ll wrap things up with an open AMA with Professor Vijay Janapa Reddi. <strong>This is your chance to ask questions about the MLSysBook vision, the evolving ML systems discipline, community priorities, and what’s coming next.</strong></p><p data-start="993" data-end="1071" data-is-last-node="" data-is-only-node="">We hope you’ll join us, bring your curiosity, and be part of the conversation.</p><p>Sincerely,</p><p>The MLSysBook Community Team</p>



 ]]></description>
  <category>community</category>
  <guid>https://mlsysbook.ai/newsletter/posts/2025/2025-12-16_mlsysbookai-live-event-global-community-updates-ama.html</guid>
  <pubDate>Tue, 16 Dec 2025 00:00:00 GMT</pubDate>
</item>
</channel>
</rss>
