Learning Python at the beginning feels deceptively simple. You write a few lines, the code runs, and it’s tempting to think you’ve got it. Then you try to build something on your own and… nothing works. It turns out all the information you’d learned never found an outlet. That’s where challenging p...
A Gentle Introduction to Language Model Fine-tuning
This article is divided into four parts; they are: • The Reason for Fine-tuning a Model • Dataset for Fine-tuning • Fine-tuning Procedure • Other Fine-Tuning Techniques Once you train your decoder-only transformer model, you have a text generator.
What To Look For In A Cloud Services Provider (Sponsored)
Choosing a cloud services provider can feel a lot like dating: every vendor promises reliability, security, and support, but only a few truly live up to it. The wrong choice can lead to costly downtime, security headaches, or performance bottlenecks that ripple across your business.
A practical guide to observability, evaluations, and model comparisons
The post Measuring What Matters with NeMo Agent Toolkit appeared first on Towards Data Science.
Build ML web apps in minutes with Gradio's Python framework. Create interactive demos for models with text, image, or audio inputs with no frontend skills needed. Deploy and share instantly.
Mastering LLM Tool Calling: The Complete Framework for Connecting Models to the Real World
Most ChatGPT users don't know this, but when the model searches the web for current information or runs Python code to analyze data, it's using tool calling.
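The loop behind tool calling can be sketched in a few lines: the model emits a structured tool request, the application executes it, and the result is fed back. This is a minimal, provider-agnostic sketch; the tool names, the JSON shape, and the registry are illustrative assumptions, not any vendor's actual API.

```python
import json

# Hypothetical tool registry: names and functions are illustrative,
# not tied to any specific provider's API.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "run_python": lambda code: str(eval(code)),  # toy only; never eval untrusted code
}

def handle_model_output(raw: str) -> str:
    """If the model emitted a JSON tool call, execute it; otherwise pass text through."""
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError:
        return raw  # plain text answer, no tool needed
    result = TOOLS[msg["tool"]](msg["argument"])
    # In a real agent loop, this result is sent back to the model as a tool message.
    return result

print(handle_model_output('{"tool": "get_weather", "argument": "Oslo"}'))
```

Real frameworks add schemas, validation, and multi-turn feedback on top of exactly this dispatch pattern.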
Context Engineering Explained in 3 Levels of Difficulty
Long-running LLM applications degrade when context is unmanaged. Context engineering turns the context window into a deliberate, optimized resource. Learn more in this article.
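The simplest form of context engineering is trimming history to a token budget while pinning the system prompt. This is a minimal sketch under two stated assumptions: the whitespace token counter and the budget are stand-ins for the model's real tokenizer and limits.

```python
def trim_context(messages, budget, count_tokens=lambda m: len(m.split())):
    """Keep the system prompt plus the most recent messages that fit the budget.

    The whitespace-based token counter is a placeholder; real systems use the
    model's tokenizer and richer policies (relevance scoring, summarization).
    """
    system, rest = messages[0], messages[1:]
    kept, used = [], count_tokens(system)
    for msg in reversed(rest):          # walk newest-first
        cost = count_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return [system] + kept[::-1]        # restore chronological order
```

Dropping the oldest turns first is the crudest policy; the article's point is that this window deserves deliberate design rather than defaults.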
Stop Blaming the Data: A Better Way to Handle Covariance Shift
Instead of using shift as an excuse for poor performance, use Inverse Probability Weighting to estimate how your model should perform in the new environment
The post Stop Blaming the Data: A Better Way to Handle Covariance Shift appeared first on Towards Data Science.
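The core of Inverse Probability Weighting fits in a few lines: reweight each source-environment example by the target-to-source density ratio, then average. In this sketch the feature probabilities are assumed known; in practice the density ratio is itself estimated, for example with a domain classifier.

```python
def ipw_accuracy(samples, p_source, p_target):
    """Estimate accuracy in the target environment from source-labeled data.

    Each sample is (feature_value, correct). p_source/p_target give the
    feature's marginal probability in each environment -- assumed known here.
    """
    num = den = 0.0
    for x, correct in samples:
        w = p_target[x] / p_source[x]   # importance weight for this example
        num += w * correct
        den += w
    return num / den                    # self-normalized IPW estimate
```

If the model is much worse on feature values that are rare in the source but common in the target, the weighted estimate reveals it before deployment does.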
Optimizing Data Transfer in AI/ML Workloads
A deep dive on data transfer bottlenecks, their identification, and their resolution with the help of NVIDIA Nsight™ Systems
DeepSeek mHC: Stabilizing Large Language Model Training
Large AI models are scaling rapidly, with bigger architectures and longer training runs becoming the norm. As models grow, however, a fundamental training stability issue has remained unresolved. DeepSeek mHC directly addresses this problem by rethinking how residual connections behave at scale. Thi...
Drift Detection in Robust Machine Learning Systems
A prerequisite for long-term success of machine learning systems
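One common drift detector is the Population Stability Index, which compares a feature's binned distribution today against a reference window. A minimal sketch, assuming pre-binned counts; the 0.1/0.25 alert thresholds mentioned in the comment are conventional rules of thumb, not part of the metric itself.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.

    `expected` and `actual` are counts per bin under the same binning.
    Common rules of thumb: < 0.1 stable, > 0.25 significant drift.
    """
    e_total, a_total = sum(expected), sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        e_pct = e / e_total + eps       # eps guards against empty bins
        a_pct = a / a_total + eps
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score
```

Running this per feature on a schedule, and alerting when the score crosses a threshold, is the kind of prerequisite monitoring the article argues for.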
Power BI is a powerful tool, turning raw data into informative visuals and reports. With a user-friendly interface and robust functionality, Power BI is an invaluable platform for individuals to refine their skills through hands-on projects. By engaging in Power BI practice projects, begin...
Liquid Foundation Models (LFM 2) define a new class of small language models designed to deliver strong reasoning and instruction-following capabilities directly on edge devices. Unlike large cloud-centric LLMs, LFM 2 focuses on efficiency, low latency, and memory awareness while still maintaining c...
EDA in Public (Part 3): RFM Analysis for Customer Segmentation in Pandas
How to build, score, and interpret RFM segments step by step
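RFM scoring itself is short: rank customers into terciles on Recency, Frequency, and Monetary value, remembering that low recency is good while high frequency and spend are good. A sketch with a toy table; the column names and 1-3 scoring scale are illustrative choices, not fixed conventions.

```python
import pandas as pd

# Toy customer summary; column names are illustrative.
df = pd.DataFrame({
    "customer": ["A", "B", "C", "D", "E", "F"],
    "recency_days": [2, 40, 10, 95, 5, 60],      # days since last purchase
    "frequency":    [12, 3, 8, 1, 20, 2],        # number of orders
    "monetary":     [500, 80, 240, 30, 900, 55], # total spend
})

# Score each dimension 1-3 by tercile. Recency labels are reversed,
# since fewer days since the last purchase is better.
df["R"] = pd.qcut(df["recency_days"], 3, labels=[3, 2, 1]).astype(int)
df["F"] = pd.qcut(df["frequency"], 3, labels=[1, 2, 3]).astype(int)
df["M"] = pd.qcut(df["monetary"], 3, labels=[1, 2, 3]).astype(int)
df["RFM"] = df["R"].astype(str) + df["F"].astype(str) + df["M"].astype(str)
```

A "333" is a recent, frequent, big spender; a "111" is lapsed and low-value, and each segment suggests a different retention play.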
Deep Reinforcement Learning: The Actor-Critic Method
Robot friends collaborate to learn to fly a drone
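The actor-critic idea can be shown without a drone: a critic tracks expected reward, and the actor's policy gradient is scaled by the critic's TD error. A toy sketch on a two-armed bandit with a single state; the learning rates, deterministic rewards, and tabular setup are simplifying assumptions, not the article's drone setup.

```python
import math, random

random.seed(0)

# Tabular actor-critic on a 2-armed bandit: arm 1 pays 1.0, arm 0 pays 0.0.
prefs = [0.0, 0.0]   # actor: action preferences (softmax logits)
value = 0.0          # critic: value estimate for the single state
alpha_a, alpha_c = 0.1, 0.1

def softmax(p):
    z = [math.exp(x) for x in p]
    s = sum(z)
    return [x / s for x in z]

for _ in range(2000):
    probs = softmax(prefs)
    a = 0 if random.random() < probs[0] else 1
    reward = float(a == 1)
    td_error = reward - value            # one-step TD error (terminal step)
    value += alpha_c * td_error          # critic update
    # actor update: policy-gradient step scaled by the critic's TD error
    for i in range(2):
        grad = (1.0 if i == a else 0.0) - probs[i]
        prefs[i] += alpha_a * td_error * grad
```

The same structure scales up: swap the tables for neural networks and the bandit for an environment with states, and you have the method the article builds on.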
Google T5Gemma-2 Explained: Trying Out a Laptop-Friendly Multimodal AI Model
Google just dropped T5Gemma-2, and it is a game-changer for anyone working with AI models on everyday hardware. Built on the Gemma 3 family, this encoder-decoder powerhouse squeezes multimodal smarts and massive context into tiny packages. Imagine 270M parameters running smoothly on your la...
Train Your Large Model on Multiple GPUs with Tensor Parallelism
This article is divided into five parts; they are: • An Example of Tensor Parallelism • Setting Up Tensor Parallelism • Preparing Model for Tensor Parallelism • Train a Model with Tensor Parallelism • Combining Tensor Parallelism with FSDP Tensor parallelism originated from the Megatron-LM paper.
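The core Megatron-LM trick is easy to demonstrate on one machine: split a weight matrix's columns across devices, let each compute its slice of the output, and gather. A NumPy sketch where plain arrays stand in for GPUs; the shapes and two-way split are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))     # batch of activations
W = rng.normal(size=(8, 6))     # full weight matrix of a linear layer

# Column parallelism: each "device" owns a slice of W's output columns.
# Real Megatron-style code places each shard on a different GPU and
# all-gathers the partial outputs; here arrays simulate the devices.
shards = np.split(W, 2, axis=1)               # two shards of shape (8, 3)
partial = [x @ w for w in shards]             # each device computes its slice
y_parallel = np.concatenate(partial, axis=1)  # "all-gather" along columns

assert np.allclose(y_parallel, x @ W)         # matches the unsharded result
```

Row parallelism is the mirror image (split the input dimension, all-reduce the sums), and alternating the two is what keeps communication low in transformer blocks.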
Production-Ready LLMs Made Simple with the NeMo Agent Toolkit
From simple chat to multi-agent reasoning and real-time REST APIs
Student ID Benefits Worth Thousands: Get 15+ Premium Tools For Free or on Discount
I remember from my student days the plethora of subscriptions, fees, and payments to be made for a range of tasks. Be it learning a new skill, using the right environment for practice, or simply travelling to and from home, we had to shell out money from our own pockets. But it is almost 2026 now, […]
Train Your Large Model on Multiple GPUs with Fully Sharded Data Parallelism
This article is divided into five parts; they are: • Introduction to Fully Sharded Data Parallel • Preparing Model for FSDP Training • Training Loop with FSDP • Fine-Tuning FSDP Behavior • Checkpointing FSDP Models Sharding is a term originally used in database management systems, where it refers to...
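Sharding in the FSDP sense can be mimicked in plain Python: each rank keeps only its contiguous slice of the flattened parameters and "all-gathers" the full vector when needed. A toy sketch with lists standing in for GPU ranks; real FSDP does this per layer with actual collectives, which this simulation only imitates.

```python
# Simulated FSDP-style sharding: each of `world_size` ranks stores only its
# slice of the flattened parameters; the full vector is rebuilt on demand.

world_size = 4
params = list(range(10))  # flattened parameter vector (toy values)

def shard(flat, rank, world):
    """Pad to a multiple of `world`, then take this rank's contiguous slice."""
    per = -(-len(flat) // world)            # ceiling division
    padded = flat + [0] * (per * world - len(flat))
    return padded[rank * per:(rank + 1) * per]

shards = [shard(params, r, world_size) for r in range(world_size)]

def all_gather(shards, orig_len):
    """Reassemble the full parameter vector from every rank's shard."""
    flat = [v for s in shards for v in s]
    return flat[:orig_len]                  # drop the padding

assert all_gather(shards, len(params)) == params
```

The memory win is that each rank holds roughly 1/world_size of the parameters (plus optimizer state) between the gather-compute-release cycles.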