AI in Logistics: Route Optimization, Warehouse Automation and Supply Chain Intelligence
Logistics and supply chain management have always been defined by complexity: millions of interdependent variables, tight time windows, eroding operational margins, and unpredictable demand threatening service continuity. For decades, companies managed this complexity with spreadsheets, rules of thumb, and the experience of senior planners. Today, artificial intelligence is rewriting the rules.
The global market for AI applied to supply chain reached $9.8 billion in 2025, with projections pointing to $32 billion by 2030 (CAGR 26.4%). This is not hype: companies that have adopted AI in logistics report 10-15% reductions in transportation costs, 15-20% faster delivery times, and approximately 30% fewer late shipments. Amazon operates over 520,000 AI-powered robots in its warehouses, cutting fulfillment costs by 20% while processing 40% more orders per hour.
This article explores the AI technologies transforming logistics: from the Vehicle Routing Problem (VRP) solved with OR-Tools and reinforcement learning, to demand forecasting with Temporal Fusion Transformer, warehouse automation, last-mile optimization, and intelligent inventory management. We include concrete Python implementations and real-world use cases from the Italian market.
What You Will Learn
- How to solve the Vehicle Routing Problem (VRP) with Google OR-Tools in Python
- Demand forecasting with Prophet, LightGBM and Temporal Fusion Transformer
- Inventory optimization with Reinforcement Learning (PPO/DQN)
- Warehouse automation: robotics, pick optimization, and intelligent WMS systems
- Last-mile delivery: AI, drones and autonomous vehicles in urban contexts
- Real-time visibility and supply chain digital twins
- Carbon footprint optimization and sustainable logistics
- Italian case studies: Amazon IT, Poste Italiane, GLS
Position in the Data Warehouse, AI and Digital Transformation Series
| # | Article | Status |
|---|---|---|
| 1 | Data Warehouse Evolution | Published |
| 2 | Data Mesh and Decentralized Architecture | Published |
| 3 | Modern ETL vs ELT: dbt, Airbyte and Fivetran | Published |
| 4 | Pipeline Orchestration: Airflow, Dagster and Prefect | Published |
| 5 | AI in Manufacturing: Predictive Maintenance | Published |
| 6 | AI in Finance: Fraud Detection and Credit Scoring | Published |
| 7 | AI in Retail: Demand Forecasting and Recommendations | Published |
| 8 | AI in Healthcare: Diagnostics and Drug Discovery | Published |
| 9 | AI in Logistics (You are here) | Current |
| 10 | LLMs in Business: RAG Enterprise and Guardrails | Next |
Vehicle Routing Problem: Optimizing Routes with OR-Tools
The Vehicle Routing Problem (VRP) is one of the most studied problems in operations research: given a set of customers with specific delivery requirements and a fleet of vehicles departing from one or more depots, how do we assign customers to vehicles and plan routes to minimize total cost (distance, time, fuel)?
VRP is NP-hard: no known algorithm solves it exactly in polynomial time, so large instances cannot be solved to proven optimality within practical time budgets. Practical solutions therefore combine metaheuristics (Simulated Annealing, Tabu Search, genetic algorithms) with commercial and open-source solvers. Google OR-Tools is today the most widely used open-source tool for these problems: it supports CVRP (capacitated VRP), VRPTW (with time windows), Multi-Depot VRP, and many realistic variants.
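To make the combinatorial explosion concrete, consider the single-vehicle special case (essentially the Traveling Salesman Problem): the number of distinct visit orders grows factorially with the number of customers. A quick sketch, not from the article:

```python
import math

def single_vehicle_route_count(n_customers: int) -> int:
    """Distinct visit orderings for one vehicle serving n customers (TSP view of VRP)."""
    return math.factorial(n_customers)

for n in (5, 10, 15):
    print(f"{n} customers -> {single_vehicle_route_count(n):,} possible routes")
# 15 customers already yields over 1.3 trillion orderings
```

Adding capacities, time windows, and multiple vehicles only enlarges this search space, which is why practical solvers rely on heuristics and local search rather than enumeration.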
UPS's ORION system, based on similar techniques, calculates 30,000 route optimizations per minute and has saved 38 million liters of fuel annually, eliminating 100 million unnecessary driving miles. This is not a marginal advantage: it is a structural competitive edge translating into tens of millions of dollars in annual savings.
CVRP Implementation with Google OR-Tools
Below is a complete implementation of the capacitated VRP with time windows (VRPTW), the most common variant in real-world logistics, where each customer has precise operating hours.
"""
VRPTW - Vehicle Routing Problem with Time Windows
Solved with Google OR-Tools
Scenario: B2B deliveries in Italian metropolitan area
"""
from ortools.constraint_solver import routing_enums_pb2
from ortools.constraint_solver import pywrapcp
import numpy as np
from typing import List, Dict, Tuple
import json
def create_data_model() -> Dict:
"""
Creates the data model for VRPTW.
In production, this data comes from:
- Order database (PostgreSQL/DWH)
- Geocoding API for coordinates
- Google Maps Distance Matrix API for distances
"""
data = {}
# Time matrix in seconds (travel time)
# Index 0 = depot, indices 1-N = customers
data['time_matrix'] = [
[0, 548, 776, 696, 582, 274, 502, 194, 308, 194, 536, 502, 388, 354],
[548, 0, 684, 308, 194, 502, 730, 354, 696, 742, 1084, 594, 480, 514],
[776, 684, 0, 992, 878, 502, 274, 810, 468, 742, 400, 1278, 1164, 1130],
# ... (truncated for brevity, same as IT version)
]
# Time windows [start, end] in seconds from depot opening
# 0 = 08:00, 3600 = 09:00, 28800 = 16:00
data['time_windows'] = [
(0, 28800), # Depot: open all day
(7200, 14400), # Customer 1: 10:00-12:00
(10800, 18000), # Customer 2: 11:00-13:00
(3600, 14400), # Customer 3: 09:00-12:00
(0, 10800), # Customer 4: 08:00-11:00
(14400, 21600), # Customer 5: 12:00-14:00
(0, 14400), # Customer 6: 08:00-12:00
(7200, 18000), # Customer 7: 10:00-13:00
(0, 21600), # Customer 8: 08:00-14:00
(3600, 10800), # Customer 9: 09:00-11:00
(18000, 25200), # Customer 10: 13:00-15:00
(0, 14400), # Customer 11: 08:00-12:00
(3600, 18000), # Customer 12: 09:00-13:00
(7200, 21600), # Customer 13: 10:00-14:00
]
data['vehicle_capacities'] = [1000, 1000, 800, 800] # kg
data['num_vehicles'] = 4
data['depot'] = 0
data['demands'] = [0, 120, 80, 200, 150, 90, 110, 60, 180, 70, 200, 130, 95, 85]
return data
def solve_vrptw(data: Dict) -> Dict:
"""
Solves VRPTW with OR-Tools.
Search strategy:
- First solution: PATH_CHEAPEST_ARC (fast greedy)
- Improvement: GUIDED_LOCAL_SEARCH (metaheuristic)
- Time limit: 30 seconds
"""
manager = pywrapcp.RoutingIndexManager(
len(data['time_matrix']),
data['num_vehicles'],
data['depot']
)
routing = pywrapcp.RoutingModel(manager)
# Time callback
def time_callback(from_index, to_index):
return data['time_matrix'][manager.IndexToNode(from_index)][manager.IndexToNode(to_index)]
transit_idx = routing.RegisterTransitCallback(time_callback)
# Demand callback
def demand_callback(from_index):
return data['demands'][manager.IndexToNode(from_index)]
demand_idx = routing.RegisterUnaryTransitCallback(demand_callback)
# Set cost and constraints
routing.SetArcCostEvaluatorOfAllVehicles(transit_idx)
routing.AddDimensionWithVehicleCapacity(
demand_idx, 0, data['vehicle_capacities'], True, 'Capacity'
)
routing.AddDimension(transit_idx, 30, 28800, False, 'Time')
time_dim = routing.GetDimensionOrDie('Time')
# Apply time windows
for i, tw in enumerate(data['time_windows']):
if i == data['depot']:
continue
idx = manager.NodeToIndex(i)
time_dim.CumulVar(idx).SetRange(tw[0], tw[1])
# Search parameters
params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = (
routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC
)
params.local_search_metaheuristic = (
routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH
)
params.time_limit.FromSeconds(30)
solution = routing.SolveWithParameters(params)
if not solution:
return {"status": "INFEASIBLE", "routes": []}
# Extract routes
results = {"status": "OPTIMAL", "routes": []}
for v in range(data['num_vehicles']):
index = routing.Start(v)
route = {"vehicle": v, "stops": [], "load": 0}
while not routing.IsEnd(index):
node = manager.IndexToNode(index)
route["stops"].append(node)
route["load"] += data["demands"][node]
index = solution.Value(routing.NextVar(index))
results["routes"].append(route)
return results
In a production environment, the travel time matrix is calculated in real time using the Google Maps Distance Matrix API or HERE Routing, accounting for current traffic. Customer data comes from the ERP system and is updated hourly. OR-Tools returns solutions in seconds for instances up to 200-300 customers; for larger instances, cluster-based approaches or GPU-accelerated solvers like NVIDIA cuOpt are used.
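When a routing API is unavailable (development, testing, offline planning), a rough travel-time matrix can be approximated from coordinates with the haversine formula and an assumed average speed. This is a sketch, not a substitute for real road distances; the 25 km/h urban speed is an assumption:

```python
import math
from typing import List, Tuple

def haversine_km(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Great-circle distance in km between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def approx_time_matrix(coords: List[Tuple[float, float]],
                       avg_speed_kmh: float = 25.0) -> List[List[int]]:
    """Travel times in seconds, assuming straight-line distance at average urban speed."""
    n = len(coords)
    return [[round(haversine_km(coords[i], coords[j]) / avg_speed_kmh * 3600)
             for j in range(n)] for i in range(n)]

# Example with assumed coordinates of three points in central Milan
coords = [(45.4642, 9.1900), (45.4773, 9.1815), (45.4497, 9.2180)]
matrix = approx_time_matrix(coords)
print(matrix[0][1], "seconds depot -> first stop")
```

Straight-line estimates systematically undershoot road distance (a correction factor of roughly 1.3 is often applied in practice), so treat this only as a development stand-in.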
Demand Forecasting: Predicting Demand with Machine Learning
Accurate demand forecasting is the foundation of the entire supply chain. Without knowing how many products will be needed in the coming weeks, it is impossible to optimize procurement, right-size the warehouse, plan transportation, and guarantee service levels. For decades, companies used classical statistical models like ARIMA, SARIMA, and exponential smoothing. Today, machine learning models systematically outperform these baselines.
Demand Forecasting Models Comparison
| Model | Type | Strengths | Limitations | Typical MAPE |
|---|---|---|---|---|
| Prophet (Meta) | Bayesian additive | Handles multiple seasonalities, holidays, trend | Does not scale easily to thousands of SKUs | 8-12% |
| LightGBM | Gradient Boosting | Fast, flexible feature engineering, production-ready | Requires manual feature engineering | 5-9% |
| Temporal Fusion Transformer | Deep Learning | Multi-horizon, interpretable, exogenous variables | Slower to train, requires GPU | 4-7% |
| SARIMA (baseline) | Statistical | Simple, interpretable | Does not capture non-linearities | 12-20% |
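For reference, the MAPE figures in the table are the mean absolute percentage error between forecast and actual demand. A minimal implementation, with a guard for zero-demand periods (common with intermittent SKUs):

```python
import numpy as np

def mape(y_true, y_pred) -> float:
    """Mean absolute percentage error, ignoring zero-demand points."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mask = y_true != 0  # percentage error is undefined when actual demand is zero
    return float(np.mean(np.abs((y_true[mask] - y_pred[mask]) / y_true[mask])))

print(round(mape([100, 200, 50], [110, 190, 55]), 4))  # 0.0833, i.e. ~8.3%
```

For assortments with many slow movers, volume-weighted metrics such as WAPE are often preferred precisely because of this zero-division issue.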
Demand Forecasting with LightGBM for Supply Chain
LightGBM is often the best choice for production deployment: fast training, millisecond inference, native support for missing values, and excellent scalability across thousands of SKUs. Here is a complete implementation with logistics-specific feature engineering.
"""
Supply Chain Demand Forecasting with LightGBM
Advanced feature engineering for logistics time series
"""
import pandas as pd
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import mean_absolute_percentage_error
from typing import List, Tuple
import warnings
warnings.filterwarnings('ignore')
def create_lag_features(df: pd.DataFrame, target_col: str,
lags: List[int]) -> pd.DataFrame:
"""Creates lag features to capture temporal dependency."""
df = df.copy()
for lag in lags:
df[f'lag_{lag}'] = df.groupby('sku_id')[target_col].shift(lag)
return df
def create_rolling_features(df: pd.DataFrame, target_col: str,
windows: List[int]) -> pd.DataFrame:
"""Rolling mean and std to capture trends and variability."""
df = df.copy()
for window in windows:
df[f'rolling_mean_{window}'] = (
df.groupby('sku_id')[target_col]
.transform(lambda x: x.shift(1).rolling(window).mean())
)
df[f'rolling_std_{window}'] = (
df.groupby('sku_id')[target_col]
.transform(lambda x: x.shift(1).rolling(window).std())
)
return df
def create_calendar_features(df: pd.DataFrame, date_col: str) -> pd.DataFrame:
"""Calendar features: seasonality, public holidays, weekends."""
df = df.copy()
df['date'] = pd.to_datetime(df[date_col])
df['day_of_week'] = df['date'].dt.dayofweek
df['day_of_month'] = df['date'].dt.day
df['week_of_year'] = df['date'].dt.isocalendar().week.astype(int)
df['month'] = df['date'].dt.month
df['quarter'] = df['date'].dt.quarter
df['is_weekend'] = (df['day_of_week'] >= 5).astype(int)
# Italian public holidays (can be swapped for any country)
holidays = ['01-01', '04-25', '05-01', '06-02',
'08-15', '11-01', '12-08', '12-25', '12-26']
df['is_holiday'] = df['date'].apply(
lambda d: 1 if f'{d.month:02d}-{d.day:02d}' in holidays else 0
)
# August effect: compressed B2B demand
df['is_august'] = (df['month'] == 8).astype(int)
# Peak season Q4 (Oct-Dec: Black Friday, Christmas)
df['is_peak_season'] = df['month'].isin([10, 11, 12]).astype(int)
return df
def train_lgbm_forecaster(df: pd.DataFrame) -> Tuple[lgb.Booster, List[str]]:
"""
Trains LightGBM with walk-forward time series cross-validation.
Key design choices:
- TimeSeriesSplit: NO data leakage, respects temporal ordering
- MAE (L1) loss: more robust than MSE to demand spikes
- Early stopping: prevents overfitting
"""
FEATURE_COLS = [
'lag_1', 'lag_7', 'lag_14', 'lag_28', 'lag_56',
'rolling_mean_7', 'rolling_mean_14', 'rolling_mean_28',
'rolling_std_7', 'rolling_std_14', 'rolling_std_28',
'day_of_week', 'day_of_month', 'week_of_year', 'month', 'quarter',
'is_weekend', 'is_holiday', 'is_august', 'is_peak_season',
'price_ratio', 'promotions', 'supplier_lead_time', 'recent_stockout'
]
df_train = df.dropna(subset=FEATURE_COLS).copy()
X = df_train[FEATURE_COLS]
y = df_train['quantity']
tscv = TimeSeriesSplit(n_splits=5)
lgb_params = {
'objective': 'regression_l1', # MAE - robust to outliers
'metric': 'mape',
'num_leaves': 127,
'learning_rate': 0.05,
'feature_fraction': 0.8,
'bagging_fraction': 0.8,
'bagging_freq': 5,
'min_data_in_leaf': 50,
'lambda_l1': 0.1,
'lambda_l2': 0.1,
'verbose': -1,
'n_jobs': -1
}
mape_scores = []
best_model = None
for fold, (train_idx, val_idx) in enumerate(tscv.split(X)):
X_train, X_val = X.iloc[train_idx], X.iloc[val_idx]
y_train, y_val = y.iloc[train_idx], y.iloc[val_idx]
dtrain = lgb.Dataset(X_train, label=y_train)
dval = lgb.Dataset(X_val, label=y_val, reference=dtrain)
model = lgb.train(
lgb_params,
dtrain,
num_boost_round=1000,
valid_sets=[dval],
callbacks=[lgb.early_stopping(50), lgb.log_evaluation(100)]
)
y_pred = model.predict(X_val)
mape = mean_absolute_percentage_error(y_val, np.maximum(y_pred, 0))
mape_scores.append(mape)
print(f"Fold {fold+1} MAPE: {mape:.2%}")
best_model = model
print(f"\nAverage MAPE: {np.mean(mape_scores):.2%} (+/-{np.std(mape_scores):.2%})")
# Final training on all data
dtrain_full = lgb.Dataset(X, label=y)
final_model = lgb.train(
lgb_params, dtrain_full,
num_boost_round=best_model.best_iteration
)
return final_model, FEATURE_COLS
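The model above predicts one step ahead from lag features; covering a multi-week horizon typically means recursive forecasting, feeding each prediction back into the lag history. A minimal sketch with a stand-in predictor (in production, the `predict_one` callable would wrap the trained LightGBM model plus the feature pipeline; the moving-average stand-in here is an assumption for illustration):

```python
from typing import Callable, List

def recursive_forecast(history: List[float],
                       predict_one: Callable[[List[float]], float],
                       horizon: int) -> List[float]:
    """Roll a one-step-ahead model forward, appending each prediction as a new lag."""
    hist = list(history)
    forecasts = []
    for _ in range(horizon):
        y_hat = max(0.0, predict_one(hist))  # demand cannot go negative
        forecasts.append(y_hat)
        hist.append(y_hat)
    return forecasts

# Stand-in predictor: 7-day moving average (replaces the real trained model)
ma7 = lambda h: sum(h[-7:]) / 7
print(recursive_forecast([100.0] * 7, ma7, 3))  # [100.0, 100.0, 100.0]
```

Recursive forecasting accumulates error over long horizons; direct multi-horizon models (one model per step, or the Temporal Fusion Transformer mentioned above) avoid this at the cost of heavier training.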
Inventory Optimization with Reinforcement Learning
Inventory management is a sequential decision problem: every day you must decide how many units to order for each SKU, balancing holding costs (tied-up capital, physical space, obsolescence risk) against stockout costs (lost sales, contractual penalties, reputation damage). Classical models like EOQ (Economic Order Quantity) and reorder point policies fail to capture non-stationary demand, SKU interdependencies, and supply chain disruptions.
Reinforcement Learning (RL) offers a more powerful approach: an agent learns an optimal reordering policy by interacting with a simulation of the environment. Recent research (2025) shows that Proximal Policy Optimization (PPO) reduces replenishment costs by 12.31% and cuts stockouts to 2.21%, significantly outperforming traditional methods.
"""
Inventory Optimization with Reinforcement Learning (PPO)
Using Gymnasium and Stable-Baselines3
"""
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO
from stable_baselines3.common.env_checker import check_env
from stable_baselines3.common.callbacks import EvalCallback
from typing import Optional
class InventoryEnv(gym.Env):
"""
Custom Gymnasium environment for inventory optimization.
State: [current_stock, avg_demand_7d, expected_lead_time,
in_transit_orders, accumulated_cost]
Action: quantity to order (discrete, 0-10x MOQ)
Reward: -(holding_cost + stockout_cost + order_cost)
"""
metadata = {"render_modes": ["human"]}
def __init__(self, demand_data: np.ndarray, config: dict):
super().__init__()
self.demand_data = demand_data
self.n_steps = len(demand_data)
self.holding_cost = config.get('holding_cost', 0.5) # EUR/unit/day
self.stockout_cost = config.get('stockout_cost', 5.0) # EUR/unit missing
self.order_cost = config.get('order_cost', 50.0) # EUR per order
self.lead_time = config.get('lead_time', 3) # Days
self.max_stock = config.get('max_stock', 1000)
self.moq = config.get('moq', 10) # Minimum Order Quantity
# Action space: 0 (do not order) to 10 MOQs
self.action_space = gym.spaces.Discrete(11)
# Observation space: 5 normalized variables
self.observation_space = gym.spaces.Box(
low=np.float32([0, 0, 0, 0, 0]),
high=np.float32([1, 1, 1, 1, 1]),
dtype=np.float32
)
self.reset()
def reset(self, seed: Optional[int] = None, options=None):
super().reset(seed=seed)
self.current_step = 0
self.stock = self.max_stock // 2
self.pending_orders = []
self.total_cost = 0.0
self.stockouts = 0
return self._get_obs(), {}
def _get_obs(self) -> np.ndarray:
window = self.demand_data[self.current_step:self.current_step + 7]
avg_demand = np.mean(window) if len(window) > 0 else 0
return np.float32([
self.stock / self.max_stock,
avg_demand / 100,
self.lead_time / 14,
len(self.pending_orders) / 5,
min(1.0, self.total_cost / 10000)
])
def step(self, action: int):
# 1. Receive incoming orders
arrived = [qty for qty, day in self.pending_orders
if day <= self.current_step]
self.pending_orders = [(q, d) for q, d in self.pending_orders
if d > self.current_step]
for qty in arrived:
self.stock = min(self.max_stock, self.stock + qty)
# 2. Place new order
order_qty = action * self.moq
order_cost = 0
if order_qty > 0:
order_cost = self.order_cost
self.pending_orders.append(
(order_qty, self.current_step + self.lead_time)
)
# 3. Satisfy demand
demand = self.demand_data[min(self.current_step, self.n_steps - 1)]
if demand <= self.stock:
self.stock -= demand
stockout_cost = 0
else:
stockout_cost = (demand - self.stock) * self.stockout_cost
self.stock = 0
self.stockouts += 1
# 4. Compute step cost
holding_cost = self.stock * self.holding_cost
step_cost = holding_cost + stockout_cost + order_cost
self.total_cost += step_cost
reward = -step_cost / 100
self.current_step += 1
terminated = self.current_step >= self.n_steps
return self._get_obs(), reward, terminated, False, {
"step_cost": step_cost,
"stock": self.stock,
"stockouts": self.stockouts
}
def train_inventory_agent(demand_data: np.ndarray, config: dict) -> PPO:
"""
Trains a PPO agent for inventory optimization.
Why PPO?
- Stable training via clipped surrogate objective
- Works well with discrete and continuous action spaces
- Production-proven in supply chain applications
- Outperforms DQN on episodic inventory problems
"""
env = InventoryEnv(demand_data, config)
check_env(env, warn=True)
eval_env = InventoryEnv(demand_data, config)
eval_callback = EvalCallback(
eval_env,
best_model_save_path="./inventory_agent/",
log_path="./inventory_logs/",
eval_freq=5000,
deterministic=True
)
model = PPO(
"MlpPolicy",
env,
verbose=1,
learning_rate=3e-4,
n_steps=2048,
batch_size=64,
n_epochs=10,
gamma=0.99,
gae_lambda=0.95,
clip_range=0.2,
ent_coef=0.01,
tensorboard_log="./inventory_tensorboard/"
)
model.learn(total_timesteps=500_000, callback=eval_callback, progress_bar=True)
return model
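Before trusting a learned policy, it is worth benchmarking it against the classical baseline it should beat. Below is a sketch of an (s, S) reorder-point simulation on the same cost structure as the environment above; the cost figures mirror the `InventoryEnv` defaults and are assumptions:

```python
from typing import Sequence

def simulate_sS(demand: Sequence[int], s: int, S: int,
                holding: float = 0.5, stockout: float = 5.0,
                order_fee: float = 50.0, lead_time: int = 3) -> float:
    """Total cost of an (s, S) policy: when inventory position drops below s, order up to S."""
    stock, pending, total = S, [], 0.0
    for t, d in enumerate(demand):
        # Receive orders whose lead time has elapsed
        stock += sum(q for q, day in pending if day <= t)
        pending = [(q, day) for q, day in pending if day > t]
        # Inventory position = on hand + in transit
        position = stock + sum(q for q, _ in pending)
        if position < s:
            pending.append((S - position, t + lead_time))
            total += order_fee
        short = max(0, d - stock)
        stock = max(0, stock - d)
        total += stock * holding + short * stockout
    return total
```

If the PPO agent cannot beat a well-tuned (s, S) policy on held-out demand traces, the extra complexity of RL is not yet paying for itself.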
Warehouse Automation: AI in Modern Distribution Centers
Warehouse automation is not just about physical robots. AI is transforming every aspect of warehouse operations: from product slotting optimization and pick path planning, to automated quality control and dynamic workforce management.
Intelligent Warehouse Technology Stack (2025)
| Layer | Technology | Function | Typical ROI |
|---|---|---|---|
| Physical | AMR (Autonomous Mobile Robots) | Move bins/shelves to operators | 30-40% picking productivity |
| Physical | Robotic arms with computer vision | Pick-and-place, depalletizing | 24/7 operations, -60% errors |
| Software | AI-powered WMS (Manhattan, Blue Yonder) | Operations orchestration, task interleaving | 15-25% throughput improvement |
| Software | ML Slotting Optimization | Place fast-moving items closer to output | 20% reduction in pick distance |
| Software | Computer Vision QC | Verify dimensions, damages, labels | 99.5% accuracy vs 96% human |
| Data | Warehouse Digital Twin | Layout simulation and optimization | Reduces redesign time by 70% |
Pick Path Optimization
A picker collecting 20 items in an unoptimized warehouse walks an average of 1.5-2.5 km per mission. With TSP-based heuristics, the path shrinks by 20-30%, delivering significant time and operational cost savings.
"""
Pick Path Optimization for aisle-based warehouse layouts.
Algorithms: S-shape routing + Nearest Neighbor heuristic
"""
from dataclasses import dataclass
from typing import List, Tuple, Dict
import math
@dataclass
class Location:
"""A slot position in the warehouse."""
aisle: int # Aisle number (1-N)
bay: int # Position along the aisle (1-M)
level: int # Shelf height (0=floor)
@dataclass
class PickItem:
"""Item to pick."""
sku_id: str
location: Location
quantity: int
def manhattan_distance(loc1: Location, loc2: Location,
aisle_width: float = 3.0,
bay_depth: float = 1.2) -> float:
"""Manhattan distance between two warehouse positions."""
if loc1.aisle == loc2.aisle:
return abs(loc1.bay - loc2.bay) * bay_depth
aisle_dist = abs(loc1.aisle - loc2.aisle) * aisle_width
max_bay = max(loc1.bay, loc2.bay)
exit_dist = min(loc1.bay, max_bay - loc1.bay + 1) * bay_depth
entry_dist = min(loc2.bay, max_bay - loc2.bay + 1) * bay_depth
return aisle_dist + exit_dist + entry_dist
def s_shape_routing(items: List[PickItem]) -> List[PickItem]:
"""
S-Shape routing: traverse aisles alternately (forward-backward).
Best for missions with many items spread across aisles.
"""
by_aisle: Dict[int, List[PickItem]] = {}
for item in items:
by_aisle.setdefault(item.location.aisle, []).append(item)
route = []
for i, aisle in enumerate(sorted(by_aisle.keys())):
aisle_items = sorted(by_aisle[aisle], key=lambda x: x.location.bay)
if i % 2 == 0:
route.extend(aisle_items)
else:
route.extend(reversed(aisle_items))
return route
def nearest_neighbor_routing(items: List[PickItem],
start: Location = None) -> List[PickItem]:
"""
Nearest Neighbor: always pick the closest remaining item.
Best for sparse missions with few items.
"""
if not items:
return []
current = start or Location(aisle=1, bay=1, level=0)
remaining = list(items)
route = []
while remaining:
nearest = min(remaining,
key=lambda x: manhattan_distance(current, x.location))
route.append(nearest)
current = nearest.location
remaining.remove(nearest)
return route
def optimize_pick_mission(items: List[PickItem]) -> Tuple[List[PickItem], float]:
"""
Selects the best routing strategy based on mission size.
- Small missions (<= 10 items): Nearest Neighbor
- Large missions (> 10 items): S-Shape
"""
route = nearest_neighbor_routing(items) if len(items) <= 10 else s_shape_routing(items)
total_distance = 0.0
current = Location(aisle=1, bay=1, level=0)
for item in route:
total_distance += manhattan_distance(current, item.location)
current = item.location
total_distance += manhattan_distance(current, Location(aisle=1, bay=1, level=0))
return route, total_distance
Last-Mile Delivery: Optimizing the Final Step
Last-mile delivery is the most expensive and complex phase of the supply chain: it accounts for 28-40% of total delivery cost, yet it is the most visible to the end customer. In Italian urban contexts, the challenge is amplified by restricted traffic zones (ZTL), congestion, difficult parking, and the fragmentation of residential destinations.
AI Technologies for Last-Mile Delivery in 2025
| Technology | Maturity | Cost Reduction | Limitations |
|---|---|---|---|
| AI route optimization | Mature, widespread | 10-20% | Data quality dependent |
| Dynamic re-routing | Mature | 5-10% | Driver app integration |
| Delivery drones | Pilot, limited | Potential 40% | Regulations, payload, weather |
| Delivery robots | Experimental (IT) | Potential 60% | Infrastructure, regulation |
| Micro-fulfillment centers | Growing | 15-30% | Urban real estate costs |
Italian Case Studies: How Companies Are Using AI in Logistics
The Italian context presents specific challenges that make AI adoption in logistics both more necessary and more complex: uneven road infrastructure, a strong SME presence with fragmented volumes, marked seasonality (tourism, agriculture, fashion), and a last-minute ordering culture that strains planning systems.
Amazon Italy: The Most Advanced Automation Ecosystem
Amazon has invested heavily in Italy: distribution centers in Castel San Giovanni (PC), Vercelli, Passo Corese (RI), Castelguglielmo (RO) and sorting hubs are laboratories of logistics innovation. Key features:
- Kiva/Sparrow robots: mobile shelving that moves toward operators, virtually eliminating walking. Picking productivity increases 200-300%.
- Anticipatory shipping: ML algorithms pre-position items likely to be ordered in the next week at the geographically closest warehouse to target customers.
- Dynamic routing for Delivery Service Partners: algorithms adapt in real time to traffic, weather, and failed delivery attempts.
- Computer vision QC: AI cameras verify every outbound package, detecting damage and order discrepancies in milliseconds.
Poste Italiane: Digital Transformation of a Historic Operator
Poste Italiane handles 60 million deliveries per year with a network of over 35,000 letter carriers and 13,000 post offices. The digital transformation of Poste's logistics has three main axes:
- SDA Express Courier: ML-based routing system for courier route optimization, integrated with TomTom WEBFLEET for real-time tracking.
- Demand peak management: predictive algorithms that anticipate e-commerce volumes during Black Friday and the Christmas period, enabling proactive staffing and fleet scaling.
- Smart locker network (Punto Poste): AI for geographic distribution optimization and usage rate prediction.
GLS Italy: Route Intelligence for B2B
GLS Group (with a strong Italian presence) has implemented a logistics intelligence platform focused on the B2B segment, where punctuality is critical and contracts include SLAs with penalties. Key innovations:
- Daily dynamic routing: routes are not fixed but recalculated every night based on actual volume, with intraday adjustments for anomalous pickup points.
- Delivery success rate prediction: ML models predict the probability of successful delivery for each address/day, enabling more efficient delivery attempt scheduling.
- ERP customer integration: APIs allowing B2B customers to receive accurate delivery forecasts 48 hours in advance, improving end-customer satisfaction.
Real-Time Supply Chain Visibility and Digital Twins
Real-time visibility is the prerequisite for any form of AI optimization. Without knowing where goods are, what the status of supplier orders is, and what capacity is available in warehouses, any predictive model operates in the dark.
Real-Time Supply Chain Visibility Architecture
| Layer | Technology | Data Collected | Latency |
|---|---|---|---|
| Collection | IoT (GPS, RFID, temperature/humidity sensors) | Location, environmental conditions | 1-30 seconds |
| Streaming | Apache Kafka + Flink | Event stream from all touchpoints | < 1 second |
| Processing | ML anomaly detection | ETA deviations, proactive alerts | 1-5 seconds |
| Visualization | Control tower (Databricks/Snowflake) | Unified operational dashboard | 5-30 seconds |
| Simulation | Digital Twin | Virtual replica of the supply chain | Batch (nightly) |
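The "ML anomaly detection" layer can start far simpler than deep learning: a z-score over ETA deviations across the active fleet already surfaces outliers. A minimal sketch (the 2.5-sigma threshold is an assumption, to be tuned against historical alert precision):

```python
import statistics
from typing import List

def eta_alerts(deviations_min: List[float], z_threshold: float = 2.5) -> List[int]:
    """Indices of shipments whose ETA deviation is anomalous vs the fleet baseline."""
    mu = statistics.mean(deviations_min)
    sigma = statistics.pstdev(deviations_min)
    if sigma == 0:
        return []  # no variability, nothing to flag
    return [i for i, d in enumerate(deviations_min)
            if abs(d - mu) / sigma > z_threshold]

print(eta_alerts([2, -1, 3, 0, 1, 55, -2, 2]))  # [5]: only the 55-minute delay is flagged
```

In production, the baseline would be computed per lane or per carrier rather than over the whole fleet, since deviation distributions differ widely between routes.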
Carbon Footprint Optimization
With the approaching deadlines of the CSRD (Corporate Sustainability Reporting Directive), measuring and reducing logistics emissions has become a business priority, not just an ethical one. Companies subject to CSRD must report Scope 3 emissions (which include logistics) from 2025 onwards.
AI contributes in three concrete ways to reducing logistics carbon footprint:
- Load consolidation: ML algorithms maximize vehicle fill rates, reducing empty miles, which represent an average of 20-25% of freight traffic in Europe.
- Modal shift: multi-modal optimization that prefers rail and coastal shipping when delivery times allow.
- Eco-routing: computing routes that minimize CO2 emissions instead of just distance, accounting for elevation profiles and traffic conditions.
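The eco-routing idea reduces to comparing candidate routes on estimated emissions rather than distance alone. A back-of-the-envelope sketch using the commonly cited diesel emission factor of roughly 2.68 kg CO2 per liter (the consumption figures are assumptions illustrating a flat-vs-hilly trade-off):

```python
def route_co2_kg(distance_km: float, liters_per_100km: float,
                 co2_per_liter: float = 2.68) -> float:
    """Estimated route CO2: fuel burned times the diesel emission factor (~2.68 kg/L)."""
    return distance_km * liters_per_100km / 100 * co2_per_liter

# A slightly longer flat route can beat a shorter hilly one on emissions
flat = route_co2_kg(52, 24.0)   # assumed consumption on flat terrain
hilly = route_co2_kg(47, 29.0)  # assumed consumption with climbs
print(round(flat, 1), round(hilly, 1))  # 33.4 36.5
```

The longer flat route wins on CO2, a choice a pure shortest-distance objective would never make.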
Best Practices and Anti-Patterns in AI Logistics
Anti-Patterns to Avoid
- Optimizing in silos: optimizing routing without considering warehouse availability, or vice versa, leads to locally optimal but globally suboptimal solutions.
- Ignoring real operational constraints: time windows, customer operating hours, ZTL restrictions, vehicle axle weight limits. A model that does not know these constraints generates unusable solutions.
- Historical data without proper seasonality handling: training a demand forecasting model on data that includes anomalous periods (COVID, chip crisis, August shutdown) without adequate preprocessing produces biased forecasts.
- No post-deployment monitoring: demand patterns change, road networks change, customers change. An unmonitored model silently degrades.
- Big bang implementation: do not replace all logistics processes with AI at once. Start with a high-ROI use case, demonstrate value, then scale.
Best Practices for AI Implementation in Logistics
- Data quality first: before training any model, ensure that customer location data, vehicle dimensions, warehouse capacity, and demand history are clean and consistent.
- Hybrid approach: combine business rules (planner expertise) with AI. Pure ML models often violate constraints that a human planner would respect instinctively.
- Explainability for decision makers: logistics managers must understand why the system suggests a route or a reorder. Use SHAP values and natural language explanations.
- Graceful fallback: when the model is uncertain (low confidence), revert to heuristic rules rather than emitting unreliable predictions.
- Rigorous ROI measurement: define baseline metrics before go-live (cost per km, fill rate, OTIF, stockout rate) and measure the delta every quarter.
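The graceful-fallback practice can be wired in a few lines: gate on model confidence and revert to a heuristic when the model is unsure. A sketch, with the confidence measure (relative prediction-interval width) and the 0.5 threshold as assumptions:

```python
from typing import Tuple

def forecast_with_fallback(ml_forecast: float,
                           prediction_interval_width: float,
                           heuristic_forecast: float,
                           max_relative_width: float = 0.5) -> Tuple[float, str]:
    """Use the ML forecast only when its prediction interval is tight enough."""
    if ml_forecast > 0 and prediction_interval_width / ml_forecast <= max_relative_width:
        return ml_forecast, "ml"
    # Fallback heuristic, e.g. a last-4-weeks moving average
    return heuristic_forecast, "heuristic"

print(forecast_with_fallback(120.0, 30.0, 100.0))  # (120.0, 'ml')
print(forecast_with_fallback(120.0, 90.0, 100.0))  # (100.0, 'heuristic')
```

Logging which path was taken also gives a free monitoring signal: a rising fallback rate is an early warning of model drift.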
AI Adoption Roadmap for SMBs
Three-Year AI in Logistics Roadmap
| Phase | Timeline | Initiatives | Investment (EUR) | Expected ROI |
|---|---|---|---|---|
| Foundation | Year 1 | Data quality, modern WMS, basic route optimization, statistical demand forecasting | 50K - 200K | 15-25% |
| Intelligence | Year 2 | ML demand forecasting, advanced VRPTW, inventory optimization, real-time tracking | 150K - 500K | 25-40% |
| Automation | Year 3 | AMR warehouse, autonomous planning, digital twin, AI carbon reporting | 300K - 2M | 40-60% |
Cross-Series Connections
- MLOps for Business: how to deploy demand forecasting and routing models to production with MLflow and CI/CD pipelines.
- LLMs in Business: how to use Large Language Models to build conversational supply chain control towers and automated reports.
- Vector Database Enterprise: how to use pgvector and Pinecone for semantic search over supplier documentation and logistics audit trails.
- Data Governance: CSRD compliance for Scope 3 emissions reporting in logistics.
Conclusions
AI in logistics is no longer a laboratory experiment: it is an operational reality that the most competitive companies are already leveraging to build structural advantages. The Vehicle Routing Problem solved with OR-Tools, demand forecasting with LightGBM and TFT, inventory optimization with Reinforcement Learning, physical warehouse automation with AMRs and computer vision: each piece of this puzzle contributes to a more efficient, sustainable and resilient supply chain.
For Italian SMBs, the good news is that it is not necessary to tackle everything at once. The three-phase roadmap presented in this article allows starting with contained investments (50-200K EUR in year one) and demonstrating concrete ROI before scaling. Italy's PNRR Transizione 5.0, with its 12.7 billion euros allocated (of which only 1.7 billion had been used as of early 2026), offers significant tax incentives for investments in digitalization and automation: an opportunity Italian logistics companies cannot afford to ignore.
In the next article of the series we explore LLMs in Business: how to build enterprise RAG systems for internal documentation, fine-tuning on proprietary data, and guardrails to ensure safe and compliant responses in critical business contexts.