I plugged a wattmeter into my Mac Studio M1 Max and left it running for 16 days. Normal use: 25 Docker containers always on, Ollama inference sessions throughout the day, a few benchmark runs mixed in.
4.574 kWh over 393 hours. That’s 11.6 watts average.
An earlier 78-hour measurement (the first weekend) had landed at 11.8W. The longer window confirmed the number.
# The raw numbers
March 6, 08:54 to March 22, 17:47. Meter started at 0.240 kWh, ended at 4.814 kWh.
| Period | Consumption |
|---|---|
| Per hour | 0.0116 kWh (11.6W avg) |
| Per day | 0.279 kWh |
| Per month | 8.5 kWh |
| Per year | 101.6 kWh |
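The projections above follow directly from the two meter readings and timestamps; a quick sketch of the arithmetic (the year is assumed, only the elapsed time matters):

```python
from datetime import datetime

# Meter readings and timestamps from the measurement window
start, end = datetime(2025, 3, 6, 8, 54), datetime(2025, 3, 22, 17, 47)
hours = (end - start).total_seconds() / 3600   # ~392.9 h ("393 hours")
kwh = 4.814 - 0.240                            # 4.574 kWh consumed

avg_watts = kwh / hours * 1000                 # ~11.6 W
per_day_kwh = avg_watts * 24 / 1000            # ~0.279 kWh
per_year_kwh = avg_watts * 8760 / 1000         # ~102 kWh (the table's
                                               # 101.6 uses the rounded 11.6 W)
```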
Electricity costs:
| Period | Germany (€0.38/kWh) | US ($0.16/kWh) |
|---|---|---|
| Monthly | €3.22 | $1.36 |
| Yearly | €38.61 | $16.26 |
Under €40 a year in Germany, where electricity prices are among the highest in Europe.
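The cost figures are the yearly consumption multiplied by the local rate, for example:

```python
# Project the measured yearly consumption into cost at local rates
per_year_kwh = 101.6
rate_de, rate_us = 0.38, 0.16        # €/kWh and $/kWh

yearly_de = per_year_kwh * rate_de   # ≈ €38.61
yearly_us = per_year_kwh * rate_us   # ≈ $16.26
monthly_de = yearly_de / 12          # ≈ €3.22
```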
# What’s actually running
This isn’t an idle box sitting in a corner. The Mac Studio M1 Max with 64GB unified memory runs:
- 25 Docker containers via OrbStack (Immich, Matrix/Synapse, Element, Paperless-ngx, Caddy, AdGuard Home, Homepage, Dockge, Open WebUI, OpenClaw, and more)
- Ollama running natively with Metal GPU acceleration
Peak draw during LLM inference: 50 watts. That’s the maximum I saw during the entire 16-day window.
# Storage: Terramaster DAS
The numbers above are pure Mac Studio. Storage is separate. I run two 6TB WD Red drives in a Terramaster DAS enclosure for Time Machine and WORM backups, measured independently with the same wattmeter:
| State | Power draw |
|---|---|
| Disks mounted | 9W |
| Disks unmounted (standby) | 2.5W |
I schedule daily backup windows instead of keeping spinning disks alive 24/7. Most of the day the Terramaster sits at 2.5W.
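A minimal sketch of how such a backup window could be scripted on macOS, using the standard `diskutil` and `tmutil` tools. The volume name `TMBackup` is a placeholder, and the details (scheduling via launchd, error handling) are left out:

```python
import subprocess

def backup_window(volume: str = "TMBackup") -> None:
    """Mount the DAS volume, run a Time Machine backup, unmount again
    so the drives can spin down for the rest of the day."""
    subprocess.run(["diskutil", "mount", volume], check=True)
    try:
        # --block makes tmutil wait until the backup completes
        subprocess.run(["tmutil", "startbackup", "--block"], check=True)
    finally:
        subprocess.run(["diskutil", "unmount", volume], check=True)
```

Run once a day from a launchd job (or cron), the disks stay at 2.5W standby except during the backup itself.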
Full stack with storage:
| Configuration | Average draw | Yearly cost (DE) | Yearly cost (US) |
|---|---|---|---|
| Mac Studio alone | 11.6W | €38.61 | $16.26 |
| + Terramaster (disks mounted) | 20.6W | €68.58 | $28.88 |
| + Terramaster (standby) | 14.1W | €46.94 | $19.76 |
# How that compares
## vs. a NAS
A Synology DS923+ with four drives idles around 30-35W. The DS1522+ sits at roughly 23W in standby with drives spinning. Most of that power goes to keeping the drives alive.
A NAS does one thing well: reliable file storage with RAID. Different use case than a general-purpose home server. But the power comparison is notable: the Mac Studio plus its Terramaster DAS with both disks mounted draws 20.6W while running 25 containers, AI inference, and photo ML indexing. The Synology draws 30-35W serving files.
| Device | Power | What it does |
|---|---|---|
| Synology DS923+ (4 drives) | ~30-35W | File storage, RAID |
| Synology DS1522+ | ~23W | File storage, RAID |
| Mac Studio + Terramaster DAS | 20.6W | 25 containers, AI, photo ML, 2x6TB backup |
| Mac Studio (compute only) | 11.6W | 25 containers, AI, photo ML |
## vs. a Bose 5.1 doing literally nothing
I measured our ancient Bose 5.1 entertainment system with the same wattmeter. Powered off. Not playing anything. Sitting in standby. 30 watts. A surround sound system on standby draws more than the Mac Studio averages while running 25 containers and local AI.
## vs. an AMD mini PC (no GPU)
An AMD Ryzen mini PC is the closest x86 competitor on power efficiency. A Beelink SER8 with a Ryzen 7 8845HS idles at 7-10W at the wall running Ubuntu. Other mini PCs in this class (Minisforum UM890, Beelink SER5) land in the same 6-10W range. Comparable to the Mac Studio at idle.
The trade-off: no discrete GPU means no meaningful local LLM inference. The integrated Radeon graphics can technically run small models, but expect single-digit tok/s on anything larger than 7B. For Docker workloads, Pi-hole, reverse proxies, and media serving, these mini PCs are hard to beat on watts-per-dollar.
## vs. an AMD desktop running LLMs
I haven’t measured an AMD system myself. The numbers below come from published specs and community measurements.
Running local LLMs on x86 means a discrete GPU. An RTX 4090 pulls up to 412W during inference. An RTX 3090 is rated for 350-390W. That’s the GPU alone, before CPU, motherboard, RAM, and PSU overhead.
At the wall, a Ryzen tower with an RTX 4090 or 3090 running headless on Linux idles around 50-70W with power management configured. Without tuning or with a monitor plugged in, that climbs to 80-130W. The GPU alone idles at 10-20W (RTX 4090) or 20-30W (RTX 3090) on headless Linux.
The RTX 4090 has roughly 2.5x the memory bandwidth of an M1 Max and generates tokens faster. That speed comes at 450W peak vs 50W, and 50-70W idle vs 12W.
| Setup | Idle (at wall) | Peak inference (at wall) |
|---|---|---|
| AMD + RTX 4090 tower (headless, tuned) | ~60W | ~450W |
| AMD + RTX 3090 tower (headless, tuned) | ~65W | ~400W |
| AMD mini PC (no GPU) | ~8W | N/A |
| Mac Studio M1 Max | ~12W | ~50W |
The AMD tower idle numbers assume headless Linux with power management configured. Plug in a monitor or skip the tuning and add 20-40W to idle.
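Because these machines run 24/7, small idle differences compound. A sketch of the annual idle energy for each setup, using the mid-range idle figures from the table above (the wattages are those table values, not new measurements):

```python
HOURS_PER_YEAR = 8760

def annual_kwh(watts: float) -> float:
    """Annual energy at a constant draw, in kWh."""
    return watts * HOURS_PER_YEAR / 1000

# Mid-range idle figures from the comparison table
setups = {
    "AMD + RTX 4090 tower": 60,
    "AMD + RTX 3090 tower": 65,
    "AMD mini PC (no GPU)": 8,
    "Mac Studio M1 Max": 12,
}
for name, idle_w in setups.items():
    kwh = annual_kwh(idle_w)
    print(f"{name}: {kwh:.0f} kWh/yr, ~€{kwh * 0.38:.0f} at German rates")
```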
# Why Apple Silicon draws so little
On a standard x86 system, the GPU lives on a separate board with its own VRAM, power delivery, and cooling. Model weights have to fit in that VRAM. Data moves between system RAM and GPU memory across the PCIe bus. Even at idle, the discrete GPU draws 10-30W just being present.
On Apple Silicon, the CPU, GPU, and Neural Engine share the same physical memory pool on one chip package. No PCIe transfer overhead, no separate VRAM, no discrete GPU power draw. Ollama loads a model into unified memory and the GPU cores access it directly. Metal handles the matrix math. The entire inference pipeline runs on one SoC.
This applies beyond AI workloads. Docker containers, Immich’s photo ML indexing, file serving, all of it runs on the same low-power chip. There’s no separate component drawing idle power.
The RTX 4090 is faster at raw inference (roughly 2.5x the memory bandwidth of an M1 Max). But that performance costs 9x the peak power and 5x the idle power. I measured the actual inference performance separately in MLX vs llama.cpp on Apple Silicon.
# Mac Mini should draw similar numbers
I measured a Mac Studio M1 Max, but the power characteristics follow from the chip architecture, which is shared across the lineup. A Mac Mini M4 idles at around 5-7W. The M4 Pro with 48GB should land in the same ballpark under similar workloads. Apple’s own specs page lists the Mac Studio M1 Max at 17W idle, which tracks with what I measured under light container load.
The Mac Studio gives you more unified memory (up to 192GB on Ultra configurations) and more Thunderbolt ports, but the power behavior comes from the same architecture. If you’re setting one up, the prepare your Mac as a home server guide covers the base configuration.
# What it costs over time
For a machine running 24/7, electricity is a recurring cost that adds up. Based on 16 days of measured data, projected:
| Setup | Annual kWh | Germany (€0.38) | US ($0.16) |
|---|---|---|---|
| Mac Studio alone | 101.6 | €38.61 | $16.26 |
| Mac Studio + Terramaster | ~180 | ~€69 | ~$29 |
| Synology DS923+ (4 drives) | ~263-307 | ~€100-117 | ~$42-49 |
| AMD + RTX 4090 tower (idle) | ~526 | ~€200 | ~$84 |
The AMD number is idle-only, from published measurements. Any inference load adds to it. The Mac Studio’s 101.6 kWh includes real inference sessions throughout the measurement period.
Over five years, the Mac Studio costs roughly €193 in electricity in Germany. An AMD tower with an RTX 4090 at idle costs roughly €1,000 before factoring in inference workloads. That €800 difference is worth considering when choosing hardware.
# Bottom line
A Mac Studio M1 Max draws 12W on average while running 25 Docker containers and local AI inference, and doubling as a workstation at the same time. Add a Terramaster DAS with two 6TB drives and it’s 21W. Peak during inference: 50W. That’s less power than a Synology NAS draws serving files alone.
For under €40/year in electricity (under €70 with storage), it runs a photo library with ML indexing, a document management system, a private messenger, ad blocking, a reverse proxy, and local LLM inference. In one of the most expensive electricity markets in Europe.
The big picture: You Bought a Mac Mini. Now What? covers what all that electricity actually powers.