
Why Your Next Home Server Should Be a Mac Mini or Mac Studio

Real wattmeter numbers from a Mac Studio M1 Max running 25 Docker containers and local LLM inference. 12W average, 50W peak. Mac Mini should be similar. Under €40/year in Germany.

I plugged a wattmeter into my Mac Studio last Friday morning. Left it running over the weekend. Three days of normal use: 25 Docker containers, some Ollama inference, a couple of benchmark runs.

Monday afternoon I checked the reading. 0.921 kWh consumed over 78 hours.

That’s 11.8 watts average. For a machine running a full home server stack and doing local AI inference.

The raw numbers

Friday 8am to Monday 2pm. 78 hours of uptime. The meter started at 0.22 kWh and landed at 1.141 kWh.

| Period | Consumption |
|---|---|
| Per hour | 0.0118 kWh (11.8W avg) |
| Per day | 0.283 kWh |
| Per month | 8.50 kWh |
| Per year | 103.4 kWh |
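The projections are straight arithmetic from the two meter readings. A minimal sketch, using the figures above (0.220 kWh at the start, 1.141 kWh after 78 hours):

```python
# Projecting wattmeter readings forward, using the post's numbers.
start_kwh, end_kwh, hours = 0.220, 1.141, 78

consumed = end_kwh - start_kwh           # 0.921 kWh over the window
avg_watts = consumed / hours * 1000      # ~11.8 W average draw
per_day = consumed / hours * 24          # ~0.283 kWh
per_month = per_day * 30                 # ~8.50 kWh
per_year = per_day * 365                 # ~103.4 kWh

print(f"{avg_watts:.1f} W avg, {per_year:.1f} kWh/year")
```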

And the costs:

| Period | Germany (€0.38/kWh) | US ($0.16/kWh) |
|---|---|---|
| Monthly | €3.23 | $1.36 |
| Yearly | €39.29 | $16.54 |

Under €40 a year in Germany. Where electricity prices are among the highest in Europe.
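If you want to plug in your own tariff, the cost math is one multiplication. A sketch using the measured 8.50 kWh/month and 103.4 kWh/year; swap in your local rate:

```python
# Electricity cost at the measured consumption, for two tariffs.
monthly_kwh, yearly_kwh = 8.50, 103.4  # from the 11.8 W average above

for label, rate in [("Germany €/kWh", 0.38), ("US $/kWh", 0.16)]:
    print(f"{label} {rate}: "
          f"{monthly_kwh * rate:.2f}/month, {yearly_kwh * rate:.2f}/year")
```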

What’s actually running

This isn’t some idle box sitting in a corner. The Mac Studio M1 Max with 64GB unified memory is running:

  • 25 Docker containers via OrbStack (Immich, Matrix/Synapse, Element, Paperless-ngx, Caddy, AdGuard Home, Homepage, Dockge, Open WebUI, OpenClaw, and more)
  • Ollama running natively with Metal GPU acceleration

The external HDDs (two 6TB drives for Time Machine and WORM backup) were disconnected during this measurement. I measure and optimize them separately, scheduling low-power daily backup windows instead of keeping spinning disks alive 24/7. The numbers above are pure Mac Studio.

During the measurement window I ran local LLM inference sessions and even pushed some benchmark workloads through it. Peak draw during inference: 50 watts. That’s the maximum I saw. Fifty.

How that compares

This is where it gets absurd.

vs. a NAS

A Synology DS923+ with four drives idles around 30-35W. The DS1522+ sits at roughly 23W in standby with drives spinning. Most of that power goes to keeping the drives alive.

A NAS does one thing well: reliable file storage with RAID. That’s a different use case than a general-purpose home server. But if you’re comparing power draw, the spinning disks alone put a NAS above the Mac Studio’s average.

| Device | Idle/Avg power | What it does |
|---|---|---|
| Synology DS923+ (4 drives) | ~30-35W | File storage, RAID |
| Synology DS1522+ | ~23W | File storage, RAID |
| Mac Studio M1 Max | 11.8W avg | 25 containers, AI inference, photo ML |

vs. a Bose 5.1 system doing literally nothing

I measured our ancient Bose 5.1 entertainment system connected to the TV with the same wattmeter. Powered off, not playing anything, just sitting there in standby. 30 watts. A surround sound system on standby draws more than twice what my Mac Studio averages while running 25 containers and local AI.

vs. an AMD mini PC (no GPU)

An AMD Ryzen mini PC is the closest x86 competitor on power efficiency. A Beelink SER8 with a Ryzen 7 8845HS idles at 7-10W at the wall running Ubuntu. Other mini PCs in this class (Minisforum UM890, Beelink SER5) land in the same 6-10W range. That’s comparable to the Mac Studio at idle.

The trade-off: no discrete GPU means no meaningful local LLM inference. The integrated Radeon graphics on these chips can technically run small models, but expect single-digit tok/s on anything larger than 7B. For Docker workloads, Pi-hole, reverse proxies, and media serving, these mini PCs are hard to beat on watts-per-dollar.

vs. an AMD desktop running LLMs

I haven’t measured an AMD system myself. The numbers below come from published specs and community measurements.

If you want to run local LLMs on x86 hardware, you need a discrete GPU. An RTX 4090 pulls up to 412W during inference. An RTX 3090 is rated for 350-390W. That’s just the GPU. Add the CPU, motherboard, RAM, and PSU overhead.

At the wall, a Ryzen tower with an RTX 4090 or 3090 running headless on Linux idles around 50-70W with proper power management (GPU in low-power state, no display connected). Without power management tuning or with a monitor plugged in, that climbs to 80-130W. The GPU alone idles at 10-20W (RTX 4090) or 20-30W (RTX 3090) on headless Linux.

Nobody runs inference 24/7 though. A local AI assistant on a home server is more like 5% active inference, 95% idle. Here’s what that looks like in practice, all numbers at the wall:

| Setup | Idle | Inference | Realistic avg (5% inference) | Annual cost (DE) |
|---|---|---|---|---|
| AMD + RTX 4090 tower | ~60W | ~450W | ~80W | €265 |
| AMD + RTX 3090 tower | ~65W | ~400W | ~82W | €271 |
| AMD mini PC (no GPU) | ~8W | N/A | ~8W | €27 |
| Mac Studio M1 Max | ~12W | ~50W | ~14W | €46 |

The AMD tower idle numbers assume a headless Linux setup with power management configured. Plug in a monitor or skip the power tuning and add 20-40W to the idle column.
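The "realistic avg" column is a duty-cycle weighted mean: 95% of the time at idle draw, 5% at inference draw. A sketch with the at-the-wall figures quoted above:

```python
# Duty-cycle weighted average power and the resulting annual cost.
def realistic_avg(idle_w, inference_w, duty=0.05):
    """Weighted mean: (1 - duty) at idle, duty fraction at inference."""
    return idle_w * (1 - duty) + inference_w * duty

def annual_cost_eur(avg_w, rate=0.38, hours=8760):
    """Watts -> kWh/year -> euros at the German tariff."""
    return avg_w / 1000 * hours * rate

mac = realistic_avg(12, 50)       # ~13.9 W
tower = realistic_avg(60, 450)    # ~79.5 W
print(f"Mac Studio:     {mac:.1f} W, ~€{annual_cost_eur(mac):.0f}/year")
print(f"RTX 4090 tower: {tower:.1f} W, ~€{annual_cost_eur(tower):.0f}/year")
```

Bump `duty` to match your own usage; at 20% active inference the tower climbs to ~138W average while the Mac Studio still sits under 20W.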

Why Apple Silicon is this efficient

Unified memory. That’s the answer.

On an x86 system, model weights have to fit in the GPU’s VRAM. Data shuttles between system RAM and GPU memory across the PCIe bus. The discrete GPU has its own power delivery and cooling, plus a constant idle draw. Even when the model fits in VRAM, the whole chain burns power.

On Apple Silicon, the CPU, GPU, and Neural Engine share the same memory pool on the same chip package. No PCIe bus, no separate VRAM, no discrete GPU spinning fans. Ollama loads a model into unified memory and the GPU cores just access it directly. Metal acceleration handles the matrix math, and the whole thing sips power while doing it.

An RTX 4090 has roughly 2.5x the memory bandwidth of an M1 Max and generates tokens faster. But that speed costs 9x the power at peak and 4-5x at idle. A 50W peak for local LLM inference is not a typo.

This applies to Mac Mini too

I measured a Mac Studio M1 Max, but the power story is the same across Apple Silicon. A Mac Mini M4 idles at around 5-7W and the M4 Pro with 48GB would land in the same ballpark as my numbers under similar workloads. The M2 and M3 generations sit somewhere in between.

If you’re considering a Mac Mini M1, M2, M3, or M4 as a home server, expect similar efficiency. The Mac Studio gives you more unified memory (up to 192GB on Ultra configurations) and more Thunderbolt ports, but the power characteristics of the chip architecture are shared across the lineup. Apple’s own specs page lists the Mac Studio M1 Max at 17W idle, which lines up with what I measured under light container workloads.

The math that changes the conversation

The homelab community has been sleeping on Apple Silicon for servers. I get it. macOS isn’t Linux. You can’t stick four drives in it. PCIe expansion doesn’t exist.

But look at the numbers. If your server runs 24/7 (and mine does), power consumption is a recurring cost that compounds over years.

| 5-year electricity cost (5% inference) | Germany | US |
|---|---|---|
| Synology DS923+ | ~€500 | ~$210 |
| AMD + RTX 4090 tower | ~€1,325 | ~$560 |
| AMD mini PC (no GPU) | ~€135 | ~$56 |
| Mac Studio M1 Max | ~€231 | ~$97 |

That €1,100 difference over an AMD tower in Germany buys a decent chunk of used Mac Studio.
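The 5-year column is just the realistic average carried forward: watts to kWh/year to tariff, times five. A sketch using the ~79.5W and ~13.9W duty-cycle averages from earlier (small drift from the printed figures is rounding):

```python
# 5-year electricity cost from an average wall draw.
def five_year_cost(avg_w, rate=0.38):
    kwh_per_year = avg_w / 1000 * 8760  # watts -> kWh over a year
    return kwh_per_year * rate * 5

tower = five_year_cost(79.5)   # ~€1,323
mac = five_year_cost(13.9)     # ~€231
print(f"RTX 4090 tower: €{tower:.0f}")
print(f"Mac Studio:     €{mac:.0f}")
print(f"Difference:     €{tower - mac:.0f}")
```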

What I’d still like to measure

This was a 78-hour window. I want a full month of data to see how inference-heavy weeks compare to quiet ones. The external HDDs are measured separately (3.5” drives typically draw 3-6W each at idle, which is why I schedule backup windows instead of keeping them spinning).
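The scheduling logic is easy to quantify. A sketch assuming two drives at ~5W idle each (the middle of the 3-6W range above), comparing always-spinning against a one-hour nightly backup window; actual draw during writes is somewhat higher than idle, so treat this as a floor:

```python
# Always-on spinning disks vs a scheduled one-hour daily window.
drives, idle_w, rate = 2, 5, 0.38  # assumed: two drives, ~5 W each, DE tariff

always_on_kwh = drives * idle_w / 1000 * 8760      # ~87.6 kWh/year
one_hour_daily_kwh = drives * idle_w / 1000 * 365  # ~3.65 kWh/year

print(f"Always spinning: {always_on_kwh:.1f} kWh/year "
      f"(~€{always_on_kwh * rate:.0f})")
print(f"1h/day window:   {one_hour_daily_kwh:.1f} kWh/year "
      f"(~€{one_hour_daily_kwh * rate:.0f})")
```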

I’m going to leave the wattmeter connected. More data to come.

Bottom line

A NAS draws more power serving files than the Mac Studio draws running 25 services and local AI. An AMD mini PC matches the Mac on idle power but can’t do inference. An AMD tower running LLMs costs 5-6x more in electricity per year in Germany, even with power management configured. The RTX 4090 is faster at inference, but that speed costs 9x the power.

For €3 a month in one of the most expensive electricity markets on the planet, I’m running a full home server, a photo library with ML indexing, a document management system, a private messenger, and local LLM inference.

I don’t know what else you’d want a home server to do.

We invested the time to perfect the setup. So you don't have to.

Check out famstack.dev →
