ARK Strategy

The Power of an M5 Mac for Local LLMs

Or, Why I Actually Feel Like a Secret Agent Running an LLM Locally on My MacBook Pro

Alex Randall Kittredge
Nov 25, 2025

When the new Apple M5 chips hit the market, I didn’t just feel the buzz of a fresh silicon generation… I felt an opportunity to put the future of large language modeling back into my own hands.

Apple’s claims of “unparalleled GPU density and energy efficiency” for the M5 were bold, but the real test was whether I could run a 20‑billion‑parameter model locally on my own laptop, without the latency of the cloud or the cost of a GPU farm. The answer? Absolutely, yes.
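
To make that concrete: on Apple silicon, “running an LLM locally” can be a few lines of Python. Here’s a minimal sketch using llama-cpp-python, which picks up the Metal GPU backend on Macs. The model path is a placeholder for whatever 4‑bit quantized GGUF you’ve downloaded, not the exact setup I used.

```python
from llama_cpp import Llama

# Load a quantized model entirely into unified memory.
# "models/my-20b-q4_k_m.gguf" is a placeholder path; any 4-bit GGUF works.
llm = Llama(
    model_path="models/my-20b-q4_k_m.gguf",
    n_gpu_layers=-1,   # offload every layer to the Metal GPU
    n_ctx=4096,        # context window
    verbose=False,
)

out = llm("Explain unified memory in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```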


1. Why an M5 Mac?

Apple silicon had already shown that a GPU sharing memory with the CPU can match, and in many cases exceed, the performance of traditional CPUs for many workloads. The M5 takes that a step further:

  • GPU cores: 10 cores, which means more parallelism for matrix‑multiplication‑heavy workloads.

  • Unified memory architecture: 24 GB of high‑bandwidth RAM in the same package as the CPU and GPU, eliminating the latency of shuttling data between them.

  • Power envelope: 15 W at idle, 90 W under load – you get workstation‑grade performance without the heat or noise of a desktop.

For a 20 B‑parameter model (roughly 40 GB of float‑16 weights, or around 10 GB once quantized to 4 bits so it fits in 24 GB of unified memory), the M5’s memory bandwidth is a game‑changer. It also means you can keep your entire workflow local, from data ingestion to inference, with no external API calls.
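
The arithmetic behind that is worth spelling out. A quick back‑of‑the‑envelope sketch (pure arithmetic on the parameter count, nothing measured):

```python
# Weight memory for a 20B-parameter model at common precisions.
params = 20e9

for precision, bytes_per_param in [("float16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gb = params * bytes_per_param / 1e9
    print(f"{precision}: ~{gb:.0f} GB of weights")

# float16: ~40 GB  -> does not fit in 24 GB of unified memory
# int8:    ~20 GB  -> tight once you add the KV cache and the OS
# int4:    ~10 GB  -> fits with room to spare, hence 4-bit quants
```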


2. Getting the Gear
