The question nobody asks out loud
You're a few months into Flutter. You've built screens, wired up state management, maybe even shipped something. And somewhere along the way someone told you to call dispose() on your controllers. You do it. You don't fully know why. You move on.
Then one day you read that Dart has a garbage collector. And something doesn't add up. If the runtime cleans up memory automatically, why am I manually calling `dispose()`? Why do I need to cancel `StreamSubscription`s? Why does the Flutter team keep warning me about memory leaks?
It's a fair question. And the answer reveals something fundamental about how Dart — and Flutter — actually work at runtime.
First, let's clear something up about C++
If you've read the Flutter Compilation Article in this series, you know that Dart's compiler is partly written in C++. That fact tends to make developers nervous.
C++? That's the language where you manually allocate and free memory. Does that mean my Flutter app is somehow inheriting that risk?
No. And the distinction matters.
Dart's compiler is a tool — a program that runs on your development machine, takes your Dart source code, and produces compiled output. It's a separate process, maintained by the Dart team, that does its job and exits. Once compilation finishes, it steps aside entirely.
Your Flutter app runs inside the Dart runtime — a completely different piece of software, with its own memory model, its own object lifecycle, and its own garbage collector. The language the compiler was written in has no bearing on what happens when your users tap buttons and your app allocates objects.
Think of it this way: a bakery's industrial oven might be made of German steel. That doesn't mean the bread tastes German.
What a garbage collector actually does
Before we talk about leaks, we need to understand what the GC is solving in the first place.
In C or C++, you are responsible for every byte. You call malloc() to request memory from the operating system, you use it, and you call free() when you're done. Forget the free() and that memory sits there — orphaned, accessible to no one, returned to no one — until your process exits. That's a memory leak in the classical sense.
Most modern languages — Java, Python, Go, JavaScript, Dart — introduced garbage collectors to remove this burden. The idea is conceptually simple:
If no part of your running program can reach an object, that object is garbage.
The GC periodically scans the heap, builds a graph of what's reachable from your "roots" (global variables, active stack frames, open closures), marks everything it can reach, and frees everything it can't. You never call free(). You just stop holding references, and eventually the GC notices.
The key word is reachable. Not "in use." Not "useful to you." Just reachable — from somewhere, by something.
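To make "reachable" concrete, here's a minimal plain-Dart sketch — the `Session` class is invented for illustration:

```dart
class Session {
  final String user;
  Session(this.user);
}

void main() {
  var session = Session('alice');
  // `session` is reachable from this stack frame, so the GC must keep it.
  print(session.user);

  session = Session('bob');
  // Nothing references the 'alice' Session anymore. It is unreachable,
  // and some future GC cycle is free to reclaim it — we never "free" it.
  print(session.user);
}
```

Note that nothing here is explicit deallocation: reassigning the variable is enough, because reachability, not intent, is what the GC measures.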
Dart's garbage collector, specifically
Dart uses a generational garbage collector — a design shared with the JVM, V8, CPython, and most other high-performance runtimes. The insight behind it is empirical: most objects die young.
The Widget built during a frame rebuild, the intermediate String produced during JSON parsing, the temporary List created inside a map() call — these are born and discarded in milliseconds. A small number of objects survive those first few collection cycles and become long-lived: your app state, your repositories, your open streams.
Dart's heap is split into two regions that handle these two populations differently.
Young generation — the scavenger
Short-lived objects are born here. Dart uses a two-space scavenger: the young generation is divided into a "from" space and a "to" space. When the young gen fills up, the GC copies all surviving objects from "from" into "to", then declares the entire "from" space dead and resets it. Objects that weren't copied — the vast majority — are gone. No sweeping needed; just a pointer reset.
This collection takes microseconds. It's why creating many small, transient objects in Flutter is completely fine — this is exactly the workload this design optimizes for. setState() rebuilding your widget tree hundreds of times per second is not a problem.
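You can get a feel for how cheap this is with a plain-Dart loop that simulates allocation-heavy rebuilds (the numbers here are illustrative, not a rigorous benchmark):

```dart
void main() {
  final sw = Stopwatch()..start();
  for (var frame = 0; frame < 1000; frame++) {
    // Each iteration allocates ~100 transient strings plus a list that
    // die immediately — exactly the young-gen workload the scavenger
    // reclaims with a pointer reset.
    final labels = List.generate(100, (i) => 'row $i').join(',');
    if (labels.isEmpty) print('unreachable'); // keep the work observable
  }
  print('1000 simulated rebuilds in ${sw.elapsedMilliseconds} ms');
}
```

Hundreds of thousands of short-lived allocations complete in a handful of milliseconds — the scavenger only ever pays for survivors, not for garbage.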
Old generation — mark, sweep, compact
Objects that survive enough young-gen collections get promoted to the old generation. This heap is collected less frequently. The algorithm: mark all reachable objects by tracing the reference graph from roots, sweep away everything unmarked, then compact the survivors to reduce fragmentation. This takes longer — milliseconds instead of microseconds — but happens rarely if your architecture is reasonable.
From your daily perspective as a Flutter developer: you almost never think about this. Widget rebuilds, lists, parsed responses — all young gen, all cheap. The GC handles it invisibly.
So why do we still have memory leaks?
The leak that the GC cannot save you from
Here's the uncomfortable truth: a garbage collector doesn't prevent memory leaks. It prevents a specific *kind* of memory leak.
The GC frees objects that are unreachable. It is completely powerless against objects that are reachable but useless. And in Flutter, we create useless-but-reachable objects all the time — often without realizing it.
Consider this:
```dart
class _MyScreenState extends State<MyScreen> {
  late StreamSubscription _subscription;

  @override
  void initState() {
    super.initState();
    _subscription = someBloc.stream.listen((event) {
      setState(() {
        // update something in this widget
      });
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(/* ... */);
  }

  // ← No dispose() override
}
```

When the user navigates away and this widget is removed from the tree, what happens to `_subscription`?
Nothing. The stream still exists. The stream still holds an internal reference to the listener closure. That closure captures this — the _MyScreenState instance. _MyScreenState holds a reference to the BuildContext, the widget, and everything hanging off it.
From the GC's perspective: every one of those objects is reachable. None of them is garbage. None of them will be collected.
But from your perspective: the screen is gone. The state is dead. That subscription is calling setState() on an unmounted widget — potentially throwing errors, definitely consuming CPU cycles, and keeping an entire object chain alive that should have been freed minutes ago.
That is a Flutter memory leak. Not a dangling pointer — a dangling reference.
The anatomy of common Flutter leaks
StreamSubscription
```dart
// Leaks — listener is never removed
someStream.listen((event) => _update(event));
```

```dart
// Safe
final _sub = someStream.listen((event) => _update(event));

@override
void dispose() {
  _sub.cancel(); // stream drops its reference to your closure
  super.dispose();
}
```

AnimationController
An AnimationController registers itself with the Flutter engine's ticker — the mechanism that drives frame callbacks. If you don't dispose it, it keeps ticking even after the widget it belonged to is gone. Scheduling work for a dead widget on every frame.
```dart
@override
void dispose() {
  _animationController.dispose(); // unregisters from the ticker
  super.dispose();
}
```

ChangeNotifier listeners
```dart
// In initState:
model.addListener(_onChanged);
// model now holds a reference to _onChanged, which captures this

// Without this in dispose(), your State lives as long as the model:
model.removeListener(_onChanged);
```

This one is particularly subtle when your model is a long-lived singleton — an app-level provider, a globally registered service. The model lives forever. It holds your listener. Your listener holds your widget's state. Your widget's state holds everything.
Timers capturing `this`
```dart
Timer.periodic(const Duration(seconds: 1), (_) {
  setState(() => _seconds++); // captures this implicitly
});
```

If this timer isn't stored and cancelled in dispose(), the engine holds a reference to the timer, the timer holds the closure, the closure holds your state. Every second, it fires — setState() on a widget that isn't mounted, potentially crashing, always leaking.
The pattern behind all of them
Every Flutter leak follows the same structure:
- You create an object that needs to call back into your widget (a subscription, listener, timer, or controller)
- That object stores a reference to your callback
- Your callback captures `this` — your `State` instance
- When your widget is removed, nothing severs that reference
- The GC sees a live chain of references and correctly keeps everything alive
`dispose()` is not freeing memory. It is severing references.
When you call _subscription.cancel(), you're asking the stream to drop its internal reference to your listener. When you call _controller.dispose(), you're asking the Flutter engine to remove its reference to the ticker. When you call model.removeListener(...), the model forgets you exist.
Once those back-references are gone, your State is only referenced by Flutter's widget tree — and when you pop the route, that reference disappears too. Now the GC can collect everything, because finally nothing is pointing to it.
The GC was never the problem. The problem is that well-designed, reactive systems deliberately hold references to their subscribers. Your job is to unsubscribe cleanly so that when you leave, everything you brought with you can leave too.
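Putting the whole pattern in one place: a `State` that subscribes, listens, and ticks, and severs every back-reference on the way out. This is a sketch — `someBloc`, `model`, and `_onChanged` stand in for whatever your app actually uses:

```dart
class _MyScreenState extends State<MyScreen>
    with SingleTickerProviderStateMixin {
  late final StreamSubscription _sub;
  late final AnimationController _controller;
  Timer? _timer;

  @override
  void initState() {
    super.initState();
    _sub = someBloc.stream.listen((event) => setState(() {}));
    _controller = AnimationController(
      vsync: this,
      duration: const Duration(milliseconds: 300),
    );
    model.addListener(_onChanged);
    _timer = Timer.periodic(const Duration(seconds: 1), (_) => _onChanged());
  }

  void _onChanged() => setState(() {});

  @override
  void dispose() {
    _sub.cancel();                    // stream forgets your closure
    _controller.dispose();            // ticker unregisters from the engine
    model.removeListener(_onChanged); // model forgets this State
    _timer?.cancel();                 // event loop drops the callback
    super.dispose();
  }

  @override
  Widget build(BuildContext context) => Scaffold(/* ... */);
}
```

Every line in `dispose()` mirrors a line in `initState()` — a useful review habit: each registration should have a visible, symmetric teardown.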
Seeing it in DevTools
Flutter's DevTools Memory tab makes this visible.
Open DevTools while your app is running, navigate to Memory, and start a recording. Navigate into a screen, stay a moment, navigate back. Do this a few times. Then take a heap snapshot.
If your screen was leaking, you'll see its class still listed in the snapshot with a non-zero live instance count — instances that should not exist anymore, still sitting in the old generation, still considered reachable.
Snapshot diffing is particularly useful: take a snapshot before navigating into a screen, navigate in and back out, take another snapshot. The diff shows exactly what was added and not freed. Open and close the route five times and the instance count should stay flat — if it climbs, something held on.
For development, the leak_tracker package (used internally by the Flutter team itself) can automatically detect Disposable objects — widgets, controllers, notifiers — that were never disposed. It integrates with flutter_test for automated coverage, and can be wired into debug builds for runtime warnings.
```dart
// In your test:
testWidgets('screen disposes all controllers', (tester) async {
  await tester.pumpWidget(MyScreen());
  await tester.pumpWidget(SizedBox()); // remove it
  // leak_tracker will report any undisposed objects here
});
```

Bringing it back to compile time
There's a thread running through this entire series. We talked about how the best engineering decisions push runtime errors into compile time — making invalid states unrepresentable before the program ever runs. Dart's null safety was the textbook example.
Memory leaks are one of the few places where this shift hasn't happened yet in most Flutter code. They're runtime errors — invisible, silent, and cumulative. Your app feels fine. Your tests pass. And somewhere in old gen, instance counts climb.
But the ecosystem is moving in the right direction:
- `flutter_hooks` ties subscriptions and controllers to the widget's lifecycle automatically. You declare a `useAnimationController()` and the hook disposes it when the widget is unmounted — no `dispose()` override needed.
- Riverpod and modern BLoC patterns enforce teardown by design, scoping provider lifecycles to routes or widgets rather than leaving them open-ended.
- Upcoming Dart features around structured concurrency aim to make it structurally impossible to forget to cancel an async operation — the same instinct as null safety, applied to lifecycle management.
The direction is always the same: from "something you remember to do" toward "something the structure guarantees."
What comes next
In the next post, we go to the other end of the rendering pipeline: Impeller — Flutter's new rendering engine, and the reason Flutter 3.10 was a much bigger deal than the changelog made it seem.
The Flutter team rewrote their entire rendering stack from scratch. Not because Skia was bad — Skia is an excellent library. But because of one specific runtime problem that no amount of GC tuning or memory discipline could solve: shader compilation jank.
And the solution, as you might now expect, involved moving work from runtime to compile time.