You press a button. A lot happens.
There's a moment every Flutter developer knows — you hit Run in your IDE, or type flutter run in the terminal, and a few seconds later your app is alive on a device. It feels instantaneous. Magical, almost.
It's not magic. It's a pipeline. And understanding that pipeline changes how you think about everything from build times to binary size to crash reports to why hot reload works at all.
Let's walk through it. From the first character of your Dart code to the moment your main() function is called on an Android device.
Your code has two lives
Before we follow the pipeline, we need to talk about something fundamental: your Dart code doesn't compile the same way in debug and release mode. These are not just two versions of the same process — they are structurally different execution strategies.
In debug mode, your code runs with a JIT compiler — Just In Time. The Dart VM (if Dart code were a CD, then the Dart VM would be the CD player — in software terms, this is called a runtime environment) receives a compact binary representation of your code and compiles it to machine code on the fly, function by function, as they're called. This is slower — there's compilation overhead at runtime, observability hooks everywhere, extra safety checks — but it enables something remarkable: hot reload. Because your app is running bytecode, not fully compiled native code, the VM can swap in new bytecode mid-flight without restarting. Your app state survives. Your scroll position survives.
In release mode, your code runs with AOT compilation — Ahead Of Time. Everything is compiled to native machine code before the app ever runs. No VM warmup, no JIT overhead, no bytecode swapping. The result is roughly 2-3x faster execution and dramatically lower startup latency. But hot reload is gone — you compiled everything in advance.
The same source code. Two completely different execution models. This distinction matters for everything that follows.
Stage one: the frontend compiler
When you run flutter build apk, the first thing that runs is not the C++ compiler, not the native toolchain — it's the Dart frontend compiler, also called the CFE (Common Front End).
The CFE is a pure Dart program. It takes your source files — main.dart, all your imports, the Flutter SDK, every package in pub cache — and produces a single, compact binary file called a Kernel snapshot, with the extension .dill.
Kernel is Dart's intermediate representation: a typed, resolved, tree-structured representation of your entire program. All imports have been resolved. All types have been checked. All sugar (extension methods, cascade notation, collection spread) has been desugared into simpler forms. The .dill file knows nothing about ARM or x86 — it's platform-agnostic. It's the program in a form that's easy to analyze and transform before committing to any hardware target.
In debug mode, this is where it ends — the .dill is shipped to the device and the Dart VM takes over from here. In release mode, the Kernel file is an intermediate artifact that feeds the next stage.
Stage two: gen_snapshot
This is where AOT compilation actually happens. gen_snapshot is a program (written in C++, as part of the Dart SDK) that takes the Kernel IR and compiles it all the way down to native machine code.
It does this in several passes:
Type flow analysis — a whole-program analysis that figures out what types can actually flow through each function. This enables aggressive dead code elimination. If your code has a code path that's provably unreachable given the types involved, it gets deleted entirely. This is part of why Flutter release builds are dramatically smaller than debug builds.
Intermediate optimizations — inlining, constant folding, loop unrolling. Similar to what a C++ compiler's optimizer does, applied to Dart's IR.
Code generation — produces actual ARM64 (or x86_64, or ARMv7) assembly, then assembles it into machine code.
The output is an ELF shared library — libapp.so. This is a standard Linux shared library file, the same format used by every native C or C++ library on Android. It contains your compiled Dart code as machine code, the well-known snapshot symbols the engine looks up to find it (_kDartVmSnapshotData, _kDartIsolateSnapshotInstructions, and friends), and a symbol table.
One libapp.so per CPU architecture. For a modern Flutter release build targeting all current Android devices, you typically get three: arm64-v8a, armeabi-v7a, and x86_64.
Stage three: the APK
An APK is a ZIP file. Not metaphorically — literally. Change the extension to .zip, double-click it, and you'll see the contents:
```
my_app.apk
├── META-INF/
│   ├── MANIFEST.MF          ← file hashes for tamper detection
│   └── CERT.RSA             ← your signing certificate
├── AndroidManifest.xml      ← binary-encoded, describes permissions + entry points
├── res/                     ← Android resources (layouts, drawables, strings)
├── assets/
│   └── flutter_assets/      ← your pubspec assets, fonts, images
│       ├── AssetManifest.json
│       └── fonts/
├── lib/
│   ├── arm64-v8a/
│   │   ├── libapp.so        ← your compiled Dart code ← THIS IS YOUR APP
│   │   └── libflutter.so    ← the Flutter engine
│   ├── armeabi-v7a/
│   │   ├── libapp.so
│   │   └── libflutter.so
│   └── x86_64/
│       ├── libapp.so
│       └── libflutter.so
└── classes.dex              ← the Android entry point (tiny Java bootstrapper)
```

Notice what classes.dex is: a small Java bootstrapper. It's generated by Flutter during the build. It contains just enough Java code to tell Android "load libflutter.so, start the Flutter engine, hand control over to Dart." Your actual application logic — every widget, every service, every line of Dart you wrote — is in libapp.so. The Java layer is a formality.
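Because the container is plain ZIP, any ZIP tooling can inspect it. Here's a minimal sketch in Python — the "APK" below is a mock assembled in memory using entry names from the tree above, not a real build artifact; pointing zipfile at an actual app-release.apk works the same way:

```python
import io
import zipfile

# Build a tiny mock "APK" in memory. The entry names mirror the real
# layout; a real APK opens with zipfile.ZipFile in exactly the same way.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as apk:
    apk.writestr("classes.dex", b"")                  # the Java bootstrapper
    apk.writestr("lib/arm64-v8a/libapp.so", b"")      # your compiled Dart code
    apk.writestr("lib/arm64-v8a/libflutter.so", b"")  # the Flutter engine
    apk.writestr("assets/flutter_assets/AssetManifest.json", b"{}")

# Read it back with a plain ZIP reader — no Android tooling involved.
with zipfile.ZipFile(buf) as apk:
    for name in apk.namelist():
        print(name)
```

From the command line, `unzip -l` on a release APK (typically found under build/app/outputs/flutter-apk/) gives the same kind of listing.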
libflutter.so is the Flutter engine: the rendering pipeline, the Dart runtime, the platform channel implementation, the accessibility layer, and — since Flutter 3.10 — Impeller. It's roughly 5-8MB per architecture. It's the same for every Flutter app; only libapp.so changes from app to app.
Why Flutter is harder to reverse engineer than native Android
When a security researcher or a curious competitor gets hold of your native Kotlin app, their first move is to run it through jadx or apktool. jadx decompiles the classes.dex bytecode back into readable Java (apktool unpacks the resources and smali disassembly). Variable names are gone, but class names, method names, and the overall structure are largely intact. A 2,000-line Kotlin service class is recoverable in about ten minutes.
Flutter apps are different.
Your code is in libapp.so — native ARM machine code. To analyze it, you need tools like Ghidra or IDA Pro. You're looking at disassembled instructions, not decompiled source. Instead of:
```kotlin
fun calculateDiscount(price: Double, user: User): Double {
    if (user.isPremium) return price * 0.85
    return price
}
```

You're looking at something closer to:
```
; somewhere in libapp.so, arm64
LDR   X0, [X19, #0x18]   ; load user object field at offset 0x18
CBZ   X0, loc_0x4d8f2    ; if null, branch
LDR   W1, [X0, #0x34]    ; load isPremium field at offset 0x34
CBZ   W1, loc_0x4d8f2    ; if false, branch
FMOV  D0, D8             ; price into fp register
FMOV  D1, #0.85          ; discount factor
FMUL  D0, D0, D1         ; multiply
RET
```

No class names. No method names. No hint of what 0x18 or 0x34 represent. Reverse engineering a meaningful Flutter app binary is a research project, not a lunch-break task.
Taking it further: obfuscation
If you want to make it even harder, Flutter has a --obfuscate flag:
```shell
flutter build apk --obfuscate --split-debug-info=./symbols
```

What this does: during the gen_snapshot phase, it replaces all Dart symbol names (class names, method names, field names) with meaningless short identifiers before generating the binary. So where libapp.so might have had a symbol called _UserRepository_fetchCurrentUser, it now has _a4f.b2. Stack traces become unreadable.
That last part is important — unreadable to you too. If a user's app crashes in production and you have no symbol map, the crash report is useless. That's what --split-debug-info is for: it writes the mapping between obfuscated names and original names to the ./symbols directory. You store that file somewhere safe. When a crash comes in, you run:
```shell
flutter symbolize --debug-info=./symbols/app.android-arm64.symbols \
  --input=crash_report.txt
```

And you get your readable stack trace back. The mapping never ships in the APK. It lives only with you.
One critical detail: the symbols file is tied to a specific build. If you release version 2.0 and then lose the symbols file for version 1.8, crashes from users still on 1.8 are permanently unreadable. Store them in version control, in a release artifact store, somewhere durable.
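As a sketch of that discipline — the paths and helper name below are hypothetical; the only real convention is the `*.symbols` files that `--split-debug-info` emits — archiving each build's symbols under its version string could look like:

```python
import pathlib
import shutil

def archive_symbols(symbols_dir: str, version: str, store: str) -> list:
    """Copy one build's *.symbols files into a durable, version-keyed folder."""
    dest = pathlib.Path(store) / version
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(pathlib.Path(symbols_dir).glob("*.symbols")):
        shutil.copy2(f, dest / f.name)  # preserves file metadata
        copied.append(f.name)
    return copied

# e.g. archive_symbols("./symbols", "1.8.0+42", "/backups/symbol-store")
```

Whether the store is a git repo, an S3 bucket, or your CI's artifact storage matters less than the key: one symbols set per released build, never overwritten.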
AAB vs APK: what actually goes to the Play Store
When you submit to Google Play, you almost certainly submit an AAB — Android App Bundle — not an APK. The difference matters:
An APK is self-contained and device-agnostic. It has libraries for all CPU architectures, resources for all screen densities, translations for all configured languages. If a user downloads your APK directly from a website, they get all of it — even the parts their device will never use.
An AAB is a publishing format. It contains everything, structured so that Google Play can dynamically split it. When a user with an ARM64 device downloads your app, Play generates and delivers only the ARM64 native libraries. Their APK is smaller, downloads faster, installs faster, and takes less storage. For an app targeting modern Android, the download size difference can be 30-40%.
The structure of an AAB is similar to an APK — it's also a ZIP — but organized into base, feature, and configuration splits. You don't install AABs directly on a device; Play handles the assembly. For local testing, you can use bundletool to simulate what Play would deliver to a specific device configuration.
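The arithmetic behind that saving is easy to sketch. The sizes below are invented placeholders (a two-ABI app, not a measurement); the point is simply that per-device delivery drops every lib/&lt;abi&gt;/ directory the target device can't use:

```python
# Hypothetical entry sizes for a universal APK (invented numbers).
entries = {
    "classes.dex": 200_000,
    "assets/flutter_assets/": 8_000_000,        # images, fonts, etc.
    "lib/arm64-v8a/libapp.so": 6_000_000,
    "lib/arm64-v8a/libflutter.so": 8_000_000,
    "lib/armeabi-v7a/libapp.so": 5_000_000,
    "lib/armeabi-v7a/libflutter.so": 7_000_000,
}

def delivered_size(abi: str) -> int:
    """Total size once every other ABI's native libraries are dropped."""
    return sum(size for name, size in entries.items()
               if not name.startswith("lib/") or name.startswith(f"lib/{abi}/"))

universal = sum(entries.values())
arm64 = delivered_size("arm64-v8a")
print(f"universal: {universal}  arm64-only: {arm64}  "
      f"saved: {1 - arm64 / universal:.0%}")
```

How much a real app saves depends on what fraction of the APK is native code: an asset-heavy app sees a smaller percentage, an app shipping three ABIs a larger one.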
How Android actually starts your app
When a user taps your icon, here's the sequence:
1. Android creates a new process for your app
2. The process loads classes.dex — the tiny Java bootstrapper
3. FlutterActivity.onCreate() is called (generated boilerplate; you rarely touch this)
4. The bootstrapper loads libflutter.so — the Flutter engine — into memory
5. The engine initializes itself: Dart VM or AOT runtime, rendering pipeline, platform channels
6. The engine loads libapp.so — your compiled Dart code
7. The entry point is called: your main() function
8. runApp(MyApp()) starts the widget tree
9. The first frame is rendered
The time between step 1 and step 9 is your cold start latency. In a release build with no lazy initialization debt, this is typically under 300ms on a modern device. In a debug JIT build, it's easily 1-3 seconds because the VM is JIT-compiling your code on-demand instead of running pre-compiled machine code.
Hot restart (debug only) skips steps 1-5. It reloads your Dart kernel and re-runs main(). Hot reload (debug only) skips everything — it patches the running VM's bytecode in place without re-running main().
The `libflutter.so` you never look at
Your libapp.so gets a lot of attention. libflutter.so mostly doesn't — it's treated as a black box. But it's worth knowing what lives there, because it shapes what your app can do:
- Dart runtime: the VM in debug mode, the AOT runtime in release
- Skia (legacy) or Impeller (Flutter 3.10+): the rendering engine. More on this in Post 4.
- Platform channel infrastructure: how your Dart code calls native Java/Kotlin/Swift/ObjC
- Dart UI library: the bindings between Dart and the engine's rendering primitives
- Text layout engine: HarfBuzz and minikin, for correct Unicode text shaping
- Accessibility layer: semantic tree, screen reader integration
When you see that libflutter.so weighs in at 7MB in your APK, that's what 7MB of engine looks like.
What comes next
You now know what your app actually is at the binary level. But there's a question that this pipeline raises and doesn't answer: why does your code sometimes fail at compile time and sometimes at runtime? And why do those two categories feel so different as a developer?
The distinction is more interesting than it first appears. It's not just about when errors are caught — it's about two fundamentally different phases of your program's life, each with different properties, different tools, and different failure modes.
That's explored in Compile Time vs Run Time.