Flutter under the hood

What Is a Process (And What's Running Inside Yours)

March 24, 2026

The container you've never seen

Every Flutter app you've ever built runs inside a process. Every app on every phone — yours, your users', the ones in the background right now — is a process. The word appears in crash logs, in documentation, in system settings ("force stop" kills a process), in the FFI series when we said a native library gets "loaded into the process's address space."

But what is a process?

Not the dictionary definition. The real thing — the operating system construct that contains your running app, isolates it from every other app, gives it memory and CPU time and file access, and can kill it without warning when the system needs resources back.

Understanding what a process is changes how you think about memory, threads, background execution, crashes, and the boundary between your code and everything else on the device.

The definition, from the OS up

A process is the operating system's unit of isolation. It is a running program plus everything that program needs to execute: its own memory space, its own file descriptors, its own permissions, its own view of the system.

When you launch a Flutter app, the OS creates a process. That process gets:

  1. An address space — a private, virtual map of memory. Every pointer your code uses — every Pointer<Uint8> from FFI, every object on the Dart heap, every stack frame — is an address in this space. The address space is virtual: the OS maps it to physical RAM, but your process doesn't know or care which physical pages it's using. Two processes can both have data at "address 0x7FFF1000" without conflict, because they're looking at different physical memory through different virtual mappings.
  2. One or more threads — the actual execution units. A thread is a sequence of CPU instructions being executed. A process starts with one thread (the main thread) and can create more. All threads in a process share the same address space — they can read the same memory, access the same files, call the same functions. This is both the power and the danger of threads.
  3. File descriptors — handles to open files, network sockets, pipes. When your app opens a database file or establishes a network connection, the OS gives the process a file descriptor — a small integer that represents that open resource.
  4. A security context — permissions, user ID, group ID. On mobile, this determines what your app can access: the camera, the filesystem, contacts, network. The OS enforces this at the process level. Your code can't bypass it.

The crucial property: processes are isolated from each other. Process A cannot read process B's memory. It cannot access process B's file descriptors. It cannot see process B's threads. The OS kernel enforces this boundary using hardware support — the CPU's memory management unit (MMU) ensures that every memory access by a process is checked against that process's virtual address space mapping. An attempt to access memory outside your mapping doesn't return wrong data; it triggers a hardware fault. The OS catches the fault and kills the offending process.

This is not a software convention. It's a hardware-enforced guarantee. It's why a crashing app doesn't take down other apps. It's why a malicious app can't read your banking app's memory. It's the foundational security boundary of every modern operating system.

How Android creates your process

Android is, at its core, Linux. Every Android app runs as a Linux process, with a Linux user ID, Linux file permissions, and Linux process isolation. Understanding how Android creates your app's process reveals a lot about what's happening under the surface.

Zygote: the template process

When an Android device boots, a special process called Zygote starts. Zygote is a pre-initialized process that contains a loaded, warmed-up Android Runtime (ART) — the virtual machine that runs Android's managed code. It has the base class libraries loaded, the standard framework classes ready, the heap pre-populated with commonly used objects.

When you tap your app's icon, the system doesn't start a new ART VM from scratch. That would take seconds. Instead, it asks Zygote to fork.

Forking is a Unix system call that creates an exact copy of a process — same memory contents, same loaded libraries, same initialized state. The copy gets its own process ID, its own address space (via copy-on-write — the OS doesn't physically copy memory until one of the processes modifies it), and its own future. The original Zygote remains unchanged, ready to fork the next app.

Your app's process starts as a clone of Zygote. Then Android tells it "you are now com.mycompany.myapp" — it loads your APK, initializes your application class, and starts your main activity. But the ART runtime, the framework classes, the base libraries — all of that was already there, inherited from Zygote.

This is why cold-starting an Android app takes one to three seconds instead of ten. Zygote pre-paid the cost of VM initialization at boot time. Every app launch is a fork plus your app-specific initialization.

What's in your process on Android

Once your Flutter app is running, the process contains:

  • The ART virtual machine — managing the Android framework layer. Even though your app logic is Flutter/Dart, the Android shell (the FlutterActivity, the lifecycle callbacks, the platform channel handlers) runs in ART.
  • The Dart virtual machine — running your Dart code. This is a separate runtime, loaded as part of libflutter.so. It has its own heap, its own garbage collector (the one we covered in Post 3), its own isolate system.
  • libflutter.so — Flutter's engine, written in C++. This contains the Dart VM, the Impeller rendering engine (Post 4), the text layout engine, the platform interface layer.
  • libapp.so — your compiled Dart code, as AOT-compiled machine code (as we covered in Post 1).
  • Any native libraries you loaded — if you used FFI to load libsodium.so or libnative_math.so, those are mapped into this same process's address space. They share memory with your Dart code. They run on threads in this process. They are not sandboxed from your app — they are your app, at the process level.

All of this — ART, Dart VM, Flutter engine, your code, your native libraries — lives in one Linux process. One address space. One set of permissions. One entry in ps.

How iOS creates your process

iOS uses a different kernel (XNU, a hybrid of Mach and BSD), but the process model is conceptually similar with some important differences.

No Zygote equivalent

iOS doesn't use a Zygote-style forking model. Instead, when you launch an app, the OS creates a fresh process and loads the app's binary — a Mach-O executable — directly. The dynamic linker (dyld) resolves library dependencies, and the app's main() function is called.

iOS compensates for the lack of pre-forking with aggressive caching. The dynamic linker maintains a shared cache of system frameworks — UIKit, Foundation, CoreGraphics — that are pre-linked and memory-mapped. These frameworks appear in your process's address space but the physical memory pages are shared across all running apps. If five apps use UIKit, there's only one copy in physical RAM, mapped into five different virtual address spaces.

Static linking and code signing

Here's where iOS diverges sharply from Android, and it connects directly to something we discussed in the FFI series.

On Android, you call DynamicLibrary.open('libmylibrary.so') — the OS loads a shared library into your process at runtime. On iOS, Apple prohibits dynamically loading code that wasn't signed and bundled with the app at build time. Everything — your Dart code, Flutter's engine, your FFI native code — must be statically linked into the app binary or bundled as a signed framework.

This is why the FFI series uses DynamicLibrary.process() on iOS instead of DynamicLibrary.open(). Your native code is already in the process — it was compiled into the binary at build time. DynamicLibrary.process() returns a handle to the current process's own symbol table, where your statically linked functions already live.
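This platform split can be captured in a small loader function. A minimal sketch — `libnative_math.so` is the hypothetical library name used earlier in this series:

```dart
import 'dart:ffi';
import 'dart:io';

/// Returns a handle for FFI symbol lookups, picking the right
/// mechanism for each platform.
DynamicLibrary loadNativeMath() {
  if (Platform.isIOS) {
    // iOS: the native code was statically linked into the app binary
    // at build time, so look it up in the process's own symbol table.
    return DynamicLibrary.process();
  }
  // Android: ask the OS to map the shared library into this
  // process's address space at runtime.
  return DynamicLibrary.open('libnative_math.so');
}
```

Both branches return the same `DynamicLibrary` type, so the rest of your FFI bindings don't need to care which loading strategy was used.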

The restriction exists for security. If apps could download and execute arbitrary native code at runtime, the App Store review process would be meaningless — an app could pass review with benign code and then download malicious native code after installation. Static linking plus code signing means every byte of executable code in an iOS app was reviewed, signed, and sealed at submission time.

What's in your process on iOS

  • The Objective-C / Swift runtime — managing UIKit, the app delegate, lifecycle events, platform channel handling.
  • The Dart virtual machine — same as Android, loaded as part of Flutter's engine framework.
  • Flutter.framework — Flutter's engine, the iOS equivalent of libflutter.so. Contains the Dart VM, Impeller (using Metal as the GPU backend), text rendering, platform interface.
  • App.framework — your AOT-compiled Dart code.
  • Any statically linked native code — your FFI C/C++ code, compiled into the binary.

Same pattern as Android: one process, one address space, everything coexisting.

Threads inside your process

A process starts with one thread, but your Flutter app has several. Understanding what each thread does explains a lot about performance, jank, and why certain operations need to happen on specific threads.

The main thread (platform thread)

This is the thread the OS starts your process with. On Android, it runs the framework's main message loop (the Looper) — handling lifecycle callbacks, input events, system broadcasts. On iOS, it runs the main run loop — handling UIKit events, gesture recognition, system notifications.

Every platform channel call from Dart arrives on this thread on the native side. Every method channel handler executes here by default. If your native handler blocks this thread for too long — Android gives you roughly 5 seconds — the OS shows the infamous ANR dialog (Application Not Responding) and offers the user the option to kill your app.

The UI thread (Dart thread)

This is where your Dart code runs. The Dart VM's main isolate executes on this thread — your build() methods, your state management, your business logic, your event handlers. It's also where Flutter's framework runs: layout, hit testing, semantics, producing the display list for the Raster thread.

This thread has a frame budget of roughly 16 ms at 60 Hz (less on high-refresh displays). If your code takes longer than that — a heavy computation in a build() method, a synchronous file read, a complex where().map().toList() chain on a large collection — frames drop. The fix is to move expensive work to a separate isolate, as we'll discuss shortly.
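Offloading looks like this in practice — a minimal sketch with a stand-in computation (`sumOfSquares` is a hypothetical example, not from the series):

```dart
import 'dart:isolate';

// Stands in for an expensive computation that would blow the frame
// budget if it ran on the UI thread's event loop.
int sumOfSquares(int n) {
  var total = 0;
  for (var i = 0; i < n; i++) {
    total += i * i;
  }
  return total;
}

Future<void> main() async {
  // Isolate.run spawns a new isolate (an OS thread in the same
  // process), runs the closure there, and sends the result back
  // as a message. The UI thread stays free to render frames.
  final result = await Isolate.run(() => sumOfSquares(10));
  print(result); // 285
}
```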

The Raster thread

The rendering pipeline's second half. Takes the display list produced by the UI thread, translates it into GPU commands via Impeller, and submits them to the graphics API. This thread is managed by Flutter's engine — you don't write code that runs on it directly, but you affect it through widget complexity, shader usage, and the visual operations your UI demands.

The I/O thread

Handles asynchronous I/O operations at the engine level — file access, network sockets, image decoding. When you call rootBundle.load() or decode an image, the actual bytes-from-disk work happens here, not on the UI thread.

Your isolate threads

When you spawn Dart isolates — via Isolate.run(), Isolate.spawn(), or compute() — the Dart VM creates new OS threads inside the same process. Each isolate has its own Dart heap and cannot directly share mutable state with other isolates (they communicate via message passing). But at the OS level, they're threads in the same process — they share the same address space, the same file descriptors, the same permissions.

This is important for FFI. A native library loaded in the main isolate is visible to all isolates, because they share the address space. But Dart isolate memory is not shared — each isolate has its own heap, and the Dart VM enforces isolation at the language level even though the OS doesn't.
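You can observe the copying directly: state captured by an `Isolate.run` closure is copied into the new isolate's heap, so mutations there never touch the original. A small runnable sketch:

```dart
import 'dart:isolate';

Future<void> main() async {
  final list = [1, 2, 3];
  // The closure and everything it captures are copied when the
  // message crosses the isolate boundary.
  final lengthInWorker = await Isolate.run(() {
    list.add(4); // mutates the worker's copy, not the original
    return list.length;
  });
  print(lengthInWorker); // 4
  print(list.length);    // 3 — the main isolate's list is untouched
}
```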

What happens inside the address space

Let's zoom into the address space — the virtual memory map that your process sees. Understanding its layout explains where your different kinds of data live.

A simplified view of a Flutter app's address space:

High addresses
┌─────────────────────────┐
│  Stack(s)               │  ← One per thread. Grows downward.
│  (main thread, UI       │     Local variables, function call frames,
│   thread, raster, etc.) │     return addresses.
├─────────────────────────┤
│  Memory-mapped files    │  ← Shared libraries (.so / .framework),
│  and shared libraries   │     mapped into the address space.
│                         │     libflutter.so, libapp.so, libc.so,
│                         │     your FFI libraries.
├─────────────────────────┤
│  Dart heap              │  ← Managed by the Dart GC.
│  (young gen + old gen)  │     Your Dart objects live here.
│                         │     GC can move objects within this region.
├─────────────────────────┤
│  Native heap            │  ← malloc/calloc allocations.
│                         │     FFI memory (Pointer<T>) lives here.
│                         │     GC cannot see or manage this.
├─────────────────────────┤
│  ART heap (Android)     │  ← Android framework objects.
│  or ObjC heap (iOS)     │     Activity/ViewController instances, etc.
├─────────────────────────┤
│  Code segment           │  ← Your compiled code: libapp.so,
│  (read-only, exec)      │     Flutter engine code. Read-only —
│                         │     the CPU can execute it but not write to it.
├─────────────────────────┤
│  Data segment           │  ← Global variables, constants.
│  (read-only data +      │
│   initialized data)     │
└─────────────────────────┘
Low addresses

This map is what the FFI post was referring to. When you call DynamicLibrary.open('libsodium.so'), the OS maps that .so file into the "memory-mapped files" region of this address space. The library's code and data become part of your process's virtual memory. lookupFunction then finds a function symbol in that mapped region and gives you a Dart function pointer to it. When you call that function, the CPU jumps to an address in the same address space your Dart code runs in — no process boundary to cross, no inter-process communication overhead. That's why FFI is fast.

Compare this with platform channels, which cross from the Dart VM into the platform runtime (ART or ObjC/Swift), through message serialization and thread hops. Platform channels are still within the same process — but they cross runtime boundaries, which involves overhead.

How the OS schedules your process

Your app's process doesn't own the CPU. The OS kernel runs a scheduler that decides, thousands of times per second, which thread gets to run on which CPU core.

Time slicing

A modern mobile phone has 4–8 CPU cores. At any moment, dozens of processes are running — your app, the system UI, background services, sensor managers, the radio interface. The scheduler gives each thread a time slice — a few milliseconds of CPU time — then preempts it (pauses it) and runs the next thread. The switching happens so fast that every process appears to run continuously.

When your Flutter app is in the foreground, the OS gives its threads higher scheduling priority. The UI thread and Raster thread get preferential treatment — they're more likely to get CPU time promptly, and less likely to be preempted mid-frame. This is why your app feels smooth in the foreground but background tasks run slower.

Foreground vs background on Android

When your app moves to the background, Android reduces its scheduling priority. Background processes get less CPU time, and after a period of inactivity, Android may further restrict them through App Standby Buckets — a system that categorizes apps by recency of use and limits their background execution accordingly.

If the system needs memory, Android kills background processes in order of priority — the least recently used app dies first. The process is terminated entirely: all threads stop, all memory is reclaimed, all file descriptors are closed. If the user returns to the app, Android creates a new process from scratch (another Zygote fork) and your app restarts.

This is why onSaveInstanceState exists in Android. It's not a convenience — it's survival. Your process can be killed at any time while backgrounded, with no warning. Any state not persisted to disk is gone.

Foreground vs background on iOS

iOS is more aggressive. Background processes are suspended — all threads are stopped, and the process gets zero CPU time. The memory is preserved (the process is still "alive"), but no code executes. If the system needs memory, the suspended process is terminated.

There are narrow exceptions: background audio, location updates, VoIP, Bluetooth accessories, and short-duration background tasks (roughly 30 seconds after backgrounding). But the default is suspension. An iOS app in the background is frozen.

This is why you can't run continuous background sync in a standard iOS app. The process isn't running. Flutter's Isolate.run() on a background isolate doesn't help — the entire process is suspended, all threads included.

Process errors: what kills your app

Understanding processes explains the different ways your app can die.

Segmentation fault (SIGSEGV)

Your code tried to access a memory address outside your process's valid address space — or in a region that's read-only. The CPU's MMU raises a hardware fault. The OS delivers a SIGSEGV signal to your process. Default behavior: immediate termination. No exception, no catch block, no try/finally. The process is gone.

In pure Dart, this essentially can't happen — the Dart VM manages memory access and doesn't expose raw pointers. But with FFI, you're working with raw pointers. Dereference a null pointer, read past the end of a buffer, use a pointer after freeing the memory it pointed to — segfault. This is the class of error that the FFI series warns about when it says "the GC will not save you."

Out of memory (OOM)

Your process requested more memory than the OS is willing to give it. On mobile, the OS sets per-process memory limits (typically 256MB–512MB on Android, varying by device). When your process exceeds the limit, the OS kills it.

Memory leaks cause this — both the Dart kind (unreachable-but-referenced objects, as covered in Post 3) and the native kind (FFI allocations never freed). The insidious part: the OOM kill looks like a "crash" to the user, but there's no exception in your crash reporter. The OS killed the process externally. Your code never got a chance to log anything.
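The native kind of leak is easy to create because nothing watches those allocations for you. A minimal sketch of the discipline required, assuming the standard `package:ffi` allocators (`calloc`, `free`) used throughout the FFI series:

```dart
import 'dart:ffi';
import 'package:ffi/ffi.dart';

void main() {
  // A native-heap allocation: invisible to the Dart GC, counted
  // against your process's memory limit until explicitly freed.
  final buf = calloc<Uint8>(32);
  try {
    buf[0] = 42;
    // ... pass buf to native code ...
  } finally {
    // Skip this and the bytes leak for the life of the process —
    // no exception, no GC rescue, just creeping memory use until
    // the OS kills the process.
    calloc.free(buf);
  }
}
```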

ANR (Application Not Responding) — Android

Your main thread didn't respond to an input event within 5 seconds, or a BroadcastReceiver didn't finish within 10 seconds. The OS shows the ANR dialog. If the user taps "Close app," the process is killed.

Common causes in Flutter: a platform channel handler that performs blocking I/O on the main thread, a heavy native library initialization on the main thread, a synchronous database operation in a method channel callback.

Watchdog termination — iOS

iOS has a watchdog timer: if your app takes too long to launch (roughly 20 seconds), to respond to a system event, or to return from a lifecycle callback, the watchdog kills the process. No dialog — just termination.

Signal-based termination

The OS can send signals to your process: SIGKILL (immediate, unblockable termination — used by the OOM killer and "force stop"), SIGTERM (polite request to terminate — your process can handle it and clean up). On Android, Process.killProcess() sends SIGKILL. On iOS, the system sends SIGKILL when reclaiming memory from a suspended app.

FFI and your process: the same address space

This section ties directly to the FFI series. When you load a native library and call a C function, here's exactly what happens at the process level:

  1. DynamicLibrary.open('libsodium.so') asks the OS to memory-map the library into your process's address space. The library's code, data, and symbols become accessible at addresses your process can read.
  2. lookupFunction('crypto_secretbox_easy') searches the library's symbol table — a data structure inside the .so that maps function names to addresses — and finds the address of the function within the mapped region.
  3. When you call the function, the Dart VM sets up the CPU registers according to the platform's calling convention (which registers hold which arguments), and executes a call or bl instruction that jumps the CPU's instruction pointer to the function's address. The CPU is now executing C code. No process switch. No context switch. No IPC. Just a jump to a different address in the same virtual address space.
  4. The C function accesses memory — reading from and writing to addresses in the native heap. These addresses are in the same address space as your Dart heap, but the Dart GC knows nothing about them. The C function's stack frame is on the same thread's stack as the Dart frames above it.
  5. When the C function returns, the CPU jumps back to the Dart code. The Dart VM reads the return value from the register designated by the calling convention.

The entire call happened within your process. No boundary was crossed. This is why FFI has essentially zero overhead beyond the function call itself — it's the same mechanism as any function call inside a compiled C program.
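The whole sequence can be sketched with a symbol that's guaranteed to already be mapped in: libc's abs(). This is an illustrative stand-in (not from the series), and it assumes a Linux or Android process where libc's symbols are visible through DynamicLibrary.process():

```dart
import 'dart:ffi';

// Native and Dart signatures for libc's `int abs(int)`.
typedef AbsNative = Int32 Function(Int32);
typedef AbsDart = int Function(int);

int nativeAbs(int x) {
  // libc is already mapped into this process's address space,
  // so its symbols are reachable via the process's own tables.
  final libc = DynamicLibrary.process();
  // lookupFunction finds the symbol's address in the mapped region
  // and wraps it as a callable Dart function.
  final abs = libc.lookupFunction<AbsNative, AbsDart>('abs');
  // Calling it is a plain jump within the same address space —
  // no process switch, no IPC.
  return abs(x);
}

void main() {
  print(nativeAbs(-42)); // 42
}
```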

It also means that a bug in your C code — a buffer overflow, a use-after-free, a null pointer dereference — crashes your entire process. Not just the C part. The Flutter engine, the Dart VM, the UI, everything. The C code has full access to your process's address space. It can read your Dart heap (though it shouldn't, and the addresses would be meaningless to it). It can corrupt memory that the Dart VM relies on. There is no sandbox between your Dart code and your FFI code within a process.

This is the fundamental trade-off of FFI: maximum performance (no overhead), maximum risk (no isolation).

Isolates are not processes

This distinction trips up many Flutter developers. Dart isolates provide memory isolation — each isolate has its own heap, and you can't share mutable objects between them. This feels like process isolation. It isn't.

Isolates are threads in the same process with language-enforced restrictions. The Dart VM ensures that isolates don't share heap memory, and communication happens via message passing (which copies data). But at the OS level, all isolates share the same address space, the same file descriptors, the same network connections, the same permissions.

Practical consequences:

  • An FFI library loaded in one isolate is accessible from any isolate (same address space).
  • A file opened in one isolate is visible to all isolates (same file descriptor table).
  • A segfault in any isolate kills the entire process (all isolates).
  • A native memory leak in one isolate consumes memory for the entire process.
  • If the OS kills your process, all isolates die simultaneously.

Isolates provide concurrency safety (no data races on Dart objects) but not security isolation (no protection boundary between isolates). For security isolation, you need separate processes — which is exactly what the OS provides between different apps.

Inter-process communication: when you need to cross the boundary

Sometimes your app needs to talk to other processes. The OS provides controlled mechanisms for this, because direct memory access between processes is forbidden.

Android

  • Intents — message objects that the system routes between processes. When you open a URL and it launches a browser, an intent crossed a process boundary. When you share a photo, an intent carried the file URI to another app's process.
  • Content Providers — a structured interface for one app's process to expose data to another. The contacts app exposes your contact list through a content provider; other apps query it without ever accessing the contacts database file directly.
  • Binder IPC — Android's custom inter-process communication mechanism, built on top of a Linux kernel driver. Every system service call — accessing the camera, reading sensors, checking permissions — goes through Binder. It's fast (no data copying for small messages, thanks to shared memory mappings) and secure (the kernel verifies caller identity on every call).
  • AIDL (Android Interface Definition Language) — a structured way to define Binder interfaces. If you've ever written a bound service, you've used AIDL — it generates the proxy and stub code that makes cross-process calls look like local method calls.

iOS

  • XPC (Cross-Process Communication) — Apple's IPC mechanism. System services, extensions (share extensions, notification extensions, widgets), and system daemons communicate through XPC.
  • URL Schemes and Universal Links — one app opening another by URL. The OS creates or activates the target app's process and delivers the URL.
  • App Groups — a shared container (shared filesystem and UserDefaults) that processes from the same developer can access. Your main app and its widget extension are different processes — App Groups let them share data through the filesystem, not through memory.

In all cases, the OS mediates the communication. Your process never directly accesses another process's memory. The data crosses the boundary through kernel-managed channels that enforce permissions and copy data safely.

One process, one app, one world

Here's the mental model to carry forward:

Your Flutter app is one process. Inside that process: the platform runtime (ART or ObjC/Swift), the Dart VM, the Flutter engine, your compiled code, your FFI native libraries — all sharing one address space, one set of permissions, one fate.

Threads within the process share everything and can communicate freely (with care). Dart isolates add a language-level restriction on top: no shared mutable state, communication by copying. But the underlying reality is one process.

Other apps are other processes. The boundary between them is enforced by hardware. Communication across that boundary is mediated by the kernel. Your app cannot see, modify, or affect another app's process — and this is the foundation of mobile security.

When the OS kills your process — whether from an OOM condition, an ANR timeout, a segfault, or simply reclaiming resources — everything in it goes away. Every thread stops. Every allocation (Dart and native) is reclaimed. Every file descriptor is closed. The process, and everything it contained, is gone.

That's what a process is. The container for everything your app ever does, bounded by the OS's guarantee that it stays in its lane.

Next in the series

The next post goes deeper inside the process — into threads, isolates, and concurrency. How Dart's event loop actually works. Why async/await doesn't create threads. How Isolate.run() relates to OS threads. And why understanding the difference between parallelism and concurrency changes how you write Flutter code that doesn't drop frames.

