Part 8 of the Flutter Security Beyond the Basics series.
Every mobile operating system enforces a boundary around each app. On Android, this is the Linux user model combined with SELinux policies. On iOS, it is the sandbox — a strict set of rules enforced by the kernel. Each app gets its own slice of the filesystem, its own memory space, and its own permissions. It cannot read another app's files. It cannot modify the operating system. It cannot execute unsigned code. These restrictions are not optional features. They are the foundation that every other security measure — secure storage, certificate pinning, biometric authentication — is built on.
Rooting an Android device and jailbreaking an iOS device both do the same fundamental thing: they remove those restrictions. The user gains superuser access to the entire operating system. They can read any file, modify any process, install unsigned code, and bypass the security measures that the OS was enforcing on every app's behalf.
This post covers both platforms side by side, because the concepts are mirrors of each other. The technical mechanisms differ, the detection approaches differ, but the core problem is identical: when the OS can no longer be trusted to protect your app, what can you actually do about it?
Why people root and jailbreak — it is not always malicious
Before discussing detection, it is worth understanding who actually does this and why. The motivations range from entirely benign to explicitly hostile.
Legitimate use cases exist. Power users root their Android devices to run system-wide ad blockers, to install custom ROMs that extend the life of older hardware, or to access features their carrier has disabled. iOS users jailbreak to customise their home screen beyond what Apple allows, to install apps from outside the App Store, or simply because they believe they should have full control over hardware they purchased. Security researchers root and jailbreak devices as part of their daily work — you cannot audit an app's security without being able to inspect what it writes to disk and how it communicates over the network.
But attackers use the same access. A rooted or jailbroken device gives an attacker the ability to read your app's secure storage, hook into your app's runtime to modify its behaviour, bypass biometric checks, disable certificate pinning, and install modified copies of your app that behave differently from the original. The same superuser access that lets a researcher inspect your app lets an attacker exploit it.
The practical conclusion: you cannot assume that a rooted device is hostile. Many of your legitimate users may have rooted devices. But you must assume that your app's client-side defences are weaker on a compromised device. Every security measure that depends on the OS enforcing boundaries — and that is nearly all of them — is diminished.
What becomes possible on a compromised device
To understand why root and jailbreak detection matters, you need to understand what an attacker gains. This is not about teaching exploitation. It is about understanding what you are defending against, because the defence only makes sense if you know the threat.
Reading other apps' data
On a stock Android device, each app's internal storage directory (/data/data/com.yourapp/) is protected by Linux file permissions. Only the app's own UID can access it. On a rooted device, the superuser can read any directory. This means SharedPreferences files, SQLite databases, and files written to internal storage are all accessible. If you stored a token, an API key, or user data in any of these locations, it is readable.
On iOS, the Keychain is the recommended secure storage. On older jailbreaks that modified the filesystem extensively, Keychain items could be dumped using tools like keychain-dumper. Modern iOS versions and newer jailbreaks have made this harder but not impossible — a jailbroken device with the right tools can still extract Keychain items for apps that do not use the most restrictive access control flags.
Hooking framework methods and system calls
This is where the damage potential escalates. Tools like Frida (cross-platform), Xposed Framework (Android), and Cydia Substrate (iOS) allow an attacker to intercept any function call your app makes — at runtime, without modifying the app's binary on disk.
Frida is particularly relevant. It injects a JavaScript engine into your app's process and lets the attacker replace any function's implementation with their own. Your app calls a method to check whether biometric authentication succeeded? The attacker hooks it to always return true. Your app checks a boolean to decide whether the device is rooted? The attacker hooks it to always return false. This is not theoretical — Frida scripts for bypassing common security checks are widely published and take minutes to deploy.
Bypassing biometric authentication
Biometric authentication on mobile devices ultimately resolves to an API call that returns a success or failure result. On a compromised device, that API call can be intercepted. If your app uses biometrics purely as a local gate — "show the fingerprint prompt, and if it succeeds, proceed" — an attacker with root access can bypass it entirely by hooking the callback. The biometric check is only truly secure when it is tied to a cryptographic operation: the biometric unlocks a key stored in the Secure Enclave or hardware-backed Keystore, and that key is required for a server-side operation. Without the cryptographic binding, biometrics on a compromised device are decorative.
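To make the cryptographic binding concrete, here is a server-side sketch of a challenge-response flow. All names are hypothetical; a real deployment registers an asymmetric public key whose private half lives in the Secure Enclave or hardware Keystore and is only usable after a biometric unlock. HMAC over a shared secret stands in for that signature here so the sketch stays self-contained.

```python
import hashlib
import hmac
import secrets

# Server-side sketch of a crypto-bound biometric flow (hypothetical names).
# HMAC stands in for the hardware-backed asymmetric signature a real
# Keystore / Secure Enclave key would produce.

class BiometricChallengeServer:
    def __init__(self):
        self._pending = {}       # user_id -> outstanding one-time challenge
        self._device_keys = {}   # user_id -> key registered at enrolment

    def register_device_key(self, user_id: str, key: bytes) -> None:
        self._device_keys[user_id] = key

    def issue_challenge(self, user_id: str) -> bytes:
        challenge = secrets.token_bytes(32)   # unpredictable, single-use
        self._pending[user_id] = challenge
        return challenge

    def verify_response(self, user_id: str, signature: bytes) -> bool:
        challenge = self._pending.pop(user_id, None)  # consume it either way
        key = self._device_keys.get(user_id)
        if challenge is None or key is None:
            return False
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)
```

Because the server only accepts a fresh challenge signed with the biometric-gated key, hooking the local success callback gains the attacker nothing: they still cannot produce a valid signature.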
Disabling certificate pinning
If you implemented certificate pinning (as covered in Post 4 of this series), a rooted or jailbroken device makes it trivial to bypass. On Android, an attacker can install a custom Network Security Configuration via a Magisk module, or use Frida to hook the certificate verification methods directly. On iOS, tools like SSL Kill Switch or Frida scripts achieve the same result. The pinning logic runs inside your app's process, and on a compromised device, the attacker controls that process.
Modifying app binaries at runtime
On a stock device, app code is signed and the OS verifies that signature. On a rooted Android device, the attacker can extract your APK, decompile it, modify the Dart snapshot or native libraries, repackage it with a new signature, and install the modified version. On a jailbroken iOS device, tools like CrackerXI can decrypt and extract the IPA, allowing similar modification. The modified app looks and behaves like yours but with the attacker's changes — perhaps logging credentials, perhaps skipping payment verification, perhaps granting premium features without purchase.
Installing instrumentation tools
Root and jailbreak access allows the installation of tools that sit between your app and the rest of the system. Proxy tools like mitmproxy combined with custom CA certificates (trivial to install on a rooted device) allow full inspection of HTTPS traffic. Debugging tools can attach to your app's process. Memory inspection tools can read your app's heap. None of this is possible on a properly locked-down device.
How rooting works on Android
Understanding the mechanism helps you understand why detection is difficult.
Android is built on Linux. Each app runs as its own Linux user. The root user (UID 0) has unrestricted access to the entire system. On a stock device, no app runs as root, and there is no way to escalate to root privileges.
The typical rooting process:
- Unlock the bootloader. The bootloader is the first code that runs when the device powers on. It verifies the integrity of the operating system before loading it. Most manufacturers allow bootloader unlocking, though it usually wipes the device and may void the warranty. Once unlocked, the bootloader will load unverified boot images.
- Flash a custom recovery. The recovery partition is a minimal OS used for system updates and factory resets. A custom recovery like TWRP gives the user full read/write access to all partitions.
- Install a root management tool. Historically, this meant placing a `su` binary in the system partition. The `su` binary is a standard Unix tool that grants superuser privileges to the calling process. Tools like SuperSU managed which apps were allowed to use `su`.
- Modern approach: Magisk. Magisk changed the game. Instead of modifying the system partition, Magisk patches the boot image. It is "systemless" — the system partition remains untouched. This matters enormously for detection because many root detection methods check whether the system partition has been modified. Magisk leaves no traces there.
Magisk also ships Zygisk, whose DenyList replaced the older MagiskHide feature and can hide the presence of root from specific apps. When your app is on the deny list, Magisk unmounts its modifications before your app's process starts. Your app sees a clean, unrooted system. The root access is still there — it is just invisible to your app.
This is the core of the detection problem on Android: the most popular rooting tool is specifically designed to evade detection.
How jailbreaking works on iOS
iOS jailbreaking is fundamentally different in mechanism but identical in outcome.
Apple's security model is more restrictive than Android's. There is no official bootloader unlock. There is no way to install unsigned code without a developer certificate. The entire system is locked down by design.
Jailbreaking exploits a vulnerability in iOS to gain code execution outside the sandbox. The specific vulnerability varies with each iOS version — it might be a kernel exploit, a userland exploit chain, or (in the case of checkm8/checkra1n) a hardware-level exploit in the device's boot ROM that Apple cannot patch with software updates.
Once the exploit runs, the jailbreak tool:
- Disables code signing enforcement. iOS normally refuses to run any code that Apple has not signed. The jailbreak patches the kernel or its extensions to remove this check.
- Installs a package manager. Cydia was the original; Sileo is its modern replacement. These are essentially alternative app stores that distribute packages (tweaks, tools, modifications) outside Apple's control.
- Provides root access. The default root password on a jailbroken iOS device is `alpine` (and has been for over a decade, which is a security concern in itself). SSH access to the device gives full filesystem access.
Modern jailbreaks come in several varieties:
- Tethered: requires a computer connection on every reboot. The jailbreak is lost when the device restarts until the tool is run again.
- Semi-tethered / semi-untethered: the device boots normally but without jailbreak. The user runs an app or connects to a computer to re-enable the jailbreak after each reboot.
- Untethered: the jailbreak persists across reboots. These are rare and require powerful exploit chains.
Rootless jailbreaks are a more recent development. Traditional jailbreaks modify the root filesystem (/), which makes detection relatively straightforward — you can check for the existence of Cydia, check for common jailbreak file paths, check whether the root filesystem is writable. Rootless jailbreaks (like Dopamine for iOS 15-16) avoid modifying the root volume entirely. They operate within /var or use different mechanisms to achieve the same result. This makes traditional file-path-based detection less reliable.
Detection approaches in Flutter
With the background covered, let us look at what you can actually do in a Flutter app.
The flutter_jailbreak_detection package
The most common starting point is the flutter_jailbreak_detection package. Despite the name, it covers both Android root detection and iOS jailbreak detection.
On Android, it checks for:
- The presence of a `su` binary in common paths (`/system/bin/su`, `/system/xbin/su`, etc.)
- Known root management apps (SuperSU, Magisk Manager, etc.)
- The `ro.build.tags` system property containing `test-keys` (indicates a custom build)
- Whether the system partition is mounted as read-write
- Ability to execute the `su` command
On iOS, it checks for:
- The existence of common jailbreak files (`/Applications/Cydia.app`, `/Library/MobileSubstrate/`, `/bin/bash`, `/usr/sbin/sshd`, etc.)
- Whether the app can open the `cydia://` URL scheme
- Whether the app can write files outside its sandbox
- The presence of suspicious dylibs
Here is how to use it:
```dart
import 'package:flutter_jailbreak_detection/flutter_jailbreak_detection.dart';

class DeviceIntegrityService {
  /// Returns true if the device appears to be compromised.
  Future<bool> isDeviceCompromised() async {
    try {
      final isJailbroken = await FlutterJailbreakDetection.jailbroken;
      final isDeveloperMode = await FlutterJailbreakDetection.developerMode;
      return isJailbroken || isDeveloperMode;
    } catch (e) {
      // If the check itself fails, treat it as suspicious.
      // A hooked method might throw rather than return false.
      return true;
    }
  }
}
```

And how to respond to the result:
```dart
class AppStartupService {
  final DeviceIntegrityService _integrityService;

  AppStartupService(this._integrityService);

  Future<DeviceIntegrityResult> checkDeviceIntegrity() async {
    final isCompromised = await _integrityService.isDeviceCompromised();
    if (!isCompromised) {
      return DeviceIntegrityResult.clean;
    }
    // Log the event server-side before deciding how to respond.
    await _reportCompromisedDevice();
    return DeviceIntegrityResult.compromised;
  }

  Future<void> _reportCompromisedDevice() async {
    // Send to your backend — don't rely on client-side logging alone.
    // Include device info, app version, timestamp.
    // This gives you data to decide policy, even if you
    // choose not to block the user immediately.
  }
}

enum DeviceIntegrityResult {
  clean,
  compromised,
}
```

In your UI layer, you then decide what to do based on the result:
```dart
class IntegrityGate extends StatefulWidget {
  final Widget child;

  const IntegrityGate({required this.child, super.key});

  @override
  State<IntegrityGate> createState() => _IntegrityGateState();
}

class _IntegrityGateState extends State<IntegrityGate> {
  DeviceIntegrityResult? _result;

  @override
  void initState() {
    super.initState();
    _checkIntegrity();
  }

  Future<void> _checkIntegrity() async {
    final result = await AppStartupService(
      DeviceIntegrityService(),
    ).checkDeviceIntegrity();
    setState(() => _result = result);
  }

  @override
  Widget build(BuildContext context) {
    if (_result == null) {
      return const Scaffold(
        body: Center(child: CircularProgressIndicator()),
      );
    }
    if (_result == DeviceIntegrityResult.compromised) {
      return const CompromisedDeviceScreen();
    }
    return widget.child;
  }
}
```

The CompromisedDeviceScreen should explain clearly why the app is restricted. More on that later.
Manual checks you can layer on top
The package covers the common cases, but you can add your own checks for defence in depth. These run on the platform side via method channels.
Android — checking for Magisk artefacts:
```kotlin
// In your Android native code (MainActivity.kt or a dedicated plugin)
import java.io.File

fun checkForMagisk(): Boolean {
    val magiskPaths = listOf(
        "/sbin/.magisk",
        "/data/adb/magisk",
        "/data/adb/modules",
    )
    return magiskPaths.any { path -> File(path).exists() }
}

fun checkForBusyBox(): Boolean {
    val busyboxPaths = listOf(
        "/system/bin/busybox",
        "/system/xbin/busybox",
        "/sbin/busybox",
    )
    return busyboxPaths.any { path -> File(path).exists() }
}

fun checkBuildProperties(): Boolean {
    val tags = android.os.Build.TAGS
    return tags != null && tags.contains("test-keys")
}
```

iOS — checking for jailbreak file paths:
```swift
// In your iOS native code (AppDelegate.swift or a dedicated plugin)
import Foundation

func checkForJailbreak() -> Bool {
    let suspiciousPaths = [
        "/Applications/Cydia.app",
        "/Applications/Sileo.app",
        "/Library/MobileSubstrate/MobileSubstrate.dylib",
        "/bin/bash",
        "/usr/sbin/sshd",
        "/usr/bin/ssh",
        "/etc/apt",
        "/var/lib/cydia",
        "/var/jb", // Rootless jailbreak path
    ]

    for path in suspiciousPaths {
        if FileManager.default.fileExists(atPath: path) {
            return true
        }
    }

    // Check if the app can write outside its sandbox
    let testPath = "/private/jailbreak_test"
    do {
        try "test".write(toFile: testPath, atomically: true, encoding: .utf8)
        try FileManager.default.removeItem(atPath: testPath)
        return true // Should not be able to write here
    } catch {
        return false // Expected behaviour on a clean device
    }
}
```

The `/var/jb` check is worth noting — it targets rootless jailbreaks that use this directory as their base. Many older detection libraries miss it because they were written before rootless jailbreaks existed.
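One way to use these extra signals is to combine them rather than treat any single check as decisive. A minimal sketch of that idea, where the signal names are hypothetical and would be fed from platform-side checks like those above via a method channel:

```python
# Sketch: combine independent integrity signals rather than relying on one.
# Signal names are hypothetical placeholders for the platform-side checks.

def assess_integrity(signals: dict[str, bool]) -> str:
    """Return 'clean', 'suspicious', or 'compromised' from boolean signals."""
    strong = {"su_binary_found", "magisk_artifacts", "cydia_installed",
              "sandbox_escape_write"}
    weak = {"test_keys_build", "busybox_found", "developer_mode"}

    if any(signals.get(s, False) for s in strong):
        return "compromised"   # direct evidence of root/jailbreak
    if sum(signals.get(s, False) for s in weak) >= 2:
        return "suspicious"    # circumstantial evidence only
    return "clean"
```

Grading the result instead of returning a single boolean lets you warn on weak evidence and restrict only on strong evidence, which fits the proportionate responses discussed later.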
The arms race — why detection is a speed bump, not a wall
Everything described above works against casual modification. A user who rooted their phone and did not bother to hide it will be detected. But an attacker who specifically targets your app will bypass every one of these checks. This is not a weakness of any particular implementation. It is a fundamental property of client-side detection.
Magisk's Zygisk DenyList
Magisk includes a feature that hides root from selected apps. When your app is added to the deny list, Magisk unmounts its modifications, hides the su binary, and cleans up environment indicators before your app's process is created. Your app starts in what looks like a completely clean environment. The file path checks find nothing. The su execution check fails (as it should on a clean device). The build properties look normal. Your detection code runs, finds no evidence, and concludes the device is not rooted.
The device is still rooted. Your detection simply cannot see it.
Liberty Lite and Shadow (iOS)
The iOS equivalents are tweaks like Liberty Lite, Shadow, A-Bypass, and others. These hook the system calls that your detection code uses — fileExistsAtPath, canOpenURL, dlopen — and return false results when the calling process is your app. Your code asks "does /Applications/Cydia.app exist?" and the hooked fileExistsAtPath responds "no," regardless of reality.
Some detection methods try to detect these hooks themselves — checking whether a method's implementation pointer points to a known system library or to an injected dylib. The bypass tools then adapt to hide their hooks from those checks. It is an arms race with no finish line.
Frida can remove detection entirely
Frida deserves special mention because it operates at a different level. Rather than hiding the compromised state of the device, Frida can hook your detection function directly and replace its return value. If your Dart code calls isDeviceCompromised() and Frida is attached to the process, a three-line script can make that function always return false:
```javascript
// Frida script — this is what an attacker runs
Java.perform(function () {
  var target = Java.use("com.example.app.IntegrityCheck");
  target.isDeviceCompromised.implementation = function () {
    return false;
  };
});
```

This applies to any check that runs within your app's process. It does not matter how sophisticated the check is, how many layers of obfuscation you add, or how cleverly you hide the detection logic. If the code runs on a device the attacker controls, the attacker can modify its behaviour.
This is not a reason to skip detection. It is a reason to understand what detection can and cannot do. Client-side detection raises the bar. It stops casual abuse. It forces an attacker to actively work to bypass it. But it cannot stop a motivated, skilled attacker. For that, you need something that runs outside the device's control.
Server-side attestation — the real answer for serious threat models
Client-side detection asks your app: "Is this device compromised?" The app looks around and reports what it sees. The problem is that on a compromised device, the app cannot trust what it sees.
Server-side attestation asks a different question. Your server asks Google or Apple: "Is this device genuine, unmodified, and running a legitimate copy of my app?" The answer comes from infrastructure the device owner does not control.
Google Play Integrity API (Android)
The Play Integrity API replaced SafetyNet Attestation in 2024 and is now the recommended approach. Here is how it works conceptually:
- Your app requests an integrity token from the Play Integrity API on the device. This request includes a nonce — a one-time value your server generates to prevent replay attacks.
- Google's servers evaluate the device. They check whether the device has a verified boot state, whether the bootloader is locked, whether the device passes Google's compatibility tests, and whether your app was installed from the Play Store and has not been tampered with.
- Google returns a signed integrity token to your app.
- Your app sends this token to your backend server.
- Your server sends the token to Google's servers for verification. Google responds with the verdict: the device integrity level, the app integrity status, and the licensing status.
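The server-side validation in step 5 might look like the following sketch once the token has been decoded. The field names mirror the documented Play Integrity response shape (`requestDetails`, `appIntegrity`, `deviceIntegrity`), but verify them against the current documentation; the freshness threshold is illustrative.

```python
import time

# Sketch: validate a *decoded* Play Integrity payload against this request.
# Decoding itself is a separate call to Google's decodeIntegrityToken
# endpoint; field names below follow the documented response shape.

def validate_play_integrity(payload: dict, expected_nonce: str,
                            max_age_ms: int = 60_000,
                            now_ms=None) -> bool:
    details = payload.get("requestDetails", {})
    # 1. The nonce must match the one this server issued for this request.
    if details.get("nonce") != expected_nonce:
        return False
    # 2. The token must be fresh; stale tokens suggest replay.
    now = now_ms if now_ms is not None else int(time.time() * 1000)
    if now - int(details.get("timestampMillis", 0)) > max_age_ms:
        return False
    # 3. The app binary must be one Google recognises from the Play Store.
    app = payload.get("appIntegrity", {})
    if app.get("appRecognitionVerdict") != "PLAY_RECOGNIZED":
        return False
    # 4. Require at least device-level integrity for this example.
    device = payload.get("deviceIntegrity", {})
    return "MEETS_DEVICE_INTEGRITY" in device.get("deviceRecognitionVerdict", [])
```

Note that every check runs on the server against values Google signed; nothing here trusts what the client claims about itself.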
The critical point: the verdict comes from Google's servers, signed with Google's keys. A rooted device cannot forge this response. Even Magisk Hide cannot fool Google's attestation — Google's checks run at a level below what Magisk can manipulate, using hardware-backed attestation where available.
The verdict includes multiple levels:
- MEETS_BASIC_INTEGRITY: the device may be rooted or running a custom ROM, but it passes basic checks. This is the weakest guarantee.
- MEETS_DEVICE_INTEGRITY: the app is running on a genuine Android device with Google Play Services, and the bootloader is locked.
- MEETS_STRONG_INTEGRITY: on top of device integrity, the device has hardware-backed proof of boot integrity and a recent security update.
You decide on your server which levels are acceptable for which operations. A news reader might accept basic integrity. A banking app might require strong integrity.
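That per-operation policy can be sketched as a simple mapping. The operation names and required levels here are hypothetical; the point is that the decision lives on the server, keyed off the verdict Google signed:

```python
# Sketch: map attestation verdicts to per-operation policy, server-side.
# Operations and tiers are hypothetical; tune them to your own risk model.

REQUIRED_LEVEL = {
    "read_articles": "MEETS_BASIC_INTEGRITY",
    "view_balance": "MEETS_DEVICE_INTEGRITY",
    "transfer_funds": "MEETS_STRONG_INTEGRITY",
}

# Each level implies the weaker levels before it.
_ORDER = ["MEETS_BASIC_INTEGRITY", "MEETS_DEVICE_INTEGRITY",
          "MEETS_STRONG_INTEGRITY"]

def is_allowed(operation: str, device_verdicts: list[str]) -> bool:
    required = REQUIRED_LEVEL.get(operation)
    if required is None:
        return False   # unknown operation: deny by default
    best = max((_ORDER.index(v) for v in device_verdicts if v in _ORDER),
               default=-1)
    return best >= _ORDER.index(required)
```

A degraded-experience policy (discussed below for high-risk apps) falls out naturally: the same device can be allowed read operations while being denied writes.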
Apple App Attest (iOS)
Apple's equivalent is the App Attest service, part of the DeviceCheck framework. The concept is similar:
- On first launch, your app asks the device's Secure Enclave to generate a cryptographic key pair. The private key never leaves the Secure Enclave — it cannot be extracted, even on a jailbroken device.
- Your app sends the public key to Apple for attestation. Apple verifies that the key was generated by a genuine Apple device's Secure Enclave and that the app is a legitimate, unmodified build.
- Apple returns an attestation object — a certificate chain rooted in Apple's attestation CA.
- Your app sends this attestation to your backend. Your server validates the certificate chain with Apple and stores the public key.
- For subsequent requests, your app uses the Secure Enclave to sign an assertion (a cryptographic proof that includes your request data and a counter to prevent replay). Your server verifies the assertion against the stored public key.
Because the private key lives in hardware and Apple attests the key's origin, a jailbroken device cannot forge a valid attestation. The Secure Enclave operates independently of the main processor and is not affected by jailbreak modifications to the OS.
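The replay protection in step 5 rests on the assertion's monotonically increasing counter. Verifying the signature itself requires a crypto library and the stored public key; this sketch shows only the counter bookkeeping a server would keep alongside that verification:

```python
# Sketch: server-side counter bookkeeping behind App Attest assertions.
# Signature verification over the stored public key is elided; this shows
# only the replay-protection logic on the counter.

class AssertionCounterStore:
    def __init__(self):
        self._last = {}   # key_id -> highest counter seen so far

    def accept(self, key_id: str, counter: int) -> bool:
        """Accept an assertion only if its counter strictly increases."""
        last = self._last.get(key_id, -1)
        if counter <= last:
            return False   # replayed or out-of-order assertion
        self._last[key_id] = counter
        return True
```

An attacker who captures a valid assertion cannot resend it: its counter is no longer greater than the stored value, so the server rejects it even though the signature still checks out.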
Why attestation is stronger
The fundamental difference: client-side checks run in an environment the attacker controls. Server-side attestation relies on responses from Google or Apple, which the attacker does not control. The trust anchor moves from the device (compromised) to the platform provider (not compromised).
This does not mean attestation is perfect. Google Play Integrity requires Google Play Services, which excludes devices without them (some Chinese market phones, custom ROMs without GApps). Apple App Attest requires iOS 14 or later. Both require network connectivity for the initial attestation. And both are services controlled by Apple and Google, which means you are adding a dependency on their infrastructure and policies.
Implementing either of these properly is a substantial piece of work — the attestation flow, the server-side verification, the nonce management, the error handling, the fallback for devices that cannot attest. A full implementation guide is its own article. But if your threat model requires confidence that the device is genuine and unmodified, this is the path that works.
The proportionate response
Not every app needs the same level of protection. The response to a compromised device should be proportionate to what is at risk.
Low-risk apps
If your app is a content reader, a utility, or anything where the worst case of compromise is that the user gets a slightly different experience — do not block. Warn the user with a dismissible message: "This device appears to be rooted/jailbroken. Some security features may not work as intended." Log the event to your server for analytics. Do not restrict functionality.
Blocking users from a weather app because they rooted their phone is hostile for no security benefit.
Medium-risk apps
If your app handles personal data, makes purchases, or integrates with third-party services that have their own security requirements — warn the user, restrict sensitive features, and log server-side. For example: allow browsing but require re-authentication for purchases. Disable biometric login and fall back to password-only. Increase server-side monitoring for the account.
High-risk apps
If you are building a fintech app, a health data app, or anything subject to regulatory compliance requirements — server-side attestation becomes a necessity, not a nice-to-have. The response might be to refuse to run entirely on a compromised device, but this decision should be backed by Play Integrity or App Attest, not purely by client-side checks.
Even in this case, consider offering a degraded experience rather than a hard block. Some financial apps allow read-only access (view balances, see transaction history) on compromised devices but block write operations (transfers, payments). This serves legitimate users who happen to have rooted devices while protecting the operations that actually carry risk.
Never block silently
Whatever your policy, communicate it. A user who rooted their device for legitimate reasons and finds your app mysteriously broken — with no error message, no explanation — will leave a one-star review and uninstall. A clear message explaining that the device's security state does not meet the app's requirements, with a brief explanation of why, is the minimum. If you can, link to a support page that explains what rooting/jailbreaking means and what the user's options are.
What this means in practice
Root and jailbreak detection is a layer, not a solution. It belongs in your security architecture alongside secure storage, certificate pinning, code obfuscation, and proper server-side validation. No single layer is sufficient. Together, they raise the cost of attacking your app from trivial to substantial.
The honest summary:
- Client-side detection catches casual compromise and deters unsophisticated attackers. It takes an hour to implement and costs nothing. Do it.
- The detection will be bypassed by anyone who specifically targets your app with modern tools. Accept this. Do not chase the arms race of ever-more-elaborate client-side checks.
- Server-side attestation is the credible answer for apps where device integrity genuinely matters. It requires more work — server infrastructure, platform API integration, error handling — but it moves the trust anchor to a place the attacker does not control.
- Proportionality matters. Match your response to your actual risk. Over-reacting to a rooted device alienates legitimate users for no meaningful security gain. Under-reacting to a compromised device running a banking app is negligent.
The device is the user's property. They have every right to root it. You have every right to decide what your app does on a compromised device. The goal is to make that decision deliberately, with clear eyes about what detection can and cannot tell you, and to communicate it honestly to the people using your software.