Security

Platform Security: Android Hardening for Flutter Apps

March 25, 2026

Part 10 of the Flutter Security Beyond the Basics series.

If you read the previous post on iOS platform security, the theme was straightforward: iOS is secure by default. Apple makes the opinionated choices for you, and your job is mostly not to weaken them. App Transport Security blocks cleartext traffic unless you explicitly opt out. The Keychain encrypts data with hardware-backed keys unless you ask it not to. The sandbox is non-negotiable.

Android is the opposite philosophy. It gives you more control than iOS does — more configuration surfaces, more options, more granularity. But it does not force your hand. Cleartext traffic is allowed unless you block it. Backups are enabled unless you disable them. Components are exported unless you declare otherwise. Android gives you every tool you need to build something as secure as an iOS app. It simply will not do it for you.

That distinction matters enormously for Flutter developers. Your Dart code is cross-platform, but the platform configuration that wraps it is not. A Flutter app with no Android-specific security configuration is relying on defaults that were designed for backwards compatibility, not for security. This post walks through every Android-specific configuration you should be setting, with the actual XML and the reasoning behind each choice.

Network Security Configuration

Android introduced the Network Security Configuration in API 24 (Android 7.0). It is a declarative XML file that controls how your app handles network connections — which domains allow cleartext traffic, which certificates to trust, and whether to enforce certificate pinning. It is Android's equivalent of iOS's App Transport Security, but more granular.

The file

Create android/app/src/main/res/xml/network_security_config.xml:

xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <base-config cleartextTrafficPermitted="false">
        <trust-anchors>
            <certificates src="system" />
        </trust-anchors>
    </base-config>
</network-security-config>

This does two things. First, cleartextTrafficPermitted="false" blocks all HTTP (non-HTTPS) traffic from your app, globally. Any attempt to make a plaintext HTTP request to any domain will fail. Second, trust-anchors with src="system" means your app trusts only the system certificate store — the same certificates that the device ships with.

Referencing it in the manifest

In android/app/src/main/AndroidManifest.xml, add the networkSecurityConfig attribute to the <application> tag:

xml
<application
    android:networkSecurityConfig="@xml/network_security_config"
    ...>

Without this reference, the XML file exists but does nothing.

Per-domain exceptions for development

During development, you often need to connect to a local backend running on your machine. That backend is probably serving HTTP, not HTTPS. Rather than disabling cleartext protection globally, add a domain-specific exception:

xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <base-config cleartextTrafficPermitted="false">
        <trust-anchors>
            <certificates src="system" />
        </trust-anchors>
    </base-config>

    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="false">10.0.2.2</domain>
        <domain includeSubdomains="false">192.168.1.100</domain>
    </domain-config>
</network-security-config>

Cleartext is blocked everywhere except those two specific addresses. 10.0.2.2 is the emulator's alias for localhost on the host machine.

Certificate pinning at the platform level

Post 4 of this series covered certificate pinning at the application level using Dart HTTP clients. Android's Network Security Configuration lets you pin certificates at the platform level, which means the pinning applies to all connections from your app — including those made by native code, third-party SDKs, and WebViews:

xml
<domain-config>
    <domain includeSubdomains="true">api.yourapp.com</domain>
    <pin-set expiration="2027-01-01">
        <pin digest="SHA-256">base64EncodedSHA256OfSubjectPublicKeyInfo=</pin>
        <pin digest="SHA-256">base64EncodedBackupPin=</pin>
    </pin-set>
</domain-config>

Two pins are required: a primary and a backup. The expiration date is a safety mechanism — if both pins become invalid (because you rotated certificates and forgot to update the app), the pinning stops being enforced after that date rather than permanently locking users out. This is a pragmatic Android-specific feature that iOS does not offer at the platform level.

Debug overrides for proxy tools

When you need to inspect traffic with Charles Proxy or mitmproxy during development, those tools use their own certificate authority. You need your debug builds to trust that CA without weakening your release builds:

xml
<network-security-config>
    <base-config cleartextTrafficPermitted="false">
        <trust-anchors>
            <certificates src="system" />
        </trust-anchors>
    </base-config>

    <debug-overrides>
        <trust-anchors>
            <certificates src="system" />
            <certificates src="user" />
        </trust-anchors>
    </debug-overrides>
</network-security-config>

The <debug-overrides> block applies only when your app is built with debuggable=true (which Flutter does automatically for debug builds). Adding src="user" means debug builds trust user-installed certificates — the ones you install when setting up Charles or mitmproxy. Release builds ignore this block entirely.

Play Integrity API

Root and jailbreak detection was covered in Post 8, focusing on client-side checks. The fundamental weakness of client-side detection is that it runs on the device the attacker controls. The Play Integrity API (which replaced the deprecated SafetyNet Attestation) solves this by moving the verdict off the device entirely.

What it does

When your app requests an integrity verdict, Google's servers — not your app, not the device — evaluate three things:

  1. Device integrity: Is this a genuine Android device running an unmodified OS? Or is it an emulator, a rooted device, or a device with an unlocked bootloader?
  2. App integrity: Is the app binary the same one published on Google Play? Or has it been repackaged, modified, or sideloaded from an unknown source?
  3. Account details: Is the app licensed to the Google Play account on the device, meaning it was acquired through Google Play? This helps distinguish genuine installs from pirated copies and automated bots.

The verdict comes as a signed token from Google. Your app cannot forge it. An attacker with root access cannot modify it in transit because the token is cryptographically signed by Google's key, not by anything on the device.

The flow

  1. Your app calls the Play Integrity API on the device to request an integrity token.
  2. The API returns a signed, encrypted token.
  3. Your app sends this token to your backend server.
  4. Your backend calls Google's servers to decrypt and verify the token.
  5. Google returns the verdict: device integrity, app integrity, and account details.
  6. Your backend decides what to do — allow the request, restrict functionality, or reject it.

The critical point is step 4. The verification happens on your server, talking to Google's server. The device is not involved in the verification. An attacker who has compromised the device cannot intercept or modify the server-to-Google communication.

Limitations

Play Integrity requires Google Play Services. Devices without it — Huawei devices running HarmonyOS, Amazon Fire tablets, custom ROMs without GApps — cannot produce a valid integrity token. If your app needs to support those devices, you cannot rely on Play Integrity as a hard gate. You can use it as a signal (present and valid means high trust; absent means unknown, not necessarily hostile) combined with other signals.

Play Integrity also has request quotas. The standard tier allows 10,000 requests per day. For most apps this is sufficient, but if you are verifying integrity on every API call, you will need to either cache verdicts or request a quota increase from Google.
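One way to stay inside the quota is to cache the outcome of a successful check for a short window rather than attesting on every request. A minimal pure-Dart sketch (the class name and TTL are illustrative, not from any package):

```dart
/// Hypothetical helper: caches the result of an integrity check for a
/// short window so the app does not spend a Play Integrity request on
/// every API call.
class CachedIntegrityCheck {
  CachedIntegrityCheck({this.ttl = const Duration(minutes: 15)});

  final Duration ttl;
  bool? _trusted;
  DateTime? _checkedAt;

  /// Returns the cached verdict while fresh; otherwise runs [verify]
  /// (request a token, send it to the backend, read the verdict).
  Future<bool> isTrusted(Future<bool> Function() verify) async {
    final now = DateTime.now();
    if (_trusted != null && now.difference(_checkedAt!) < ttl) {
      return _trusted!;
    }
    _trusted = await verify();
    _checkedAt = now;
    return _trusted!;
  }
}
```

Caching trades freshness for quota: a device compromised mid-session keeps its cached verdict until the TTL expires, so keep the window short for sensitive operations.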

Integration in Flutter

Community packages on pub.dev (such as play_integrity) provide a Dart wrapper. The general pattern:

dart
// On the client
final integrityToken = await PlayIntegrity.requestIntegrityToken(
  nonce: generateNonce(), // a unique, server-generated nonce
);

// Send integrityToken to your backend
await apiClient.post('/verify-integrity', body: {
  'token': integrityToken,
});

Your backend then decodes the token by calling Google's Play Integrity server API:

http
POST https://playintegrity.googleapis.com/v1/{packageName}:decodeIntegrityToken

The nonce prevents replay attacks — your server generates it, your app includes it in the request, and your server checks that the nonce in the decoded verdict matches the one it issued.
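On the server side, the nonce bookkeeping can be as simple as issuing random values and allowing each one to be consumed exactly once. A minimal in-memory Dart sketch (hypothetical; a production server would persist nonces with an expiry):

```dart
import 'dart:convert';
import 'dart:math';

/// Hypothetical server-side nonce store for Play Integrity requests.
class NonceStore {
  final _rng = Random.secure();
  final _issued = <String>{};

  /// Issues a fresh, URL-safe nonce for the client to embed in its
  /// integrity token request.
  String issue() {
    final bytes = List<int>.generate(24, (_) => _rng.nextInt(256));
    final nonce = base64UrlEncode(bytes);
    _issued.add(nonce);
    return nonce;
  }

  /// Consumes a nonce exactly once; a second use (a replay) fails.
  bool consume(String nonce) => _issued.remove(nonce);
}
```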

Android Keystore

Android Keystore is the platform's secure key storage system. Unlike iOS, where the Keychain is the single answer, Android Keystore is specifically designed for cryptographic keys — not arbitrary data.

Hardware-backed storage

On modern devices, the Keystore is backed by dedicated hardware. On Pixel and Samsung flagship devices, this is StrongBox — a tamper-resistant hardware security module. On most other devices, it is the Trusted Execution Environment (TEE), a secure area of the main processor that runs its own operating system isolated from Android.

The critical property: when you generate a key inside the Keystore, the private key never leaves the secure hardware. The operating system cannot extract it. Root access cannot extract it. Even if someone physically extracts the flash storage chip and reads every byte, the key is not there — it exists only inside the secure element.

Key properties

When generating a key, you can set security requirements:

  • `setUserAuthenticationRequired(true)`: The key can only be used after the user authenticates with biometric or device PIN. Without authentication, any attempt to use the key throws an exception. This is the mechanism that makes biometric authentication meaningful — the biometric does not just return a boolean; it unlocks a cryptographic key.
  • `setUserAuthenticationValidityDurationSeconds(30)`: After authentication, the key remains usable for 30 seconds before requiring re-authentication. (On API 30 and above this method is deprecated in favour of setUserAuthenticationParameters, which takes the timeout together with the allowed authentication types.)
  • `setIsStrongBoxBacked(true)`: Requests that the key be stored in StrongBox hardware rather than the TEE. If StrongBox is not available, key generation fails rather than silently falling back. Use this when you need the highest level of assurance and are willing to handle the absence gracefully.
  • `setUnlockedDeviceRequired(true)`: The key is only usable when the device is currently unlocked. If the device locks, the key becomes inaccessible.

How flutter_secure_storage uses this

When you use flutter_secure_storage on Android, it does not store your data directly in the Keystore (the Keystore holds keys, not arbitrary strings). Instead, it generates an AES key inside the Keystore, then uses that key to encrypt your data via EncryptedSharedPreferences. The encrypted data sits in a SharedPreferences file on disk, but without the Keystore-backed key, it is unreadable.

This is a genuinely good design. The encryption key is hardware-protected. The data is encrypted at rest. Even on a rooted device, reading the SharedPreferences file yields only ciphertext. The attacker would need to extract the key from the hardware security module, which is the one thing the hardware is specifically designed to prevent.
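From Dart, opting into this scheme is a one-line configuration on recent versions of flutter_secure_storage (depending on the plugin version it may already be the default; the key name below is illustrative):

```dart
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

// Ask the plugin to use EncryptedSharedPreferences, encrypted with a
// Keystore-held AES key, rather than its legacy storage path.
const storage = FlutterSecureStorage(
  aOptions: AndroidOptions(encryptedSharedPreferences: true),
);

Future<void> saveToken(String token) =>
    storage.write(key: 'auth_token', value: token);
```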

Backup security — the silent leak

This is one of the most commonly overlooked Android security settings, and it is a genuine risk.

The default

In AndroidManifest.xml, the default value for android:allowBackup is true. You do not need to set it. If you do not set it, it is true. This means Android's Auto Backup system will copy your app's data — SharedPreferences, databases, files in internal storage — to the user's Google Drive.

Why this matters

If an attacker gains access to a user's Google account (phishing, credential stuffing, reused passwords), they can restore your app's backup to a different device. That restored backup includes everything Auto Backup captured: tokens, cached data, database contents. If you are using flutter_secure_storage, the encrypted SharedPreferences file is backed up, but the Keystore key is not (Keystore keys are not included in backups). This means the restored data is unreadable — the encryption holds. But if you stored anything sensitive in regular SharedPreferences or SQLite databases without encryption, it is fully exposed.

The straightforward fix

For high-security apps, disable backup entirely:

xml
<application
    android:allowBackup="false"
    ...>

This prevents all Auto Backup. The user loses the convenience of restoring app data when they switch devices, but sensitive data cannot leak through this channel.

Granular control

If disabling backup entirely is too aggressive, you can control exactly what gets backed up.

For Android 11 and below, use <full-backup-content>:

xml
<!-- android/app/src/main/res/xml/backup_rules.xml -->
<?xml version="1.0" encoding="utf-8"?>
<full-backup-content>
    <exclude domain="sharedpref" path="FlutterSecureStorage" />
    <exclude domain="database" path="sensitive.db" />
    <exclude domain="file" path="tokens/" />
</full-backup-content>

Reference it in the manifest:

xml
<application
    android:allowBackup="true"
    android:fullBackupContent="@xml/backup_rules"
    ...>

For Android 12 and above, Google introduced a new format with <data-extraction-rules>:

xml
<!-- android/app/src/main/res/xml/data_extraction_rules.xml -->
<?xml version="1.0" encoding="utf-8"?>
<data-extraction-rules>
    <cloud-backup>
        <exclude domain="sharedpref" path="FlutterSecureStorage" />
        <exclude domain="database" path="sensitive.db" />
        <exclude domain="file" path="tokens/" />
    </cloud-backup>
    <device-transfer>
        <exclude domain="sharedpref" path="FlutterSecureStorage" />
        <exclude domain="database" path="sensitive.db" />
    </device-transfer>
</data-extraction-rules>

The newer format distinguishes between cloud backup (Google Drive) and device-to-device transfer (which happens during the Android setup wizard when migrating to a new phone). You may want to allow device transfer for some data while still excluding it from cloud backup.

Reference it in the manifest:

xml
<application
    android:allowBackup="true"
    android:fullBackupContent="@xml/backup_rules"
    android:dataExtractionRules="@xml/data_extraction_rules"
    ...>

Include both attributes for compatibility across Android versions. The system uses dataExtractionRules on Android 12+ and falls back to fullBackupContent on older versions.

Permission model

Android's runtime permission model is well documented, but there are security-relevant details that Flutter developers often miss.

Request only what you need, when you need it

This is not just good UX — it reduces your attack surface. Every permission you hold is a permission that can be exploited if your app is compromised. If your app requests ACCESS_FINE_LOCATION at startup but only uses it in one rarely-accessed feature, you are holding a sensitive permission for no reason most of the time. Request it at the moment the user accesses the feature. If they deny it, degrade gracefully.
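In practice that usually means wrapping the feature's entry point in a just-in-time request. A sketch using the widely used permission_handler package (assumed here as the permission plugin; adapt to whatever your app uses):

```dart
import 'package:permission_handler/permission_handler.dart';

/// Requests location permission only when the feature is opened,
/// and reports whether the caller can proceed.
Future<bool> ensureLocationPermission() async {
  var status = await Permission.locationWhenInUse.status;
  if (status.isGranted) return true;
  // First use: ask now, at the moment of need.
  status = await Permission.locationWhenInUse.request();
  return status.isGranted; // caller degrades gracefully on false
}
```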

The android:exported requirement

Since Android 12 (API 31), every <activity>, <service>, <receiver>, and <provider> that has an intent filter must explicitly declare android:exported="true" or android:exported="false". If you omit it, the app will not install.

This matters for security because exported="true" means any other app on the device can start that component. If an activity is exported and it accepts data from the intent without validation, another app can feed it malicious input.

For your Flutter app, the main activity needs exported="true" because it handles the launcher intent. But if you have additional activities — deep link handlers, notification click handlers — ask whether they genuinely need to be accessible to other apps:

xml
<activity
    android:name=".MainActivity"
    android:exported="true">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

<activity
    android:name=".DeepLinkActivity"
    android:exported="true">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="yourapp" android:host="open" />
    </intent-filter>
</activity>

<receiver
    android:name=".InternalReceiver"
    android:exported="false" />

The InternalReceiver is not accessible to other apps. If it were exported, any app could send it a broadcast and trigger whatever logic it performs.
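The same discipline applies to data arriving through the exported DeepLinkActivity: validate every incoming link against the pattern you declared before acting on it. A pure-Dart sketch (the allowed paths are hypothetical):

```dart
/// Validates an incoming deep link against the yourapp://open pattern
/// declared in the manifest before the app acts on it.
bool isValidDeepLink(String link) {
  final uri = Uri.tryParse(link);
  if (uri == null) return false;
  if (uri.scheme != 'yourapp' || uri.host != 'open') return false;
  // Allow only known destinations; reject anything unexpected.
  const allowedPaths = {'/profile', '/orders'};
  return allowedPaths.contains(uri.path);
}
```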

Intent filter security

An <intent-filter> changes a component's default visibility. Before Android 12, declaring an intent filter made the component exported by default even if you never set the attribute, because intent filters exist specifically to let other apps interact with it. An explicit android:exported="false" is honoured, but such a component can then no longer receive implicit intents from other apps, which usually defeats the point of the filter. The practical rule: if you declare an intent filter, treat the component as public and validate all incoming data accordingly.

Content Providers and FileProvider

If your Flutter app shares files with other apps — camera captures sent to a photo editor, documents opened in a viewer — you need a FileProvider. A FileProvider is a content provider that generates secure content:// URIs for files in your app's internal storage, granting temporary read access to a specific file rather than exposing a directory.

Configuration

In AndroidManifest.xml:

xml
<provider
    android:name="androidx.core.content.FileProvider"
    android:authorities="${applicationId}.fileprovider"
    android:exported="false"
    android:grantUriPermissions="true">
    <meta-data
        android:name="android.support.FILE_PROVIDER_PATHS"
        android:resource="@xml/file_paths" />
</provider>

Note android:exported="false". The FileProvider is never exported. Access is granted per-URI through intent flags, not through a blanket export.

Create android/app/src/main/res/xml/file_paths.xml:

xml
<?xml version="1.0" encoding="utf-8"?>
<paths>
    <files-path name="shared_documents" path="documents/" />
    <cache-path name="shared_images" path="images/" />
</paths>

This declares that only the documents/ subdirectory of internal files and the images/ subdirectory of the cache directory are shareable. Everything else is inaccessible. Never use <root-path> with path="" — that exposes the entire filesystem to any app that receives a URI from your provider.

WebView security

If your Flutter app uses webview_flutter or flutter_inappwebview to display web content, the WebView introduces a separate attack surface.

JavaScript

setJavaScriptEnabled(true) should only be set when the content you are loading genuinely requires it. If you are displaying a static HTML page, a terms of service document, or a privacy policy, leave JavaScript disabled. Every enabled feature is an enabled attack vector.

File access

By default, WebViews can read local files through file:// URIs. Disable this unless you specifically need it:

dart
WebView(
  initialUrl: 'https://yourapp.com/content',
  javascriptMode: JavascriptMode.disabled,
  // webview_flutter 3.x API shown; 4.x configures these on a
  // WebViewController and its Android-specific controller instead.
  // In the Android platform-specific settings:
  // setAllowFileAccess(false)
  // setAllowContentAccess(false)
)

If an attacker can control the URL loaded in your WebView — through a deep link, a push notification payload, or a server response — and file access is enabled, they can read files from your app's storage.

URL validation

Never load a URL in a WebView without validating it first. If your app receives a URL from a deep link, a push notification, or any external source, check that it matches an expected pattern before loading it:

dart
bool isAllowedUrl(String url) {
  final uri = Uri.tryParse(url);
  if (uri == null) return false;

  const allowedHosts = ['yourapp.com', 'docs.yourapp.com'];
  return uri.scheme == 'https' && allowedHosts.contains(uri.host);
}

Loading an arbitrary attacker-controlled URL in your WebView, especially with JavaScript enabled, gives the attacker a scripting context inside your app's process.

JavaScript interfaces

Android's addJavascriptInterface (surfaced in Flutter through flutter_inappwebview's JavaScript handlers) lets you expose native methods to JavaScript running in the WebView. Every exposed method becomes callable from any page loaded in the WebView. If you expose a method that reads from secure storage, and an attacker then manages to load their own page in the WebView, they can call that method and extract the data.

The rule is simple: if you must use a JavaScript interface, expose the minimum possible surface. Validate every call. And ensure the WebView can only load URLs you control.
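With flutter_inappwebview, that minimal surface might look like a single narrowly scoped handler returning non-sensitive data (a sketch; the handler name and payload are illustrative):

```dart
import 'package:flutter_inappwebview/flutter_inappwebview.dart';

// Registers one narrowly scoped handler. Any page in this WebView can
// call window.flutter_inappwebview.callHandler('getTheme'), so the
// handler must never return secrets or perform privileged actions.
void registerHandlers(InAppWebViewController controller) {
  controller.addJavaScriptHandler(
    handlerName: 'getTheme',
    callback: (args) => {'theme': 'dark'},
  );
}
```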

The complete secure AndroidManifest.xml

Bringing all of these configurations together, here is a reference AndroidManifest.xml with every security-relevant attribute in place:

xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android">

    <!-- Request only the permissions you actually need -->
    <uses-permission android:name="android.permission.INTERNET" />

    <application
        android:label="@string/app_name"
        android:icon="@mipmap/ic_launcher"
        android:networkSecurityConfig="@xml/network_security_config"
        android:allowBackup="false"
        android:fullBackupContent="false"
        android:dataExtractionRules="@xml/data_extraction_rules"
        android:usesCleartextTraffic="false">

        <activity
            android:name=".MainActivity"
            android:exported="true"
            android:launchMode="singleTop"
            android:theme="@style/LaunchTheme"
            android:configChanges="orientation|keyboardHidden|keyboard|screenSize|smallestScreenSize|locale|layoutDirection|fontScale|screenLayout|density|uiMode"
            android:hardwareAccelerated="true"
            android:windowSoftInputMode="adjustResize">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <!-- FileProvider for secure file sharing -->
        <provider
            android:name="androidx.core.content.FileProvider"
            android:authorities="${applicationId}.fileprovider"
            android:exported="false"
            android:grantUriPermissions="true">
            <meta-data
                android:name="android.support.FILE_PROVIDER_PATHS"
                android:resource="@xml/file_paths" />
        </provider>

        <!-- Internal receivers: never exported -->
        <receiver
            android:name=".NotificationReceiver"
            android:exported="false" />

        <meta-data
            android:name="flutterEmbedding"
            android:value="2" />
    </application>
</manifest>

A few notes on this manifest. Setting both android:allowBackup="false" and android:fullBackupContent="false" is redundant — allowBackup="false" is sufficient. But including fullBackupContent="false" makes the intent explicit and prevents lint warnings on some build tool versions. If you choose granular backup rules instead of disabling backup entirely, replace android:allowBackup="false" with android:allowBackup="true" and point fullBackupContent and dataExtractionRules to your rule files as shown in the backup section above.

The android:usesCleartextTraffic="false" attribute is technically redundant when you have a network_security_config.xml with cleartextTrafficPermitted="false" in the base config. Including both is a belt-and-braces approach — if someone removes the network security config reference, the manifest attribute still blocks cleartext.

What Android expects from you

The through-line of this entire post is that Android's security posture is opt-in. The platform provides robust mechanisms — Network Security Configuration, hardware-backed Keystore, Play Integrity attestation, granular backup controls, explicit component export declarations — but it does not enable them by default. The defaults prioritise backwards compatibility and developer convenience.

For a Flutter developer, this means the Android side of your app requires deliberate configuration. It is not enough to write secure Dart code. You need to open the Android project, edit the manifest, create the XML configuration files, and make explicit choices about every security-relevant setting.

The good news is that these configurations are declarative. They are XML files and manifest attributes, not complex code. You set them once, they apply globally, and they protect your app at the platform level — below your Dart code, below the Flutter engine, at the layer where the operating system itself enforces the rules.

The next post in this series will cover secure CI/CD pipelines — protecting your signing keys, managing secrets in build environments, and ensuring that the security you built into your app is not undermined by the process that builds and ships it.

Related Topics

flutter android security, network security config flutter, play integrity api flutter, android keystore flutter, android backup security, flutter android hardening, android manifest security, flutter webview security, fileprovider flutter, android permissions flutter
