Security

The Attack Surfaces You Forgot: Logging, Clipboard, Deep Links, WebViews, and Supply Chain


March 25, 2026

Part 12 of the Flutter Security Beyond the Basics series.

Why this post exists

Over the previous eleven posts, we covered the major security surfaces in a Flutter application: how data is stored, how keys and tokens are managed, how certificates are pinned, how biometrics work, how obfuscation raises the cost of reverse engineering, how to detect compromised devices, how to protect screens and memory. Each of those topics is large enough to warrant its own deep treatment, and each one got it.

But security is also about the small things. The print() statement you left in during debugging that writes an access token to the system log. The clipboard that holds a copied password for as long as the user forgets about it. The deep link scheme that any app on the device can register. The WebView that loads a third-party URL with JavaScript enabled and a bridge to your native code. The package from pub.dev that you added six months ago and never audited.

None of these justifies a full post. Together, they represent the gap between "we implemented security" and "we thought about security." This final post is the sweep — checking every corner before locking up.

Logging — the leak nobody audits

Every Flutter developer has written print() statements during debugging. Most of them remove those statements before a release build. Some do not. The ones that survive are a quiet, persistent data leak.

What happens to print() in production

On Android, print(), debugPrint(), and log() all write to logcat — the system-wide logging buffer. On a stock device, logcat output is restricted to the calling app. But on a rooted device, or on older Android versions where the READ_LOGS permission was available to third-party apps, any process with the right access can read every app's log output.

Consider what debugging statements typically contain:

dart
// "Just for debugging, I'll remove it later"
print('Login response: $response');
print('User object: ${user.toJson()}');
print('Token refreshed: $newAccessToken');
print('Payment payload: ${paymentData.toString()}');

Each of those statements writes sensitive data — authentication tokens, user PII, payment details — into the system log in plaintext. On a rooted device with a log-reading tool running in the background, an attacker collects everything your app prints without touching the app itself.

On iOS, print() writes to the unified logging system. While Apple has tightened access in recent versions, a connected device with Xcode or Console.app can stream the log output in real time. During development, this is convenient. In production, it is a data leak to anyone with physical access and a USB cable.

The fix: structured logging with release guards

The simplest approach is to guard every log statement behind a debug mode check:

dart
// kDebugMode is defined in package:flutter/foundation.dart
if (kDebugMode) {
  print('Debug: user loaded successfully');
}

This works but does not scale. A better approach is to build a logging utility that enforces the policy centrally:

dart
import 'package:flutter/foundation.dart';

/// Centralised logger that suppresses output in release builds
/// and redacts sensitive fields in all builds.
class SecureLogger {
  static const _redactedKeys = {
    'token',
    'accessToken',
    'refreshToken',
    'password',
    'secret',
    'authorization',
    'cookie',
    'creditCard',
    'cvv',
  };

  /// Log a message — only in debug mode.
  static void debug(String message) {
    if (kDebugMode) {
      debugPrint('[DEBUG] $message');
    }
  }

  /// Log an info message — only in debug mode.
  static void info(String message) {
    if (kDebugMode) {
      debugPrint('[INFO] $message');
    }
  }

  /// Log an error — always, but redact sensitive content.
  static void error(String message, [Object? error, StackTrace? stackTrace]) {
    if (kDebugMode) {
      debugPrint('[ERROR] $message');
      if (error != null) debugPrint('  Error: $error');
      if (stackTrace != null) debugPrint('  Stack: $stackTrace');
    }
    // In release: send to crash reporting service (Sentry, Crashlytics)
    // but never include raw tokens or credentials.
  }

  /// Log a map, automatically redacting sensitive keys.
  static void debugMap(String label, Map<String, dynamic> data) {
    if (kDebugMode) {
      final redacted = _redactMap(data);
      debugPrint('[DEBUG] $label: $redacted');
    }
  }

  static Map<String, dynamic> _redactMap(Map<String, dynamic> data) {
    return data.map((key, value) {
      if (_redactedKeys.any(
        (k) => key.toLowerCase().contains(k.toLowerCase()),
      )) {
        return MapEntry(key, '[REDACTED]');
      }
      if (value is Map<String, dynamic>) {
        return MapEntry(key, _redactMap(value));
      }
      return MapEntry(key, value);
    });
  }
}

Use SecureLogger.debug() everywhere instead of print(). In debug mode, you see the output. In release mode, nothing reaches the system log.

Dio interceptors — the other logging source

If you use Dio for HTTP requests, you likely have a logging interceptor. The default LogInterceptor prints request and response bodies, which means every API response — including authentication responses containing tokens — is written to the log.

dart
// Dangerous in production
dio.interceptors.add(LogInterceptor(
  requestBody: true,
  responseBody: true,
));

Replace it with a conditional interceptor:

dart
if (kDebugMode) {
  dio.interceptors.add(LogInterceptor(
    requestBody: true,
    responseBody: true,
    requestHeader: false, // Don't log Authorization headers even in debug
  ));
}

Or build a custom interceptor that redacts sensitive headers and response fields before logging.
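The redaction step itself can be sketched as a pure function; wiring it into a Dio `Interceptor` subclass's `onRequest`/`onResponse` hooks is the integration point. The header names below are common examples, not an exhaustive list:

```dart
/// Return a copy of [headers] with sensitive values masked.
/// Run every header map through this before it reaches any log sink.
Map<String, String> redactHeaders(Map<String, String> headers) {
  const sensitive = {'authorization', 'cookie', 'set-cookie', 'x-api-key'};
  return headers.map(
    (key, value) => sensitive.contains(key.toLowerCase())
        ? MapEntry(key, '[REDACTED]')
        : MapEntry(key, value),
  );
}
```

The same pattern extends to response bodies: redact first, log second, so no code path can print a raw credential.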

Clipboard — the shared buffer

When a user taps "Copy" on a password field, a token, or an account number, that value sits in the system clipboard. The clipboard is a shared resource. Before Android 10, any app, foreground or background, could read it silently. Android 10 restricted clipboard access to the foreground app and the active input method. Android 12 added a toast that notifies the user when one app reads content placed by another, but the read still succeeds. On iOS 14 and later, a banner appears when an app reads the clipboard; there too, the read succeeds.

The risk is straightforward: a user copies a password from a password manager, switches to your app to paste it, and in between, a malicious app in the foreground reads the clipboard. Or the user copies a one-time code and forgets about it, and the value remains in the clipboard for hours.

Clearing the clipboard on a timer

If your app places sensitive data on the clipboard — or if the user copies something sensitive from your app — clear it after a short timeout:

dart
import 'package:flutter/services.dart';

/// Copy text to the clipboard and clear it after [seconds], but only
/// if the clipboard still holds the value we placed, so that something
/// the user copied in the meantime is not wiped out.
Future<void> secureCopy(String text, {int seconds = 10}) async {
  await Clipboard.setData(ClipboardData(text: text));

  Future.delayed(Duration(seconds: seconds), () async {
    final current = await Clipboard.getData(Clipboard.kTextPlain);
    if (current?.text == text) {
      await Clipboard.setData(const ClipboardData(text: ''));
    }
  });
}

Ten seconds is generous. Password managers like 1Password and Bitwarden typically clear the clipboard after 10 to 30 seconds.

Preventing copy from sensitive fields

For password fields, TextField(obscureText: true) disables the system copy action on most platforms. The user sees dots instead of characters, and the long-press context menu does not offer "Copy."

dart
TextField(
  obscureText: true,
  enableInteractiveSelection: false, // Prevents select-all and copy
  decoration: const InputDecoration(
    labelText: 'Password',
  ),
)

Setting enableInteractiveSelection: false goes further — it prevents the user from selecting and copying the field content through any gesture. Use this for fields where copying should never happen: password inputs, PIN fields, CVV inputs.

The paste question

Blocking paste on password fields is a different matter. Some apps disable paste to prevent "clipboard attacks," but this is actively hostile to users who rely on password managers. A user with a 40-character random password generated by their manager needs to paste it. Forcing them to type it manually means they will choose a simpler password instead.

The pragmatic approach: allow paste on password fields, but auto-clear the clipboard after the paste succeeds. Protect the user without punishing them.

Deep links and URL scheme hijacking

Flutter apps commonly use deep links for navigation — opening specific screens from notifications, emails, or OAuth callbacks. The implementation usually involves registering a custom URL scheme like myapp://.

The problem: custom URL schemes are not unique. Any app on the device can register the same scheme. There is no verification, no ownership check, no priority system. If two apps register myapp://, the operating system decides which one handles the link, and the user may not get a choice.

The OAuth redirect attack

This becomes a real vulnerability when deep links are used for OAuth callbacks. A typical flow:

  1. Your app opens a browser to the OAuth provider's authorisation page
  2. The user authenticates
  3. The provider redirects to myapp://callback?code=abc123
  4. Your app receives the code and exchanges it for tokens

If a malicious app has also registered myapp://, it can intercept step 3. The attacker's app receives the authorisation code instead of yours and exchanges it for tokens — gaining access to the user's account.
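Whatever link mechanism is in use, the payload of an incoming redirect should be treated as untrusted input. A minimal validation sketch, in which the host, path, and state handling are illustrative assumptions rather than a prescribed API:

```dart
/// Validate an incoming OAuth callback URI before trusting its contents.
/// Returns the authorisation code, or null if anything looks wrong.
String? extractAuthCode(Uri uri, {required String expectedState}) {
  // Only accept the exact callback location we registered.
  if (uri.scheme != 'https' || uri.host != 'yourdomain.com') return null;
  if (uri.path != '/callback') return null;

  // The state parameter must match the value generated at flow start;
  // a mismatch indicates a forged or replayed redirect.
  if (uri.queryParameters['state'] != expectedState) return null;

  final code = uri.queryParameters['code'];
  if (code == null || code.isEmpty) return null;
  return code;
}
```

Rejecting anything that fails these checks costs nothing and closes off malformed or spoofed redirects before they reach your token exchange.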

The fix: verified links

Universal Links (iOS) and App Links (Android) solve this by tying the link to a domain you own. Instead of myapp://callback, you use https://yourdomain.com/callback. The operating system verifies that the app claiming to handle yourdomain.com links is actually published by the domain owner.

For iOS, you host an apple-app-site-association file at https://yourdomain.com/.well-known/apple-app-site-association:

json
{
  "applinks": {
    "details": [
      {
        "appIDs": ["TEAMID.com.yourcompany.yourapp"],
        "components": [
          {
            "/": "/callback/*",
            "comment": "OAuth callback handling"
          },
          {
            "/": "/deeplink/*",
            "comment": "General deep links"
          }
        ]
      }
    ]
  }
}

For Android, you host an assetlinks.json file at https://yourdomain.com/.well-known/assetlinks.json:

json
[
  {
    "relation": ["delegate_permission/common.handle_all_urls"],
    "target": {
      "namespace": "android_app",
      "package_name": "com.yourcompany.yourapp",
      "sha256_cert_fingerprints": [
        "AB:CD:EF:12:34:56:78:90:AB:CD:EF:12:34:56:78:90:AB:CD:EF:12:34:56:78:90:AB:CD:EF:12:34:56:78:90"
      ]
    }
  }
]

Replace the SHA-256 fingerprint with your app's signing certificate fingerprint (use keytool -list -v -keystore your-keystore.jks).

In your AndroidManifest.xml, declare the intent filter with autoVerify:

xml
<intent-filter android:autoVerify="true">
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data
        android:scheme="https"
        android:host="yourdomain.com"
        android:pathPrefix="/callback" />
</intent-filter>

The autoVerify="true" attribute tells Android to check the assetlinks.json file at install time. If verification succeeds, your app is the exclusive handler for those URLs. No other app can intercept them.

PKCE — defence in depth for OAuth

Even with verified links, add PKCE (Proof Key for Code Exchange) to your OAuth flow. PKCE ensures that even if an attacker intercepts the authorisation code, they cannot exchange it for tokens.

The mechanism: your app generates a random code_verifier before starting the OAuth flow and sends a hashed version (code_challenge) with the authorisation request. When exchanging the code for tokens, your app sends the original code_verifier. The server verifies it matches the challenge. An attacker who intercepted only the redirect does not have the verifier and cannot complete the exchange.

Most OAuth libraries for Flutter (such as flutter_appauth) support PKCE out of the box. If you are building the flow manually, generate the verifier as a cryptographically random string of at least 43 characters, hash it with SHA-256, and base64url-encode the result.
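The verifier-generation step can be sketched in pure Dart. The character set below is the unreserved set defined by RFC 7636; the default length of 64 is a reasonable choice within the allowed 43 to 128 range:

```dart
import 'dart:math';

/// Generate a PKCE code_verifier: 43-128 characters drawn from the
/// RFC 7636 unreserved set [A-Z a-z 0-9 - . _ ~].
String generateCodeVerifier({int length = 64}) {
  const charset =
      'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~';
  final random = Random.secure(); // Cryptographically secure source.
  return List.generate(
    length,
    (_) => charset[random.nextInt(charset.length)],
  ).join();
}
```

The corresponding code_challenge is the SHA-256 hash of the verifier, base64url-encoded with the trailing = padding removed. In a Flutter project that hash typically comes from package:crypto, since SHA-256 is not part of the Dart core libraries.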

WebView security

WebViews embed a browser inside your app. They are useful for displaying terms of service, payment flows, OAuth consent screens, and third-party content. They are also one of the most dangerous components you can add, because a WebView is an execution environment for code you do not control.

JavaScript and the bridge

When you enable JavaScript in a WebView — which most developers do by default — any JavaScript running in that page can execute. If the page is your own server and you control every line of its code, this is fine. If the page is a third-party URL, user-supplied input, or any content you do not fully control, you have given that content the ability to run code inside your app's process.

The risk escalates when you add a JavaScript-to-native bridge. In Flutter's webview_flutter package, addJavaScriptChannel exposes a Dart callback that JavaScript in the WebView can invoke:

dart
// Dangerous if the WebView loads untrusted content
controller.addJavaScriptChannel(
  'NativeBridge',
  onMessageReceived: (message) {
    // This runs Dart code triggered by JavaScript in the WebView
    handleMessage(message.message);
  },
);

If the WebView navigates to an attacker-controlled page — through a redirect, an injected iframe, or a compromised third-party script — that page can call NativeBridge.postMessage('...') and trigger your Dart code.

Safe WebView configuration

Treat a WebView as an untrusted execution environment. Minimise what you expose to it, and validate what it loads.

dart
import 'package:webview_flutter/webview_flutter.dart';

class SecureWebView {
  static WebViewController create({
    required String initialUrl,
    bool enableJavaScript = false,
    List<String>? allowedHosts,
  }) {
    final controller = WebViewController()
      ..setJavaScriptMode(
        enableJavaScript
            ? JavaScriptMode.unrestricted
            : JavaScriptMode.disabled,
      )
      ..setNavigationDelegate(
        NavigationDelegate(
          onNavigationRequest: (request) {
            final uri = Uri.tryParse(request.url);
            if (uri == null) {
              return NavigationDecision.prevent;
            }

            // Block javascript: and file: URLs unconditionally
            if (uri.scheme == 'javascript' || uri.scheme == 'file') {
              return NavigationDecision.prevent;
            }

            // If allowedHosts is set, restrict navigation to those domains
            if (allowedHosts != null && allowedHosts.isNotEmpty) {
              if (!allowedHosts.contains(uri.host)) {
                return NavigationDecision.prevent;
              }
            }

            return NavigationDecision.navigate;
          },
        ),
      )
      ..loadRequest(Uri.parse(initialUrl));

    return controller;
  }
}

Key principles:

  • Disable JavaScript unless you need it. If you are loading a static HTML page or a simple terms-of-service document, there is no reason for JavaScript to be enabled.
  • Never load user input directly. A javascript: URL executes code. A file: URL reads local files. Always validate the scheme and host before loading.
  • Restrict navigation to known domains. If the WebView should only load pages from yourdomain.com, enforce that in the navigation delegate. This prevents redirects to attacker-controlled sites.
  • Avoid JavaScript channels with untrusted content. If you must use a bridge, validate every message the JavaScript side sends. Treat it the same way you would treat untrusted API input — deserialise carefully, check types, reject anything unexpected.
  • Be aware of cookie behaviour. On some platforms, WebView shares cookies with the system browser or with other WebViews. Sensitive session cookies can leak across contexts. Use WebViewController.clearLocalStorage() and clear cookies when closing WebViews that handle authentication flows.
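The principle of validating bridge messages can be sketched as a pure parsing function. The message shape and action names here are hypothetical; the point is the allowlist-and-reject-everything-else structure:

```dart
import 'dart:convert';

/// Parse and validate a message received from a JavaScript channel.
/// Only messages matching a known shape are accepted; everything else
/// is rejected, exactly as untrusted API input would be.
Map<String, dynamic>? parseBridgeMessage(String raw) {
  const allowedActions = {'openHelp', 'shareArticle'};
  Object? decoded;
  try {
    decoded = jsonDecode(raw);
  } on FormatException {
    return null; // Not JSON at all.
  }
  if (decoded is! Map<String, dynamic>) return null;
  final action = decoded['action'];
  if (action is! String || !allowedActions.contains(action)) return null;
  return decoded;
}
```

Inside `onMessageReceived`, run `message.message` through a function like this and ignore any null result, so a compromised page can only trigger actions you explicitly allowed.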

Third-party packages — supply chain risk

Every package you add from pub.dev is code you did not write, running inside your app, with full access to everything your app can access. A networking package can read your secure storage. A UI package can make HTTP requests. A logging package can exfiltrate data. The Dart/Flutter sandbox does not restrict packages — they are not sandboxed from your app. They are your app.

How supply chain attacks work

The most instructive example comes from the JavaScript ecosystem. In 2018, the event-stream npm package — downloaded two million times per week — was transferred to a new maintainer who injected code that targeted a specific Bitcoin wallet application and stole cryptocurrency. The malicious code was obfuscated and designed to activate only when the targeted application was present. It took weeks for the community to notice.

The same risk exists in every package ecosystem, including pub.dev. A compromised maintainer account, a social engineering attack that transfers ownership, or a typosquatting package with a name one character off from a popular library — these are real vectors that have been exploited in other ecosystems and could be exploited in Dart.

The damage a malicious package can do in a Flutter app is significant: read tokens from secure storage, intercept HTTP traffic, collect user input, exfiltrate data to a remote server, inject invisible UI elements, or modify the app's behaviour in ways that are difficult to detect during testing.

Mitigations

Audit packages before adding them. Before running flutter pub add, check the package on pub.dev. Look at the publisher — is it a verified publisher? Check the GitHub repository — is it actively maintained, or was the last commit two years ago? Read the issue tracker — are there unresolved security reports? Check the number of likes and pub points. None of these checks is definitive, but together they give you a signal.

Pin versions and commit the lock file. Your pubspec.lock file records the exact version of every dependency. Commit it to source control. When you run flutter pub get, Dart resolves to the locked versions, not the latest. This means a compromised update does not automatically enter your project — you have to explicitly update.

yaml
# pubspec.yaml — use exact versions for critical dependencies
dependencies:
  flutter_secure_storage: 9.2.4  # Pinned, not ^9.2.4
  dio: 5.7.0                      # Pinned

  # Less critical packages can use caret syntax
  intl: ^0.19.0

The caret syntax (^9.2.4) allows automatic minor and patch updates. For packages that handle security-critical functions — storage, networking, authentication — consider pinning to exact versions and updating deliberately.

Prefer verified publishers. Pub.dev marks packages from verified publishers with a blue checkmark. This means the publisher has proven they own the domain associated with their account. It does not guarantee the code is safe, but it does mean the publisher is not anonymous.

Minimise dependencies. Every package is an attack surface. If you need one function from a package that has 200 exports, consider writing that function yourself. The fewer dependencies you have, the smaller your supply chain exposure.

Review updates before applying them. Run flutter pub outdated regularly. When a package has an update, read the changelog before updating. If the update is a major version bump, review the diff on GitHub. This takes time. It is also the only way to catch a supply chain attack before it enters your codebase.

The honest assessment: you cannot audit every line of every dependency. A moderately complex Flutter app has dozens of direct dependencies and hundreds of transitive ones. Full audit is not practical. But you can reduce the surface, pin versions, update deliberately rather than automatically, and pay closer attention to the packages that handle your most sensitive data.
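One cheap check you can automate is flagging dependencies that do not come from the default hosted source, since git and path dependencies bypass pub.dev's publishing controls entirely. A sketch that scans pubspec.lock content, assuming the current lock-file layout of two-space-indented package names and a four-space-indented source field:

```dart
import 'dart:convert';

/// Scan pubspec.lock content and report dependencies whose source is
/// not 'hosted' (pub.dev). Git and path dependencies deserve extra
/// scrutiny in a supply-chain review.
List<String> nonHostedDependencies(String lockFileContent) {
  final flagged = <String>[];
  String? currentPackage;
  for (final line in const LineSplitter().convert(lockFileContent)) {
    final packageMatch = RegExp(r'^  (\S+):$').firstMatch(line);
    if (packageMatch != null) {
      currentPackage = packageMatch.group(1);
      continue;
    }
    final sourceMatch = RegExp(r'^    source: (\S+)$').firstMatch(line);
    if (sourceMatch != null &&
        sourceMatch.group(1) != 'hosted' &&
        currentPackage != null) {
      flagged.add('$currentPackage (${sourceMatch.group(1)})');
    }
  }
  return flagged;
}
```

Run it in CI against the committed lock file and fail the build on unexpected entries, so a new non-hosted dependency always triggers a human review.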

Sensitive data in backups

Android and iOS both offer automatic backup mechanisms that can include your app's data. If your app stores a local database, cached credentials, or user-generated content, that data may be backed up to Google Drive or iCloud without your explicit consent.

This was covered in detail in the platform-specific posts (Posts 9 and 10), but here is the cross-platform checklist:

Android

In your AndroidManifest.xml, either disable backup entirely:

xml
<application
    android:allowBackup="false"
    ... >

Or, for Android 12 and later, use data extraction rules to exclude sensitive files while allowing non-sensitive data to be backed up:

xml
<application
    android:dataExtractionRules="@xml/data_extraction_rules"
    ... >
xml
<!-- res/xml/data_extraction_rules.xml -->
<data-extraction-rules>
    <cloud-backup>
        <exclude domain="sharedpref" path="." />
        <exclude domain="database" path="." />
        <exclude domain="file" path="secure/" />
    </cloud-backup>
</data-extraction-rules>

iOS

For specific files that should not be backed up, set the NSURLIsExcludedFromBackupKey attribute:

swift
var fileURL = getDocumentsDirectory().appendingPathComponent("sensitive.db")
var resourceValues = URLResourceValues()
resourceValues.isExcludedFromBackup = true
try fileURL.setResourceValues(resourceValues)

From the Flutter side, you can use platform channels to set this attribute, or store sensitive data in the Caches directory, which is excluded from backups by default.

The core principle: if your app stores anything locally that would be a problem if someone accessed the user's cloud account, make sure that data is excluded from backups.

Closing the series

Security is not a feature you add at the end of a project. It is a set of decisions you make throughout the build — when you choose where to store a token, when you decide how to handle an API key, when you configure a WebView, when you add a package to your dependencies. Each decision either shrinks or expands the attack surface.

Over twelve posts, this series has covered the surfaces that matter in a Flutter application: how data is stored, how keys and tokens are managed, how certificates are pinned, how biometrics work, how obfuscation raises the cost of reverse engineering, how to detect compromised devices, how to protect screens and memory, and now the smaller surfaces swept up in this post.

No app is unbreakable. An attacker with physical access, unlimited time, and sufficient expertise can extract anything from any device. That is not the standard you are defending against. The goal is to make the cost of attack exceed the value of the target. A banking app handling millions in transactions warrants every measure in this series. A personal notes app does not. The threat model drives the investment.

What ties these twelve topics together is not a checklist. It is a way of thinking. Every time you write code that touches sensitive data — storing it, transmitting it, displaying it, logging it — ask: who else could access this, and what would they gain? If you can answer that question honestly for every component in your app, you are doing security properly. Not perfectly. Properly. That is the realistic standard, and it is enough.

Related Topics

flutter logging security, flutter clipboard security, flutter deep link hijacking, flutter webview security, flutter supply chain attack, flutter pub.dev package risk, flutter logcat leak, flutter PKCE oauth, universal links flutter, app links flutter, flutter secure logging, flutter security checklist

Ready to build your app?

Flutter apps built on Clean Architecture — documented, tested, and yours to own. See which plan fits your project.