№ 03 — Redact
App Store

Faces off.

Native macOS and iOS app that pixelates faces in your photos — automatically, locally, entirely without the cloud — and without an account.

Swift SwiftUI Core ML YOLOv11m-face Apple Neural Engine macOS iOS iPadOS Privacy-First

The Challenge

Publishing demo photos without violating the personality rights of the people in them — that sounds simple, but it isn't. Existing tools are either CLI utilities like deface or uniface with no graphical interface, or App Store apps that secretly run as WebView wrappers around a cloud API.

Neither is fit for workshop or conference photos that need to be anonymised fast, locally, and without any data leaving the device.

Redact is a real native app — with first-class face detection powered by a fine-tuned YOLOv11m model running fully on-device. No cloud, no account, no setup. Drag, drop, done.

The model runs through Core ML on the Apple Neural Engine; Core Image then pixelates or blurs each face, or covers it with a black bar or an emoji. EXIF data can optionally be stripped: GPS, camera info, timestamps. Originals stay untouched; anonymised versions land in an anonymized/ subfolder.
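Not the app's actual source, but a minimal sketch of what such a Core Image step can look like: a CIPixellate filter applied to one detected face rectangle, composited back onto the untouched original.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

/// Pixelates a single face rectangle in `image`.
/// Illustrative only — the real app also offers blur, bar, and emoji modes.
func pixelate(face rect: CGRect, in image: CIImage, scale: CGFloat = 24) -> CIImage {
    let filter = CIFilter.pixellate()
    filter.inputImage = image
    filter.scale = Float(scale)
    filter.center = CGPoint(x: rect.midX, y: rect.midY)
    guard let pixelated = filter.outputImage else { return image }
    // Keep the mosaic only inside the face rect; the rest stays original.
    return pixelated.cropped(to: rect).composited(over: image)
}
```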

What started as a small tool for personal use grew into a polished multi-platform app: macOS for productive batch processing at the desk, iOS and iPadOS for quick anonymisation on the go.

Project Details

Role Solo developer, design & ML integration
Timeline Several months, ongoing development
Platform macOS 14+ · iOS 18+ · iPadOS 18+
Detection YOLOv11m-face via Core ML
Runtime Core ML · Apple Neural Engine
Privacy 100% on-device, zero cloud
Price Free
Status Available on the App Store

What Redact does

🧠

YOLOv11m-face model

Fine-tune of the YOLO11m model trained specifically for face detection — ~20M parameters, bounding boxes with confidence, embedded NMS. Ships as an .mlpackage inside the app.

⚡️

Apple Neural Engine

Inference via Core ML on the ANE — fast enough to batch-process entire photo folders. No internet, no GPU spin-up, no fan.
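Routing inference to the Neural Engine boils down to one Core ML setting. A hedged sketch — `YOLOv11mFace` stands in for whatever class name Xcode generates from the bundled .mlpackage:

```swift
import CoreML

/// A configuration that lets Core ML dispatch work to the
/// Apple Neural Engine when it is the fastest option.
func aneConfiguration() -> MLModelConfiguration {
    let config = MLModelConfiguration()
    config.computeUnits = .all   // CPU + GPU + Neural Engine
    return config
}

// Loading the bundled face model would then look roughly like:
//
//     let detector = try YOLOv11mFace(configuration: aneConfiguration())
```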

🔒

100% on-device

No cloud, no telemetry, no account. Images never leave the device — all processing happens locally on the Apple Neural Engine.

🎨

Four anonymisation modes

Mosaic, Gaussian blur, black bar, or emoji overlay — adjustable strength and padding, with rectangular or elliptical masks.

✏️

Manual corrections

Toggle detected faces on/off with a click, add missed faces by dragging. Confidence threshold is adjustable — full control over the output.
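The adjustable threshold plus manual toggles can be modelled as a simple filter over detections. The `Detection` type here is illustrative, not the app's actual data model:

```swift
import Foundation

/// Illustrative detection: a face box, the model's confidence,
/// and a flag the user can flip to exclude a face manually.
struct Detection: Equatable {
    var box: CGRect
    var confidence: Float
    var enabled: Bool = true
}

/// Keeps only faces the user has not disabled and that clear
/// the adjustable confidence threshold.
func facesToRedact(_ detections: [Detection], threshold: Float) -> [Detection] {
    detections.filter { $0.enabled && $0.confidence >= threshold }
}
```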

📁

Batch & drag-and-drop

Drop single images or entire folders — JPG, PNG, HEIC, TIFF. Results land in an anonymized/ subfolder, originals stay untouched.
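The non-destructive output rule is easy to express as a pure path mapping — a sketch under the assumption that the subfolder sits next to each original:

```swift
import Foundation

/// Maps an input image URL to its output URL inside an `anonymized/`
/// subfolder next to the original, so originals are never overwritten.
func anonymizedURL(for input: URL) -> URL {
    input
        .deletingLastPathComponent()
        .appendingPathComponent("anonymized", isDirectory: true)
        .appendingPathComponent(input.lastPathComponent)
}
```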

🧭

Before / After

Side-by-side comparison with face overlay. Hover preview in the file list, ⌘↵ to process, ⌘E to export.

🧹

Strip EXIF data

Optionally remove GPS coordinates, camera model, and timestamps on export — useful when anonymised images should actually be anonymous.
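One common ImageIO approach to stripping metadata on export — a sketch, not necessarily the app's exact code — is to re-encode the image with the GPS, Exif, and TIFF dictionaries set to `kCFNull`:

```swift
import Foundation
import ImageIO

/// Re-encodes an image without GPS, Exif, and TIFF metadata.
func stripMetadata(from input: URL, to output: URL) throws {
    guard let source = CGImageSourceCreateWithURL(input as CFURL, nil),
          let type = CGImageSourceGetType(source),
          let dest = CGImageDestinationCreateWithURL(output as CFURL, type, 1, nil)
    else { throw CocoaError(.fileReadCorruptFile) }

    // Setting a metadata dictionary to kCFNull removes it on write.
    let strip: [CFString: Any] = [
        kCGImagePropertyGPSDictionary: kCFNull!,
        kCGImagePropertyExifDictionary: kCFNull!,
        kCGImagePropertyTIFFDictionary: kCFNull!,
    ]
    CGImageDestinationAddImageFromSource(dest, source, 0, strip as CFDictionary)
    guard CGImageDestinationFinalize(dest) else {
        throw CocoaError(.fileWriteUnknown)
    }
}
```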

🌐

Localised

German and English via Apple String Catalog (xcstrings). Dark mode, splash screen, privacy manifest — all native, all Apple.

Development process

01

Model evaluation

Identified YOLOv11m-face as the best balance of accuracy and speed, converted it to Core ML via coremltools, and packaged it as an ML Program (.mlpackage).

02

macOS prototype

SwiftUI app with NavigationSplitView, drag & drop, a Core Image pipeline, and TaskGroup-based batch processing. First version: locally pixelating a folder.
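In outline, TaskGroup-based batch processing can look like the following sketch (names and the concurrency window are illustrative): a bounded number of files in flight at once, each finished task admitting the next.

```swift
import Foundation

/// Processes a batch of image URLs concurrently and returns how many
/// succeeded. `maxConcurrent` bounds how many images are decoded at once.
func processBatch(
    _ urls: [URL],
    maxConcurrent: Int = 4,
    process: @escaping @Sendable (URL) async -> Bool
) async -> Int {
    await withTaskGroup(of: Bool.self, returning: Int.self) { group in
        var pending = urls.makeIterator()
        var succeeded = 0

        // Seed a bounded window so huge folders don't decode all at once.
        for _ in 0..<maxConcurrent {
            if let url = pending.next() {
                group.addTask { await process(url) }
            }
        }
        // Each finished task admits the next file.
        while let ok = await group.next() {
            if ok { succeeded += 1 }
            if let url = pending.next() {
                group.addTask { await process(url) }
            }
        }
        return succeeded
    }
}
```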

03

iOS port

Shared code in a RedactCore package, platform-specific UI for iOS/iPadOS, memory crashes on large photos fixed via ImageIO downsampling.
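The downsampling fix relies on ImageIO's thumbnail API, which decodes a size-capped bitmap instead of the full-resolution photo. A sketch of that technique:

```swift
import Foundation
import ImageIO

/// Decodes a downsampled bitmap instead of the full-resolution image,
/// bounding peak memory on very large photos.
func downsampledImage(at url: URL, maxPixelSize: Int) -> CGImage? {
    // Don't decode and cache the full image up front.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions)
    else { return nil }

    let thumbnailOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,  // respect EXIF orientation
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize,
    ] as CFDictionary
    return CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions)
}
```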

04

App Store release

Localisation (DE/EN) via xcstrings, privacy manifest, app icons, splash font — and finally the App Store launch.

Numbers

0
Bytes leaving the device
~38 MB
Embedded ML model
4
Anonymisation modes
3
Platforms (macOS, iOS, iPadOS)

Privacy by design.

Redact is free on the App Store — fully on-device, no cloud, no account. If you're looking for a similar on-device solution for your project, or have feedback, I'd love to hear from you.