15 April 2026
Android's privacy model gets stricter with every release. Scoped storage, one-time permissions, approximate location, the photo picker, the privacy dashboard: each version adds new restrictions that break assumptions you made two years ago. I've been through enough migration cycles to know that fighting these changes is a losing strategy. The apps that handle them cleanly are the ones that were already built with privacy-first design. Every permission you don't request is a permission you'll never have to migrate away from when Google tightens the screws again.
Early in my career, I treated permissions like a checklist: request everything upfront, handle the denial with a toast, move on. That approach fell apart around API 29, when scoped storage landed and half my file access code stopped working. The lesson was clear: building with minimal data access from the start means less rework when the next API level drops. This guide covers the practical privacy APIs every Android developer needs to understand, not just what they do, but when to use them and what traps to watch for.
The runtime permission model arrived with API 23, and it fundamentally changed the contract between apps and users. Before Marshmallow, permissions were granted at install time: the user either accepted all of them or didn't install the app. There was no granularity. Now, dangerous permissions (camera, location, contacts, microphone) must be requested at runtime, and the user can deny them individually.
The modern way to request permissions is through ActivityResultContracts. The old requestPermissions() and onRequestPermissionsResult() callback pattern is deprecated in Fragments and best avoided everywhere: it's fragile, hard to test, and couples your permission logic to the Activity lifecycle. ActivityResultContracts gives you a clean launcher-based API.
class PhotoCaptureFragment : Fragment() {

    private val cameraPermissionLauncher = registerForActivityResult(
        ActivityResultContracts.RequestPermission()
    ) { isGranted ->
        if (isGranted) {
            openCamera()
        } else {
            showPermissionDeniedMessage()
        }
    }

    private fun onCaptureButtonClick() {
        when {
            ContextCompat.checkSelfPermission(
                requireContext(), Manifest.permission.CAMERA
            ) == PackageManager.PERMISSION_GRANTED -> {
                openCamera()
            }
            shouldShowRequestPermissionRationale(Manifest.permission.CAMERA) -> {
                showRationaleDialog {
                    cameraPermissionLauncher.launch(Manifest.permission.CAMERA)
                }
            }
            else -> {
                cameraPermissionLauncher.launch(Manifest.permission.CAMERA)
            }
        }
    }
}
The rationale flow matters more than most developers think. shouldShowRequestPermissionRationale() returns true when the user has denied the permission once but hasn't selected "Don't ask again." This is your one chance to explain why you need it. But here's the limitation that trips people up: shouldShowRequestPermissionRationale() returns false in two very different situations, when the user has never been asked (first launch) and when the user has permanently denied the permission. You can't distinguish between these two states without tracking it yourself, typically with a SharedPreferences flag that records whether you've asked before.
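Once you track that flag yourself, the interpretation logic is small enough to isolate as a pure function. Here's a minimal sketch; the enum and function names are my own, not platform APIs, and hasAskedBefore stands in for the SharedPreferences flag:

```kotlin
// Interprets the combination of system state and our own "asked before" flag.
// PermissionUiState and classifyPermissionState are illustrative names.
enum class PermissionUiState { REQUEST_DIRECTLY, SHOW_RATIONALE, PERMANENTLY_DENIED }

fun classifyPermissionState(
    isGranted: Boolean,
    shouldShowRationale: Boolean,  // from shouldShowRequestPermissionRationale()
    hasAskedBefore: Boolean        // our SharedPreferences flag
): PermissionUiState? = when {
    isGranted -> null                                      // nothing to request
    shouldShowRationale -> PermissionUiState.SHOW_RATIONALE
    !hasAskedBefore -> PermissionUiState.REQUEST_DIRECTLY  // first launch
    else -> PermissionUiState.PERMANENTLY_DENIED           // "Don't ask again"
}
```

Keeping this a pure function means all four scenarios are unit-testable without an emulator.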
When a user permanently denies a permission, you can't show the system dialog anymore. The only option is to guide them to your app's settings page with Settings.ACTION_APPLICATION_DETAILS_SETTINGS. But I'd argue that if you're regularly sending users to settings, your permission strategy needs rethinking. The better approach is graceful degradation: design features so they still work in a reduced capacity without the permission, and make the permission request contextual rather than upfront.
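For the cases where settings really is the only option, the intent is short. A sketch, assuming it's called from a Fragment (openAppSettings is an illustrative name):

```kotlin
import android.content.Intent
import android.net.Uri
import android.provider.Settings
import androidx.fragment.app.Fragment

// Last-resort navigation to this app's settings page, where the user can
// re-enable a permanently denied permission.
private fun Fragment.openAppSettings() {
    val intent = Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS).apply {
        data = Uri.fromParts("package", requireContext().packageName, null)
    }
    startActivity(intent)
}
```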
Scoped storage was the most disruptive privacy change in recent Android history. Introduced in API 29 and enforced from API 30, it replaced the old model where READ_EXTERNAL_STORAGE gave your app access to the entire shared file system. Under scoped storage, each app gets its own sandboxed directory on external storage, and accessing other apps' files requires going through controlled APIs.
For media files (images, video, audio) you use MediaStore. Your app can read its own media files without any permission. Reading other apps' media requires READ_MEDIA_IMAGES, READ_MEDIA_VIDEO, or READ_MEDIA_AUDIO on API 33+, which replaced the old READ_EXTERNAL_STORAGE. The key insight is that MediaStore isn't just a database; it's the access control layer. You query it with ContentResolver and get back content URIs that the system controls.
class GalleryRepository(private val context: Context) {

    suspend fun loadRecentPhotos(limit: Int = 50): List<MediaItem> {
        return withContext(Dispatchers.IO) {
            val photos = mutableListOf<MediaItem>()
            val projection = arrayOf(
                MediaStore.Images.Media._ID,
                MediaStore.Images.Media.DISPLAY_NAME,
                MediaStore.Images.Media.DATE_ADDED,
                MediaStore.Images.Media.SIZE
            )
            // Newest first; we stop reading once `limit` rows are collected.
            val sortOrder = "${MediaStore.Images.Media.DATE_ADDED} DESC"
            context.contentResolver.query(
                MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
                projection, null, null, sortOrder
            )?.use { cursor ->
                val idColumn = cursor.getColumnIndexOrThrow(MediaStore.Images.Media._ID)
                val nameColumn = cursor.getColumnIndexOrThrow(MediaStore.Images.Media.DISPLAY_NAME)
                var count = 0
                while (cursor.moveToNext() && count < limit) {
                    val id = cursor.getLong(idColumn)
                    val name = cursor.getString(nameColumn)
                    val uri = ContentUris.withAppendedId(
                        MediaStore.Images.Media.EXTERNAL_CONTENT_URI, id
                    )
                    photos.add(MediaItem(uri, name))
                    count++
                }
            }
            photos
        }
    }
}
For non-media documents (PDFs, spreadsheets, text files) you use the Storage Access Framework (SAF). ACTION_OPEN_DOCUMENT launches the system file picker, and the user selects what to share. Your app gets a content URI with temporary access, and you can persist that access with takePersistableUriPermission() if you need to reopen the file later. SAF is more restrictive but more private: the user explicitly chooses what your app can see.
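A sketch of that flow, assuming a Fragment host; the MIME types and the viewModel callback are illustrative:

```kotlin
private val openDocument = registerForActivityResult(
    ActivityResultContracts.OpenDocument()
) { uri ->
    uri?.let {
        // Persist read access so the document can be reopened after restart.
        requireContext().contentResolver.takePersistableUriPermission(
            it, Intent.FLAG_GRANT_READ_URI_PERMISSION
        )
        viewModel.onDocumentSelected(it)
    }
}

private fun onPickDocumentClick() {
    // The user sees the system picker; we only ever see what they choose.
    openDocument.launch(arrayOf("application/pdf", "text/plain"))
}
```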
Now, about MANAGE_EXTERNAL_STORAGE. This permission gives your app broad access to all files, and Google Play has a strict policy about it. Unless your app is a file manager, backup tool, or antivirus scanner, your Play Store submission will be rejected if you request it. I've seen teams reach for MANAGE_EXTERNAL_STORAGE because migrating to MediaStore and SAF felt like too much work. That's short-term thinking. The Play Store review will catch it, and even if it didn't, future API levels will likely restrict it further. Do the migration properly.
Location is the most privacy-sensitive data most apps collect, and Android has tightened the rules significantly since API 29. The biggest change came in API 31 (Android 12), which introduced approximate location as a user-facing option. When your app requests ACCESS_FINE_LOCATION, the permission dialog now shows two choices: "Precise" and "Approximate." The user can grant approximate location even if you only asked for precise. Your app must handle both.
class NearbyStoresViewModel(
    private val locationClient: FusedLocationProviderClient,
    private val storeRepository: StoreRepository
) : ViewModel() {

    private val _stores = MutableStateFlow<List<Store>>(emptyList())
    val stores: StateFlow<List<Store>> = _stores.asStateFlow()

    fun findNearbyStores(context: Context) {
        val hasFineLocation = ContextCompat.checkSelfPermission(
            context, Manifest.permission.ACCESS_FINE_LOCATION
        ) == PackageManager.PERMISSION_GRANTED
        val hasCoarseLocation = ContextCompat.checkSelfPermission(
            context, Manifest.permission.ACCESS_COARSE_LOCATION
        ) == PackageManager.PERMISSION_GRANTED
        if (!hasCoarseLocation) return

        viewModelScope.launch {
            // await() comes from kotlinx-coroutines-play-services
            val location = locationClient.lastLocation.await() ?: return@launch
            // Adjust search radius based on the precision the user granted
            val radiusKm = if (hasFineLocation) 2.0 else 15.0
            _stores.value = storeRepository.findNear(
                location.latitude, location.longitude, radiusKm
            )
        }
    }
}
The practical takeaway: always request ACCESS_COARSE_LOCATION alongside ACCESS_FINE_LOCATION, and design your feature to work with approximate location. A store finder can widen its search radius. A weather app works perfectly with city-level precision. Only request precise location when the feature genuinely requires it: navigation, geofencing with small radii, or AR overlays.
Background location is even more restricted. Since API 29, ACCESS_BACKGROUND_LOCATION is a separate permission that must be requested independently; you can't bundle it with foreground location. On API 30+, the system sends users to a separate settings page to grant it, and your app can't show the system dialog for it. Google Play also requires a declaration explaining why you need background location, and most apps get rejected on the first try. If you need continuous location updates when the app isn't visible, you must use a foreground service with type location. The system shows an ongoing notification, making the access transparent to the user.
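For reference, the manifest side of such a service might look like this sketch (the class name is hypothetical; note that on API 34+ you also need the FOREGROUND_SERVICE_LOCATION permission):

```xml
<!-- Hypothetical service declaration for continuous location updates -->
<service
    android:name=".LocationTrackingService"
    android:exported="false"
    android:foregroundServiceType="location" />
```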
The best practice for location is simple: minimize what you collect. If you only need the city, request ACCESS_COARSE_LOCATION only. If you need one-time location for a check-in, request ACCESS_FINE_LOCATION and rely on the "Only this time" option the system offers on API 30+. If you need continuous tracking, use a foreground service and explain it clearly to the user.
The system photo picker, introduced in API 33 and backported to Android 11 and 12 devices through Google Play system updates, is the single best privacy improvement for media access. Here's the key point: the photo picker requires no storage permission at all. The user selects photos through a system-controlled UI, and your app receives content URIs only for the selected items. You never see the full media library.
class AvatarSelectionFragment : Fragment() {

    private val pickMedia = registerForActivityResult(
        ActivityResultContracts.PickVisualMedia()
    ) { uri ->
        if (uri != null) {
            viewModel.updateAvatar(uri)
        }
    }

    private val pickMultipleMedia = registerForActivityResult(
        ActivityResultContracts.PickMultipleVisualMedia(maxItems = 10)
    ) { uris ->
        if (uris.isNotEmpty()) {
            viewModel.attachPhotos(uris)
        }
    }

    private fun onSelectAvatarClick() {
        pickMedia.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }

    private fun onAttachPhotosClick() {
        pickMultipleMedia.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageAndVideo)
        )
    }
}
The PickVisualMedia contract handles both single and multiple selection. You can filter by type: ImageOnly, VideoOnly, ImageAndVideo, or a custom MIME type. The backport through androidx.activity version 1.7.0+ means you can use this on devices running Android 4.4 and above. On devices without the native photo picker, the library falls back to ACTION_OPEN_DOCUMENT automatically. You can check availability with ActivityResultContracts.PickVisualMedia.isPhotoPickerAvailable(context).
The honest tradeoff: the photo picker gives you less control over the selection UI. You can't customize the look, add a camera capture button inline, or show a grid with pre-selections. If your app is a photo editor or social media app that needs a fully custom gallery experience, you'll still need READ_MEDIA_IMAGES and a custom gallery implementation. But for profile pictures, message attachments, document uploads (basically 80% of photo selection use cases), the system picker is the right answer. Fewer permissions, less code, less privacy liability.
Every privacy feature I've covered so far is reactive: it's about complying with restrictions Android imposes. Privacy-first design is proactive. It means making architectural decisions that minimize your app's data footprint before any platform requirement forces you to.
Data minimization is the core principle. Collect only what you need, and nothing more. If your analytics needs to track which screens users visit, track screen names, not GPS coordinates, device IDs, and contact lists. I've worked on projects where the analytics SDK was collecting 40+ data points per event, and when we audited it, we realized we were using exactly 5 of them in our dashboards. The other 35 were liability with zero value.
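One cheap way to enforce minimization is an allowlist at the point where events leave the app. A minimal sketch; the key names are invented for illustration:

```kotlin
// Only explicitly allowed keys survive; everything else is dropped before
// the event is sent anywhere.
val allowedEventKeys = setOf("screen_name", "session_id", "app_version")

fun minimizeEvent(event: Map<String, Any?>): Map<String, Any?> =
    event.filterKeys { it in allowedEventKeys }
```

Anything a future SDK update starts injecting gets silently stripped instead of silently shipped.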
Local processing is the privacy-friendly default. If you can do the computation on-device, don't send the data to a server. ML Kit runs on-device for text recognition, face detection, and barcode scanning. CameraX can process frames locally. If your feature works without a network call, that's one less data transmission to secure, explain in your privacy policy, and potentially leak.
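As one concrete example, on-device OCR with ML Kit never sends the frame anywhere. A sketch, assuming the text-recognition dependency is on the classpath; handleText is a placeholder:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

private val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

fun recognizeText(bitmap: Bitmap) {
    // Processing happens entirely on-device; no network call is made.
    val image = InputImage.fromBitmap(bitmap, 0)
    recognizer.process(image)
        .addOnSuccessListener { result -> handleText(result.text) }
        .addOnFailureListener { e -> Log.w("Ocr", "Recognition failed", e) }
}
```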
Data retention matters too. If you cache user data for performance, set expiration policies. Room makes this straightforward: a nightly WorkManager job can clear records older than 30 days. The data you've deleted can't be leaked.
class DataRetentionWorker(
    context: Context,
    params: WorkerParameters
) : CoroutineWorker(context, params) {

    override suspend fun doWork(): Result {
        val db = AppDatabase.getInstance(applicationContext)
        val cutoffTimestamp = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(30)
        db.searchHistoryDao().deleteOlderThan(cutoffTimestamp)
        db.cachedProfileDao().deleteOlderThan(cutoffTimestamp)
        db.analyticsEventDao().deleteOlderThan(cutoffTimestamp)
        return Result.success()
    }
}
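For completeness, here is a hedged sketch of what that worker assumes on the Room side, plus the scheduling call; the table and column names are illustrative:

```kotlin
import android.content.Context
import androidx.room.Dao
import androidx.room.Query
import androidx.work.ExistingPeriodicWorkPolicy
import androidx.work.PeriodicWorkRequestBuilder
import androidx.work.WorkManager
import java.util.concurrent.TimeUnit

@Dao
interface SearchHistoryDao {
    // Delete every row older than the cutoff timestamp (millis).
    @Query("DELETE FROM search_history WHERE created_at < :cutoffTimestamp")
    suspend fun deleteOlderThan(cutoffTimestamp: Long)
}

// Run retention once a day; KEEP avoids re-enqueueing on every app launch.
fun scheduleDataRetention(context: Context) {
    val request = PeriodicWorkRequestBuilder<DataRetentionWorker>(1, TimeUnit.DAYS).build()
    WorkManager.getInstance(context).enqueueUniquePeriodicWork(
        "data-retention", ExistingPeriodicWorkPolicy.KEEP, request
    )
}
```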
For auditing your app's data access, Android 12+ provides the Privacy Dashboard, which shows users a timeline of when apps accessed location, camera, and microphone. Your app can proactively participate by using AppOpsManager.OnOpNotedCallback to log every permission access internally. This is useful for finding surprise data access: third-party SDKs that access location without your knowledge, analytics libraries that read the device ID, ad SDKs that access the microphone.
class PrivacyAuditApplication : Application() {

    override fun onCreate() {
        super.onCreate()
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
            val appOpsCallback = object : AppOpsManager.OnOpNotedCallback() {
                override fun onNoted(syncNotedAppOp: SyncNotedAppOp) {
                    Log.d(
                        "PrivacyAudit",
                        "Op: ${syncNotedAppOp.op}, " +
                            "Stack: ${Throwable().stackTrace.take(5).joinToString("\n")}"
                    )
                }

                override fun onSelfNoted(syncNotedAppOp: SyncNotedAppOp) {
                    onNoted(syncNotedAppOp)
                }

                override fun onAsyncNoted(asyncNotedAppOp: AsyncNotedAppOp) {
                    Log.d(
                        "PrivacyAudit",
                        "Async Op: ${asyncNotedAppOp.op}, " +
                            "Message: ${asyncNotedAppOp.message}"
                    )
                }
            }
            val appOps = getSystemService(AppOpsManager::class.java)
            appOps.setOnOpNotedCallback(mainExecutor, appOpsCallback)
        }
    }
}
This callback logs every data operation with a stack trace, so you can trace exactly which code path, yours or a third-party SDK's, triggered the access. I've caught two SDKs accessing location data this way in a production app, neither of which we'd authorized. The audit callback is a debug tool, but the insights it provides should inform your production architecture.
Question 1: You call shouldShowRequestPermissionRationale() and it returns false. The user has used the app before. What does this mean?
Wrong: The user has never been asked for this permission.
Correct: The user has permanently denied the permission (selected "Don't ask again"). Since the user has used the app and been asked before, the false return means you can't show the system dialog; you need to direct them to app settings or degrade gracefully.
Question 2: Your app targets API 33 and needs to display the userās photos in a custom gallery. Which permission do you request?
Wrong: READ_EXTERNAL_STORAGE; it worked fine on older API levels.
Correct: READ_MEDIA_IMAGES. On API 33+, READ_EXTERNAL_STORAGE no longer grants access to media files. The permission was split into READ_MEDIA_IMAGES, READ_MEDIA_VIDEO, and READ_MEDIA_AUDIO. And if you don't need a custom gallery, skip the permission entirely and use the system photo picker.
Build a PermissionManager class that wraps the ActivityResultContracts permission flow with proper rationale handling, permanent denial detection (using a SharedPreferences flag), and graceful degradation. The class should expose a requestPermission(permission: String, onGranted: () -> Unit, onDenied: () -> Unit, onPermanentlyDenied: () -> Unit) method. Test it with camera and location permissions, and verify that each callback fires in the correct scenario.
Thanks for reading!