Testing comes up in architecture rounds because testable code and good architecture go hand in hand.
The test pyramid has three layers: many unit tests at the base, fewer integration tests in the middle, and a few UI tests at the top. Unit tests are fast and cheap; UI tests are slow and flaky. If your test suite is mostly UI tests, builds take forever and tests break on CI for no good reason.
Three things:
// Hard to test — creates its own dependency
class OrderProcessor {
    private val api = RetrofitClient.create(OrderApi::class.java)
    suspend fun process(order: Order) = api.submit(order)
}

// Easy to test — dependency is injected
class OrderProcessor(private val api: OrderApi) {
    suspend fun process(order: Order) = api.submit(order)
}
The second version lets me pass a FakeOrderApi in tests. The first one forces me to deal with real network calls.
A fake is a lightweight but working implementation, for example a FakeUserRepository that returns data from an in-memory list instead of hitting the network. Fakes are generally better than mocks for repositories and data sources. They behave like the real thing, so tests catch more bugs. Mocks are better for verifying interactions — like checking that a logger was called when an error happened.
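As a sketch in plain Kotlin (the interface and data class here are illustrative assumptions, not taken from a real codebase), a fake repository can be as small as:

```kotlin
// Hypothetical interface and model, assumed for illustration.
data class User(val id: String, val name: String)

interface UserRepository {
    fun getUser(id: String): User?
    fun saveUser(user: User)
}

// A fake: a real, working implementation backed by an in-memory map.
class FakeUserRepository : UserRepository {
    private val users = mutableMapOf<String, User>()
    override fun getUser(id: String): User? = users[id]
    override fun saveUser(user: User) { users[user.id] = user }
}

fun main() {
    val repo: UserRepository = FakeUserRepository()
    repo.saveUser(User("u1", "Alice"))
    println(repo.getUser("u1")?.name) // prints "Alice"
}
```

Because the fake honors the same contract as the real repository, tests that save and then read a user exercise realistic behavior rather than canned stub answers.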
JUnit provides the test framework — @Test, @Before, assertions. MockK is a Kotlin-first mocking library that handles coroutines, extension functions, and top-level functions.
class LoginViewModelTest {
    private val authRepository: AuthRepository = mockk()
    private lateinit var viewModel: LoginViewModel

    @Before
    fun setup() {
        viewModel = LoginViewModel(authRepository)
    }

    @Test
    fun `login success updates state`() = runTest {
        coEvery { authRepository.login("user", "pass") } returns Result.success(User("user"))
        viewModel.login("user", "pass")
        assertEquals(LoginState.Success, viewModel.state.value)
        coVerify { authRepository.login("user", "pass") }
    }
}
coEvery stubs suspend functions. coVerify verifies they were called. For non-suspend functions, I use every and verify.
Mockito is Java-first and works with Kotlin through mockito-kotlin extensions. MockK is built for Kotlin from the ground up. It handles suspend functions natively with coEvery/coVerify, can mock object singletons, extension functions, and top-level functions. Mockito needs extra setup for coroutines and can’t mock final classes without mockito-inline.
For Kotlin codebases, I prefer MockK. The API feels natural — lambda-based stubbing, named arguments, and coroutine support without workarounds.
DI separates object creation from object usage. When a class receives its dependencies through the constructor, I can pass real implementations in production and test doubles in tests. Without DI, classes create their own dependencies internally, and I can’t substitute them.
// Without DI — untestable
class PaymentProcessor {
    private val gateway = StripeGateway()
    private val logger = FirebaseLogger()
    fun process(payment: Payment) { /* uses gateway and logger */ }
}

// With DI — testable
class PaymentProcessor(
    private val gateway: PaymentGateway,
    private val logger: Logger
) {
    fun process(payment: Payment) { /* uses gateway and logger */ }
}
In the DI version, I inject FakePaymentGateway and FakeLogger in tests. I verify behavior without hitting Stripe’s API or Firebase. Hilt handles this in production. For tests, @TestInstallIn swaps modules with test versions.
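A minimal sketch of that flow in plain Kotlin, with hypothetical PaymentGateway and Logger interfaces and hand-rolled test doubles standing in for Stripe and Firebase:

```kotlin
// Hypothetical interfaces and model, assumed for illustration.
data class Payment(val amountCents: Long)

interface PaymentGateway { fun charge(payment: Payment): Boolean }
interface Logger { fun log(message: String) }

class PaymentProcessor(
    private val gateway: PaymentGateway,
    private val logger: Logger
) {
    fun process(payment: Payment) {
        if (gateway.charge(payment)) logger.log("charged ${payment.amountCents}")
        else logger.log("charge failed")
    }
}

// Test doubles: the fake gateway always succeeds; the logger records calls.
class FakePaymentGateway : PaymentGateway {
    override fun charge(payment: Payment) = true
}
class RecordingLogger : Logger {
    val messages = mutableListOf<String>()
    override fun log(message: String) { messages += message }
}

fun main() {
    val logger = RecordingLogger()
    PaymentProcessor(FakePaymentGateway(), logger).process(Payment(4200))
    println(logger.messages.single()) // prints "charged 4200"
}
```

The recording logger shows the mock-style side of the tradeoff: it verifies that an interaction happened, while the fake gateway supplies realistic behavior.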
The kotlinx-coroutines-test library provides TestDispatcher for controlling coroutine execution in tests. runTest replaces runBlocking — it uses StandardTestDispatcher by default and auto-advances virtual time.
class SearchViewModelTest {
    @Test
    fun `debounced search emits results`() = runTest {
        val repository = FakeSearchRepository()
        val viewModel = SearchViewModel(repository)
        viewModel.onQueryChanged("kotlin")
        advanceTimeBy(500) // Skip past the debounce window
        runCurrent()       // Run the task scheduled at exactly t = 500ms
        assertEquals(listOf("Kotlin Coroutines", "Kotlin Flow"), viewModel.results.value)
    }
}
advanceTimeBy moves virtual time forward without actually waiting. advanceUntilIdle runs all pending coroutines. I always inject dispatchers into my classes so I can replace Dispatchers.IO with a TestDispatcher in tests.
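The injection pattern can be sketched like this (TokenRepository and its return value are hypothetical; in a test, the default dispatcher argument would be replaced with a TestDispatcher):

```kotlin
import kotlinx.coroutines.CoroutineDispatcher
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext

// Hypothetical repository: the dispatcher is a constructor parameter,
// so tests can swap Dispatchers.IO for a TestDispatcher.
class TokenRepository(
    private val ioDispatcher: CoroutineDispatcher = Dispatchers.IO
) {
    suspend fun loadToken(): String = withContext(ioDispatcher) {
        "token-123" // stands in for a disk or network read
    }
}

fun main() = runBlocking {
    // Production uses the default; a test would pass its TestDispatcher here.
    println(TokenRepository().loadToken()) // prints "token-123"
}
```

The key point is that no class references Dispatchers.IO directly, so the test never touches real thread pools.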
SavedStateHandle is injected by the framework. In tests, I create one manually with initial values.
@Test
fun `loads user from saved state`() = runTest {
    val savedState = SavedStateHandle(mapOf("userId" to "user123"))
    val viewModel = ProfileViewModel(savedState, FakeUserRepository())
    viewModel.uiState.test {
        val state = awaitItem()
        assertEquals("user123", state.userId)
    }
}
Hilt and the Navigation library populate SavedStateHandle from arguments, so I pass the expected key-value pairs in the constructor. If my ViewModel writes back to SavedStateHandle for process death survival, I read the values back from the same handle to verify they were saved.
For testing Flow emissions, I use Turbine. It provides a test {} extension that collects items and lets me assert them one by one.
@Test
fun `counter increments on each click`() = runTest {
    val viewModel = CounterViewModel()
    viewModel.count.test {
        assertEquals(0, awaitItem()) // Initial value
        viewModel.increment()
        assertEquals(1, awaitItem())
        viewModel.increment()
        assertEquals(2, awaitItem())
        cancelAndConsumeRemainingEvents()
    }
}
awaitItem() suspends until the next emission. awaitError() catches errors. expectNoEvents() asserts nothing was emitted. Turbine makes testing StateFlow and SharedFlow much easier because manually collecting flows in tests is error-prone and timing-dependent.
Compose has a built-in test framework through ComposeTestRule. I use semantic matchers to find nodes and perform assertions or actions.
@get:Rule
val composeRule = createComposeRule()

@Test
fun `login button disabled when fields are empty`() {
    composeRule.setContent {
        LoginScreen(viewModel = FakeLoginViewModel())
    }
    composeRule.onNodeWithText("Login").assertIsNotEnabled()
    composeRule.onNodeWithTag("email_field").performTextInput("user@test.com")
    composeRule.onNodeWithTag("password_field").performTextInput("password")
    composeRule.onNodeWithText("Login").assertIsEnabled()
}
Compose testing works through the semantic tree, not the visual tree. onNodeWithText, onNodeWithTag, and onNodeWithContentDescription are the main finders. For custom semantics, I add Modifier.testTag("tag") or Modifier.semantics { } to composables.
Robolectric runs Android framework code on the JVM by replacing Android SDK classes with shadow implementations. I can test code that uses Context, SharedPreferences, Resources, and other framework APIs without a device or emulator.
I use Robolectric for integration tests that need Android framework classes but don’t need full UI rendering. Testing a BroadcastReceiver, verifying Intent construction, or testing a ContentProvider are good use cases. For pixel-perfect UI testing, Espresso or Compose test rules are better.
The tradeoff is speed vs fidelity. Robolectric tests are faster than instrumented tests, but the shadow implementations don’t always match real device behavior exactly. Some edge cases around configuration changes and certain system services behave differently.
Espresso is the testing framework for View-based UI. It uses ViewMatchers to find views, ViewActions to interact with them, and ViewAssertions to verify state. It synchronizes with the UI thread automatically.
Compose testing uses ComposeTestRule with semantic node matchers instead of view matchers. Espresso tests the View hierarchy. Compose tests the semantic tree. Compose tests don’t need IdlingResource because the test framework waits for pending recompositions and animations on its own.
For apps mixing Views and Compose, I can use both in the same test. createAndroidComposeRule<Activity>() gives access to the Compose test API and the Activity for Espresso interactions.
Each layer has its own test strategy:
Data layer: DAO tests run against an in-memory database from Room.inMemoryDatabaseBuilder(). For network, MockWebServer serves fake JSON responses. I test that the repository correctly maps API responses to domain models.
Presentation layer: ViewModels are tested with runTest. I inject fake repositories and assert state changes.
The domain layer should have the highest coverage because it contains business rules. If the domain logic is wrong, nothing else matters.
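Since domain-layer use cases are plain Kotlin, they can be exercised without any Android machinery at all. A sketch with a hypothetical use case (the cart model and business rule are assumptions for illustration):

```kotlin
// Hypothetical domain model and use case for illustration.
data class CartItem(val priceCents: Long, val quantity: Int)

class CalculateTotalUseCase {
    // Business rule: total = sum of price * quantity per line item.
    operator fun invoke(items: List<CartItem>): Long =
        items.sumOf { it.priceCents * it.quantity }
}

fun main() {
    val total = CalculateTotalUseCase()(
        listOf(CartItem(1000, 2), CartItem(250, 4))
    )
    println(total) // prints 3000
}
```

Tests for code like this are ordinary JUnit assertions with no dispatchers, fakes, or emulators, which is why pushing logic into the domain layer pays off in test speed and coverage.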
Flaky tests usually come from timing issues, shared state, or external dependencies. I fix them at the source instead of retrying:
Timing: synchronize properly with advanceUntilIdle() in coroutine tests, waitUntil {} in Compose tests, and IdlingResource in Espresso. Never use Thread.sleep().
Shared state: isolate tests by using @Before to set up fresh state for each one.
External dependencies: replace them with fakes so the network or disk can't introduce nondeterminism.

Code coverage measures what percentage of code is executed during tests. Line coverage, branch coverage, and method coverage are the common metrics. JaCoCo is the standard tool for Android projects.
I don’t aim for 100% — it leads to testing getters, setters, and trivial code that adds no value. I aim for high coverage on business logic (use cases, ViewModels, repositories) and lower coverage on UI and framework glue code. 70-80% on domain and presentation layers is a good target. The metric that matters more than the percentage is whether tests catch real bugs when code changes.
A few related tools and setup details come up alongside these:
Dispatchers.Main in unit tests: set Dispatchers.setMain(testDispatcher) in @Before and Dispatchers.resetMain() in @After. Without this, tests crash because Dispatchers.Main needs Android's Looper.
MockWebServer: a local HTTP server for tests; queue canned responses with enqueue(MockResponse()). Use it to test your Retrofit/OkHttp client against controlled responses.
Room in tests: use Room.inMemoryDatabaseBuilder() to create a database that lives in memory and is destroyed after each test. Test DAO queries with real SQL execution.