Machine Coding Round — What to Expect


A machine coding round gives you a problem statement and 1-2 hours to build a working solution. Some companies watch you live, others just review the final output.

What is a machine coding round and how does it differ from a take-home assignment?

It’s a fixed-time coding session where I build a feature or small app from scratch, usually in 1-2 hours. I’m either on a video call with the interviewer watching or being recorded. The key difference from a take-home is the time constraint and observation — they see my process, not just the result. Take-homes give 3-7 days and evaluate polished output. Machine coding rounds evaluate how I approach a problem and whether my coding habits are clean even when I’m rushing.

What are the common formats?

The main formats are:

- Build from scratch: a small feature or app from an empty project, usually against a provided API.
- Extend an existing codebase: a starter project where I add a feature.
- Bug fixing: a broken app where I find and fix the issues.
- Test writing: adding coverage to existing code.

Some companies combine these — fix a bug, add a feature, and write tests, all in 90 minutes.

What do evaluators actually watch for?

Process as much as output. They watch how I read requirements — whether I read everything first or jump in. How I structure code — clear architecture vs a monolith. How I debug — reading error messages and tracing the issue vs randomly changing things. How I handle being stuck — staying calm and simplifying vs panicking. And code quality under pressure — naming, separation of concerns, clean structure even when rushing.

A partially working app with clean, well-structured code is better than a fully working app with messy code.

What architecture should I default to?

MVVM with a repository pattern. It’s the standard Android architecture and evaluators expect it.

@HiltViewModel
class FeatureViewModel @Inject constructor(
    private val repository: FeatureRepository
) : ViewModel() {
    val uiState: StateFlow<FeatureUiState> = repository.getData()
        .map { result ->
            when (result) {
                is Resource.Success -> FeatureUiState.Success(result.data)
                is Resource.Error -> FeatureUiState.Error(result.message)
                is Resource.Loading -> FeatureUiState.Loading
            }
        }
        .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5000), FeatureUiState.Loading)
}

Having this pattern memorized saves 10 minutes of setup time. I shouldn’t be thinking about how to wire a ViewModel during the round.

How should I spend the first 10-15 minutes?

Read the entire problem statement before writing any code. I spend the first 10-15 minutes on:

- Reading every requirement, including any stated evaluation criteria, before touching the keyboard.
- Separating must-have features from nice-to-haves.
- Skimming the provided API documentation or starter code.
- Deciding the order I'll build things in.

I write a quick mental outline: “Data class, API interface, repository, ViewModel, one composable. That’s my first 40 minutes. Tests in the last 20.” Having a plan prevents wandering.
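That outline can translate into a file skeleton like this. It's a sketch only: every name is a placeholder for the actual problem domain, and the DI, Retrofit, and Compose annotations are omitted for brevity.

```kotlin
// Skeleton of the planned layers; all names are hypothetical.
data class Task(val id: Long, val title: String)

interface TaskApi {                        // @GET("tasks") on the real interface
    suspend fun getTasks(): List<Task>
}

class TaskRepository(private val api: TaskApi) {
    suspend fun getTasks(): List<Task> = api.getTasks()
}

class TaskViewModel(private val repository: TaskRepository) {
    // In the real file this exposes a StateFlow<TaskUiState> via stateIn()
    suspend fun load(): List<Task> = repository.getTasks()
}

// @Composable fun TaskListScreen(uiState: TaskUiState) { /* LazyColumn */ }
```

Typing this shell first means every later step is filling in a slot rather than deciding structure under pressure.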

What’s a good time management strategy for a 90-minute round?

I break it into blocks:

- 0-15 min: read the requirements, plan the architecture, set up the project skeleton.
- 15-55 min: build the core feature end to end — data layer, ViewModel, basic UI.
- 55-70 min: polish the UI and handle loading and error states.
- 70-90 min: write a test or two, then clean up.

The most common mistake is spending 50 minutes on a perfect data layer and running out of time before the UI works. I get something visible on screen early, then iterate.

How do I handle API calls in a timed round?

I set up Retrofit quickly with the provided API documentation. I define only the endpoints I need. No interceptors, logging, or retry logic unless specifically asked.

interface TaskApi {
    @GET("tasks")
    suspend fun getTasks(): List<TaskDto>

    @POST("tasks")
    suspend fun createTask(@Body task: CreateTaskRequest): TaskDto
}

val api = Retrofit.Builder()
    .baseUrl(BASE_URL)
    .addConverterFactory(MoshiConverterFactory.create())
    .build()
    .create(TaskApi::class.java)

If the API isn’t working or the documentation is unclear, I use mock data and move on. I don’t waste 20 minutes debugging someone else’s API.
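When the backend is down, a hardcoded fake like this keeps the rest of the build moving. This is a sketch: the `TaskDto` fields and sample values are assumptions, and in the round it would implement the same `TaskApi` interface so the repository doesn't need to change.

```kotlin
// Assumed DTO shape; the real fields come from the API documentation.
data class TaskDto(val id: Long, val title: String, val description: String)

// Stand-in for the Retrofit-backed API while the real endpoint is broken.
class MockTaskApi {
    suspend fun getTasks(): List<TaskDto> = listOf(
        TaskDto(1, "Buy milk", "Sample row so the list renders"),
        TaskDto(2, "Write tests", "ViewModel happy path"),
    )
}
```

Swapping this in at the DI or constructor level means the rest of the app never knows the real API was broken.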

How do I display a list quickly in Compose?

This is the most common UI requirement. I have the LazyColumn pattern ready so I can set it up in under 5 minutes.

@Composable
fun TaskListScreen(
    uiState: TaskUiState,
    onTaskClick: (Long) -> Unit
) {
    when (uiState) {
        is TaskUiState.Loading -> CircularProgressIndicator()
        is TaskUiState.Error -> Text("Error: ${uiState.message}")
        is TaskUiState.Success -> {
            LazyColumn {
                items(items = uiState.tasks, key = { it.id }) { task ->
                    ListItem(
                        headlineContent = { Text(task.title) },
                        supportingContent = { Text(task.description) },
                        modifier = Modifier.clickable { onTaskClick(task.id) }
                    )
                }
            }
        }
    }
}

Material 3’s ListItem composable is fast to use and looks clean. I don’t spend time building custom card layouts unless the requirements specifically ask for it.

How do I deal with ambiguous requirements?

I ask questions. In a live round, I ask the interviewer directly. In a recorded round, I document my assumptions. If the requirements say “show a list of users” but don’t specify sorting or pagination, I decide what makes sense and add a comment.

// Assumption: Users sorted by name since the requirement
// didn't specify a sort order
val users = repository.getUsers()
    .map { it.sortedBy { user -> user.name } }

I pick the simplest reasonable interpretation and move forward. Spending 10 minutes overthinking an ambiguous requirement wastes time.

What patterns should I have memorized?

These come up in almost every machine coding round:

- A ViewModel exposing a StateFlow of sealed UI state.
- Retrofit setup with a single API interface.
- A LazyColumn rendering loading, error, and success states.
- A Resource sealed class with a safeApiCall wrapper.
- A fake repository for a quick ViewModel test.

Every machine coding task is a combination of these patterns in some configuration. Having them ready saves significant time.

How do I add error handling quickly?

I use the Resource sealed class pattern and wrap my repository calls. This takes 5 minutes to set up and covers all error cases.

sealed interface Resource<out T> {
    data class Success<T>(val data: T) : Resource<T>
    data class Error(val message: String) : Resource<Nothing>
    data object Loading : Resource<Nothing>
}

suspend fun <T> safeApiCall(call: suspend () -> T): Resource<T> {
    return try {
        Resource.Success(call())
    } catch (e: HttpException) {
        Resource.Error("Server error: ${e.code()}")
    } catch (e: IOException) {
        Resource.Error("No internet connection")
    } catch (e: Exception) {
        Resource.Error(e.message ?: "Unknown error")
    }
}

class TaskRepository(private val api: TaskApi) {
    suspend fun getTasks(): Resource<List<Task>> = safeApiCall {
        api.getTasks().map { it.toDomain() }
    }
}

The safeApiCall helper eliminates duplicate try-catch blocks across every repository method. I set it up once and reuse it.
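The toDomain() call above implies a DTO-to-domain mapper. One possible shape, assuming TaskDto fields that the original doesn't spell out, is a sketch like this:

```kotlin
// Assumed network shape; nullable fields reflect typical API responses.
data class TaskDto(val id: Long, val title: String, val description: String?)

// Domain model the UI layer consumes; no nullables leak past this point.
data class Task(val id: Long, val title: String, val description: String)

// Mapper keeps messy network types out of the domain layer.
fun TaskDto.toDomain() = Task(
    id = id,
    title = title,
    description = description.orEmpty(),
)
```

Keeping the mapper next to the DTO makes null handling a one-time decision instead of something every screen re-solves.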

What should I NOT do in a machine coding round?

- Over-engineer: multi-module setups, interceptors, and retry logic nobody asked for.
- Spend 20 minutes debugging a provided API instead of mocking the data and moving on.
- Build custom UI components when Material defaults are good enough.
- Leave half-written code in the submission instead of commenting it out.
- Code in silence during a live round instead of narrating decisions and assumptions.

How do I write a quick unit test when time is short?

I test the ViewModel’s happy path. If I only have time for one test, I test that the ViewModel correctly maps repository data to UI state.

@Test
fun `loads tasks successfully`() = runTest {
    val repository = FakeTaskRepository()
    repository.setTasks(listOf(Task(id = 1, title = "Buy milk")))
    val viewModel = TaskViewModel(repository)

    val state = viewModel.uiState.first { it is TaskUiState.Success }
    val success = state as TaskUiState.Success
    assertThat(success.tasks).hasSize(1)
    assertThat(success.tasks[0].title).isEqualTo("Buy milk")
}

// Assumes TaskRepository is declared as an interface with a Flow-based
// getTasks() in this variant, unlike the concrete class shown earlier
class FakeTaskRepository : TaskRepository {
    private val tasks = MutableStateFlow<List<Task>>(emptyList())

    fun setTasks(list: List<Task>) { tasks.value = list }

    override fun getTasks(): Flow<Resource<List<Task>>> =
        tasks.map { Resource.Success(it) }
}

A single passing test demonstrates that my architecture is testable. It shows the evaluator that my ViewModel doesn’t have hardcoded dependencies.

How do I handle the last 10 minutes?

The last 10 minutes should be cleanup, not feature building. I stop adding new code and focus on:

- Removing debug logs, dead code, and commented-out experiments.
- Confirming the app still builds and the happy path runs.
- Commenting out incomplete features.
- Writing down any assumptions I made along the way.

If the app is partially done, I make sure the parts that work are solid. I comment out incomplete features rather than leaving half-written code. A clean, working subset is always better than a complete but buggy mess.

What separates a strong submission from an average one?

Average submissions have the feature working but with everything crammed into one ViewModel, no error handling, and no tests. Strong submissions have:

- A clear separation between data, ViewModel, and UI layers.
- Explicit loading and error states in the UI.
- At least one passing unit test.
- Documented assumptions wherever the requirements were ambiguous.
- Consistent naming and no dead code.

I don’t need every feature. I need the features I built to be clean, correct, and well-structured.

Common Follow-ups