Leadership, Ownership & Technical Decision-Making
These questions come up in almost every senior-level interview. They test how you lead, take ownership, and make technical decisions under real constraints.
What does technical leadership look like without a management title?
It’s about influence, not authority. I lead by making good decisions, helping the team improve, and taking responsibility for outcomes.
Examples:
- Proposing and driving adoption of a new architecture pattern
- Setting up coding standards or code review guidelines the team follows
- Being the person others come to with questions about a specific domain
- Identifying a problem nobody owns and fixing it — flaky tests, build system issues, documentation gaps
What does it mean to own a feature end-to-end?
I’m responsible from requirements to production. I don’t just write the code and hand it off. In practice that means:
- Understanding the business goal behind the feature
- Breaking it into tasks and estimating effort
- Writing the code, handling edge cases, writing tests
- Coordinating with backend, design, and QA
- Monitoring crash rates and user feedback after release
- Following up on adoption — did users actually use it?
If the feature has bugs post-release, I triage them without being asked.
How do you handle ambiguity in requirements?
Ambiguity is normal. The worst response is to wait for someone to clarify everything. I take action while managing the uncertainty.
- Clarify what I can — ask the PM or stakeholder specific questions. “Should the cache expire after 24 hours or persist until the user refreshes?”
- Make reasonable assumptions for what I can’t clarify — document them and share. “I’m assuming we don’t need offline support for v1. Let me know if that’s wrong.”
- Build for flexibility — use interfaces, feature flags, or configuration instead of hardcoding decisions
- Check in early — show a working prototype before going too far in one direction
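The “build for flexibility” point can be sketched in code. This is a hypothetical example (the `ArticleCache` interface and both implementations are invented for illustration): the undecided expiry behavior from the cache question above is hidden behind an interface, so the assumption can be reversed without reworking callers.

```kotlin
// Hypothetical sketch: an undecided caching policy hidden behind an
// interface so the decision can change without touching call sites.
interface ArticleCache {
    fun get(id: String): String?
    fun put(id: String, body: String)
}

// Documented assumption: v1 expires entries after 24 hours,
// pending confirmation from the PM.
class ExpiringCache(
    private val ttlMillis: Long = 24 * 60 * 60 * 1000L
) : ArticleCache {
    private data class Entry(val body: String, val storedAt: Long)
    private val entries = mutableMapOf<String, Entry>()

    override fun get(id: String): String? {
        val entry = entries[id] ?: return null
        val fresh = System.currentTimeMillis() - entry.storedAt < ttlMillis
        return if (fresh) entry.body else null
    }

    override fun put(id: String, body: String) {
        entries[id] = Entry(body, System.currentTimeMillis())
    }
}

// The alternative if the answer turns out to be "persist until refresh".
class PersistentCache : ArticleCache {
    private val entries = mutableMapOf<String, String>()
    override fun get(id: String): String? = entries[id]
    override fun put(id: String, body: String) { entries[id] = body }
}

// A flag (remote config, build config, etc.) picks the implementation;
// callers depend only on ArticleCache.
fun makeCache(expireAfter24h: Boolean): ArticleCache =
    if (expireAfter24h) ExpiringCache() else PersistentCache()
```

If the assumption turns out to be wrong, the fix is one line in `makeCache`, not a sweep through every caller.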
How do you approach making a trade-off between speed and quality?
It depends on the context. Shipping fast with known shortcuts is fine for experiments and MVPs. Cutting corners on a payment flow or authentication is not.
- Ask what the cost of getting it wrong is. High-risk features (payments, auth, data integrity) always get the quality treatment
- Ask what the cost of being late is. If a competitor is launching the same feature, speed matters more
- Propose a middle ground — ship a simpler version with solid fundamentals and iterate. “I can skip the animation polish but not the error handling”
- Document the shortcuts — if I ship with known tech debt, I create tickets so it doesn’t get forgotten
“I always prioritize quality” is not a real answer. The real answer shows judgment about when to be fast and when to be careful.
How do you decide whether to refactor or ship?
Refactoring has a cost — time, risk of regressions, and opportunity cost. I need a reason to refactor, not just a preference.
Refactor when:
- The current code is blocking new features — I literally can’t add what’s needed without changing the structure
- The area has high bug density — the same module keeps producing bugs
- The team regularly struggles to understand or modify the code
- I’m about to add significant new functionality — refactor before building on a shaky foundation
Ship when:
- The code works correctly, even if it’s not pretty
- The refactor scope is unclear or keeps growing
- The refactor doesn’t unlock anything concrete in the near term
I frame it as “does this refactor unblock something or reduce risk?” not “is this code clean enough?”
Tell me about a time you influenced a decision without having authority.
I build trust and present evidence. I can’t just say “I think we should do X” — I need to show why.
- Situation — “My team was about to adopt a third-party analytics SDK that I thought was too heavy for our app size.”
- Action — “I measured the SDK’s impact — it added 2MB to the APK and 300ms to cold start. I presented these numbers alongside a lighter alternative that covered 90% of our use cases with a third of the footprint.”
- Result — “The team went with the lighter option. The data made the decision easy.”
Numbers and prototypes are more persuasive than opinions. A working proof of concept or a set of benchmarks wins the argument without requiring authority.
Describe a time you drove a significant technical decision. How did you get buy-in?
- Situation — “Our app had a single-module Gradle setup and build times had grown to 8 minutes. The team was frustrated.”
- Action — “I researched modularization approaches, wrote a one-page proposal comparing feature modules vs. layer modules, and presented it with build time projections. I started with one module extraction as a proof of concept.”
- Result — “The PoC reduced build times by 30%. The team approved the full plan. We modularized over 3 sprints and got build times down to 3 minutes.”
Research, a clear proposal, a small proof of concept, and measurable results. That’s how I get buy-in.
How do you choose between two libraries or frameworks when both seem viable?
I use a systematic approach, not gut feeling.
- Does it solve the actual problem, or am I adopting it because it’s popular?
- Maintenance health — commit frequency, issue response time, number of maintainers. A library with one maintainer is a risk
- APK size and startup impact — measure it, don’t guess
- Team familiarity — a slightly inferior library the team already knows is often better than the “best” library nobody understands
- Migration cost — how hard is it to remove if I need to? Libraries that touch every layer of the app are risky
- Official support — is this recommended by Google/JetBrains, or is it a community project?
Example — “I chose Ktor over Retrofit for our KMP project because it was multiplatform-compatible and the team was already using it on the backend.”
Tell me about a time you made a technical decision that turned out to be wrong. What did you do?
- Situation — “I chose to implement a custom caching layer instead of using Room because I thought it would be simpler.”
- Action — “After two sprints, the custom cache had grown complex — I was handling migrations, thread safety, and serialization manually. I admitted to the team that Room would have been the better choice and proposed migrating.”
- Result — “The migration took one sprint but saved us ongoing maintenance. I learned that build vs. buy decisions should weigh long-term maintenance cost, not just initial development speed.”
Don’t minimize the mistake. Own it, explain the learning, and show how it changed your decision-making going forward.
How do you approach a large, ambiguous project with no clear solution?
- Start with requirements — what does success look like? Get alignment on the goal before discussing solutions
- Identify constraints — timeline, team size, existing systems, backward compatibility
- Propose 2-3 approaches with trade-offs — don’t commit to one approach immediately. Present options with honest pros and cons
- Start small — build a spike or prototype for the riskiest part. Validate assumptions before investing heavily
- Communicate progress — regular check-ins with stakeholders to course-correct early
I’m comfortable when the initial direction changes after early prototyping. That’s the point of prototyping.
How do you manage stakeholders who have conflicting priorities?
Different stakeholders want different things. The PM wants features, engineering leadership wants quality, design wants polish.
- Understand each stakeholder’s real concern — the PM has a launch deadline, engineering leadership has seen tech debt compound
- Make trade-offs visible — “I can ship Feature A with full polish, or Features A and B with basic error handling. Here’s what each option means for the timeline”
- Don’t promise everything to everyone — that’s how you burn out and under-deliver
- Propose a sequence — “Let’s ship A this sprint, B next sprint, and revisit the polish after that”
Be transparent about trade-offs rather than silently absorbing conflicting expectations.
Tell me about a time you took ownership of something outside your job description.
- Situation — “Our CI pipeline was flaky. Builds failed randomly about 20% of the time and developers had to re-run them manually. Nobody owned CI.”
- Action — “I spent a few days investigating the flaky tests, fixed the top 5 causes (race conditions in test setup, shared state between tests), and added test isolation. I also set up build notifications in Slack.”
- Result — “Build success rate went from 80% to 97%. The team saved about 30 minutes per developer per day. The engineering manager asked me to document the fixes so others could follow the same approach.”
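The “shared state between tests” fix mentioned above usually comes down to giving each test a fresh instance instead of a shared singleton. A minimal, hypothetical illustration with JUnit 4 (the `Counter` class stands in for whatever the tests were sharing):

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Before
import org.junit.Test

// Hypothetical stand-in for the shared mutable state that made the
// original tests order-dependent.
class Counter {
    private var value = 0
    fun increment(): Int = ++value
}

class CounterTest {
    private lateinit var counter: Counter

    @Before
    fun setUp() {
        // A fresh instance per test removes run-order dependence,
        // which is what "test isolation" means in practice.
        counter = Counter()
    }

    @Test
    fun incrementStartsFromZero() {
        assertEquals(1, counter.increment())
    }

    @Test
    fun incrementIsIndependentOfOtherTests() {
        // Passes regardless of which test ran first.
        assertEquals(1, counter.increment())
    }
}
```

Before the fix, the same assertions would pass or fail depending on execution order — exactly the “fails randomly 20% of the time” symptom.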
How do you approach architecture decisions for a feature that needs to scale?
I don’t want to over-engineer, but I also don’t want to rewrite everything in six months.
- Start with the simplest architecture that solves the current problem — don’t build for 10 million users when I have 10 thousand
- Identify the parts most likely to change — data sources, UI patterns, business rules. Make those modular and easy to swap
- Design for testability — if the architecture makes testing hard, it’ll make scaling hard too
- Use interfaces at boundaries — repository pattern for data, use cases for business logic. When I need to scale, I swap implementations, not rewrite modules
- Define what “needs to scale” means concretely. “Response time under 200ms with 5x current traffic” is measurable. “It should be scalable” is not
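The “interfaces at boundaries” point can be sketched as follows. This is an illustrative example, not a prescribed architecture — `ProductRepository`, `InMemoryProductRepository`, and `GetProductUseCase` are invented names:

```kotlin
// Hypothetical sketch: interfaces at the boundaries so scaling means
// swapping implementations, not rewriting callers.
data class Product(val id: String, val name: String)

// Boundary: callers and tests depend on this, never on a concrete source.
interface ProductRepository {
    suspend fun product(id: String): Product?
}

// The simplest implementation that solves today's problem.
class InMemoryProductRepository(
    private val products: Map<String, Product>
) : ProductRepository {
    override suspend fun product(id: String): Product? = products[id]
}

// The use case keeps business rules out of both the data layer and the UI.
class GetProductUseCase(private val repository: ProductRepository) {
    suspend operator fun invoke(id: String): Product =
        repository.product(id) ?: error("Unknown product: $id")
}
```

When scale demands a network-backed repository with a cache, it replaces the constructor argument; `GetProductUseCase` and its tests are untouched.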
Describe a time you had to balance technical debt with feature delivery.
- Situation — “Our networking layer was built on callbacks. Adding new API calls was error-prone and slow because every call required manual threading, error handling, and retry logic.”
- Action — “I proposed allocating 20% of each sprint to tech debt. For the networking layer, I migrated to Retrofit with coroutines incrementally — one endpoint at a time alongside feature work.”
- Result — “Over three sprints, I migrated 80% of endpoints. Adding new API calls went from a day to an hour. Bug reports from networking issues dropped significantly.”
Tech debt reduction can happen alongside feature work. It doesn’t require a “stop everything and refactor” mandate.
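The endpoint-at-a-time migration above relies on a standard bridging pattern: wrapping a callback API in a `suspend` function with `suspendCancellableCoroutine` from kotlinx.coroutines. A hedged sketch (`LegacyHttpClient` is a hypothetical shape for the old callback-based layer):

```kotlin
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlinx.coroutines.suspendCancellableCoroutine

// Hypothetical shape of the legacy callback-based networking layer.
interface LegacyHttpClient {
    fun get(url: String, onSuccess: (String) -> Unit, onError: (Throwable) -> Unit)
}

// One endpoint at a time: new call sites use coroutines while the
// remaining callback-based call sites keep working unchanged.
suspend fun LegacyHttpClient.getSuspending(url: String): String =
    suspendCancellableCoroutine { continuation ->
        get(
            url,
            onSuccess = { body -> continuation.resume(body) },
            onError = { error -> continuation.resumeWithException(error) }
        )
    }
```

Each migrated endpoint deletes its manual threading, error handling, and retry plumbing, which is where the “a day to an hour” improvement comes from.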
How do you make decisions when your team is split on an approach?
Disagreements are healthy. The problem is when they stall progress.
- Time-box the discussion — give the team a meeting or a document to make their case. Don’t let it drag for weeks
- Define evaluation criteria upfront — maintainability, performance, learning curve, team familiarity. Agree on what matters before debating solutions
- If data can settle it, get the data — build a quick prototype, run a benchmark, or check how a similar problem was solved in a well-known open-source project
- If the team is still split, someone decides — the tech lead, the architect, or whoever owns the area. The decision is made, everyone commits
- Revisit later — if the decision turns out wrong, course-correct without blame
A good decision now beats a perfect decision in three weeks.
Tell me about a time you had to say no to a stakeholder.
- Situation — “The product team wanted to add push notification deep links to every screen in the app within one sprint.”
- Action — “I explained that deep linking requires proper navigation state restoration, which wasn’t built into our navigation setup. I proposed supporting deep links for the 5 most important screens first and building the infrastructure to make future screens easy to add.”
- Result — “The PM was initially frustrated but appreciated the phased approach. I delivered the top 5 screens on time, and adding new deep links became a 30-minute task instead of a multi-day effort.”
“No” should always come with an alternative, not just a refusal.
How do you evaluate whether a new technology is worth adopting?
- Problem first — what specific problem does this solve? If I can’t name one, I’m adopting for the wrong reason
- Maturity — is it stable? Alpha libraries change APIs constantly. Betting production code on an alpha library is risky
- Team readiness — does the team have time to learn it? Adoption during a crunch period is a bad idea
- Migration path — can I adopt incrementally, or is it all-or-nothing? Jetpack Compose works alongside XML. Kotlin Multiplatform requires more upfront investment
- Exit cost — if I need to remove it in a year, how painful is that? Libraries that spread across every module are harder to remove
Example — “I adopted Compose for new screens only, kept XML for existing ones, and set a 6-month review to evaluate whether the team was comfortable enough to migrate more aggressively.”
Common Follow-ups
- How do you handle a situation where your tech lead makes a decision you strongly disagree with?
- What’s the difference between taking ownership and overstepping boundaries?
- How do you decide when to prototype vs. when to plan thoroughly?
- Tell me about a time you delegated a task. How did you ensure quality?
- How do you build trust with a new team when you’re the new person?
- What’s your approach when you inherit a codebase with significant tech debt?
- How do you know when a decision is “good enough” to move forward?