Monotonic Stack and Queue: The Hidden DSA Pattern That Keeps Appearing in Flipkart and Uber Interviews
Monotonic stack and queue problems keep appearing in Flipkart, Uber, and Swiggy interviews. Learn the pattern, the 5 key problems, and how to recognise it mid-interview. FutureJobs by Impacteers.

Posted by
Shahar Banu

Reviewed by
Divyansh Dubey
Published
You're four years into writing production-grade Python. ETL pipelines, window functions, optimised SQL joins — nothing about your work is trivial. Then a Flipkart screening call drops a medium-array problem in your lap, and your brain produces a correct nested-loop solution in thirty seconds. It runs. It's logically sound. And then it times out.
The interviewer's expression doesn't change. But you know what that silence means.
This is the exact failure mode that keeps data engineers out of AI Engineering roles at product companies — not lack of intelligence, not lack of Python fluency, but one specific structural gap: the monotonic stack and queue pattern. It converts O(n²) brute-force solutions into O(n) elegance, and Flipkart, Uber, and Swiggy interviewers use it specifically because it separates engineers who have studied patterns from engineers who haven't.
By the end of this guide, you'll understand what monotonic stacks and queues are, recognise the five canonical problems they solve, know how to identify the pattern mid-interview, and have the one mental model that makes everything click.
What Is a Monotonic Stack? The Definition You Can Actually Use
A monotonic stack is a stack data structure where elements are maintained in either strictly increasing or strictly decreasing order from bottom to top. When a new element violates this ordering, you pop elements from the stack until the property is restored, then push the new element.
That definition alone is quotable but incomplete. Here's the practical version:
A monotonic stack is a stack that enforces order by discarding elements that can no longer contribute to future answers. The insight — and this is the one that makes everything click — is that once an element is "dominated" by a newer element, it will never be the answer to any future query. So you throw it away immediately, and that's what gives you O(n) time.
Increasing variant: stack elements go from smallest at bottom to largest at top. You pop when the incoming element is *smaller*. Decreasing variant: stack elements go from largest at bottom to smallest at top. You pop when the incoming element is *larger*.
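To make the two variants concrete, here is a minimal sketch. The helper name `monotonic_push` is ours, purely illustrative, not a standard API:

```python
def monotonic_push(stack, value, increasing=True):
    """Push value onto stack, popping whatever breaks the monotonic order.

    increasing=True keeps values non-decreasing bottom-to-top: pop while
    the top is larger than the incoming value. increasing=False keeps
    values non-increasing: pop while the top is smaller.
    """
    if increasing:
        while stack and stack[-1] > value:
            stack.pop()
    else:
        while stack and stack[-1] < value:
            stack.pop()
    stack.append(value)

s = []
for v in [3, 1, 4, 2, 5]:
    monotonic_push(s, v, increasing=True)
# s is now [1, 2, 5]
```

The popped elements are exactly the ones that can never answer a future query, which is the discard rule described above.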
A monotonic deque (double-ended queue) extends this to sliding windows: new elements enter at one end while expired elements — the ones that have slid out of the window — are evicted from the other. Python's `collections.deque` gives you O(1) operations at both ends, which is exactly what this requires.
If you're currently solving advanced array problems with brute force and hitting TLE, this is almost certainly one of the patterns you're missing.
Why O(n²) Gets You TLE on Problems That Look Simple
Here's why this pattern matters specifically for you as a data engineer: you think in set operations, window functions, and vectorised Pandas logic. That intuition is genuinely powerful. But when an interviewer asks "find the next greater element for every element in an array," the first instinct is a nested loop — for each element, scan right until you find something larger.
That's O(n²). For an array of 100,000 elements (standard LeetCode constraint), that's 10 billion operations. You will time out. Every time.
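For reference, the instinctive brute force looks like this — a sketch of the naive next-greater-element scan, exactly the shape that times out:

```python
def next_greater_brute_force(nums):
    # O(n^2): for each element, scan right until a larger one appears
    n = len(nums)
    result = [-1] * n
    for i in range(n):
        for j in range(i + 1, n):
            if nums[j] > nums[i]:
                result[i] = nums[j]
                break
    return result
```

On a pathological input such as a strictly decreasing array, the inner loop runs to the end of the array every time, which is where the quadratic blowup comes from.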
The monotonic stack solves the same problem in a single pass. You process each element once, push it onto the stack, pop elements that have found their answer, and move on. Every element is pushed once and popped at most once — giving you O(n) total operations regardless of input size.
This is the core trade-off that the FutureJobs Advanced Problem Solving module focuses on: not just getting a correct solution, but understanding *why* the optimal solution has the complexity it has. Interviewers at Flipkart and Uber aren't just checking correctness — they're evaluating whether you can reason about time complexity optimisation when your first approach fails.
Key takeaway: A monotonic stack achieves O(n) by making a simple guarantee — each element enters and exits the stack at most once. The total work across the entire algorithm is O(n), not O(n) per element.
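You can check that guarantee empirically. This sketch (our own instrumentation, not from any library) counts stack operations during a next-greater-style pass; pops can never exceed pushes, so total work is bounded by 2n:

```python
def count_stack_ops(nums):
    # Returns (pushes, pops) for one monotonic-stack pass over nums.
    stack, pushes, pops = [], 0, 0
    for num in nums:
        while stack and stack[-1] < num:
            stack.pop()
            pops += 1
        stack.append(num)
        pushes += 1
    return pushes, pops

# An ascending array forces the worst case: every element but the last
# is eventually popped, yet the totals stay linear in n.
```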
The 5 Canonical Problems You Must Know
These five problems cover the full range of monotonic stack and queue applications that appear in Flipkart interview rounds, Uber screening calls, and Swiggy backend assessments documented across interview forums including LeetCode Discuss and Glassdoor India.
1. Next Greater Element
The problem: For each element in an array, find the first element to its right that is strictly greater. Return -1 if none exists.
This is the foundational monotonic stack problem. Use a decreasing stack. Iterate left to right. For each element, pop the stack while the top is smaller than the current element — those popped elements have found their next greater element. Push the current element.
```python
def next_greater_element(nums):
    result = [-1] * len(nums)
    stack = []  # stores indices, decreasing by value
    for i, num in enumerate(nums):
        while stack and nums[stack[-1]] < num:
            idx = stack.pop()
            result[idx] = num
        stack.append(i)
    return result
```
Time: O(n). Space: O(n). Every index is pushed exactly once and popped at most once.
2. Daily Temperatures (LeetCode 739)
The problem: Given daily temperatures, find how many days until a warmer temperature. Return 0 if no warmer day exists.
This is next greater element with a slight reframing. Same decreasing stack pattern. The difference is you're storing indices to compute the *distance* between positions, not just the value. This exact problem appears in Uber India's backend screening round, documented across multiple interview reports.
```python
def daily_temperatures(temperatures):
    result = [0] * len(temperatures)
    stack = []  # indices of days still waiting for a warmer day
    for i, temp in enumerate(temperatures):
        while stack and temperatures[stack[-1]] < temp:
            idx = stack.pop()
            result[idx] = i - idx
        stack.append(i)
    return result
```
3. Largest Rectangle in Histogram (LeetCode 84)
This is the problem that separates candidates who have studied patterns from those who haven't. It appears at the hard level but becomes approachable the moment you recognise the monotonic stack structure.
The idea: For each bar, find how far left and right it can extend at its own height. The bottleneck is the nearest shorter bar on either side. Use an increasing stack — when you encounter a bar shorter than the stack top, the top bar has found its right boundary.
```python
def largest_rectangle_in_histogram(heights):
    heights = heights + [0]  # sentinel bar flushes the stack at the end
    stack = [-1]             # sentinel index marks the left boundary
    max_area = 0
    for i, h in enumerate(heights):
        while stack[-1] != -1 and heights[stack[-1]] >= h:
            height = heights[stack.pop()]
            width = i - stack[-1] - 1
            max_area = max(max_area, height * width)
        stack.append(i)
    return max_area
```
Time: O(n). Two sentinels keep the edge cases clean: the -1 seeded into the stack handles the left boundary, and the appended 0-height bar flushes every bar still on the stack at the end.
4. Sliding Window Maximum (LeetCode 239)
This is where the monotonic deque enters. The problem: given an array and window size k, find the maximum element in each sliding window.
Brute force is O(n·k). The monotonic deque gives you O(n). You maintain a deque of indices in decreasing order of their values. For each new element, pop from the back while the back element is smaller (it can never be the maximum while the current element is in the window). Pop from the front when that index has fallen outside the current window.
```python
from collections import deque

def max_sliding_window(nums, k):
    dq = deque()  # stores indices, decreasing by value
    result = []
    for i, num in enumerate(nums):
        # Remove indices that have slid out of the window
        while dq and dq[0] < i - k + 1:
            dq.popleft()
        # Maintain decreasing order
        while dq and nums[dq[-1]] < num:
            dq.pop()
        dq.append(i)
        if i >= k - 1:
            result.append(nums[dq[0]])
    return result
```
This problem is a staple in Flipkart's data engineering and backend interview loops. The sliding window pattern family is worth mastering as a cluster — monotonic deque is its hardest variant.
5. Trapping Rain Water (LeetCode 42)
The problem: Given an elevation map (array of bar heights), compute how much rain water can be trapped.
This can be solved with a decreasing monotonic stack by treating each stack entry as a potential valley floor. When a taller bar arrives, pop the stack top (the valley floor) and add one horizontal layer of water: its width is the distance between the new bar and the next index left on the stack, and its depth is the shorter of those two walls minus the floor height.
```python
def trap(height):
    stack = []  # indices, decreasing by height
    water = 0
    for i, h in enumerate(height):
        while stack and height[stack[-1]] < h:
            bottom = stack.pop()
            if not stack:
                break  # no left wall, nothing can be trapped
            left = stack[-1]
            width = i - left - 1
            bounded_height = min(height[left], h) - height[bottom]
            water += bounded_height * width
        stack.append(i)
    return water
```
The same stack machinery also powers the stock span problem, where you count how many consecutive days a stock's price was at or below today's. That one is previous greater element: same decreasing stack, different output format.
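Here is a sketch of stock span over a plain array of prices, assuming the classic formulation in which the span counts today plus every consecutive prior day priced at or below today:

```python
def stock_spans(prices):
    # Decreasing stack of indices: the top is the nearest previous day
    # with a strictly higher price, which bounds today's span.
    stack = []
    spans = []
    for i, price in enumerate(prices):
        while stack and prices[stack[-1]] <= price:
            stack.pop()
        spans.append(i - stack[-1] if stack else i + 1)
        stack.append(i)
    return spans
```

For the textbook input `[100, 80, 60, 70, 60, 75, 85]` this yields `[1, 1, 1, 2, 1, 4, 6]`: the same push/pop discipline as next greater element, with a distance rather than a value as the output.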
How to Recognise the Monotonic Stack Pattern Mid-Interview
In 2026, some engineers ask whether DSA pattern recognition still matters when GitHub Copilot can generate code. The answer is yes, and more than ever. AI tools generate syntactically correct brute-force solutions instantly. Interviewers at Flipkart, Uber, and Swiggy have responded by going deeper on *why* you choose a particular data structure and *whether* you can identify the optimal complexity before you write a line. The engineers clearing these rounds in 2026 aren't the ones who memorised solutions — they're the ones who can explain why a monotonic stack beats a nested loop in this specific context.
Here's the recognition checklist to run mentally during an interview:
Signal 1 — You need something about previous elements for each current element. If the problem asks "for each element, find the nearest X to the left/right," you're almost certainly using a monotonic stack.
Signal 2 — The brute force involves scanning left or right for each element. Any nested loop where the inner loop searches for a boundary condition (next greater, next smaller, nearest wall) is a candidate.
Signal 3 — The problem involves a sliding window with an extreme value query. If you need the max or min of a variable-size window efficiently, you need a monotonic deque.
Signal 4 — "Nearest" or "next" appears in the problem. These words almost always map to monotonic stack patterns.
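To make Signal 1 concrete, here is the left-facing mirror of next greater element — previous smaller element — sketched with an increasing stack:

```python
def previous_smaller_element(nums):
    # Increasing stack of indices: pop anything >= the current value;
    # whatever survives on top is the nearest strictly smaller element
    # to the left.
    result = [-1] * len(nums)
    stack = []
    for i, num in enumerate(nums):
        while stack and nums[stack[-1]] >= num:
            stack.pop()
        if stack:
            result[i] = nums[stack[-1]]
        stack.append(i)
    return result
```

`previous_smaller_element([4, 5, 2, 10, 8])` returns `[-1, 4, -1, 2, 2]`. Flip the comparison or iterate right-to-left and you cover all four nearest-element variants.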
The mental model that makes all five problems unified: you are using the stack to defer decisions about elements until you have enough information to resolve them. Each element waits on the stack until a new element arrives that resolves it — either as an answer (it found its next greater element) or as irrelevant (it was dominated and discarded). This is fundamentally different from computing answers element-by-element, which is why it achieves O(n).
As you build this intuition, pair it with the broader framework covered in the advanced DSA patterns for product company interviews — monotonic stack sits alongside Union-Find, graph traversal, and dynamic programming as the patterns that appear disproportionately in Flipkart, Uber, and Swiggy screening rounds.
Key takeaway: The monotonic stack pattern applies whenever your brute force scans left or right for a boundary condition. Once you see that signal, the stack variant is O(n). The deque variant handles sliding window extremes. The stock span problem is next greater element with reversed direction.
The Data Engineer's Specific Gap — And Why It's Fixable
As a data engineer with four years of Python and SQL, you're not starting from zero. You understand data structures. You use dictionaries, lists, and deques in production code daily. The gap isn't fundamental — it's structural. You've been optimising queries and pipeline throughput, not algorithmic time complexity in the academic sense.
This matters because product companies like Flipkart and Swiggy don't distinguish between "data engineer Python" and "SDE Python" at the screening stage. The DSA round is the same round. And the patterns it tests — monotonic stack, dynamic programming, graph traversal — are the ones that appear less often in ETL and data warehouse work.
The fastest way for a working professional in India to become interview-ready for AI Engineering roles at product companies is a structured 5-month program combining targeted DSA pattern study with real interview practice. Engineers following this pattern in FutureJobs cohorts reach interview-readiness — defined as solving medium problems consistently under 25 minutes — within 16–20 weeks of structured preparation at 15 hours per week. If you start today, you can be actively interviewing at Flipkart or Swiggy for AI Engineering roles by Q4 2026.
Your four years of data engineering experience is genuinely relevant context. Interviewers at companies building ML platforms — Flipkart's recommendation engine team, Swiggy's demand forecasting team — value engineers who understand data at pipeline scale. But you need to clear the DSA screen first before that context is visible.
The service to product company transition in India follows a consistent pattern: structured DSA prep fills the screening gap, then your domain expertise becomes a genuine differentiator in the technical deep-dive.
According to placement outcome data tracked across 2024–2025, engineers who transition from data engineering or BFSI tech roles to product company AI/ML positions report median salary increases of 50–70%, moving from ₹10–14 LPA to ₹18–24 LPA. The DSA screen is the gate. This pattern is the key to one of the gates.
How FutureJobs Can Help You
The core problem isn't that you don't know enough Python. It's that you've never been systematically taught the structural patterns — monotonic stack, two-pointer, dynamic programming — that product companies test at the screening stage. And a generic DSA course treats you like a first-year student, not a specialist with a specific gap.
FutureJobs' 5-Month DSA & System Design Program is structured around patterns, not problems. The Advanced Problem Solving module addresses exactly the gap described in this article: engineers who solve correctly but time out, who recognise brute force but don't know the O(n) structural upgrade. The curriculum covers monotonic stacks, sliding window deque problems, hard histogram and water-trapping variants, and the interview recognition frameworks that tell you which tool to reach for.
Critically, if your target is an AI Engineering role at Flipkart or Swiggy rather than a pure backend SDE role, the program's Prompt Engineering and LLM integration module is directly relevant. In 2026, AI Engineering interviews at Indian product companies test both DSA fundamentals and practical LLM API integration — this is the only structured program in India that covers both within the same curriculum.
The program runs on evenings and weekends — no quitting your current job at the BFSI company. The pay-after-placement model means your effective upfront cost is ₹4,999/month during the program, with the remainder paid from your new salary. That's structurally different from Scaler or AlmaBetter where you pay ₹1.5–2.44 lakh upfront before you've cleared a single interview round. FutureJobs only succeeds when you get placed.
FutureJobs operates within Impacteers' 25-year recruitment network with 3,000+ hiring partners. Your 1:1 FAANG mentor has cleared the exact interview loops you're targeting.
Only 12 Seats Left — Cohort 3
Frequently Asked Questions
How long does it take to get interview-ready for Flipkart or Swiggy DSA rounds if I'm currently failing at the monotonic stack level?
With 15 hours per week of structured preparation, most engineers reach consistent medium-problem performance within 12–16 weeks. Monotonic stack is a mid-tier pattern — once you've internalised the mental model, the five canonical problems take roughly 2–3 weeks to master. The FutureJobs DSA program reaches this pattern cluster by week 8–10, leaving time for timed mock interviews before the 5-month mark.
I'm a data engineer with 4 years of Python experience. Will I be wasting time on basics in a DSA program?
A well-designed program won't waste your time on arrays and linked lists if you already know them. FutureJobs uses a Skill Gap Analysis assessment at intake to identify your actual entry point. Engineers with strong Python fluency and domain experience typically accelerate through foundational modules and spend more time on hard patterns — monotonic stack, DP, graphs — which is exactly where the product company screening gap sits.
Can I prepare for product company AI Engineering interviews while working full-time at a BFSI company?
Yes — and that's specifically the scenario FutureJobs is built for. All sessions run in the evenings and on weekends. Sessions are recorded for replay if you miss a live class. The 15-hours-per-week schedule is calibrated for working professionals, not students. Engineers in current cohorts hold roles at banks, IT services companies, and data teams exactly like yours.
Is the monotonic stack pattern actually tested at Flipkart and Uber India, or is this overstated?
It's well-documented. Sliding window maximum, daily temperatures, and largest rectangle in histogram appear across Flipkart, Uber India, and Swiggy interview reports on LeetCode Discuss, Glassdoor India, and Pramp community threads. These aren't fringe questions — they're used specifically because they test whether a candidate knows O(n) structural patterns or defaults to O(n²) brute force. They're reliable signal for product companies at the medium-hard screening level.
Final Thoughts
Monotonic stack and queue problems aren't harder than what you already do as a data engineer. They're just structured differently — and you've never had a reason to study that specific structure until now. The O(n) insight is clean: maintain an ordered stack, discard dominated elements immediately, and every element enters and exits at most once. Once you have that mental model, next greater element, daily temperatures, sliding window maximum, and trapping rain water are all variations on the same core idea.
The gap between your current capability and what Flipkart and Uber are testing in 2026 is real — but it's also specific and learnable. It's not a year of study. It's systematic pattern coverage over 16–20 weeks, with timed practice and interview feedback.
Your data engineering background — pipeline scale, data volume intuitions, Python fluency — is a genuine advantage once you clear the screen. The DSA round is the gate. This pattern is one of the keys.
Your next step: Take the FutureJobs Skill Gap Analysis. Find out exactly which patterns you're missing, so you're preparing for your specific gap — not grinding problems that don't connect to your target role.
