Monotonic Stack and Queue: The Hidden DSA Pattern That Keeps Appearing in Flipkart and Uber Interviews
Monotonic stack and queue problems keep appearing in Flipkart, Uber, and Swiggy interviews. Learn the pattern, the 5 key problems, and how to recognise it mid-interview. FutureJobs by Impacteers.

Posted by Shahar Banu · Reviewed by Divyansh Dubey
You write Python daily. You build ETL pipelines that move terabytes of data efficiently. You understand time complexity because a poorly written Spark job costs real money. And then a Flipkart screening round hands you what looks like a simple array problem — find the next greater element for each entry — and your brute-force O(n²) solution hits TLE with thirty seconds left on the clock.
The interviewer doesn't say anything. They don't need to.
If you're a data engineer targeting AI engineering roles at product companies in 2026, the monotonic stack queue DSA interview pattern is exactly the gap between your current capability and what Flipkart, Uber, and Swiggy actually screen for. It's not a complex algorithm. It's a structural pattern you haven't formally studied — and once you see it, you'll recognise it everywhere.
By the end of this guide, you'll understand what monotonic stacks and queues are, why they exist, the five canonical problems that keep appearing in Indian product company interviews, and the one mental model that makes the entire pattern click.
What Is a Monotonic Stack — and Why Does It Exist?
A monotonic stack is a stack whose elements are kept in sorted order at all times: every push preserves either an entirely non-increasing or an entirely non-decreasing sequence (strict or not, depending on how the problem treats duplicates). That single invariant is the complete definition, and it makes the structure the optimal tool for problems that require tracking relative order relationships across an array in linear time.
The reason it exists is purely about time complexity. A large class of array problems has an obvious O(n²) solution: for each element, scan all subsequent elements to find some relationship. The monotonic stack compresses that inner loop into amortised O(1) per element by maintaining a structure that eliminates impossible candidates as you traverse.
There are two variants. An increasing monotonic stack maintains elements in ascending order from bottom to top — when you push a new element, you pop everything larger. A decreasing monotonic stack maintains elements in descending order — when you push, you pop everything smaller. Which variant you need depends entirely on what relationship you're tracking. If the problem asks "what's the next smaller element," you want increasing. If it asks "next greater," you want decreasing.
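As a minimal sketch of the "next smaller" case with an increasing stack (the function name and example values are my own, for illustration):

```python
def next_smaller_element(nums):
    """For each element, the first smaller element to its right (-1 if none)."""
    result = [-1] * len(nums)
    stack = []  # indices still waiting for a smaller value; values increasing
    for i, num in enumerate(nums):
        # A smaller incoming value resolves every larger pending index
        while stack and nums[stack[-1]] > num:
            result[stack.pop()] = num
        stack.append(i)
    return result

print(next_smaller_element([4, 2, 5, 1, 3]))  # [2, 1, 1, -1, -1]
```

Mirroring the comparison (`>` vs `<`) flips the variant: the loop shape stays identical for next greater, previous greater, and previous smaller.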
If you've worked with Pandas and wondered why vectorised operations outperform row-wise loops, you already understand the intuition: the monotonic stack is the algorithmic equivalent of eliminating redundant comparisons at the structural level rather than optimising individual steps.
Key takeaway: A monotonic stack runs in O(n) time because each element is pushed and popped at most once, giving O(2n) = O(n) total operations — regardless of how many comparisons the naïve O(n²) approach would make.
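To make the amortised bound tangible, here is a small instrumented sketch (my own illustration, not from any interview) that counts pushes and pops across a next-greater-element style pass:

```python
def count_stack_ops(nums):
    """Count pushes and pops in one monotonic-stack pass over nums."""
    stack, pushes, pops = [], 0, 0
    for num in nums:
        # Pop every element that num resolves; each element is popped at most once
        while stack and stack[-1] < num:
            stack.pop()
            pops += 1
        stack.append(num)
        pushes += 1
    return pushes, pops
```

For any input, `pushes == len(nums)` and `pops <= pushes`, so total work never exceeds 2n operations regardless of how the inner `while` loop behaves on individual elements.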
The 5 Canonical Problems You Need to Know Before Your Flipkart Interview
These five problems cover the monotonic stack and queue pattern comprehensively. They appear repeatedly across Flipkart, Uber, Swiggy, and Razorpay screening rounds, and they're documented across LeetCode discuss threads, Glassdoor interview reports, and engineering blogs from Indian candidates who've cleared these rounds.
1. Next Greater Element
Given an array, for each element find the first element to its right that is greater. Return -1 if none exists. This is the canonical entry point for the monotonic stack pattern.
def next_greater_element(nums):
    result = [-1] * len(nums)
    stack = []  # stores indices; values at those indices are decreasing
    for i, num in enumerate(nums):
        while stack and nums[stack[-1]] < num:
            idx = stack.pop()
            result[idx] = num
        stack.append(i)
    return result
The key insight: maintain a stack of indices whose "next greater" hasn't been found yet. When a larger element arrives, it resolves all smaller pending elements in one sweep. Every element is pushed once and popped once — O(n) total.
2. Daily Temperatures
Given a list of daily temperatures, return a list where each entry is the number of days until a warmer temperature. This is next greater element with distance tracking instead of value tracking. The stack stores indices, and the answer for each popped index is `current_index - popped_index`. This is a documented Flipkart phone screen question.
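A hedged sketch of that distance-tracking variant (standard LeetCode semantics; 0 means no warmer day exists):

```python
def daily_temperatures(temps):
    answer = [0] * len(temps)
    stack = []  # indices of days still waiting for a warmer day
    for i, t in enumerate(temps):
        while stack and temps[stack[-1]] < t:
            prev = stack.pop()
            answer[prev] = i - prev  # store the distance in days, not the value
        stack.append(i)
    return answer

print(daily_temperatures([73, 74, 75, 71, 69, 72, 76, 73]))
# [1, 1, 4, 2, 1, 1, 0, 0]
```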
3. Largest Rectangle in Histogram
Given an array of bar heights, find the largest rectangle that fits within the histogram. This is the hard variant. For each bar, you need the nearest shorter bar to its left and right — two applications of the monotonic stack pattern. The area for bar `i` is `height[i] × (right_boundary[i] - left_boundary[i] - 1)`. Many candidates who breeze through daily temperatures freeze here because they don't recognise it as the same structural pattern applied twice.
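One common way to sketch this is a single pass with a sentinel bar of height 0 appended at the end, which flushes every remaining candidate off the stack (a standard technique, shown here as an illustration):

```python
def largest_rectangle(heights):
    stack = []  # indices of bars with increasing heights
    best = 0
    for i, h in enumerate(heights + [0]):  # sentinel 0 flushes the stack at the end
        while stack and heights[stack[-1]] >= h:
            height = heights[stack.pop()]
            # Left boundary is the bar below the popped one (or -1 if none);
            # right boundary is the current index i
            left = stack[-1] if stack else -1
            best = max(best, height * (i - left - 1))
        stack.append(i)
    return best

print(largest_rectangle([2, 1, 5, 6, 2, 3]))  # 10
```

The two boundary lookups the text describes are folded into the pop loop: the popped bar's right boundary is the current index, and its left boundary is whatever remains below it on the stack.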
4. Sliding Window Maximum
Given an array and window size k, return the maximum element in each sliding window. This is where the monotonic deque interview pattern becomes essential. A deque (double-ended queue) maintains indices in decreasing order of their values. When the window slides, you remove expired indices from the front and maintain the monotonic property from the rear.
from collections import deque

def sliding_window_maximum(nums, k):
    dq = deque()  # stores indices; values at those indices are decreasing
    result = []
    for i, num in enumerate(nums):
        # Remove indices that have slid out of the window
        while dq and dq[0] < i - k + 1:
            dq.popleft()
        # Remove smaller elements from the rear; they can never be a max
        while dq and nums[dq[-1]] < num:
            dq.pop()
        dq.append(i)
        if i >= k - 1:
            result.append(nums[dq[0]])
    return result
This is an Uber India interview staple. The naïve O(nk) approach gets TLE on large inputs. The deque-based O(n) solution is what the interviewer is waiting to see.
5. Trapping Rain Water
Given an elevation map, compute how much water can be trapped. There's a two-pointer approach, but the monotonic stack approach is worth knowing: the stack tracks bars that could form a container's walls, and each time a taller bar appears, you calculate the trapped water between the current bar, the popped bar, and the new stack top. Understanding both approaches — and being able to articulate the trade-offs — signals strong problem decomposition skills, which matters more in AI engineering roles where you're expected to reason about solutions, not just produce them.
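The stack-based approach described above can be sketched as follows (standard technique, written here as an illustration rather than a definitive implementation):

```python
def trap(height):
    stack = []  # indices of bars that could be left walls; heights decreasing
    water = 0
    for i, h in enumerate(height):
        # A taller bar closes off a basin whose floor is the popped bar
        while stack and height[stack[-1]] < h:
            floor = height[stack.pop()]
            if not stack:
                break  # no left wall remains, nothing to trap
            width = i - stack[-1] - 1
            bounded = min(h, height[stack[-1]]) - floor
            water += width * bounded
        stack.append(i)
    return water

print(trap([0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1]))  # 6
```

Water is accumulated layer by layer, horizontally, each time a basin closes; the two-pointer approach instead accumulates column by column, vertically. Being able to articulate that difference is exactly the trade-off discussion the section mentions.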
How to Recognise the Pattern Mid-Interview
How do you know when to use a monotonic stack? Ask yourself three questions when you see an array problem that looks O(n²) naively.
Question 1: Does the problem ask for a "next greater," "next smaller," "previous greater," or "previous smaller" relationship for each element? If yes, it's almost certainly a monotonic stack.
Question 2: Does the problem involve a sliding window with a maximum or minimum query that needs to be O(n) overall? If yes, reach for a monotonic deque.
Question 3: Can elements that are "blocked" by a subsequent element be safely discarded? This is the core elimination principle. In next greater element, once you find a larger element, all smaller pending elements are resolved — they can be discarded. If your problem has this property, the monotonic stack is the right structure.
Put as a direct question: "How do I know when to use a monotonic stack in a DSA interview?" The answer: when the problem asks for a next/previous greater/smaller relationship per element, or when a sliding window needs a running max/min — these are the two trigger conditions that make the pattern applicable.
If you've been preparing for Flipkart or Uber interviews and solving array problems in Pandas-style logic — scanning forward and backward with vectorised operations — your instincts are correct. The monotonic stack is the algorithmic formalisation of the same idea. The gap isn't conceptual. It's structural exposure.
Advanced DSA patterns for product interviews like Union-Find, segment trees, and monotonic structures form a connected ecosystem — once you learn to see patterns rather than problems, the prep compounds.
The One Mental Model That Makes Monotonic Stack Click
Here's the mental model that experienced competitive programmers use but rarely explain explicitly: the stack is not storing answers — it is storing candidates whose answers haven't been determined yet.
Every element you push is waiting for something. In next greater element, it's waiting for a larger element to arrive. In trapping rain water, it's waiting for a taller wall. In sliding window maximum, it's waiting to become the window's maximum after larger elements expire.
When the right event occurs — a larger element appears, a window boundary passes — the stack processes all waiting candidates whose fate is now determinable. That's the pop loop. The remaining elements stay on the stack because their fate is still open.
This reframing matters if you're targeting AI engineering roles at product companies. In a Flipkart system design conversation, reasoning about "what information is still pending resolution vs. what can be committed" is a distributed systems thinking pattern. Interviewers at AI product companies notice when a candidate's algorithmic reasoning maps to higher-order engineering concepts.
In 2026, GitHub Copilot can generate a monotonic stack implementation if you describe the problem correctly. But generating correct code and understanding why the structure works — and being able to explain that under interview conditions — are different skills. The latter is what product company interviewers are evaluating when they probe your reasoning after you write the code.
Insider knowledge signal: Uber India's Bangalore engineering interviews, based on documented candidate reports across 2023–2025, consistently include at least one problem from the sliding window maximum or next greater element family in the first technical round. The signal they're looking for isn't just the correct solution — it's whether you narrate the elimination principle unprompted.
Why engineers fail MAANG DSA rounds often comes down to this exact gap: producing correct brute-force solutions without recognising the structural pattern the interviewer is probing for.
The AI-Era Context: Does DSA Still Matter When Copilot Exists?
In 2026, some data engineers ask whether mastering monotonic stack problems is necessary when GitHub Copilot can generate the code from a problem description. The answer is yes — and the reasoning matters specifically for your target role. AI engineering positions at Flipkart, Swiggy, and Uber India require engineers who can evaluate AI-generated code for correctness, reason about time complexity trade-offs in production systems, and make architectural decisions that AI tooling cannot make autonomously. A candidate who can only prompt Copilot into a solution but cannot explain why the solution is O(n) — or when it breaks — will not clear the second half of a technical interview round. The DSA screening exists precisely to test this reasoning layer.
Data engineers transitioning to AI engineering roles have a structural advantage here: you already think about pipeline efficiency and data volume at scale. The question a Flipkart interviewer is really asking with a sliding window maximum problem is not "do you know this trick?" It's "can you reason about throughput, bottlenecks, and efficient data processing?" That's your domain. The monotonic deque DSA pattern is just the algorithmic vocabulary for that reasoning.
According to LinkedIn's 2026 India Tech Hiring Report, AI Engineer and ML Engineer roles in Indian product companies have grown 3.4x since 2023, with Bengaluru, Hyderabad, and Chennai accounting for 68% of posted positions. The median advertised CTC for AI Engineers at mid-stage product companies sits between ₹22–38 LPA for engineers with 4–7 years of combined data and ML engineering experience.
What the Stock Span Problem Teaches You About Interview Thinking
The stock span problem — given daily stock prices, compute for each day how many consecutive preceding days had a price less than or equal to today's price — is a clean illustration of how the monotonic stack generalises. It's the previous greater element variant, processed left to right.
def stock_span(prices):
    stack = []  # stores indices; prices at those indices are decreasing
    spans = []
    for i, price in enumerate(prices):
        span = 1
        while stack and prices[stack[-1]] <= price:
            span += spans[stack.pop()]  # absorb the popped day's resolved span
        stack.append(i)
        spans.append(span)
    return spans
What makes this problem valuable for interview prep — beyond the solution itself — is the span accumulation pattern. Instead of recomputing the span from scratch each time, you accumulate spans from popped elements. This is a compound elimination: you're not just discarding irrelevant candidates, you're absorbing their resolved information into the current element's answer.
This is a Swiggy and Razorpay screening question variant. Candidates who understand the accumulation pattern handle the follow-up question — "what if prices can be equal?" — without recoding from scratch. That follow-up is the actual test.
Engineers in Chennai, Hyderabad, and Bengaluru consistently report these pattern variations appearing in the second or third question of a 60-minute DSA screen. If you've solved the base problem but haven't seen the variants, you're prepared for only part of the round. Preparing for product company interviews in India requires deliberate exposure to these variants — not just the canonical LeetCode problem.
Key takeaway: The stock span problem is the monotonic stack applied to accumulative range queries. Once you understand that spans from popped elements carry forward resolved information, the pattern extends naturally to histogram problems and beyond.
Prerequisite Map: What You Need to Know and What This Unlocks
What you need before this pattern:
- Python stack operations: `append()`, `pop()`, `[-1]` indexing — you already have this
- `collections.deque` with `appendleft()`, `popleft()` — 10 minutes of documentation
- Big O notation intuition: the difference between O(n) and O(n²) at scale — you have this from Spark optimisation
- Basic array traversal patterns — you have this
What this pattern unlocks:
- Largest rectangle in histogram → leads directly to maximal rectangle in a matrix (2D extension)
- Sliding window maximum → prerequisite for understanding segment trees and sparse tables
- Monotonic deque pattern → directly applicable to task-scheduling problems in distributed systems design
- Pattern recognition speed: once you can identify monotonic stack problems on sight, your time-to-correct-approach in interviews drops significantly
For data engineers specifically: you've already solved sliding window problems in SQL with window functions (`MAX() OVER (PARTITION BY ... ORDER BY ... ROWS BETWEEN ...)`). The monotonic deque is the explicit algorithmic implementation of what the query planner does internally. This connection isn't in most DSA courses — but it's the bridge that makes the pattern genuinely intuitive for your background.
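To make that bridge concrete, here is a small self-contained sketch (my own illustration) that sets the declarative window-max semantics, the thing the SQL frame clause expresses, side by side with the explicit deque implementation and checks they agree:

```python
from collections import deque

def window_max_naive(nums, k):
    # What MAX() OVER (... ROWS BETWEEN k-1 PRECEDING AND CURRENT ROW) expresses
    return [max(nums[i - k + 1:i + 1]) for i in range(k - 1, len(nums))]

def window_max_deque(nums, k):
    dq, out = deque(), []
    for i, num in enumerate(nums):
        if dq and dq[0] <= i - k:
            dq.popleft()   # drop the one index that just left the window
        while dq and nums[dq[-1]] < num:
            dq.pop()       # smaller values can never be a future window max
        dq.append(i)
        if i >= k - 1:
            out.append(nums[dq[0]])
    return out

nums = [1, 3, -1, -3, 5, 3, 6, 7]
assert window_max_naive(nums, 3) == window_max_deque(nums, 3) == [3, 3, 5, 5, 6, 7]
```

The naive version is what the query planner's semantics promise; the deque version is how to deliver those semantics in O(n) instead of O(nk).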
Explore DSA study plans for working engineers to structure monotonic stack practice alongside your current workload without burning out.
How FutureJobs Can Help You Close This Gap
The dominant constraint for a data engineer targeting AI engineering roles in 2026 is not time spent learning — it's learning the right things in the right sequence. Four years of pipeline engineering gives you technical depth that a fresh ML graduate doesn't have. But four DSA screen rejections tell you clearly that Python fluency in Airflow and dbt doesn't automatically translate to interview-ready algorithmic thinking.
FutureJobs' DSA & System Design with AI program is built for working professionals at exactly this stage. The Advanced Problem Solving module covers monotonic stack and queue patterns, time complexity optimisation frameworks, and hard problem recognition — not as isolated puzzles, but as part of the 15 DSA patterns that cover 80% of what Flipkart, Uber, Swiggy, and Razorpay actually test. The curriculum explicitly connects algorithmic patterns to AI engineering role requirements, because the program is designed for engineers transitioning from data and backend roles into product company AI positions.
The schedule is evening and weekend delivery — built around a full-time job, not despite one. The program spans 5 months with 240+ hours of live instruction, 1:1 FAANG mentor support throughout, and a pay-after-placement model where the effective upfront cost is ₹5,000, with the remainder structured post-offer. That's the structural difference between FutureJobs and programs like Scaler or AlmaBetter that charge ₹1.5–2.44 lakh upfront regardless of outcome — FutureJobs' model means the program's incentive is aligned with your placement, not your enrollment.
For a data engineer with 4 years of experience and a clear target role, the program isn't a beginner DSA course. It's a structured gap-closure path with mentors who've made service-to-product and data-to-AI transitions and can map your specific background to what interviewers at your target companies are evaluating. Over 4,500 learners have enrolled across FutureJobs programs, backed by Impacteers' 25-year recruitment network and 3,000+ hiring partners.
Frequently Asked Questions
How long does it take to get comfortable with monotonic stack problems for a DSA interview?
Most engineers with solid Python fundamentals reach interview-level confidence with monotonic stack problems in 2–3 weeks of focused practice — roughly 8–12 hours total. The pattern has limited variants: next greater/smaller, previous greater/smaller, and sliding window max/min. Once you internalise the elimination principle across the five canonical problems, recognising new instances mid-interview becomes instinctive rather than effortful.
I'm a data engineer with 4 years of experience. Will Flipkart count that as relevant for an AI Engineer role, or will they screen me out?
Flipkart's AI and data engineering roles in 2026 explicitly list data pipeline and warehouse experience as preferred qualifications. Your background is relevant — the screening issue is that the DSA round is identical for AI Engineer and backend SDE roles. Your 4 years count downstream; the problem is clearing round one. That's the specific gap to close, and it's a narrower gap than starting DSA from scratch.
How does FutureJobs' Advanced Problem Solving module differ from just grinding LeetCode for monotonic stack problems?
The FutureJobs module structures patterns in the sequence that product company interviews test them, with mentor-guided sessions where you're evaluated on reasoning narration — not just solution correctness. LeetCode gives you problems. The program gives you the pattern recognition framework and real Flipkart and Uber interview simulations so you know exactly what narration style and approach depth the interviewer expects in a live round.
What is the difference between a monotonic stack and a monotonic deque for DSA interviews?
A monotonic stack processes problems where you need the next or previous greater/smaller element for each index — one directional pass. A monotonic deque is used for sliding window maximum or minimum problems where the window moves and expired elements must be efficiently removed from the front. The deque's double-ended access (O(1) from both ends) is what makes it the correct structure for window problems, while a plain stack cannot remove expired front elements efficiently.
Can I prepare for Flipkart and Uber DSA screens while working full-time in Chennai?
Yes — and the FutureJobs program is specifically scheduled for this. Evening and weekend sessions mean you don't take leave or pause work. The 5-month timeline with 15–20 hours per week is designed for working professionals in Chennai, Hyderabad, and Bengaluru who cannot afford full-time study. Engineers in our network report that structured evening prep over 4–5 months is sufficient to move from TLE on medium problems to clearing product company DSA screens consistently.
Final Thoughts
The monotonic stack and queue pattern is not an obscure competitive programming technique. It's a documented gap between data engineers who solve real production problems daily and the specific algorithmic vocabulary that product company interviewers in India use to screen candidates in 2026. Next greater element, sliding window maximum, trapping rain water, largest rectangle in histogram, stock span — these five problems cover the pattern comprehensively. Learn them in order. Master the elimination principle. Understand why each element is pushed once and popped once. Practise narrating that reasoning aloud, not just writing the code.
Your data engineering background — the pipeline thinking, the throughput optimisation instincts, the SQL window function intuition — maps directly onto what these problems are testing. You are not starting from zero. You are filling a specific structural gap that has a clear boundary and a clear solution.
The next Flipkart or Swiggy screening round that includes a sliding window maximum problem is solvable. You now know the pattern, the structure, and the mental model. The next step is deliberate practice on the variants and interview simulation — which is exactly what a structured program with mentor feedback delivers.
