As artificial intelligence moves from static, prompt-driven systems to autonomous, goal-oriented agents, the underlying data infrastructure must evolve just as rapidly. In this exclusive interview, Tech Achieve Media speaks with Aveekshith Bushan, Vice President and General Manager – APJ at Aerospike, about how GenAI 2.0 is reshaping enterprise architectures, why real-time decisioning has become a non-negotiable requirement, and what CIOs must do now to prepare for an AI-driven future.
TAM: What differentiates GenAI 2.0 from earlier models, and why does it demand real-time infrastructure to deliver on its promise?
Aveekshith Bushan: Artificial intelligence is evolving at an extraordinary pace, with each day bringing new developments. The progression from predictive to generative and now to agentic AI marks a fundamental shift in how these systems operate.
A key change is that AI is becoming increasingly goal-oriented. Traditional generative AI models were largely static, relying on pre-trained data to provide information or generate content. They were reactive rather than proactive, delivering answers but not taking meaningful action.
Agentic AI, by contrast, is designed to achieve objectives autonomously. Consider a simple example: planning a trip. With generative AI, you might receive a list of flight and hotel options. An agentic AI system, however, can manage the entire process. After you specify your goal (say, traveling to a destination of your choice during a particular week), the AI can:
- Identify the best travel options at a suitable price,
- Book the tickets,
- Secure accommodation, and
- Handle any additional arrangements.
This represents a shift from providing information to executing a sequence of interconnected steps, where each action builds on the last to fulfill a defined objective.
Another significant development lies in data integration. Instead of relying solely on the static knowledge embedded within large language models, agentic systems can access live data from external databases, APIs, tools, and search engines. This allows them to make decisions dynamically, using the most current information available.
In essence, AI is moving into an era of autonomy and action, where systems are not just descriptive but are capable of planning, decision-making, and delivering results with a clear goal in mind.
Building on this shift toward autonomy and goal-oriented AI, the way decisions are made is also evolving. In this new world, decisions rely on streaming data (information arriving continuously and in real time) rather than on legacy infrastructure. If you depend on older systems, you can’t react quickly enough, and when it comes to real-time decision-making, speed is everything. The longer it takes to act, the greater the risk of failure or inefficiency.
Context becomes even more critical in this environment. With generative AI, context was already important; with agentic AI, it’s absolutely essential. Predictive models typically answered straightforward questions, for example, “Is this transaction fraudulent or not?” But agentic AI systems need to make sequential, goal-driven decisions in real time, where every step depends on data from the previous one.
This is why low latency, the ability to access and process large volumes of data instantly, is a non-negotiable requirement. Data isn’t just large; it’s also highly contextual and often time-bound. Each stage of an agentic workflow may require its own dataset that only needs to live temporarily before being discarded.
Take the earlier example of planning a trip. Step one might involve finding all five-star hotels within a certain price range for the chosen week. Step two could narrow the list to hotels that allow pets. Once step two is complete, the data from step one is no longer needed. This concept, known as time to live (TTL), is vital for agentic systems, yet many legacy data platforms don’t support it effectively.
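As an illustration of the TTL idea (my sketch, not something from the interview), here is how per-record expiry might look with the Aerospike Python client; the namespace, set, bin names, and TTL value are all hypothetical.

```python
import aerospike

# Connect to a local Aerospike cluster (host and port are assumptions for this sketch).
config = {"hosts": [("127.0.0.1", 3000)]}
client = aerospike.client(config).connect()

# Intermediate result from step one of the hypothetical trip-planning workflow.
key = ("agent_ns", "workflow_state", "trip-123:step1-hotels")
bins = {"hotels": ["Hotel A", "Hotel B", "Hotel C"], "max_price": 250}

# Write the record with a 10-minute TTL; once step two has consumed it,
# the data simply expires instead of lingering in the store.
client.put(key, bins, meta={"ttl": 600})

client.close()
```

The point is that expiry is handled by the platform itself, record by record, rather than by application-side cleanup jobs.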
Platforms like Aerospike, however, are built for this. They provide:
- Real-time data access at scale
- Ultra-low latency for rapid decision-making
- Granular control over how long specific data persists
In contrast, traditional systems struggle with scaling and performance. They often require preloading data, lack the flexibility to handle streaming inputs, and cannot update datasets dynamically at the speed required by agentic AI. Ultimately, the goal is personalization at scale, which is one of the hallmarks of what can be called GenAI 2.0. Even the latest large language models, such as GPT-5, are far more personalized than their predecessors. They adapt to individual goals, preferences, and context in ways that were simply not possible before.
TAM: How does the shift from static prompts to autonomous, agentic AI change the underlying requirements for data systems?
Aveekshith Bushan: Think of any evolutionary system, whether it’s a human brain or a technological breakthrough, and one thing becomes clear: you can’t skip steps. Each stage lays the groundwork for the next. Without the invention of the light bulb, for instance, we wouldn’t have the computer systems we rely on today. Evolution, whether biological or technological, is sequential.
Autonomous AI systems are no different. They evolve the way humans learn. When you first learn to drive, every decision, when to turn, when to brake, is conscious and sometimes nerve-wracking. Over time, as your brain gathers experience, your subconscious takes over. You make split-second decisions without even thinking about them, often while multitasking.
Autonomous systems function similarly. They process inputs, recognize patterns, and make decisions automatically. The more data they have, the more accurate their predictions become. Even generative AI follows this logic: it works by calculating, word by word, which word is most likely to appear next, building coherent sentences from statistical prediction.
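As a toy illustration of that word-by-word prediction (mine, not the speaker’s), the snippet below samples the next word from a made-up probability distribution; real models compute these probabilities from context with a neural network.

```python
import random

# Made-up probabilities for the next word, given some context.
next_word_probs = {"flight": 0.45, "hotel": 0.30, "train": 0.15, "museum": 0.10}

# A generative model effectively repeats this step: pick the next word in
# proportion to its predicted probability, append it, and predict again.
words, weights = zip(*next_word_probs.items())
print(random.choices(words, weights=weights, k=1)[0])
```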
But here’s where autonomy raises the stakes. Whether you’re talking about generative AI or more advanced agentic systems, they must have access to large volumes of real-time data to make reliable decisions. Without it, the results are either incomplete or outright wrong.
Why Real-Time Data Matters
Imagine planning a vacation. The system needs live information: newly available hotels, current prices, up-to-date availability. Without real-time data, you risk inferior recommendations. And it’s not just about the data; latency matters just as much. If a system can’t process information fast enough, the chain of decisions breaks down.
This becomes critical as we move toward true personal AI assistants: agents that don’t just answer questions but handle end-to-end tasks. You won’t just ask for flight suggestions; you’ll tell your assistant you’re going on vacation, and it will:
- Update your calendar,
- Apply for leave using your company’s HR tools,
- Book your travel and accommodation,
- Even sync your family’s schedules automatically.
Each of these steps relies on accurate data, fed into the system in real time. And here’s the catch: if one step fails, every subsequent step is compromised. This is why “hallucinations” in AI, when a model produces incorrect or fabricated information, are such a concern. In a sequential, goal-driven system, even a single error can cascade downstream.
Just as human decision-making becomes better with experience, autonomous AI systems become better with more data, better infrastructure, and lower latency. Evolution is step-by-step, whether it’s the human brain learning to drive or AI learning to plan your life. And skipping steps isn’t an option.
TAM: What are the risks of deploying GenAI 2.0 on legacy infrastructure?
Aveekshith Bushan: If you want to modernize your data architecture, even before talking about AI models, the first step is addressing the limitations of legacy infrastructure. The core issues with legacy systems are speed and accuracy. They struggle to process large volumes of data in real time, which means recommendations and decisions become slower and less precise. In today’s world, where customers expect instant responses, this is unacceptable. Applications that lag or deliver poor experiences lose users quickly.
A good example comes from the banking sector during India’s UPI revolution. Many traditional banks, constrained by legacy systems, could not compete with new digital-first players that built their platforms from scratch. These newer entrants gained significant market share because they were able to move faster, scale easily, and deliver seamless customer experiences.
The lesson is clear: modernization is no longer optional. But many organizations can’t simply rip and replace their core infrastructure. They’ve invested heavily over decades, and critical systems like core banking platforms are too integral, and too risky, to overhaul in the short term.
So, what’s the alternative? Augmentation. Rather than replacing core systems, enterprises can run a high-performance data platform on top of their existing infrastructure. For example, if a core banking system runs on Finacle with Oracle as its database, you can layer a fast, modern platform like Aerospike on top of it. This approach dramatically accelerates data lookups and enables real-time decision-making without disrupting core operations.
This concept, often called “hollowing the core,” is gaining traction in the enterprise world. By placing Aerospike or similar low-latency platforms above existing systems, organizations can:
- Serve data to modern applications instantly,
- Handle streaming inputs far more efficiently,
- Gradually shift real-time workloads to the modern layer, while keeping legacy systems for reporting, auditing, and compliance.
The result is a hybrid architecture that combines the reliability of proven core systems with the speed and flexibility of modern platforms. Instead of costly, high-risk replacements, enterprises can modernize incrementally, ensuring their infrastructure is ready for today’s real-time, data-driven world.
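To make the augmentation pattern concrete, here is a rough read-through sketch (an illustration under assumed names, not a description of any specific deployment): reads hit the fast layer first, fall back to the core database on a miss, and cache the result with a TTL.

```python
import aerospike
from aerospike import exception as aero_ex

# Connection details, namespace, and set names are assumptions for this sketch.
fast_layer = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()

def fetch_account(account_id, core_db_lookup, ttl_seconds=300):
    """Read-through lookup: try the low-latency layer, fall back to the core system."""
    key = ("bank_ns", "accounts", account_id)
    try:
        _, _, record = fast_layer.get(key)
        return record                              # served from the modern layer
    except aero_ex.RecordNotFound:
        record = core_db_lookup(account_id)        # stand-in for the existing core-banking query
        fast_layer.put(key, record, meta={"ttl": ttl_seconds})
        return record
```

Here `core_db_lookup` stands in for whatever query path the existing core system already exposes; the pattern leaves that system untouched while real-time reads migrate to the layer above it.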
TAM: What infrastructure components are now considered ‘must-haves’ for supporting GenAI agents?
Aveekshith Bushan: Let’s break this down step by step, because modernizing a data platform isn’t just about adopting AI models; it starts with how data itself moves, is stored, and is made available for decision-making.
1. Streaming data in real time
The first question to ask is: Do you have the infrastructure to stream data as it arrives?
Think of a car driving through traffic. Sensors detect road conditions, weather, and congestion in real time. This data could be textual, audio, or video, and increasingly, it’s all of the above. Modern systems must be able to capture these diverse streams of data and process them immediately.
Typically, data flows from a messaging system (like Kafka, Confluent, Redpanda, or JMS-based systems) into a high-speed data platform such as Aerospike. This “write-heavy” workload involves recording each event rapidly so it can be acted upon without delay.
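A minimal sketch of that write path, assuming the confluent-kafka and Aerospike Python clients; the topic, namespace, and field names are placeholders.

```python
import json
import aerospike
from confluent_kafka import Consumer

# Fast data platform that absorbs the write-heavy stream (hosts are assumptions).
store = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()

# Kafka consumer for a hypothetical event topic.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "event-writers",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["sensor-events"])

while True:
    msg = consumer.poll(1.0)                 # wait up to one second for the next event
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    # Persist each event immediately so downstream decisioning can read it
    # with millisecond latency; expire raw events after an hour.
    key = ("telemetry", "events", event["event_id"])
    store.put(key, event, meta={"ttl": 3600})
```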
2. Fast access to data for decision-making
Once data is written, you need extremely fast retrieval. This is critical for use cases like retrieval-augmented generation (RAG), where internal, contextual data is combined with information from public LLMs. For this to work effectively, your data platform must deliver millisecond-level lookups at scale.
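As a simple illustration of pairing a fast internal lookup with a public LLM (again a sketch; the store layout, prompt, and `call_llm` helper are hypothetical):

```python
import aerospike

store = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()

def answer_with_context(customer_id, question, call_llm):
    # Millisecond-scale lookup of internal, contextual data for this customer.
    _, _, profile = store.get(("crm", "profiles", customer_id))

    # Retrieval-augmented prompt: internal context plus the user's question.
    prompt = (
        f"Customer context: {profile}\n"
        f"Question: {question}\n"
        "Answer using only the context above."
    )
    return call_llm(prompt)   # call_llm is a stand-in for whichever LLM API is in use
```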
3. Always-on availability
In real-time systems, downtime isn’t just inconvenient; it’s catastrophic. If your platform goes down, you’re making decisions on stale data. That might be acceptable in some contexts, like ad targeting, but it’s unacceptable in payments, telecom, or mission-critical applications where regulators are watching.
True 24×7 availability requires a distributed system, both at the data platform level and in the streaming engine, to eliminate single points of failure.
4. Features designed for high-speed environments
Modern data platforms must also support advanced features such as:
- Time-to-live (TTL): Automatically refreshing or expiring data.
- Feature stores: Serving AI models with fresh, queryable data.
- Low-latency architecture: Processing millions of events per second without lag.
These are essential in environments where decisions must be both fast and correct.
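To illustrate the feature-store idea specifically (a sketch with hypothetical namespace and feature names), a model-serving path would read only the freshest feature values for an entity just before scoring:

```python
import aerospike

store = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()

def get_features(user_id, feature_names):
    """Fetch only the requested feature bins for a user, fresh from the store."""
    key = ("features", "user_features", user_id)
    _, _, bins = store.select(key, feature_names)
    return [bins.get(name, 0.0) for name in feature_names]

# These values would feed straight into a fraud or recommendation model.
vector = get_features("user-42", ["txn_count_1h", "avg_amount_24h", "device_risk"])
```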
5. Edge-to-core architecture
We’re moving from centralized, core-based decisioning to edge-based decisioning. In the past, most computation happened in the data center (“the core”), resulting in stale or delayed decisions. Today, decisions increasingly happen at the edge: on your smartphone, in your car, and at the point of payment. The core systems (such as banking data centers) remain in place, but the edge must be real-time, low-latency, always available, and cost-efficient.
6. Cost efficiency matters
In markets like India, cost is a decisive factor. If an infrastructure upgrade is too expensive, it won’t be approved by CIOs regardless of its technical merits. Aerospike addresses this by significantly reducing infrastructure footprint. For example, a project costing $1 million on a legacy stack could potentially run for $300,000 on Aerospike, freeing up capital for other initiatives.
Why this matters for GenAI 2.0
As organizations roll out their next-generation AI initiatives, success will depend on:
- Real-time data streaming,
- Ultra-fast retrieval,
- High availability, and
- Low cost at scale.
Technologies like Aerospike solve these challenges simultaneously, enabling enterprises to modernize without ripping out legacy systems while preparing for a world where AI decisioning must happen instantly, at massive scale, and at the edge.
TAM: In terms of architecture, what defines a “GenAI 2.0 ready” enterprise today, and what steps should CIOs be taking now?
Aveekshith Bushan: Generative AI (GenAI) is fundamentally reshaping how businesses innovate, create, and compete. Over the past year, organizations have moved from experimentation to execution, integrating GenAI into customer engagement, product design, and operational efficiency. Yet, as transformative as the technology is, its success hinges on thoughtful implementation, responsible use, and a deep understanding of both its possibilities and its limits.
The most significant shift GenAI brings is its ability to democratize creativity and insight. With powerful models accessible through APIs and cloud platforms, even small teams can rapidly prototype new solutions, generate rich content, and analyze complex data sets. This levels the playing field and accelerates time-to-market. However, technology alone is not enough. Enterprises must align GenAI deployments with real business needs, fostering cross-functional collaboration between technical experts, domain leaders, and decision-makers.
Challenges remain, chief among them accuracy, data privacy, and the need for strong governance. AI-generated content is only as reliable as the data it is trained on, and without robust oversight, organizations risk amplifying bias or making decisions on flawed insights. Establishing transparent policies and clear human-in-the-loop review mechanisms is no longer optional; it’s essential for maintaining trust with customers and regulators alike.
Equally important is recognizing that GenAI is not a replacement for human intuition but an amplifier of it. While AI can produce insights at scale, the judgment to act on those insights must still come from people who understand the broader context, values, and long-term impact of their decisions. Leaders who cultivate this balance, embracing automation without surrendering critical thinking, will be best positioned to capture GenAI’s full potential.
In 2025 and beyond, the organizations that thrive will be those that treat GenAI not as a plug-and-play tool but as a strategic capability. This means investing in upskilling teams, building ethical frameworks, and continuously refining AI systems to meet evolving market realities. Generative AI is not just changing how businesses operate; it’s redefining how they imagine what’s possible.
TAM: Final thoughts.
Aveekshith Bushan: One important trend to recognize, particularly on the data side, is that even the underlying data models are evolving. Traditionally, interactions were fairly straightforward. For example, if you were purchasing something online, the process resembled a simple Q&A flow: you ask a question, receive a response, then ask another. It was reactive, context-specific, and largely transactional.
Now, that paradigm is shifting. Relationships between entities are becoming just as important as the individual data points themselves. This is where graph data models come into play, allowing us to ask richer questions like, “Show me everyone connected to everyone else, and how those relationships influence the query at hand.” This additional relational context fundamentally changes how systems understand and serve information.
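A toy sketch of that relationship-centric question (illustrative only; in a real graph platform this would be a graph query language rather than in-memory Python):

```python
from collections import deque

# A tiny, made-up relationship graph: who is connected to whom.
edges = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave", "erin"],
    "dave": [],
    "erin": [],
}

def connections(start, max_hops=2):
    """Breadth-first walk: everyone reachable from `start` within max_hops."""
    seen, frontier = {start: 0}, deque([start])
    while frontier:
        node = frontier.popleft()
        if seen[node] == max_hops:
            continue
        for neighbor in edges.get(node, []):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                frontier.append(neighbor)
    return {node: hops for node, hops in seen.items() if node != start}

print(connections("alice"))   # {'bob': 1, 'carol': 1, 'dave': 2, 'erin': 2}
```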
In parallel, there is a growing move toward vector data platforms. In retrieval-augmented generation (RAG) pipelines, for instance, organizations increasingly rely on fast, high-performance data stores that support vectorized search. Rather than exact matches on keys or indexes, these platforms enable probabilistic queries, using embeddings, clustering algorithms, and similarity searches to find information most relevant to the context.
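And a minimal sketch of the similarity-search idea behind vector platforms (plain NumPy with made-up embeddings; a production system would use an approximate vector index rather than brute force):

```python
import numpy as np

# Made-up document embeddings; in practice these come from an embedding model.
docs = {
    "refund policy": np.array([0.90, 0.10, 0.05]),
    "travel insurance": np.array([0.20, 0.80, 0.10]),
    "pet-friendly hotels": np.array([0.10, 0.30, 0.90]),
}
query = np.array([0.15, 0.25, 0.85])   # embedding of the user's question

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank documents by similarity to the query: the essence of vectorized search.
ranked = sorted(docs, key=lambda name: cosine(query, docs[name]), reverse=True)
print(ranked[0])   # most contextually relevant document
```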
Together, these shifts make data platforms and databases far more nuanced than they were even a generation ago. As context deepens, from simple transactions to relationship-driven and vector-based models, the underlying infrastructure must evolve to keep up. This is a significant step forward from the reactive, single-turn Q&A approach of earlier systems, including the first iterations of large language models.