Wednesday, March 19, 2025

    AI-First but Not AI-Only: Satyam Upadhyay, Co-Founder, Tradomate.one

    Generative AI (GenAI) is transforming industries at an unprecedented pace, driving innovation and redefining workflows across various sectors. To explore the nuances of this transformative technology, we spoke with Satyam Upadhyay, Co-Founder of Tradomate.one, a trailblazer in AI-driven solutions. Satyam shares his expert insights on the current state of GenAI adoption, the challenges faced by industries, and the strategies organizations can implement to unlock its full potential. From ensuring data readiness and ROI scalability to mitigating AI hallucinations and envisioning the future of multimodal systems, his perspectives illuminate a roadmap for businesses navigating the GenAI revolution.

    TAM: How do you perceive the current state of Generative AI adoption across industry verticals?

    Satyam Upadhyay: Gen AI adoption typically progresses from early experimentation to scale. Tech-driven sectors like FinTech and media have embraced Gen AI more effectively than traditional sectors such as manufacturing or retail. These traditional industries are still in the early stages of adoption because they have many more use cases left to address. Tech-heavy sectors, on the other hand, have already solved many of the initial challenges, making adoption smoother. Traditional industries, being more brick-and-mortar and supply-chain-centric, naturally face a slower transition.

    In my experience, having worked in advanced analytics for over a decade, this trend is evident. Gen AI started with broad-based natural language processing models around 2017–2018. Since then, we’ve advanced to large language models and now to smaller, specialized models tailored for specific tasks. The primary challenge, however, lies in integrating these technologies into core workflows. While there have been excellent proofs of concept (POCs) and significant progress in enterprise-focused applications, such as chatbots and RAG integration, widespread scalability for addressing major pain points remains elusive. To achieve this, industries need to focus on adopting Gen AI across various verticals, ensuring seamless integration into their processes.

    TAM: What are three important things one must consider when adopting GenAI to solve problems in organizations?

    Satyam Upadhyay: One of the key considerations for adopting Gen AI is ensuring data readiness. AI is only as effective as the quality of the data it learns from. Therefore, it’s crucial to provide high-quality, unbiased data for these models to function optimally.

    The second consideration is the return on investment (ROI) and scalability. At its core, Gen AI adoption must align with the “faster, better, cheaper” framework. Does it reduce costs? Does it improve efficiency? Does it create new revenue streams? If it doesn’t fulfill at least one of these criteria, it might be worth reconsidering, as adopting Gen AI simply for the sake of using the technology can lead to inefficient implementation across workflows.

    Finally, security and compliance are critical. Governance will play a pivotal role in determining whether Gen AI can solve problems effectively within an organization. Without proper governance, Gen AI can hallucinate extensively, exposing sensitive data and introducing biases. Ensuring robust governance is essential, perhaps even more so than ethical and responsible use, when it comes to successful adoption.

    TAM: Just as many organizations now take a cloud-first approach when deciding on infrastructure, how likely is it that leaders will take an AI-first approach to solving problems?

    Satyam Upadhyay: Yes, I believe organizations can and should aim to be AI-first. The idea is to experiment with AI, see the outcomes, and then adjust accordingly. I agree with that sentiment, but there’s an important caveat: it can be AI-first, but it should not be AI-only.

    Unlike cloud adoption, where everything eventually moved to the cloud, AI adoption is different. For example, during the shift to the cloud, organizations didn’t opt to keep half their infrastructure on-premises while moving the rest to the cloud—it was typically an all-in approach. However, with AI, such an “all-in” approach isn’t advisable.

    AI-first is an excellent strategy, but AI-only isn’t, because AI is meant to augment human decision-making, not replace it entirely. Human intuition remains invaluable and is what truly sets individuals apart. Moreover, there are challenges to consider: AI is compute-intensive and expensive.

    When adopting AI, businesses need to assess cost and infrastructure while adhering to the “faster, better, cheaper” framework. Scalability and ROI should always remain central to the decision-making process. That said, the benefits of AI are undeniable—it offers unparalleled speed and accuracy and has the potential to unlock new revenue streams that can drive business growth. However, organizations must remain cautious and avoid common pitfalls during AI adoption.

    TAM: What strategies or best practices can businesses adopt to mitigate the risks associated with AI hallucinations while deploying Generative AI solutions effectively?

    Satyam Upadhyay: Taking a step back, let’s consider why hallucinations happen in AI models. All the large language models (LLMs) available today, whether from OpenAI, Google (Gemini), Anthropic, or others, generate responses based on probabilities, not true understanding. This means that when context is lacking or a query is ambiguous, the models can produce misleading outputs. Additionally, since the training data for these models isn’t always transparent, inherent biases can creep in.

    For example, in our company, Tradomate, we use a GenAI-based screener where users might ask for “the best stocks.” But the definition of “best” varies from person to person—one might focus solely on price movements, while another might consider both price and company fundamentals. There’s no universal answer. If you ask an LLM like ChatGPT for “the best stocks,” it will generate a response based on its probabilistic understanding, which may or may not align with your specific criteria. This is an example of how hallucinations occur—it’s all about probabilities and the absence of clear context.
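    One way to remove that ambiguity is to make the criteria explicit before the query ever reaches the model. The sketch below is a hypothetical illustration, not Tradomate's actual screener: the criteria names and weights are assumptions chosen for the example.

```python
# Hypothetical sketch (not Tradomate's implementation): rewrite a vague
# screener query with explicit, weighted criteria so the model does not
# have to guess what "best" means. Criteria names and weights below are
# illustrative assumptions.

def build_screener_prompt(query: str, criteria: dict) -> str:
    """Turn an ambiguous query into one bound to explicit criteria."""
    criteria_lines = "\n".join(
        f"- {name} (weight {weight:.0%})" for name, weight in criteria.items()
    )
    return (
        f"User query: {query}\n"
        "Rank stocks ONLY by these criteria, in order of weight:\n"
        f"{criteria_lines}\n"
        "If data for a criterion is unavailable, say so instead of guessing."
    )

prompt = build_screener_prompt(
    "best stocks",
    {"6-month price momentum": 0.5, "earnings growth": 0.3, "debt-to-equity": 0.2},
)
print(prompt)
```

    Pinning the definition of "best" in the prompt itself turns a probabilistic guess about intent into a constrained ranking task.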

    To mitigate such issues, here are some strategies:

    1. Integrate External Knowledge Sources:
      Enhance the AI’s understanding by connecting it to structured or unstructured data. For instance, you could provide access to a Google Drive folder for context or share transcripts of recordings if you’re summarizing interviews. This way, the AI can create a more accurate and coherent output based on real-world data.
    2. Use Fine-Tuned Models:
      Training models specifically for a particular domain can help reduce hallucinations. For certain tasks, small language models (SLMs) can be a better choice than large language models (LLMs). SLMs are tailored, with far fewer parameters than massive LLMs, making them more precise for specialized tasks.
    3. Keep Humans in the Loop:
      AI should augment human efforts, not replace them. For example, in legal contracts like MSAs, an AI might extract clauses and generate summaries, but a human should still review the output. This oversight significantly improves efficiency. While a task might take 80 hours without AI, using AI could reduce it to 20 hours, saving 60 hours. However, the human review ensures accuracy and mitigates risks.
    4. Leverage Prompt Engineering:
      Crafting prompts strategically can lead to better outputs. For example, asking the AI to provide a probability score or structuring the prompt to deliver responses in a specific format can help avoid misleading results.
    5. Implement Reinforcement Learning:
      Utilizing feedback loops to refine AI outputs over time can also minimize hallucinations. Continual adjustments based on real-world performance improve the system’s reliability.
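    The first strategy, grounding the model in external knowledge, can be sketched in a few lines. This is a deliberately minimal illustration: the keyword-overlap retriever stands in for a real embedding-based vector search, and the documents are invented for the example.

```python
# Minimal sketch of strategy 1 (integrating external knowledge): retrieve
# the snippets most relevant to a query and prepend them to the prompt so
# the model answers from supplied facts rather than probability alone.
# The keyword-overlap scoring is a simplification; a real system would use
# embeddings and a vector store.

def retrieve(query: str, documents: list, k: int = 2) -> list:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(query_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(query: str, documents: list) -> str:
    """Build a prompt instructing the model to answer only from context."""
    context = "\n".join(
        f"[{i + 1}] {d}" for i, d in enumerate(retrieve(query, documents))
    )
    return (
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer using ONLY the context above; if it is insufficient, say so."
    )

docs = [
    "The MSA renewal clause requires 90 days written notice.",
    "Quarterly revenue grew 12% year over year.",
    "The MSA liability cap is twice the annual contract value.",
]
print(grounded_prompt("What notice does the MSA renewal clause require?", docs))
```

    The explicit "answer only from the context" instruction, combined with retrieved evidence, is what pushes the model away from free-form probabilistic generation.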

    While hallucinations are a challenge, employing techniques like integrating external data, using fine-tuned or smaller models, maintaining human oversight, and applying effective prompt engineering can significantly mitigate the risks. AI should always be seen as a tool to enhance human capabilities, not as a standalone replacement.

    TAM: Where do you see the most significant growth opportunities for GenAI in the next few years?

    Satyam Upadhyay: The biggest growth opportunity we see in Gen AI is the rise of AI agents – intelligent assistants capable of handling specific tasks within a larger ecosystem. For example, in the fintech space, these agents can create personalized investment plans, rebalance portfolios, and even execute trades. Beyond finance, AI agents can streamline workflows in areas like legal services. For instance, they can extract legal clauses from a playbook, send emails to stakeholders, follow up on tasks, and track SLAs for contracts with clients. Essentially, they break down complex workflows into smaller, AI-powered tasks, creating autonomous workflows that drive efficiency.
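    The idea of breaking a complex workflow into smaller AI-powered tasks can be sketched as a simple pipeline. Everything here is illustrative: the step names mirror the legal-contract example above, and the clause "extraction" is a stub standing in for an LLM call.

```python
# Hedged sketch of an agent-style workflow: a larger job decomposed into
# small, named steps, each of which could be backed by an AI model. The
# steps and the keyword-based clause "extraction" are illustrative
# assumptions, not a production agent framework.

def extract_clauses(state: dict) -> dict:
    # In a real agent this step would call an LLM over the contract text.
    state["clauses"] = [c for c in state["contract"].split(". ") if "shall" in c]
    return state

def draft_email(state: dict) -> dict:
    state["email"] = f"Found {len(state['clauses'])} obligation(s); please review."
    return state

def track_sla(state: dict) -> dict:
    state["sla_logged"] = True  # stand-in for writing to an SLA tracker
    return state

def run_workflow(steps: list, state: dict) -> dict:
    """Run each step in order, threading shared state through the pipeline."""
    for step in steps:
        state = step(state)
    return state

result = run_workflow(
    [extract_clauses, draft_email, track_sla],
    {"contract": "Vendor shall deliver monthly reports. Fees are fixed. "
                 "Client shall pay within 30 days."},
)
print(result["email"])
```

    Because each step only reads and writes shared state, individual tasks can be swapped for model-backed versions, or reviewed by a human, without restructuring the whole workflow.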

    This concept isn’t entirely new; robo-advisors in the US are already leveraging these capabilities. However, there’s immense potential to expand their application further across industries.

    AI-Driven Creativity

    Another exciting frontier is AI-driven creativity. While this is a sensitive area for many, especially in gaming, movies, and design, AI can enhance creative outputs when used responsibly and ethically. By operating within frameworks of responsible AI governance, we can unlock its potential without compromising integrity. For example, personalized AI assistants and co-pilots have become much more advanced, enabling tailored investment plans, automated portfolio rebalancing, and other creative applications. In markets like the US, tools like Waterfall are already showcasing these advancements, and it’s only a matter of time before similar innovations take hold in India.

    Multimodal AI

    Another area with immense growth potential is multimodal AI, which seamlessly processes and integrates text, images, videos, and code in real time. Currently, AI models often specialize in just one domain such as text, images, videos, or code. However, the future lies in developing systems that work collaboratively across these formats, akin to an ensemble approach.

    This is particularly important because data sources are rarely limited to one format. By combining insights from multiple data types, multimodal AI can unlock new possibilities and drive innovation across industries. The growth opportunities in AI are vast, spanning intelligent agents, AI-driven creativity, and multimodal systems, all set to redefine how we work and create.
