Friday, February 7, 2025

    Driving AI Adoption with Robust Infrastructure: Shailesh Shukla, CEO, Aryaka

    Shailesh Shukla, CEO and Chair of the Board of Directors at Aryaka, spoke to Tech Achieve Media about key developments and trends in AI adoption and the role of Aryaka in this transformative landscape. A key point he articulated is the rapid pace of AI adoption: the companies providing and enabling AI adoption for enterprises through global infrastructure are poised to be significant beneficiaries of this trend. He emphasized that Aryaka is well-positioned in this space due to its real, ready-to-deploy infrastructure.

    Shukla highlighted that Aryaka’s unified SASE (Secure Access Service Edge) as a service, combined with AI capabilities, is immediately available for deployment. Aryaka’s real, operational infrastructure significantly reduces the friction for global AI deployment, making it a powerful differentiator. He noted that this readiness and capability are what will propel Aryaka from a current $100 million Annual Recurring Revenue (ARR) player to a billion-dollar ARR player in the next few years. 

    Shukla also shared Aryaka’s strategic plans for the Indian market. The company, he said, recognizes the significant growth potential in the manufacturing, industrial, technology services, airline, and transportation logistics sectors in India. Aryaka already has a substantial presence in Bangalore, with over 300 team members dedicated to engineering innovation and customer support, and four points of presence across the country. Shukla expressed excitement about offering Aryaka’s managed services, co-managed services, and self-service unified SASE as a service platform, including AI, to Indian enterprises.

    [Excerpts from the interview]

    TAM: What is the role of AI in transforming traditional business operations and decision-making processes across various industries?

    Shailesh Shukla: As you know, every decade or so, a seminal technology comes to the market and shapes industries for the next couple of decades. A great example of this is the internet, followed by mobile, and then cloud computing. AI, broadly speaking, and generative AI in particular, is revolutionizing traditional business operations in three key ways. First, it enables enterprises to automate repetitive tasks. Second, it optimizes processes, using automation and insights to drive these processes more efficiently. Finally, it leverages the vast amount of global data accumulated on the internet to enhance decision-making.

    The power of publicly available large language models (LLMs) like those from OpenAI lies in their use of internet data. However, the real potential emerges when these LLMs incorporate enterprise-specific data from CRM systems, ERP systems, and data warehouses. Combining public data with domain-specific data unleashes the full power of AI. This technology, known as Retrieval-Augmented Generation (RAG), enhances classic generative AI models with enterprise-specific data.

    RAG models are exceptionally powerful for enterprises, enabling them to automate tasks, optimize processes, and leverage insights in a targeted and relevant manner. This technology is transforming industries such as finance, business services, retail, transportation and logistics, and manufacturing. It enhances efficiency through predictive analytics, personalizes customer interactions, and drives operational automation.

    In our view, this is another seminal technological shift, akin to the transformations brought about by the internet, mobile, and cloud computing. I am incredibly excited about the current market developments. Every day, new applications, opportunities, LLMs, or GPU-as-a-service companies emerge. These are our future customers, and I am thrilled about the possibilities ahead.

    TAM: As enterprises increasingly adopt AI to drive innovation, what are the most pressing challenges they face in terms of networking and security?

    Shailesh Shukla: The adoption of AI parallels the way cloud adoption occurred. Let’s explore these parallels:

    1. Bandwidth Requirements: Moving data and applications from on-premises to the cloud or an AI-as-a-service environment necessitates extremely high bandwidth to access the infrastructure as if it’s within your data center. This strains existing network infrastructures.
    2. Latency and Performance: Hosting GPUs or AI algorithms on-premises is often impractical for large enterprises. This typically occurs in a public cloud or AI-specific cloud, making latency and performance critical issues, similar to those encountered during cloud adoption.
    3. Global Scalability: Enterprises operate globally, so AI capabilities must be available on a global scale, not just locally. This global scalability is crucial.
    4. Security and Vulnerability: Transferring proprietary data to AI-as-a-service environments increases vulnerability. Attackers are using AI for targeted attacks, such as prompt injection in LLMs, making security a critical concern. Ensuring the protection of AI-oriented applications and data, both in transit and at rest, is paramount.
    5. Regulation Compliance: Regulations like GDPR are expanding to include AI applications, especially since AI algorithms use data from various sources. Ensuring that the right data is used and protected throughout the AI model-building process is a significant challenge.

    These challenges—access, performance, security, and regulation—must be addressed as enterprises adopt AI.

    TAM: There are several data protection laws coming in. Won’t they prove to be counterproductive to the potential that AI holds since it means organisations need to be selective about the data being fed?

    Shailesh Shukla: It’s a balance. Ideally, having both public and private data fed into an LLM would enable the best possible decisions. However, in industries like healthcare, you don’t want personal information, such as yours or mine, being used to drive specific algorithms or decisions. While some data can be aggregated and anonymized, the risk of data leakage remains.

    Firstly, obtaining the right permissions is crucial because privacy is critical. Secondly, it’s essential to ensure that data and knowledge are used correctly. Thirdly, preventing data leakage is paramount. For example, at RSA, we saw a significant focus on DLP (Data Leakage Prevention). With AI, the focus is shifting to KLP (Knowledge Leakage Prevention), as AI requires various data types, including proprietary information. Ensuring that this knowledge doesn’t leak from the AI cloud company is vital.

    Currently, we are in the early stages of AI development. New regulations will likely emerge. In the U.S., for instance, the Biden administration has issued an executive order to regulate the use of private data and reduce AI hallucinations. Similar efforts are underway in Europe and India. Over time, more sophisticated regulations will be developed. In summary, it’s a balance, and it’s challenging to define the exact cut-off point. 

    TAM: The integration of AI into global networks necessitates significant changes in infrastructure and security protocols. What are the critical considerations for enterprises to ensure seamless and secure AI deployment?

    Shailesh Shukla: First, you need a global network infrastructure to support high performance, scale, and reduced latency in a highly available manner. This is a baseline requirement; without it, you cannot effectively adopt AI.

    Second, compliance with regulations regarding data and application protection is critical and must be supported.

    Third, security is crucial, and within security, there are three key points:

    1. Access Control: You need intelligent infrastructure to ensure the right users access the right AI applications. This will likely lead to the emergence of AI access brokers, similar to cloud access security brokers, to prevent unauthorized access.
    2. Threat Protection: Users accessing AI applications face new vulnerabilities such as prompt injection and data poisoning. These attacks are becoming more common, exposing supply chain vulnerabilities and leading to ransomware attacks. Enterprises need robust threat protection to adopt AI safely and securely.
    3. Intellectual Property (IP) Protection: As RAG models become prevalent, enterprises feed their own data from ERP and CRM systems to augment public LLMs. This makes the RAG model more powerful but also increases the risk of IP leakage. Implementing Knowledge Leakage Prevention (KLP) alongside Data Leakage Prevention (DLP) is essential.
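    The three controls above can be illustrated with a toy gateway check applied before a request reaches an AI application: role-based access control, pattern-based prompt-injection screening, and a KLP-style response filter. The roles, patterns, and confidentiality markers below are assumptions for illustration only, not Aryaka's or any vendor's actual policy.

```python
# Toy sketch of an AI access gateway: access control, threat
# protection, and knowledge-leakage screening. All policies here
# are illustrative assumptions.
import re

ALLOWED_ROLES = {"analyst", "engineer"}           # 1. access control
INJECTION_PATTERNS = [                            # 2. threat protection
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]
CONFIDENTIAL_MARKERS = ["internal-only", "trade secret"]  # 3. IP / KLP

def screen_request(role, prompt):
    """Reject unauthorized roles and likely prompt-injection attempts."""
    if role not in ALLOWED_ROLES:
        return "denied: unauthorized role"
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            return "denied: possible prompt injection"
    return "allowed"

def screen_response(text):
    """KLP-style check: block responses carrying confidential markers."""
    if any(marker in text.lower() for marker in CONFIDENTIAL_MARKERS):
        return "redacted"
    return text
```

In practice, injection detection relies on model-based classifiers rather than regexes, but the placement of the controls, at the gateway between users and AI applications, is the point being made.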

    These constraints—network infrastructure, regulatory compliance, and security—must be addressed for broader AI adoption.

    TAM: How does Aryaka AI>Perform address these issues to ensure secure and efficient performance at a global scale? Could you provide specific examples or case studies that highlight how this solution has transformed AI workload management for enterprises?

    Shailesh Shukla: This week, we launched Aryaka AI>Perform, an extension of our global infrastructure to enable safe and performant AI use for enterprises. It has four key components:

    1. Optimized Performance for AI: Aryaka is one of the few players outside the big hyperscalers with our own global network infrastructure, featuring 45+ private points of presence in over 110 countries. We call this the Aryaka Zero Trust WAN. Unlike the public internet, our network minimizes latency, jitter, and packet loss while enhancing security.
    2. Global Reach: Enterprises operate across multiple regions, so our extensive network infrastructure and partnerships with carriers in 100+ countries provide instant global reach for AI workloads and users.
    3. Scale and Flexibility: Our single-pass architecture allows for specific AI-related access control requirements to be immediately enforced globally. This centralized policy with distributed control ensures consistent policy enforcement for users worldwide.
    4. Simplified Management: MyAryaka, our single portal, offers full visibility and control over global policies, observability, networking, and security. This simplifies management for enterprise users, providing a unified interface accessible from anywhere.

    Our customers, such as NVIDIA, Cathay Pacific Airlines, Cadence, World Fuel Services, and Black & Decker, rely on Aryaka for global networking and security. They are now extending these benefits to their AI workloads, regardless of whether they are hosted on AWS, Azure, GCP, CoreWeave, or in their own data centers.

    This is not just marketing; it’s real and already underway. While I can’t disclose specific customer names due to confidentiality, the use cases include optimized AI access, reduced latency, superior performance, global reach, and full security. 

    TAM: What are the key trends and advancements in AI and networking that you believe will shape the next decade? How should businesses prepare to leverage these developments?

    Shailesh Shukla: Let’s use the analogy of gold mining. During the 1849 gold rush in San Francisco, it wasn’t just the gold miners who got rich. The real wealth was gained by those supplying the tools and infrastructure, like picks, shovels, and Levi’s jeans, which were designed for miners’ needs. Similarly, as enterprises rush into AI, it’s the infrastructure providers who stand to gain the most. This is why NVIDIA is the world’s most valuable public company today, providing the infrastructure for AI, not the apps or user-facing services. Aryaka is positioned similarly, offering global network infrastructure and security.

    Let’s talk about the trends:

    1. Edge Computing: AI applications require training in the cloud due to the need for large compute and storage resources. However, once trained, inferencing can be done at the edge, closer to the data source, to reduce latency and improve responsiveness. This trend benefits Aryaka by necessitating powerful, low-latency, high-performance, and secure connectivity.
    2. 5G: With the global deployment of 5G, infrastructure capacity and availability have significantly increased. This development directly benefits us, as it complements the services we provide.
    3. AI-Driven Automation: AI is increasingly used to automate both mundane and sophisticated tasks, driving better operational processes. Access to AI applications, which are often hosted elsewhere, becomes crucial, and our infrastructure supports this need.
    4. Security: There are three aspects to consider:
      • AI for Security: Using AI algorithms to enhance security measures, which we already offer.
      • Security for AI: Protecting against data and knowledge leakage, prompt injection attacks, and data poisoning to ensure AI applications and data are secure.
      • Secure AI Access: Ensuring that access to AI applications is provided in the most secure manner.
