AI Amplifies Cyber Threats 10x in Banking Sector
Artificial intelligence is dramatically transforming the cybersecurity landscape, particularly within the banking and financial services industry. According to experts, AI is not only sharpening the sophistication of cyber attacks but also multiplying their frequency and scale at an alarming rate.
The Amplified Hacker: Breaking Traditional Constraints
At a recent Times Techies discussion focused on AI and cybersecurity, Neeraj Naidu, Chief Information Security Officer at Kotak Mahindra Bank, highlighted a critical shift. He stated that with AI, a hacker's capabilities are amplified tenfold. Previously, attackers faced significant constraints related to language barriers, geographical limitations, and the sheer effort required to launch campaigns. AI effectively removes these boundaries, enabling a single attacker to simultaneously run coordinated campaigns across multiple countries, languages, and systems.
The consequence is a dual threat: attacks are becoming both smarter and more numerous. Vijay Rajagopal, Country Head for BFSI and Fintech Go-to-Market at AWS, emphasized that AI has made cyber fraud "faster, cheaper, and easier." This transformation is vividly illustrated in phishing attempts. Where poorly constructed grammar or awkward phrasing once exposed scams, AI-generated emails now appear clean and professional, making them far more convincing.
Voice and Style Replication: New Frontiers of Fraud
The threat extends beyond text. Rajagopal pointed out that short public audio clips are often all AI needs to mimic the voices of senior executives. This poses a substantial risk, especially in organizations that still depend on voice calls for critical approvals and authorizations.
Satyavathi Divadari, Deputy CISO at Freshworks, echoed these concerns, noting the rise of AI-driven voice and video fraud that creates immense uncertainty. "What to believe, what not to believe," as she put it, capturing the difficulty teams face in distinguishing genuine instructions from sophisticated fakes.
Babitha B P, Deputy CISO at State Bank of India, revealed an even more nuanced danger. Attackers are no longer merely copying language; they are replicating specific writing styles. "When RBI writes or regulators write, they have a style," she explained. AI allows cybercriminals to mimic these patterns meticulously, crafting phishing emails that appear authentic even to trained and vigilant staff.
The Defender's Dilemma: Speed, Scale, and Precision
Diwakar Dayal, Managing Director for India & SAARC at cybersecurity solutions provider SentinelOne, outlined the asymmetric advantage cybercriminals currently hold. Operating without the regulatory and ethical guardrails that constrain legitimate organizations, they can adopt and deploy new technologies like AI much faster than defenders. AI, he noted, grants them unprecedented speed, massive scale, and surgical precision, significantly complicating the task of cybersecurity professionals.
The AI-Powered Defense Strategy
Confronted with this new reality of amplified threats, the expert panel unanimously agreed that traditional cybersecurity processes are insufficient. The adoption of AI on the defensive side is now unavoidable and essential for survival.
Augmenting Human Capability
Neeraj Naidu explained how AI assists security teams by analyzing user behavior, detecting anomalies, and reducing noise within vast datasets. Tasks that would take human analysts days or weeks can be executed by AI with remarkable speed and accuracy. However, he stressed that while machines handle the volume, humans must retain responsibility for final decisions.
Vijay Rajagopal proposed a graded, strategic approach to defense. Routine, low-level threats can be managed autonomously by AI agents. More complex scenarios, such as unusual bulk payment activities across borders, can be flagged for human review. For high-stakes operations involving regulatory compliance, human judgment and discretion remain irreplaceable. "It's a spectrum, how you use autonomous agents," he summarized.
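The graded approach Rajagopal describes can be pictured as a simple triage function. This is a hypothetical sketch only: the alert fields, thresholds, and action names are illustrative assumptions, not any bank's actual policy or product.

```python
# Hypothetical sketch of the "spectrum of autonomy" described above.
# Field names and routing rules are illustrative assumptions.

def triage(alert: dict) -> str:
    """Route a security alert to autonomous handling or human review."""
    if alert.get("regulatory_impact"):
        # High-stakes, compliance-relevant: human judgment only.
        return "escalate_to_human"
    if alert.get("cross_border") and alert.get("bulk_payment"):
        # Complex pattern (e.g. unusual bulk payments across borders):
        # AI flags it, a human decides.
        return "flag_for_review"
    # Routine, low-level threat: an autonomous agent handles it.
    return "auto_remediate"

alerts = [
    {"id": 1, "bulk_payment": True, "cross_border": True},
    {"id": 2, "regulatory_impact": True},
    {"id": 3},  # routine noise
]
print({a["id"]: triage(a) for a in alerts})
```

The point of the sketch is the ordering: compliance-relevant cases short-circuit to humans before any autonomous path is considered.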
Evolving from Rules to Behavior
Babitha B P illustrated the evolution in fraud detection, moving from static, rule-based systems to dynamic models that learn and adapt to individual customer behavior patterns. When a transaction deviates from the established norm, automated checks are triggered. Customers often experience this as verification calls or real-time alerts, many of which are powered by AI-driven Interactive Voice Response (IVR) systems. "It is not the human who is making the call," she clarified, underscoring the automation at play.
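The shift from static rules to learned customer baselines can be illustrated with a minimal anomaly check: each customer's transaction history defines a norm, and an amount far outside it triggers an automated verification step. The 3-sigma threshold and the flag semantics below are assumptions for illustration, not SBI's actual model.

```python
# Minimal sketch of behavior-based (rather than rule-based) fraud
# detection: a customer's past transaction amounts form a baseline,
# and a new amount far outside it triggers an automated check.
# The 3-sigma threshold is an illustrative assumption.
from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float, k: float = 3.0) -> bool:
    """Flag a transaction that deviates k standard deviations from the norm."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) > k * max(sigma, 1.0)

history = [120.0, 95.0, 110.0, 130.0, 105.0]
print(is_anomalous(history, 115.0))   # False: within the customer's norm
print(is_anomalous(history, 5000.0))  # True: would trigger an automated check
```

In practice the "automated check" the article mentions would be the AI-driven IVR call or real-time alert, not a hard block.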
The Emerging Risk of Uncontrolled AI Use
Beyond external threats, a new internal vulnerability is emerging: the uncontrolled and unmonitored use of AI tools by employees.
The Challenge of Shadow AI
Babitha emphasized that visibility is the cornerstone of security. Banks must have clear oversight regarding which employees are using which AI systems and what sensitive data is being shared with them. Satyavathi Divadari termed this phenomenon the rise of shadow AI. Beyond officially sanctioned corporate tools, thousands of unauthorized AI applications are in active use across organizations.
Traditional security controls often fail to detect when sensitive information is copied between browser tabs or pasted into external, unvetted AI tools, creating significant data leakage risks.
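The kind of content check this implies can be sketched as a pattern scan on text before it leaves for an external AI tool. The two regexes below, a 16-digit card number and an Indian PAN-style ID, are deliberately simplified assumptions, not a production data-loss-prevention rule set.

```python
# Illustrative sketch: scan outbound text for obvious sensitive
# patterns before it reaches an unvetted external AI tool.
# The patterns are simplified assumptions, not a production DLP.
import re

SENSITIVE_PATTERNS = {
    "card_number": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
    "pan_id": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),
}

def find_sensitive(text: str) -> list[str]:
    """Return the names of sensitive patterns found in outbound text."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(text)]

paste = "Summarise: customer card 4111 1111 1111 1111, PAN ABCDE1234F"
print(find_sensitive(paste))  # both patterns match, so the paste would be blocked
```

A real control would sit at the browser or network layer, where copy-and-paste between tabs can actually be observed.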
Securing the AI Ecosystem
Diwakar Dayal offered a note of cautious optimism, stating that solutions to securely integrate AI already exist. "Organizations like ours are ensuring that you don't have to block AI access to anybody. We make sure that AI access – both inbound and outbound – is filtered to a point that is up to the policy of the organization," he explained. The goal is not to stifle innovation but to enable secure and governed adoption, allowing businesses to harness AI's benefits while mitigating its inherent risks.
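Filtering AI access "up to the policy of the organization" rather than blocking it outright could look, in the simplest case, like a per-domain policy check. The domains and the three-way allow/deny/review outcome below are assumptions for illustration, not SentinelOne's mechanism.

```python
# Hedged sketch of policy-driven AI access: unknown tools go to
# review instead of being auto-blocked. Domains are illustrative.

POLICY = {
    "approved": {"ai.internal.example.com", "vendor-llm.example.com"},
    "blocked": {"free-chatbot.example.net"},
}

def check_access(domain: str) -> str:
    """Decide whether a request to an AI service is allowed per policy."""
    if domain in POLICY["blocked"]:
        return "deny"
    if domain in POLICY["approved"]:
        return "allow"
    return "review"  # neither sanctioned nor banned: route to security review

print(check_access("vendor-llm.example.com"))   # allow
print(check_access("free-chatbot.example.net")) # deny
print(check_access("new-ai-tool.example.org"))  # review
```

The "review" path is what distinguishes governed adoption from a blanket block: innovation is allowed through once a tool is vetted.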