Deepfake Video Targets Bombay Stock Exchange CEO
At the start of this year, a video appeared on social media platforms in India featuring Sundararaman Ramamurthy, the chief executive of the Bombay Stock Exchange (BSE), seemingly advising investors on which stocks to purchase.
The video promised substantial returns to viewers who followed the advice presented.
However, the individual speaking was not Ramamurthy himself; it was a deepfake video created using artificial intelligence technology.
"It was in the public domain where many people could see it, and get cheated into buying or selling stocks, as if I'd recommended them," explains Ramamurthy.
"When we see an incident like this, we immediately lodge a complaint. We go to Instagram and other places where it's posted to get the video taken down. And we regularly write to the market warning people not to believe in fake videos."
"We don't know how many people have seen this video, it's really difficult to find out, so we can't really judge if it's had a big impact or not.
"What we want is for it to have had no impact at all. No one should incur a loss because they believe something that is untrue."

Rising Incidence of Deepfake Attacks Globally
Ramamurthy and the Bombay Stock Exchange are far from alone in facing deepfake threats.
"The latest data shows that over the past two years or so, we've seen an increase of almost 3,000% in the number of deepfakes being utilized," says Karim Toubba, chief executive of US-based password security company LastPass.
Toubba himself experienced a deepfake attack in 2024.
"One of our employees in Europe received an audio message and a text message from someone alleging to be me, urgently requesting some help from me," he recounts.
Fortunately, the employee was cautious.
"The message was on WhatsApp, which for us is not a sanctioned communication channel," says Toubba. "Also, we have corporate sanctioned mobile devices and this came in via his personal phone. So that made him think this was potentially a little murky, a little fishy."
The employee promptly reported the incident to LastPass's cybersecurity team, preventing any damage.
Major Corporate Deepfake Fraud: The Arup Case
British engineering firm Arup was less fortunate. In 2024, it suffered one of the most sophisticated deepfake attacks recorded in the corporate sector.
According to Hong Kong police, an Arup employee in Hong Kong received a message purportedly from the firm's chief financial officer (CFO), who was based in London, concerning a "confidential transaction."
The employee then participated in a video call with the CFO and other staff members. Based on this call, the employee transferred $25 million (£18.5 million) of Arup's funds to five separate bank accounts as instructed. It was later revealed that the individuals on the call, including the CFO, were deepfakes.
"You would never want to simply jump on a video call with someone and transfer $25m," says Stephanie Hare, a tech researcher and co-presenter of the BBC's AI Decoded TV programme.
"Companies are having to take extra steps to secure these types of communications. That's the brave new world we're in now."
Advancements in AI and Deepfake Technology
The rapid development of artificial intelligence means deepfake videos are becoming increasingly realistic.
"Deepfakes are becoming very, very easy to do," says Matt Lovell, co-founder and CEO of UK-based cybersecurity company CloudGuard. "To generate video and audio quality of extremely accurate specifications - it takes minutes."
"For, say, a simple, single individual-led attack, you're looking at $500 to $1,000 with the use of largely free tools," says Lovell. "For a more sophisticated attack, you're looking at between $5,000 and $10,000."

Technological Responses to Deepfake Threats
While deepfake videos grow more advanced, so do the detection tools designed to counter them. Companies now employ verification software capable of analyzing facial expressions, head movements, and even blood flow patterns to determine authenticity.
"In your cheeks or just underneath your eyelids, we'll be looking for changes in blood flow when a person is talking or presenting," Lovell explains. "That's really where we can tease out whether it's AI-generated or it's real."
Despite these advancements, firms face an ongoing challenge to stay ahead of fraudsters.
"It's a race, between who can deploy a technology and who can thwart that technology as quickly as possible," says LastPass's Toubba. "Luckily, there seems to be quite a bit of money flowing into this, which will only accelerate the pace with which organisations will develop technologies to detect and ultimately block these things."
However, Matt Lovell offers a more cautious perspective.
"Attack vectors are accelerating faster than we can accelerate defence automation and protection," he states. "Are people moving fast enough to respond to the speed the threat is developing? Absolutely not."
Growing Demand for Cybersecurity Expertise
Stephanie Hare highlights the increasing need for skilled professionals to combat deepfake fraud.
"We have a shortage of cybersecurity professionals worldwide, We need more people to get into this."
She also notes a gradual shift in corporate priorities regarding security.
"In the past it was not considered a priority to secure your operations in quite the same way as it is now," she points out.
"Now that we have these types of risks, with the leaders at companies, with CEOs, being deepfaked, I think company executives will be spending more time with their chief information security officers and teams than before. And that is a good thing."