
    ChatGPT: Can it help AI-led cyberattacks?


    ChatGPT, a text-based artificial intelligence (AI) bot, has made headlines for its use of advanced AI. From accurately fixing coding bugs, generating cooking recipes, and creating 3D animations to composing entire songs, ChatGPT has showcased the mind-blowing power of AI to unlock a world of incredible new possibilities, as well as new risks.

    Science and technology, with all their components, have strongly benefited the human race over generations. By definition, they are the search for new knowledge to improve the quality of life. The relentless quest to mimic and decipher the human mind has ushered in an era of artificial intelligence. However, any technology has the potential to be good or bad depending on the people behind it.

    Since its launch in November 2022, tech experts and commentators worldwide have been concerned about the impact AI-generated content tools will have on cybersecurity.

    At the recent Black Hat and DEF CON security conferences, a demonstration of hacking humans with AI-as-a-service revealed how AI can actually craft better phishing emails, and devilishly effective spear-phishing messages, than people can.

    Researchers using OpenAI’s GPT-3 platform in combination with other AI-as-a-service products focused on personality analysis generated phishing emails customized to their colleagues’ backgrounds and characters. Eventually, the researchers developed a pipeline that refined the emails before they hit their targets. To their surprise, the platform also automatically supplied specifics, such as mentioning a Singaporean law when instructed to generate content for people in Singapore.

    The makers of ChatGPT have clearly stated that the AI-driven tool has a built-in ability to challenge incorrect premises and reject inappropriate requests. Yet while the system apparently has guardrails designed to prevent any kind of criminal activity, with a few tweaks it generated a near-flawless phishing email that sounded ‘weirdly human’.

    This could mean more trouble for markets that are highly vulnerable to phishing attacks, such as the Philippines. In fact, the scale of phishing campaigns in the country prompted the government to begin investigations. This resulted in the approval of the SIM Card Registration Act, a law that requires users to register personal information upon SIM card purchase and activation, in an effort to encourage responsibility and give law enforcement an identification tool for solving crimes.

    Sean Duca, vice president and regional chief security officer for Asia Pacific & Japan at Palo Alto Networks, said, “Considering the looming threats of an ever smarter and technologically advanced hacking landscape, the cybersecurity industry must be equally resourced to fight such AI-powered exploits. In the long run, the industry’s vision cannot be that a swarm of human threat hunters try to sporadically fix this with guesswork.” 

    The need of the hour is to take intelligent action to neutralize these evolving threats. On the positive side, autonomous response technology is already addressing many threats without human intervention. However, as AI-powered attacks become part of everyday life, businesses, governments, and individuals impacted by such automated malware must increasingly rely on emerging technologies such as AI and machine learning (ML) to generate their own automated responses.
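    To illustrate what such an automated response might look like in practice, the sketch below trains a toy phishing-email classifier and scores an incoming message; anything above a chosen threshold could be quarantined without a human in the loop. The sample emails, the scikit-learn model choice, and the threshold are illustrative assumptions made for this article, not a description of any vendor's product.

```python
# A minimal, illustrative phishing-email classifier (assumed example, not a product).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled training examples (1 = phishing, 0 = legitimate).
emails = [
    "Your account has been locked. Verify your password immediately via this link.",
    "Urgent: confirm your SIM registration details today or lose your number.",
    "Hi team, attaching the minutes from Monday's project meeting.",
    "Reminder: the quarterly report is due to finance by Friday.",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a common baseline for text classification.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Score an incoming message; a high score could trigger automatic quarantine.
incoming = "Please verify your password now to avoid account suspension."
phishing_probability = model.predict_proba([incoming])[0][1]
print(f"Phishing probability: {phishing_probability:.2f}")
if phishing_probability > 0.5:  # the threshold is an assumption; tune it in practice
    print("Quarantine message for review")
```

    In a real deployment, the same idea would be scaled up with far larger labelled datasets, richer signals such as sender reputation and link analysis, and continuous retraining as attackers adapt.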

    As AI continues to develop, businesses will face a number of challenges in navigating the AI cybersecurity landscape. In particular, there is considerable focus on finding the right balance between machines, humans, and ethical considerations.

    “Establishing corporate policies is critical to doing business ethically while improving cybersecurity. We need to establish effective governance and legal frameworks that enable greater trust in AI technologies being implemented around us to be safe, reliable, and contribute to a just and sustainable world. The delicate balance between AI and humans will therefore emerge as a key factor towards successful cybersecurity in which trust, transparency, and accountability supplement the benefits of machines,” Duca concluded.
