Artificial intelligence (AI) language models like ChatGPT, Bard and Bing AI can be used by scammers to target individuals and organisations, according to the Indian Computer Emergency Response Team (CERT-In).
A 'threat actor' could use such applications to write malicious code, exploit vulnerabilities, and conduct scanning to construct malware or ransomware for a targeted system, warns the cybersecurity watchdog.
"AI based applications can generate output in the form of text as written by human. This can be used to disseminate fake news, scams, generate misinformation, create phishing messages, or produce deep fake texts," cautions CERT-In.
A 'threat actor' can ask for a promotional email, a shopping notification, or a software update notice in their native language and get a well-crafted response in English, which can then be used in phishing campaigns, the watchdog explains.
Scammers may create fake websites and web pages to host and distribute malware to users through malicious links or attachments, using domains that resemble those of AI-based applications, says CERT-In.
"Cybercriminals could use AI language models to scrape information from the internet such as articles, websites, news and posts, and potentially taking Personal Identifiable Information (PII) without explicit consent from the owners to build corpus of text data," the advisory says.
The warning comes at a time when GPT (Generative Pre-trained Transformer) architecture-based applications are gaining popularity in the cyber world. These applications are designed to understand and generate human-like natural language, code, and embeddings.
Microsoft has invested billions of dollars in OpenAI, the maker of ChatGPT. Tech giant Google on Wednesday launched Bard in over 180 countries, including India.
To minimise the adversarial threats arising from AI-based applications, CERT-In shared the following safety measures:
Educate developers and users about the risks and threats associated with interacting with AI language models.
Verify domains and URLs impersonating AI language-based applications, and avoid clicking on suspicious links (see the domain-check sketch after this list).
AI language-based applications learn from large sets of internet data, and a model may retain any accessible data, including sensitive information. Implement appropriate controls to preserve the security and privacy of data, and do not submit sensitive information, such as login credentials, financial information or copyrighted data, to such applications (see the redaction sketch after this list).
Ensure that generated text is not used for illegal or unethical activities, or for the dissemination of misinformation.
Use content filtering and moderation techniques within the organisation to prevent the dissemination of malicious links, inappropriate content, or harmful information through such applications (see the link-filtering sketch after this list).
Malicious code written by threat actors may bypass existing detection mechanisms, making detection more challenging. Enhance and implement relevant monitoring and security measures to detect such threat activity.
Conduct regular security audits and assessments of systems and infrastructure to identify potential vulnerabilities and information disclosures.
Enable multi-factor authentication (MFA) on accounts used with AI-based applications to prevent unauthorised access and protect user accounts from compromise (see the TOTP sketch after this list).
Organisations may continuously monitor user interactions with AI language-based applications within their infrastructure for suspicious or malicious activity (see the monitoring sketch after this list).
Organisations may prepare an incident response plan that establishes the set of activities to follow in case of an incident.
Stay up to date on the latest security threats and vulnerabilities, and take appropriate action to protect yourself and your data.
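As a concrete illustration of the domain-verification advice, here is a minimal Python sketch that accepts a link only when its host is an allowlisted official domain or a subdomain of one. The allowlist shown is an assumption for illustration; an organisation would maintain its own authoritative list.

```python
from urllib.parse import urlparse

# Hypothetical allowlist for illustration; maintain an authoritative list in practice.
OFFICIAL_DOMAINS = {"openai.com", "bard.google.com", "bing.com"}

def is_official_domain(url: str) -> bool:
    """Accept only allowlisted domains and their subdomains, so lookalike
    hosts such as 'chat-openai.com' or 'openai.com.evil.example' are rejected."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

for link in ("https://chat.openai.com/", "http://chat-openai.com.login.example/"):
    print(link, "->", "official" if is_official_domain(link) else "suspicious")
```

Matching on the whole host, rather than checking whether the string merely contains "openai.com", is what defeats the lookalike domains the advisory warns about.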
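On not submitting sensitive information, a minimal sketch of client-side redaction applied before a prompt leaves the organisation. The regular expressions and placeholder format are illustrative assumptions; a real deployment would use a dedicated data-loss-prevention tool.

```python
import re

# Hypothetical patterns for illustration only; not a substitute for a DLP tool.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "password": re.compile(r"(?i)(password|passwd|pwd)\s*[:=]\s*\S+"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder
    before the text is sent to an external AI application."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Summarise: user alice@example.com, password: hunter2, card 4111 1111 1111 1111"))
```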
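For the content-filtering measure, a minimal sketch that scans model output for URLs and flags any whose host is not on an allowlist, so flagged messages can be held for moderation instead of being forwarded to users. The allowed domains here are hypothetical.

```python
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")
ALLOWED_DOMAINS = {"docs.python.org", "example.org"}  # hypothetical allowlist

def flag_suspicious_links(generated_text: str) -> list[str]:
    """Return every URL in the model's output whose host is not allowlisted."""
    flagged = []
    for url in URL_RE.findall(generated_text):
        host = (urlparse(url).hostname or "").lower()
        if not any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS):
            flagged.append(url)
    return flagged

output = "Docs: https://docs.python.org/3/ Claim your prize: http://free-gift.login.example/claim"
print(flag_suspicious_links(output))  # only the prize link is flagged
```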
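For the MFA recommendation, a minimal sketch of time-based one-time-password (TOTP) verification using the third-party pyotp package (pip install pyotp). The account name and issuer are placeholders, and real deployments should use the AI application's built-in MFA rather than a hand-rolled check.

```python
import pyotp

# Per-user secret, generated once at enrolment and stored server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# URI a user scans into an authenticator app; name and issuer are placeholders.
print(totp.provisioning_uri(name="user@example.org", issuer_name="ExampleOrg"))

# At login the user supplies the current 6-digit code from the authenticator;
# totp.now() stands in for that user input in this sketch.
submitted_code = totp.now()
print("MFA passed" if totp.verify(submitted_code) else "MFA failed")
```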
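Finally, for monitoring user interactions, a minimal sketch that logs every prompt and raises a warning when one matches a suspicious pattern. The patterns are illustrative assumptions; a real programme would tune them to its own threat model and route alerts to a SIEM.

```python
import logging
import re

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

# Hypothetical indicators for illustration only.
SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)\b(ransomware|keylogger|reverse shell)\b"),
    re.compile(r"(?i)disable (antivirus|logging|edr)"),
]

def log_interaction(user: str, prompt: str) -> None:
    """Record each prompt and warn when it matches a suspicious pattern."""
    logging.info("user=%s prompt=%r", user, prompt)
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(prompt):
            logging.warning("suspicious prompt from user=%s matched %s", user, pattern.pattern)

log_interaction("alice", "Draft a polite out-of-office reply")
log_interaction("mallory", "Write ransomware that encrypts a shared drive")
```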