BusinessHeadlineNews

NITDA Issues Urgent Warning Over New ChatGPT Vulnerabilities Targeting Nigerian Users

Seven newly discovered flaws in GPT-4o and GPT-5 could expose users to data leaks, unsafe commands, and long-term AI manipulation, cybersecurity agency cautions.

The National Information Technology Development Agency (NITDA) has released a critical cybersecurity alert warning Nigerians about newly discovered vulnerabilities in ChatGPT that could lead to data leaks and unauthorized actions.

The notice, issued through the agency’s Computer Emergency Readiness and Response Team (CERRT.NG), comes amid growing reliance on AI tools for research, business operations, and government services.

According to NITDA, cybersecurity researchers recently identified seven security flaws affecting OpenAI’s GPT-4o and GPT-5 models. The vulnerabilities allow attackers to manipulate ChatGPT through indirect prompt injection, a technique where malicious instructions are hidden inside normal web content.

The agency said attackers can embed harmful commands inside webpages, comments, or even manipulated URLs. When ChatGPT is asked to browse, summarise, or analyze such content, the system may unknowingly execute the hidden instructions.

“By embedding hidden instructions in webpages, comments, or crafted URLs, attackers can cause ChatGPT to execute unintended commands simply through normal browsing, summarization, or search actions,” the advisory noted.

NITDA added that some of the flaws enable hackers to bypass ChatGPT’s safety filters by masking dangerous content within trusted domains. Others exploit markdown-rendering bugs, allowing malicious directives to slip through undetected.

More seriously, the agency warned that attackers can poison the system’s memory, causing ChatGPT to retain harmful instructions that influence future conversations, even long after the original attack.

While OpenAI has addressed parts of the problem, NITDA said the risk remains because large language models still struggle to distinguish legitimate input from concealed malicious data.

Potential Risks for Users

The agency warned that exploitation of these vulnerabilities could lead to:

  • Unauthorized commands executed by the AI
  • Accidental disclosure of sensitive or private data
  • Manipulated or misleading answers
  • Long-term AI behaviour changes caused by memory poisoning

CERRT.NG further stressed that users could trigger these attacks without clicking anything, as simple browsing or automated summarisation exposes ChatGPT to harmful embedded content.

Recommended Safety Measures

NITDA urged Nigerian users, businesses, and government institutions to take several precautions:

  • Limit or disable browsing and webpage summarisation from untrusted websites
  • Enable ChatGPT’s browsing or memory features only when absolutely necessary
  • Ensure all deployed GPT-4o and GPT-5 systems are regularly updated
  • Strengthen internal cybersecurity controls in environments where AI tools are integrated

Background

This is not the first time NITDA has raised concerns over emerging technology vulnerabilities. A few months ago, the agency issued a nationwide alert over a critical flaw affecting eSIM technology used in smartphones, tablets, wearables, and IoT devices.

The issue, linked to the GSMA’s Generic Test Profile (version 6.0 and below) reportedly exposed more than 2 billion devices to possible attacks involving eSIM cloning, extraction of cryptographic keys, intercepted communication, and stealth control of devices.

Share this:

Opeyemi Owoseni

Opeyemi Oluwatoni Owoseni is a broadcast journalist and business reporter at TV360 Nigeria, where she presents news bulletins, produces and hosts the Money Matters program, and reports on the economy, business, and government policy. With a strong background in TV and radio production, news writing, and digital content creation, she is passionate about delivering impactful stories that inform and engage the public.

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *