CISA Acting Director Uploaded Sensitive Documents to ChatGPT, Triggering Security Alerts

The acting director of the Cybersecurity and Infrastructure Security Agency (CISA), Madhu Gottumukkala, uploaded sensitive contracting documents to a public version of ChatGPT last summer, triggering multiple automated security warnings and prompting an internal review by Department of Homeland Security officials, according to a Politico report. The episode has raised broader concerns about how artificial intelligence tools are used inside government agencies.

Security Warnings and Internal Review

According to Department of Homeland Security officials, cybersecurity sensors at CISA flagged the uploads during August 2025. One official specifically noted there were multiple such warnings in the first week of August alone. Senior officials at DHS subsequently led an internal review to assess whether any harm to government security had occurred from these exposures, though the conclusions of this review remain unclear.

Nature of the Uploaded Documents

While none of the files Gottumukkala input into ChatGPT were classified, the material included CISA contracting documents marked "for official use only," a government designation for sensitive information not intended for public release. Though such material is not classified, its disclosure could potentially harm government operations.

Special Permission and Agency Response

The incident drew particular attention because Gottumukkala had requested special permission from CISA's Office of the Chief Information Officer to use the AI tool soon after arriving at the agency in May 2025. At that time, ChatGPT was blocked for other DHS employees, making his access an exception to standard agency policy.

In an emailed statement, CISA's Director of Public Affairs Marci McCarthy clarified that Gottumukkala "was granted permission to use ChatGPT with DHS controls in place" and that "this use was short-term and limited." McCarthy added that the agency remains committed to harnessing AI and other cutting-edge technologies to drive government modernization.

Timeline Discrepancy and Security Implications

The CISA statement appeared to dispute Politico's reporting timeline, stating that "Acting Director Dr. Madhu Gottumukkala last used ChatGPT in mid-July 2025 under an authorized temporary exception granted to some employees." The statement emphasized that CISA's security posture continues to block access to ChatGPT by default unless granted specific exceptions.

This incident carries significant security implications because material uploaded to the public version of ChatGPT that Gottumukkala was using is shared with OpenAI, the company behind the chatbot. That information could potentially be used to help answer prompts from other users of the application, which OpenAI says has more than 700 million weekly active users worldwide.

Contrast with Approved DHS AI Tools

The incident highlights the gap between public AI tools and those configured for government use. AI tools now approved for DHS employees, such as DHSChat, the department's self-built AI-powered chatbot, are configured so that queries and documents entered into them do not leave federal networks, providing a more secure environment for sensitive government work.

Internal Reactions and Official Statements

One official familiar with the situation stated that Gottumukkala "forced CISA's hand into making them give him ChatGPT, and then he abused it," suggesting internal tensions regarding the incident. All federal officials receive training on proper handling of sensitive documents, and DHS policy requires security officials to investigate both the cause and effect of any exposure of official-use documents.

According to DHS procedures, such investigations must determine the appropriateness of any administrative or disciplinary action. Depending on circumstances, consequences could range from mandatory retraining or formal warnings to more serious measures such as suspension or revocation of security clearance.

Post-Incident Review Meetings

After DHS detected the upload activity, Gottumukkala met with senior officials to review what he had uploaded into ChatGPT. DHS's then-acting general counsel, Joseph Mazzara, was involved in efforts to assess any potential harm to the department, while Antoine McCord, DHS's chief information officer, also participated in the review process.

Additionally, Gottumukkala held meetings in August 2025 with CISA's chief information officer, Robert Costello, and its chief counsel, Spencer Fisher, specifically addressing the incident and proper handling of "for official use only" material.

Broader Context of Gottumukkala's Tenure

Gottumukkala has led CISA in an acting capacity since May 2025, when DHS Secretary Kristi Noem appointed him deputy director. His tenure has coincided with ongoing leadership turmoil at the agency: Donald Trump's nominee to head CISA, DHS special adviser Sean Plankey, was blocked last year by Senator Rick Scott over a Coast Guard shipbuilding contract, and no new confirmation hearing has been scheduled.

The ChatGPT incident represents just one of several security-related matters during Gottumukkala's leadership. Earlier this summer, at least six career staff were placed on leave after Gottumukkala failed a counterintelligence polygraph exam that he had pushed to take — an exam that DHS has since called "unsanctioned." During Congressional testimony last week, when asked if he was aware of the failed test, Gottumukkala twice told Representative Bennie Thompson that he did not "accept the premise of that characterization."

More recently, Gottumukkala attempted to oust CISA's CIO Robert Costello, though other political appointees at the agency intervened to block this move. These incidents collectively paint a picture of a challenging tenure for the acting director of an agency tasked with securing federal networks against sophisticated, state-backed hackers from adversarial nations including Russia and China.

The incident raises pointed questions about AI governance in government agencies, particularly how to balance technological innovation against security protocols. As agencies increasingly adopt AI tools, this case underscores the need for clear policies, proper training, and robust technical controls whenever sensitive government information passes through emerging technologies.