OpenAI is preparing to roll out an identity verification system that may become mandatory for organizations seeking access to its upcoming advanced AI models. This new requirement, highlighted in a support document recently added to the company’s website, is part of a broader effort to enhance safety and control.
Referred to as the “Verified Organization” process, the initiative will require developers to verify their identity with a government-issued ID from one of the countries supported by OpenAI’s API. Each ID can be used to verify only one organization every 90 days, and not all applicants may be eligible, the company noted.
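For illustration only, the sketch below models that 90-day reuse rule as simple date arithmetic; the record structure and the `can_verify_new_org` helper are hypothetical and not part of OpenAI’s API.

```python
from datetime import date, timedelta

# Hypothetical illustration of the stated constraint: one ID can verify
# only one organization every 90 days. Names and data are made up;
# this is not an OpenAI API.
REUSE_WINDOW = timedelta(days=90)

# Example local record: ID document -> date it last verified an organization
last_verification = {
    "passport-AB123456": date(2025, 1, 10),
}

def can_verify_new_org(id_number: str, today: date) -> bool:
    """Return True if this ID is outside the 90-day reuse window."""
    last_used = last_verification.get(id_number)
    if last_used is None:
        return True  # ID has not verified any organization yet
    return today - last_used >= REUSE_WINDOW

print(can_verify_new_org("passport-AB123456", date(2025, 3, 1)))   # False (50 days elapsed)
print(can_verify_new_org("passport-AB123456", date(2025, 4, 15)))  # True (95 days elapsed)
```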
OpenAI explains that the goal is to improve responsible access to powerful tools. “We are committed to ensuring that AI remains accessible while also being used responsibly,” the support page states. “While the vast majority of developers follow our policies, a small group has deliberately misused our APIs. Introducing identity checks helps us reduce unsafe applications of our technology.”
This move likely reflects OpenAI’s increasing focus on safeguarding its platform as its models become more capable. The company has previously published detailed reports on its efforts to curb malicious activity, including operations allegedly linked to North Korean actors.
Another reason for the verification system may be to deter intellectual property breaches. Earlier this year, Bloomberg reported that OpenAI was investigating potential unauthorized data extraction by a group associated with DeepSeek, an AI lab based in China. The alleged extraction occurred in late 2024, and the data may have been used to train competing models, in violation of OpenAI’s terms of service.
Following concerns about misuse, OpenAI had already restricted access to its tools from China in the summer of 2024.
Sources
(1) https://help.openai.com/en/articles/10910291-api-organization-verification
(2) https://www.theverge.com/news/601195/openai-evidence-deepseek-distillation-ai-data
(3) https://www.reuters.com/technology/artificial-intelligence/openai-cut-access-tools-developers-china-other-regions-chinese-state-media-says-2024-06-25/
(4) https://cdn.openai.com/threat-intelligence-reports/disrupting-malicious-uses-of-our-models-february-2025-update.pdf
