

South Korean researchers reportedly managed to jailbreak Google’s Gemini 3 Pro within just five minutes, raising serious concerns about the model’s safety. According to reports, the team successfully bypassed the AI chatbot’s protection mechanisms and forced it to generate harmful and dangerous outputs. The demonstration highlights multiple security weaknesses within the system, which could be misused if exploited by malicious actors.
The Maeil Business Newspaper stated that a South Korean startup, Aim Intelligence, performed the jailbreak. The company specializes in red-teaming—stress-testing AI models to uncover vulnerabilities in their safety systems. Jailbreaking involves using prompt-based, non-invasive techniques to make an AI perform actions outside its intended use.
The publication reported that Aim Intelligence not only jailbroke Gemini 3 Pro but also pushed it to produce instructions related to biological and chemical threats, revealing a critical safety gap. The researchers claimed these outputs were detailed and concerning.
Additionally, the team was able to make the model generate a website containing hazardous content and even create a presentation mocking its own security failures, titled “Excused Stupid Gemini 3.”
Experts explained that modern AI systems can sometimes attempt to bypass restrictions or use evasive strategies, making safety enforcement more challenging. It remains unclear whether these vulnerabilities were officially reported to Google or whether the company has already taken steps to address them.










