Securing AI-Generated Code Through Runtime Verification
AI code generators like Copilot and ChatGPT boost developer speed but raise trust risks, embedding flaws that static analysis tools often miss. Nestria AI’s Copilot-RV adds a runtime verification layer that monitors code execution in real time. Tested on 1,247 AI-generated programs, it achieved a 94.3% detection rate with 3.7% overhead, helping enterprises adopt AI coding tools securely. (A short illustrative sketch of the runtime-monitoring idea follows this entry.)

Nestria AI Research Team
Sep 19 · 2 min read
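The post above summarizes Copilot-RV at a high level; its internals are not described here. As a rough, hypothetical sketch of the general idea, a runtime layer that watches what AI-generated code actually does rather than only scanning its text, the Python snippet below uses sys.addaudithook to record risky operations while a generated snippet executes. The names RISKY_EVENTS, runtime_monitor, and run_generated are illustrative inventions, not Nestria APIs.

import sys

# Illustrative sketch only: a tiny runtime-verification layer that records
# audit events raised while a piece of AI-generated code executes.
RISKY_EVENTS = {"os.system", "subprocess.Popen", "socket.connect", "open"}
violations = []

def runtime_monitor(event, args):
    """Record any risky operation attempted by the code under observation."""
    if event in RISKY_EVENTS:
        violations.append((event, args))

sys.addaudithook(runtime_monitor)  # audit hooks persist for the process lifetime

def run_generated(code: str) -> list:
    """Run AI-generated code in a throwaway namespace and return observed violations."""
    violations.clear()
    exec(code, {"__name__": "__generated__"})
    return list(violations)

if __name__ == "__main__":
    findings = run_generated("import os\nos.system('echo hello from generated code')")
    for event, args in findings:
        print(f"runtime violation: {event} {args!r}")

A production system would sandbox execution and enforce policies rather than merely log them; this sketch only shows why observing behavior at runtime can catch issues that a static scan of the generated text misses.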


Case Study: Prompt Injection in LLM Chatbots - How a Jinja2 CVE Enables Reverse Shell Attacks
As enterprises adopt GenAI, LLM chatbots are becoming core user interfaces—but they introduce hidden security risks. Our research reveals how a known vulnerability in Jinja2, used with Flask, can be exploited through prompt injection to achieve remote code execution and reverse shell access. This highlights a broader class of application-layer vulnerabilities in GenAI stacks. Nestria helps enterprises detect and defend against these AI-native threats before they’re exploited. (An illustrative sketch of the vulnerable pattern follows this entry.)

Nestria AI Research Team
Aug 1 · 4 min read
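The case study's full exploit chain is not reproduced in this summary. As an illustrative sketch of the underlying bug class it describes, the hypothetical Flask handler below passes LLM output, which a prompt-injected user can partially control, into Jinja2's render_template_string, the classic server-side template injection (SSTI) pattern that can escalate to remote code execution. The route names and the call_llm stub are invented for illustration.

# Hypothetical, simplified illustration of the vulnerability class described above.
from flask import Flask, request, render_template_string

app = Flask(__name__)

def call_llm(user_message: str) -> str:
    # Stand-in for a real LLM call; prompt injection can steer the model
    # into echoing Jinja2 syntax such as {{ ... }} back in its reply.
    return f"Assistant: {user_message}"

@app.route("/chat", methods=["POST"])
def vulnerable_reply():
    reply = call_llm(request.form.get("message", ""))
    # BUG: untrusted text is rendered as a Jinja2 template. A payload like
    # {{ cycler.__init__.__globals__.os.popen('id').read() }} smuggled into
    # the reply runs on the server, the RCE foothold that a reverse shell
    # can then build on.
    return render_template_string(reply)

@app.route("/chat-safe", methods=["POST"])
def safer_reply():
    reply = call_llm(request.form.get("message", ""))
    # Safer: treat model output strictly as data, never as template source.
    return render_template_string("{{ reply }}", reply=reply)

The specific Jinja2 CVE the post analyzes is not named in this teaser, so the snippet shows only the generic Flask/Jinja2 SSTI pattern that prompt injection can reach.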


Nestria AI Joins CyberSG TIG Catalogue as a Key Innovator in AI Security

Prity Jha
Jul 3 · 1 min read


Agentic AI in Finance: A Dream for Traders, A Nightmare for CISOs
Agentic AI is transforming finance—from trading to compliance—but it's introducing new risks CISOs can't ignore. Shadow agents, model tampering, and tool misuse are creating unseen vulnerabilities in BFSI systems. Traditional security tools fall short in this dynamic, autonomous landscape. This blog explores the key threats and shows how Nestria provides real-time protection, auditability, and compliance controls to secure AI agents before they become a liability.

Nestria AI Research Team
Jul 1 · 2 min read


10 Ways Your AI Agents Can Be Hacked
AI agents are powerful—but dangerously exposed. As they reason, act, and collaborate, new threats emerge: spoofing, prompt injection, tool abuse, and more. Traditional security won’t stop them. Discover the top 10 ways your agents can be hacked—and how Nestria AI protects them at runtime, in memory, and across your entire AI stack.

Nestria AI Research Team
Jun 30 · 3 min read


Nestria AI joins NVIDIA Inception
Nestria AI has joined NVIDIA Inception to accelerate the development of next-gen agentic AI security. By leveraging NVIDIA’s GPU infrastructure and AI frameworks, we aim to fast-track real-time threat detection, secure multi-agent orchestration, and AI supply chain integrity. This partnership supports our mission to secure AI across high-performance and edge environments. Learn more: hello@nestria.ai | https://www.linkedin.com/company/nestria-ai/

Prity Jha
Jun 23 · 1 min read