AI Excessive Agency
September 16, 2025
AI Security, GenAI/LLM Powered Applications, Prompt Injection
Investigating how AI systems can exceed their intended boundaries, and how insecure agent design leads to exploitable vulnerabilities. In this post, we'll explore the concept of Excessive Agency through a real-world example.
cd ./ai-excessive-agency-1/
Evasion Attack on AI Classifier
April 18, 2025
AI Security, Machine Learning, Adversarial Attacks
Exploring how adversarial examples can be crafted to evade AI-based classification systems. In this post, we'll dive into the theory behind evasion attacks, set up our experimental environment, and implement the attack using the Adversarial Robustness Toolbox (ART).
cd ./evasion-attack-on-ai-classifier-1/