Security and safety research, technical deep-dives, and notes from the lab.
Copyright risk in AI training data can't be solved with policies alone. This article explains why enforceable, pre-training controls matter and how ARIMLABS approaches the problem.
ARIMLABS conducted a cross-jurisdictional analysis of AI governance frameworks, revealing why document-centric governance breaks down in autonomous and agentic AI systems.
ARIMLABS has created an extensive mapping of two major AI compliance frameworks, AICM and the NIST AI RMF, identifying where their controls align, where they overlap, and where gaps remain.
ARIMLABS uncovers key vulnerabilities in LLM-based browsing agents and proposes the first end-to-end threat model with practical defenses.
ARIMLABS used a class pollution technique to bypass LangChain's security measures and achieve remote code execution.