Let AI Agents Work Without Risking Your Machine
Secure Linux sandboxes for AI agent execution.
Full network, filesystem, and resource controls.
No Docker. No Kubernetes. No overhead.
The AI Agent Security Problem
AI agents are powerful. They write code, install packages, and make API calls. But where do they run?
Run Them Bare?
Letting an AI agent execute code directly on your machine means it has access to your files, your network, your credentials. One bad prompt and it's rm -rf / or worse.
Reach for Docker?
Docker was built for microservices, not AI agents. Using it means running a daemon, pulling images, managing volumes, and accepting ~50MB of overhead per container. No built-in network filtering. No MCP integration.
Orchestrate with K8s?
Kubernetes is a production platform, not a sandbox. You don't need a cluster, service meshes, or YAML manifests just to let an AI agent safely run a Python script.
Sandbox Security, Without the Baggage
Vitund-Sandbox gives your AI agents their own isolated Linux environment in milliseconds — with the same security guarantees as containers, but none of the infrastructure overhead.
Agents Can't Escape
Every sandbox is a locked-down Linux environment with its own process tree, filesystem, and network stack. Agents can't see or touch anything on the host.
You Control the Network
Agents only reach the domains you allow. Need access to the OpenAI API but nothing else? One flag. All traffic is logged so you see exactly what they're doing.
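Allow rules like '*.openai.com' are standard shell-style wildcard patterns. As a rough illustration only (this is not the product's proxy code, and the function name domain_allowed is ours), such an allowlist check fits in a few lines of Python:

```python
from fnmatch import fnmatch

def domain_allowed(hostname: str, allowlist: list[str]) -> bool:
    """Shell-style wildcard match of a hostname against allow rules.

    '*.openai.com' matches 'api.openai.com' but not 'openai.com'
    itself, and not lookalikes such as 'evil-openai.com'.
    """
    hostname = hostname.lower().rstrip(".")
    return any(fnmatch(hostname, rule.lower()) for rule in allowlist)

allow = ["*.openai.com"]
print(domain_allowed("api.openai.com", allow))   # True
print(domain_allowed("evil-openai.com", allow))  # False
```

Note the subdomain-only semantics of the wildcard: a rule for '*.openai.com' does not accidentally match typo-squatted domains that merely end in "openai.com".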
Resources Can't Run Away
Set hard limits on memory, CPU, and processes. If an agent tries to fork-bomb or eat all your RAM, the sandbox caps it instantly. Your host stays healthy.
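The sandbox enforces these caps with cgroups v2 (see Under the Hood). To make the idea concrete, here is a hedged sketch using plain POSIX rlimits instead, a weaker per-process analogue rather than the product's mechanism, that still keeps a runaway child from dragging the host down:

```python
import resource
import subprocess
import sys

def run_limited(cmd, memory_mb=512, max_procs=256):
    """Run cmd with hard caps applied in the child just before exec.

    RLIMIT_AS caps the address space, so oversized allocations fail
    with MemoryError instead of eating host RAM; RLIMIT_NPROC caps
    how many processes the user can have, blunting fork bombs.
    """
    def apply_limits():
        mem = memory_mb * 1024 * 1024
        resource.setrlimit(resource.RLIMIT_AS, (mem, mem))
        # Clamp to the existing hard limit so setrlimit cannot fail.
        _, hard = resource.getrlimit(resource.RLIMIT_NPROC)
        nproc = max_procs if hard == resource.RLIM_INFINITY else min(max_procs, hard)
        resource.setrlimit(resource.RLIMIT_NPROC, (nproc, nproc))

    return subprocess.run(cmd, preexec_fn=apply_limits)

# A 2 GiB allocation under a 512 MB cap fails fast inside the child
# (MemoryError, nonzero exit) instead of thrashing the host.
proc = run_limited([sys.executable, "-c", "bytearray(2 * 1024**3)"])
```

Unlike cgroups, rlimits are per-process rather than per-group and cannot be adjusted from outside once the child is running, which is exactly why a real sandbox reaches for cgroups v2 instead.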
Works With Your LLM
A built-in MCP server lets Claude, or any MCP-compatible client, create and manage sandboxes as tools. Your AI agents can self-provision their own secure environments.
Agent Work Persists
Unlike throwaway containers, sandbox files survive restarts. Agents pick up where they left off. Need a clean slate? Wipe on demand.
No Infrastructure Tax
No Docker daemon. No container images. No Kubernetes cluster. One install, one systemd service. Sandboxes spin up in ~5ms with ~2MB overhead.
~5ms
Sandbox startup
vs ~500ms for Docker
~2MB
Memory overhead
vs ~50MB for Docker
5
Isolation layers
Defense in depth
0
Runtime dependencies
No daemon, no images
Use It Your Way
Command line, LLM integration, or programmatic control. Pick the interface that fits your workflow.
Command Line
Create, manage, and monitor sandboxes from your terminal.
sudo sj-sandbox-ctl create my-agent \
--memory 512 --allow '*.openai.com'
sudo sj-sandbox-ctl exec my-agent -- \
python3 agent.py
MCP for LLMs
Let Claude or any MCP client create and use sandboxes as tools.
{
"mcpServers": {
"vitund-sandbox": {
"command": "sj-sandbox-mcp",
"args": ["--transport", "stdio"]
}
}
}
Python API
Programmatic control for building agent orchestration systems.
client = APIClient()
await client.connect()
await client.create_sandbox(
"agent-007", memory_mb=512,
allowed_domains=["api.openai.com"]
)
result = await client.exec(
"agent-007", ["python3", "run.py"]
)
Under the Hood
Five independent Linux primitives, layered for defense-in-depth. Each one is a real security boundary — not a configuration option.
cgroups v2
Resource limits
Namespaces
Process isolation
OverlayFS
Filesystem layers
Seccomp BPF
Syscall filtering
Net Proxy
Domain filtering
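Of these layers, seccomp is the easiest to demonstrate in isolation. Vitund-Sandbox installs BPF syscall filters; the sketch below instead uses the older, much blunter seccomp strict mode (read, write, _exit, sigreturn only) via a raw prctl(2) call, purely to illustrate the guarantee: once the filter is installed, a forbidden syscall does not fail with an error code, the kernel kills the process outright. Linux-only, and not how the product's filter is actually built.

```python
import ctypes
import os
import signal

PR_SET_SECCOMP = 22      # from <linux/prctl.h>
SECCOMP_MODE_STRICT = 1  # allow only read, write, _exit, sigreturn

def run_sealed():
    """Fork a child, seal it with seccomp strict mode, and report
    how it died when it attempted a forbidden syscall."""
    pid = os.fork()
    if pid == 0:
        libc = ctypes.CDLL(None, use_errno=True)
        if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
            os._exit(1)  # kernel without seccomp support
        # The openat(2) behind os.open is outside the allowed set:
        # the kernel delivers SIGKILL rather than returning an error.
        os.open("/dev/null", os.O_RDONLY)
        os._exit(0)  # never reached
    _, status = os.waitpid(pid, 0)
    return status

status = run_sealed()
killed = os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGKILL
```

A BPF filter generalizes this: instead of one fixed allow set, the sandbox loads a program that inspects each syscall number and arguments and decides per call whether to permit, deny with an errno, or kill.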