Changelog

Fixes

  • Set the default managed ingest endpoint to https://ingest.iron.sh/v1/logs so OTel logs are routed to the correct path (#74).
  • Bumped various dependencies.

View on GitHub →

New: LLM Judge Transform

A new judge transform calls an LLM to produce an allow/deny decision for requests that match its URL rules. Each entry under transforms: is an independent judge instance with its own natural-language policy and LLM backend. Operators can deploy zero, one, or many judges scoped to different rules.

Anthropic and OpenAI are supported in v1. The judge can only reject: it never approves a request the static allowlist would have denied, and static deny always wins. On LLM error, timeout, open circuit breaker, or malformed model output, the configured fallback applies (deny by default).

- name: judge
  config:
    name: "github-write-guard"
    fallback: "deny"
    timeout: "8s"
    max_concurrent: 100
    circuit_breaker:
      consecutive_failures: 5
      cooldown: "10s"
    rules:
      - host: "api.github.com"
        methods: ["POST", "PATCH", "DELETE", "PUT"]
    provider:
      type: "anthropic"
      model: "claude-haiku-4-5-20251001"
      api_key_env: "ANTHROPIC_API_KEY"
      max_tokens: 256
    prompt: |
      Natural-language policy describing what is allowed for requests that
      match the rules above.
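
Since both Anthropic and OpenAI are supported in v1, an OpenAI-backed judge would presumably swap only the provider block. This is a sketch: the model name is illustrative, and the field names are assumed to mirror the Anthropic schema shown above.

```yaml
provider:
  type: "openai"
  model: "gpt-4o-mini"          # illustrative model name
  api_key_env: "OPENAI_API_KEY" # assumes the same key-env convention as above
  max_tokens: 256
```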

Every matched request adds structured fields to the audit trace: judge.instance, judge.decision, judge.reason, judge.duration_ms, judge.input_tokens, and judge.output_tokens, plus judge.fallback_applied or judge.circuit_breaker_tripped when those conditions fire.
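
As an illustration (all values hypothetical), an allow decision from the example judge above might annotate the trace like:

```yaml
judge.instance: "github-write-guard"
judge.decision: "allow"
judge.reason: "Request creates an issue comment, permitted by policy"
judge.duration_ms: 412
judge.input_tokens: 837
judge.output_tokens: 24
```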

For more information, see the LLM Judge reference.

Thanks to Brex for their CrabTrap project, which inspired this design.

View on GitHub →

New: version Subcommand

The proxy now ships a version subcommand that prints the build version populated via -ldflags at build time. --version and -v are also accepted, following common CLI convention.

iron-proxy version

Fix: Transform Annotations as Nested OTel Values

Transform annotations were previously JSON-encoded into a string before being attached to OTel audit log records, which caused them to render as stringified JSON rather than structured data. Annotations are now emitted as a proper log.MapValue/log.SliceValue tree, matching the slog-based audit logger and the OTel log data model's native support for nested AnyValue.
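
To illustrate the difference (the attribute name and annotation shape are hypothetical), an annotation that previously surfaced as a single string attribute is now a nested value:

```yaml
# Before: one stringified JSON attribute
transform.annotations: '{"secrets":{"replaced":["Authorization"]}}'

# After: nested structured values in the log record
transform.annotations:
  secrets:
    replaced:
      - "Authorization"
```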

View on GitHub →

New: SNI-Only TLS Mode

A new tls.mode config option accepts "mitm" (default) or "sni-only". In sni-only mode the HTTPS listener peeks at the TLS ClientHello SNI and passes the TCP stream through to the upstream without terminating TLS, so clients don't need to trust a proxy CA.

The transform pipeline still runs with a host-only synthetic request: method, path, headers, and body are empty, so only host-based allowlist rules can match. Body-inspecting transforms such as secrets and grpc still run but have nothing to act on. The CONNECT/SOCKS5 tunnel's TLS branch also switches to passthrough in sni-only mode.
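
Concretely, under sni-only only a rule like the first sketch below can ever match; the second is assumed to never fire because the synthetic request carries no method or path:

```yaml
rules:
  - host: "api.github.com"          # host-only: matches in sni-only mode
  - host: "api.github.com"
    methods: ["POST"]               # method is always empty, so this can't match
```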

tls:
  mode: "sni-only"

View on GitHub →

New: AWS SSM Parameter Store Secret Source

Secrets can now be resolved directly from AWS Systems Manager Parameter Store. Use source.type: aws_ssm with a parameter name or ARN, and iron-proxy will fetch the value before applying the existing replace or inject secret transform behavior.

SSM sources support optional region, with_decryption, json_key, and ttl fields. with_decryption defaults to true, which is the expected setting for SecureString parameters. json_key extracts a field from JSON parameter values, and ttl enables periodic refresh without restarting the proxy.

transforms:
  - name: secrets
    config:
      secrets:
        - source:
            type: aws_ssm
            name: "/myapp/api-key"
            region: "us-east-1"
            with_decryption: true
            json_key: "api_key"
            ttl: "15m"
          replace:
            proxy_value: "proxy-token-789"
            match_headers: ["Authorization"]
          rules:
            - host: "api.example.com"