Rust Support

OpenTelemetry exporter for AWS Lambda that writes telemetry data to stdout in OTLP format.


Installation

Install the required dependencies using cargo add:

cargo add otlp-stdout-client
cargo add opentelemetry --features trace
cargo add opentelemetry-sdk --features trace,rt-tokio
cargo add opentelemetry-otlp --features http-proto,trace

This adds the latest compatible version of each crate to your Cargo.toml.
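The basic example below also uses the Lambda runtime crates, tokio, and serde_json; if they are not already in your project, they can be added the same way (the tokio features listed are what #[tokio::main] requires):

cargo add lambda_runtime
cargo add aws_lambda_events
cargo add serde_json
cargo add tokio --features macros,rt-multi-thread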

Basic Usage

The following example shows a simple AWS Lambda function that:

  • Handles API Gateway proxy requests
  • Creates and configures an OpenTelemetry tracer with StdoutClient to send OTLP data to stdout
  • Creates a span for each request
  • Returns a “Hello!” message

The key integration is using StdoutClient::default() with the OTLP exporter, which redirects all telemetry data to stdout instead of making HTTP calls.

use aws_lambda_events::event::apigw::ApiGatewayProxyRequest;
use lambda_runtime::{service_fn, Error, LambdaEvent};
use opentelemetry::trace::{Span, TraceError, Tracer};
use opentelemetry_otlp::{WithExportConfig, WithHttpConfig};
use otlp_stdout_client::StdoutClient;

async fn init_tracer() -> Result<opentelemetry_sdk::trace::TracerProvider, TraceError> {
    let exporter = opentelemetry_otlp::SpanExporter::builder()
        .with_http()
        .with_http_client(StdoutClient::default())
        .build()?;
    
    Ok(opentelemetry_sdk::trace::TracerProvider::builder()
        .with_simple_exporter(exporter)
        .build())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    let provider = init_tracer().await?;
    opentelemetry::global::set_tracer_provider(provider.clone());
    
    let handler = service_fn(|event: LambdaEvent<ApiGatewayProxyRequest>| async move {
        let tracer = opentelemetry::global::tracer("lambda-handler");
        // Start a span for this invocation; the simple exporter writes it
        // to stdout when `end()` is called.
        let mut span = tracer.start("process-request");

        // Your handler logic here
        let response = serde_json::json!({ "message": "Hello!" });

        span.end();
        Ok::<_, Error>(response)
    });

    lambda_runtime::run(handler).await?;
    provider.force_flush();
    Ok(())
}
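If you want the exported spans to carry request details, set attributes on the span before it ends. The helper below is a sketch; the function name and attribute keys are illustrative and not part of otlp-stdout-client:

use aws_lambda_events::event::apigw::ApiGatewayProxyRequest;
use opentelemetry::trace::Span;
use opentelemetry::KeyValue;

// Illustrative helper: copy a few request details onto the span so they
// show up as attributes in the OTLP records written to stdout.
fn annotate_span(span: &mut impl Span, request: &ApiGatewayProxyRequest) {
    if let Some(path) = &request.path {
        span.set_attribute(KeyValue::new("http.route", path.clone()));
    }
    span.set_attribute(KeyValue::new("faas.trigger", "http"));
}

In the handler above, calling annotate_span(&mut span, &event.payload) right after tracer.start attaches these attributes before the span is ended.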

Configuration

Configuration is handled through environment variables:

Variable                        Description                                           Default
OTEL_EXPORTER_OTLP_PROTOCOL     Protocol for OTLP data (http/protobuf or http/json)   http/protobuf
OTEL_EXPORTER_OTLP_COMPRESSION  Compression type (gzip or none)                        none
OTEL_SERVICE_NAME               Name of your service                                   Falls back to AWS_LAMBDA_FUNCTION_NAME or “unknown-service”
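For example, to switch to JSON output with gzip compression and an explicit service name, the function's environment could be set as follows (the values are illustrative):

OTEL_EXPORTER_OTLP_PROTOCOL=http/json
OTEL_EXPORTER_OTLP_COMPRESSION=gzip
OTEL_SERVICE_NAME=my-service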

Best Practices

  • Always call force_flush() on the trace provider before your Lambda function exits to ensure all telemetry is written to stdout
  • Consider enabling compression if you’re generating large amounts of telemetry data
  • Set OTEL_SERVICE_NAME to easily identify your service in the telemetry data

Troubleshooting

Common issues and solutions:

  1. No Data in Logs
    • Verify force_flush() is called before the function exits
    • Check that the StdoutClient is properly configured in the exporter
    • Ensure spans are being created and closed properly (see the in_span sketch after this list)
  2. JSON Parsing Errors
    • Verify the correct protocol is set in OTEL_EXPORTER_OTLP_PROTOCOL
    • Check for valid JSON in your attributes and events
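For the last point under “No Data in Logs”, one way to guarantee that spans are closed is to let the tracer manage the span lifetime with in_span, which ends the span when the closure returns. A minimal sketch using the same global tracer as the example above:

use opentelemetry::global;
use opentelemetry::trace::Tracer;

fn traced_work() {
    let tracer = global::tracer("lambda-handler");
    // in_span ends the span automatically when the closure returns,
    // so it cannot be left open by mistake.
    tracer.in_span("process-request", |_cx| {
        // handler logic here
    });
}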
