Exporters
Send telemetry to the OpenTelemetry Collector to make sure it’s exported correctly. Using the Collector in production environments is a best practice. To visualize your telemetry, export it to a backend such as Jaeger, Zipkin, Prometheus, or a vendor-specific backend.
Available exporters
The registry contains a list of exporters for Java.
Among exporters, OpenTelemetry Protocol (OTLP) exporters are designed with the OpenTelemetry data model in mind, emitting OTel data without any loss of information. Furthermore, many tools that operate on telemetry data support OTLP (such as Prometheus, Jaeger, and most vendors), providing you with a high degree of flexibility when you need it. To learn more about OTLP, see OTLP Specification.
This page covers the main OpenTelemetry Java exporters and how to set them up.
OTLP
Collector Setup
Note
If you have an OTLP collector or backend already set up, you can skip this section and set up the OTLP exporter dependencies for your application.
To try out and verify your OTLP exporters, you can run the collector in a docker container that writes telemetry directly to the console.
In an empty directory, create a file called collector-config.yaml with the following content:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
exporters:
  debug:
    verbosity: detailed
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
    logs:
      receivers: [otlp]
      exporters: [debug]
Now run the collector in a docker container:
docker run -p 4317:4317 -p 4318:4318 --rm -v $(pwd)/collector-config.yaml:/etc/otelcol/config.yaml otel/opentelemetry-collector
This collector is now able to accept telemetry via OTLP. Later you may want to configure the collector to send your telemetry to your observability backend.
Dependencies
If you want to send telemetry data to an OTLP endpoint (like the OpenTelemetry Collector, Jaeger, or Prometheus), there are multiple OTLP options available, each catering to different use cases. For most users, the default artifact suffices and is the simplest:
dependencies {
  implementation("io.opentelemetry:opentelemetry-exporter-otlp:1.41.0")
  implementation("io.opentelemetry:opentelemetry-sdk:1.41.0")
  implementation("io.opentelemetry.semconv:opentelemetry-semconv:1.26.0-alpha")
}
<project>
  <dependencies>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-exporter-otlp</artifactId>
      <version>1.41.0</version>
    </dependency>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-sdk</artifactId>
      <version>1.41.0</version>
    </dependency>
    <dependency>
      <groupId>io.opentelemetry.semconv</groupId>
      <artifactId>opentelemetry-semconv</artifactId>
      <version>1.26.0-alpha</version>
    </dependency>
  </dependencies>
</project>
Under the hood, there are two protocol options supported, each with different “sender” implementations:
- grpc - gRPC implementation of the OTLP exporters, represented by OtlpGrpcSpanExporter, OtlpGrpcMetricExporter, and OtlpGrpcLogRecordExporter.
- http/protobuf - HTTP with protobuf-encoded payload implementation of the OTLP exporters, represented by OtlpHttpSpanExporter, OtlpHttpMetricExporter, and OtlpHttpLogRecordExporter.
A sender is an abstraction which allows different gRPC / HTTP client implementations to fulfill the OTLP contract. Regardless of the sender implementation, the same exporter classes are used. A sender implementation is automatically used when it is detected on the classpath. The sender implementations are described in detail below:
- io.opentelemetry:opentelemetry-exporter-sender-okhttp - The default sender, included automatically with opentelemetry-exporter-otlp and bundled with the OpenTelemetry Java agent. It includes an OkHttp-based implementation for both the grpc and http/protobuf versions of the protocol and is suitable for most users. However, OkHttp has a transitive dependency on Kotlin, which is problematic in some environments.
- io.opentelemetry:opentelemetry-exporter-sender-jdk - This sender includes a JDK 11+ HttpClient-based implementation for the http/protobuf version of the protocol. It requires zero additional dependencies, but requires Java 11+. To use it, include the artifact and explicitly exclude the default io.opentelemetry:opentelemetry-exporter-sender-okhttp dependency (see the Gradle sketch after this list).
- io.opentelemetry:opentelemetry-exporter-sender-grpc-managed-channel - This sender includes a grpc-java based implementation for the grpc version of the protocol. To use it, include the artifact, explicitly exclude the default io.opentelemetry:opentelemetry-exporter-sender-okhttp dependency, and include one of the gRPC transport implementations.
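For example, a minimal Gradle (Kotlin DSL) sketch of swapping in the JDK sender might look like the following. This is an illustration, not the only valid setup; the version shown simply mirrors the dependency block above, so confirm the published version of the sender artifact for your SDK release:
dependencies {
  // Keep the OTLP exporter, but exclude the default OkHttp-based sender.
  implementation("io.opentelemetry:opentelemetry-exporter-otlp:1.41.0") {
    exclude(group = "io.opentelemetry", module = "opentelemetry-exporter-sender-okhttp")
  }
  // Use the JDK 11+ HttpClient-based sender instead (http/protobuf only).
  implementation("io.opentelemetry:opentelemetry-exporter-sender-jdk:1.41.0")
}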
Usage
Next, configure the exporter to point at an OTLP endpoint.
If you use SDK autoconfiguration, all you need to do is update your environment variables:
env OTEL_EXPORTER_OTLP_ENDPOINT=http://example:4317 java -jar ./build/libs/java-simple.jar
Note that in the case of exporting via OTLP you do not need to set OTEL_TRACES_EXPORTER, OTEL_METRICS_EXPORTER, and OTEL_LOGS_EXPORTER, since otlp is their default value.
In the case of manual configuration, you can update the example app as follows:
package otel;

import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.propagation.W3CTraceContextPropagator;
import io.opentelemetry.context.propagation.ContextPropagators;
import io.opentelemetry.exporter.otlp.logs.OtlpGrpcLogRecordExporter;
import io.opentelemetry.exporter.otlp.metrics.OtlpGrpcMetricExporter;
import io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.logs.SdkLoggerProvider;
import io.opentelemetry.sdk.logs.export.BatchLogRecordProcessor;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;
import io.opentelemetry.semconv.ServiceAttributes;
import org.springframework.boot.Banner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class DiceApplication {
  public static void main(String[] args) {
    SpringApplication app = new SpringApplication(DiceApplication.class);
    app.setBannerMode(Banner.Mode.OFF);
    app.run(args);
  }

  @Bean
  public OpenTelemetry openTelemetry() {
    Resource resource =
        Resource.getDefault().toBuilder()
            .put(ServiceAttributes.SERVICE_NAME, "dice-server")
            .put(ServiceAttributes.SERVICE_VERSION, "0.1.0")
            .build();

    SdkTracerProvider sdkTracerProvider =
        SdkTracerProvider.builder()
            .addSpanProcessor(
                BatchSpanProcessor.builder(OtlpGrpcSpanExporter.builder().build()).build())
            .setResource(resource)
            .build();

    SdkMeterProvider sdkMeterProvider =
        SdkMeterProvider.builder()
            .registerMetricReader(
                PeriodicMetricReader.builder(OtlpGrpcMetricExporter.builder().build()).build())
            .setResource(resource)
            .build();

    SdkLoggerProvider sdkLoggerProvider =
        SdkLoggerProvider.builder()
            .addLogRecordProcessor(
                BatchLogRecordProcessor.builder(OtlpGrpcLogRecordExporter.builder().build())
                    .build())
            .setResource(resource)
            .build();

    OpenTelemetry openTelemetry =
        OpenTelemetrySdk.builder()
            .setTracerProvider(sdkTracerProvider)
            .setMeterProvider(sdkMeterProvider)
            .setLoggerProvider(sdkLoggerProvider)
            .setPropagators(ContextPropagators.create(W3CTraceContextPropagator.getInstance()))
            .buildAndRegisterGlobal();

    return openTelemetry;
  }
}
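By default, the OTLP gRPC exporters send to http://localhost:4317. If your collector or backend runs elsewhere, each exporter builder accepts an endpoint. A minimal sketch, assuming a reachable collector host (the class name and the otel-collector host below are placeholders, not part of the SDK):
package otel;

import io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter;
import java.time.Duration;

public class OtlpEndpointConfig {
  public static OtlpGrpcSpanExporter create() {
    // "otel-collector" is a placeholder; point this at your collector or backend.
    return OtlpGrpcSpanExporter.builder()
        .setEndpoint("http://otel-collector:4317")
        .setTimeout(Duration.ofSeconds(10))
        .build();
  }
}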
Console
To debug your instrumentation or see the values locally in development, you can use exporters that write telemetry data to the console (stdout).
If you followed the Getting Started or Manual Instrumentation guides, you already have the console exporter installed.
The LoggingSpanExporter, the LoggingMetricExporter, and the SystemOutLogRecordExporter are included in the opentelemetry-exporter-logging artifact.
If you use SDK autoconfiguration, all you need to do is update your environment variables:
env OTEL_TRACES_EXPORTER=logging OTEL_METRICS_EXPORTER=logging OTEL_LOGS_EXPORTER=logging java -jar ./build/libs/java-simple.jar
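If you configure the SDK manually instead of using autoconfiguration, the console exporters can be wired in the same way as the OTLP exporters above. A rough sketch (the ConsoleExporter class name is ours, chosen for illustration):
package otel;

import io.opentelemetry.exporter.logging.LoggingMetricExporter;
import io.opentelemetry.exporter.logging.LoggingSpanExporter;
import io.opentelemetry.exporter.logging.SystemOutLogRecordExporter;
import io.opentelemetry.sdk.logs.SdkLoggerProvider;
import io.opentelemetry.sdk.logs.export.SimpleLogRecordProcessor;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.SimpleSpanProcessor;

public class ConsoleExporter {
  public static void create(Resource resource) {
    // Spans are printed to the console as they finish.
    SdkTracerProvider tracerProvider =
        SdkTracerProvider.builder()
            .addSpanProcessor(SimpleSpanProcessor.create(LoggingSpanExporter.create()))
            .setResource(resource)
            .build();
    // Metrics are collected periodically and printed to the console.
    SdkMeterProvider meterProvider =
        SdkMeterProvider.builder()
            .registerMetricReader(
                PeriodicMetricReader.builder(LoggingMetricExporter.create()).build())
            .setResource(resource)
            .build();
    // Log records are printed to stdout.
    SdkLoggerProvider loggerProvider =
        SdkLoggerProvider.builder()
            .addLogRecordProcessor(
                SimpleLogRecordProcessor.create(SystemOutLogRecordExporter.create()))
            .setResource(resource)
            .build();
  }
}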
Jaeger
Backend Setup
Jaeger natively supports OTLP to receive trace data. You can run Jaeger in a docker container with the UI accessible on port 16686 and OTLP enabled on ports 4317 and 4318:
docker run --rm \
  -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  -p 9411:9411 \
  jaegertracing/all-in-one:latest
Usage
Now follow the instructions above to set up the OTLP exporters.
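For example, with SDK autoconfiguration you can point the OTLP exporter at the Jaeger instance started above; the jar path here simply mirrors the earlier example and may differ for your application:
env OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 java -jar ./build/libs/java-simple.jar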
Prometheus
To send your metric data to Prometheus, you can either enable Prometheus’ OTLP Receiver and use the OTLP exporter, or you can use the Prometheus exporter, a MetricReader that starts an HTTP server that collects metrics and serializes them to Prometheus text format on request.
Backend Setup
Note
If you have Prometheus or a Prometheus-compatible backend already set up, you can skip this section and set up the Prometheus or OTLP exporter dependencies for your application.
You can run Prometheus in a docker container, accessible on port 9090, by following these instructions:
Create a file called prometheus.yml with the following content:
scrape_configs:
  - job_name: dice-service
    scrape_interval: 5s
    static_configs:
      - targets: [host.docker.internal:9464]
Run Prometheus in a docker container with the UI accessible on port 9090:
docker run --rm -v ${PWD}/prometheus.yml:/prometheus/prometheus.yml -p 9090:9090 prom/prometheus --enable-feature=otlp-write-receiver
Note
When using Prometheus’ OTLP Receiver, make sure that you set the OTLP endpoint for metrics in your application to http://localhost:9090/api/v1/otlp.
Not all docker environments support host.docker.internal. In some cases you may need to replace host.docker.internal with localhost or the IP address of your machine.
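If you configure the SDK manually and want to use Prometheus’ OTLP Receiver, the endpoint above can be set on an http/protobuf metric exporter. A minimal sketch (the class name is illustrative; note that this builder expects the full path, including /v1/metrics):
package otel;

import io.opentelemetry.exporter.otlp.http.metrics.OtlpHttpMetricExporter;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;
import io.opentelemetry.sdk.resources.Resource;

public class PrometheusOtlpExporter {
  public static SdkMeterProvider create(Resource resource) {
    // Prometheus' OTLP Receiver listens under /api/v1/otlp; metrics are posted to .../v1/metrics.
    OtlpHttpMetricExporter exporter =
        OtlpHttpMetricExporter.builder()
            .setEndpoint("http://localhost:9090/api/v1/otlp/v1/metrics")
            .build();
    return SdkMeterProvider.builder()
        .registerMetricReader(PeriodicMetricReader.builder(exporter).build())
        .setResource(resource)
        .build();
  }
}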
Dependencies
Install the opentelemetry-exporter-prometheus artifact as a dependency for your application:
dependencies {
  implementation 'io.opentelemetry:opentelemetry-exporter-prometheus:1.41.0-alpha'
}
<project>
  <dependencies>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-exporter-prometheus</artifactId>
      <version>1.41.0-alpha</version>
    </dependency>
  </dependencies>
</project>
Update your OpenTelemetry configuration to use the exporter and to send data to your Prometheus backend:
package otel;

import io.opentelemetry.exporter.prometheus.PrometheusHttpServer;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.resources.Resource;

public class PrometheusExporter {
  public static SdkMeterProvider create(Resource resource) {
    int prometheusPort = 9464;
    SdkMeterProvider sdkMeterProvider =
        SdkMeterProvider.builder()
            .registerMetricReader(PrometheusHttpServer.builder().setPort(prometheusPort).build())
            .setResource(resource)
            .build();
    return sdkMeterProvider;
  }
}
With the above you can access your metrics at http://localhost:9464/metrics. Prometheus or an OpenTelemetry Collector with the Prometheus receiver can scrape the metrics from this endpoint.
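As a usage sketch, the meter provider returned by PrometheusExporter.create can be registered on the SDK just like the OTLP-based one shown earlier (the PrometheusUsage class name is illustrative):
package otel;

import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.resources.Resource;

public class PrometheusUsage {
  public static OpenTelemetrySdk create(Resource resource) {
    // Once built, metrics are scrapeable at http://localhost:9464/metrics.
    return OpenTelemetrySdk.builder()
        .setMeterProvider(PrometheusExporter.create(resource))
        .build();
  }
}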
Zipkin
Backend Setup
Note
If you have Zipkin or a Zipkin-compatible backend already set up, you can skip this section and set up the Zipkin exporter dependencies for your application.
You can run Zipkin in a Docker container by executing the following command:
docker run --rm -d -p 9411:9411 --name zipkin openzipkin/zipkin
Dependencies
To send your trace data to Zipkin, you can use the ZipkinSpanExporter.
Install the opentelemetry-exporter-zipkin artifact as a dependency for your application:
dependencies {
  implementation 'io.opentelemetry:opentelemetry-exporter-zipkin:1.41.0-alpha'
}
<project>
  <dependencies>
    <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-exporter-zipkin</artifactId>
      <version>1.41.0-alpha</version>
    </dependency>
  </dependencies>
</project>
Update your OpenTelemetry configuration to use the exporter and to send data to your Zipkin backend:
package otel;

import io.opentelemetry.exporter.zipkin.ZipkinSpanExporter;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;

public class ZipkinExporter {
  public static SdkTracerProvider create(Resource resource) {
    SdkTracerProvider sdkTracerProvider =
        SdkTracerProvider.builder()
            .addSpanProcessor(
                BatchSpanProcessor.builder(
                        ZipkinSpanExporter.builder()
                            .setEndpoint("http://localhost:9411/api/v2/spans")
                            .build())
                    .build())
            .setResource(resource)
            .build();
    return sdkTracerProvider;
  }
}
Custom exporters
Finally, you can also write your own exporter. For more information, see the SpanExporter Interface in the API documentation.
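A minimal sketch of what implementing that interface can look like; the ConsoleLineSpanExporter class below is purely illustrative and not part of the SDK:
package otel;

import io.opentelemetry.sdk.common.CompletableResultCode;
import io.opentelemetry.sdk.trace.data.SpanData;
import io.opentelemetry.sdk.trace.export.SpanExporter;
import java.util.Collection;

public class ConsoleLineSpanExporter implements SpanExporter {
  @Override
  public CompletableResultCode export(Collection<SpanData> spans) {
    // Write one line per finished span; a real exporter would send these to a backend.
    for (SpanData span : spans) {
      System.out.println(span.getSpanContext().getTraceId() + " " + span.getName());
    }
    return CompletableResultCode.ofSuccess();
  }

  @Override
  public CompletableResultCode flush() {
    // Nothing is buffered in this sketch, so there is nothing to flush.
    return CompletableResultCode.ofSuccess();
  }

  @Override
  public CompletableResultCode shutdown() {
    return CompletableResultCode.ofSuccess();
  }
}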
Batching span and log records
The OpenTelemetry SDK provides a set of default span and log record processors that allow you to either emit spans one-by-one (“simple”) or batched. Using batching is recommended, but if you do not want to batch your spans or log records, you can use a simple processor instead, as shown in the second example below:
package otel;

import io.opentelemetry.sdk.logs.SdkLoggerProvider;
import io.opentelemetry.sdk.logs.export.BatchLogRecordProcessor;
import io.opentelemetry.sdk.logs.export.LogRecordExporter;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;
import io.opentelemetry.sdk.trace.export.SpanExporter;

public class BatchExporter {
  public static void create(
      Resource resource, SpanExporter spanExporter, LogRecordExporter logExporter) {
    SdkTracerProvider sdkTracerProvider =
        SdkTracerProvider.builder()
            .addSpanProcessor(BatchSpanProcessor.builder(spanExporter).build())
            .setResource(resource)
            .build();
    SdkLoggerProvider sdkLoggerProvider =
        SdkLoggerProvider.builder()
            .addLogRecordProcessor(BatchLogRecordProcessor.builder(logExporter).build())
            .setResource(resource)
            .build();
  }
}
And here is the same configuration using simple processors:
package otel;

import io.opentelemetry.sdk.logs.SdkLoggerProvider;
import io.opentelemetry.sdk.logs.export.LogRecordExporter;
import io.opentelemetry.sdk.logs.export.SimpleLogRecordProcessor;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.SimpleSpanProcessor;
import io.opentelemetry.sdk.trace.export.SpanExporter;

public class SimpleExporter {
  public static void create(
      Resource resource, SpanExporter spanExporter, LogRecordExporter logExporter) {
    SdkTracerProvider sdkTracerProvider =
        SdkTracerProvider.builder()
            .addSpanProcessor(SimpleSpanProcessor.builder(spanExporter).build())
            .setResource(resource)
            .build();
    SdkLoggerProvider sdkLoggerProvider =
        SdkLoggerProvider.builder()
            .addLogRecordProcessor(SimpleLogRecordProcessor.create(logExporter))
            .setResource(resource)
            .build();
  }
}