Add signal-specific configuration for topic and encoding.
The default topics are already signal-specific; it just
hasn't been possible to explicitly configure a different
topic for each signal. Thus if you set `topic: foo`, it
would be used for all signals, which will never work with
the receiver, since a topic may carry only one telemetry type.
Similarly, while the default encoding is the same for all
signals (`otlp_proto`), some encodings are available
only for certain signals: `azure_resource_logs`, for example,
is only available for logs. This meant you could
not use the same receiver for multiple signals unless they
all used the same encoding.
To address both of these issues we introduce signal-specific
configuration: `logs::topic`, `metrics::topic`, `traces::topic`,
`logs::encoding`, `metrics::encoding`, and `traces::encoding`.
The existing `topic` and `encoding` settings have been
deprecated. If the new fields are set, they take precedence;
otherwise, if the deprecated fields are set, they are used.
The defaults have not changed.
Fixes open-telemetry#32735
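The new fields described above can be sketched as follows (the field names and defaults come from this change; the topic names are illustrative):

```yaml
receivers:
  kafka:
    logs:
      topic: my_logs_topic            # overrides the default otlp_logs
      encoding: azure_resource_logs   # a logs-only encoding
    metrics:
      topic: my_metrics_topic         # default would be otlp_metrics
    traces:
      encoding: otlp_json             # topic stays at the default otlp_spans
```

Each signal now gets its own topic and encoding, so one receiver can serve logs, metrics, and traces even when their encodings differ.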
 - `resolve_canonical_bootstrap_servers_only` (default = false): Whether to resolve then reverse-lookup broker IPs during startup
-- `topic` (default = otlp_spans for traces, otlp_metrics for metrics, otlp_logs for logs): The name of the kafka topic to read from.
-  Only one telemetry type may be used for a given topic.
-- `encoding` (default = otlp_proto): The encoding of the payload received from kafka. Supports encoding extensions. Tries to load an encoding extension and falls back to internal encodings if no extension was loaded. Available internal encodings:
-  - `otlp_proto`: the payload is deserialized to `ExportTraceServiceRequest`, `ExportLogsServiceRequest` or `ExportMetricsServiceRequest` respectively.
-  - `otlp_json`: the payload is deserialized to `ExportTraceServiceRequest`, `ExportLogsServiceRequest` or `ExportMetricsServiceRequest` respectively using JSON encoding.
-  - `jaeger_proto`: the payload is deserialized to a single Jaeger proto `Span`.
-  - `jaeger_json`: the payload is deserialized to a single Jaeger JSON Span using `jsonpb`.
-  - `zipkin_proto`: the payload is deserialized into a list of Zipkin proto spans.
-  - `zipkin_json`: the payload is deserialized into a list of Zipkin V2 JSON spans.
-  - `zipkin_thrift`: the payload is deserialized into a list of Zipkin Thrift spans.
-  - `raw`: (logs only) the payload's bytes are inserted as the body of a log record.
-  - `text`: (logs only) the payload is decoded as text and inserted as the body of a log record. By default, it uses UTF-8 to decode. You can use `text_<ENCODING>`, like `text_utf-8`, `text_shift_jis`, etc., to customize this behavior.
-  - `json`: (logs only) the payload is decoded as JSON and inserted as the body of a log record.
-  - `azure_resource_logs`: (logs only) the payload is converted from Azure Resource Logs format to OTel format.
+- `logs`
+  - `topic` (default = otlp_logs): The name of the Kafka topic from which to consume logs.
+  - `encoding` (default = otlp_proto): The encoding for the Kafka topic. See below for supported encodings.
+- `metrics`
+  - `topic` (default = otlp_metrics): The name of the Kafka topic from which to consume metrics.
+  - `encoding` (default = otlp_proto): The encoding for the Kafka topic. See below for supported encodings.
+- `traces`
+  - `topic` (default = otlp_spans): The name of the Kafka topic from which to consume traces.
+  - `encoding` (default = otlp_proto): The encoding for the Kafka topic. See below for supported encodings.
+- `topic` (Deprecated [v0.123.0]: use `logs::topic`, `traces::topic`, or `metrics::topic`).
+  If this is set, it will take precedence over the default value for those fields.
+- `encoding` (Deprecated [v0.123.0]: use `logs::encoding`, `traces::encoding`, or `metrics::encoding`).
+  If this is set, it will take precedence over the default value for those fields.
 - `group_id` (default = otel-collector): The consumer group that receiver will be consuming messages from
 - `client_id` (default = otel-collector): The consumer client ID that receiver will use
 - `initial_offset` (default = latest): The initial offset to use if no offset was previously committed. Must be `latest` or `earliest`.
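The precedence between the deprecated and the new per-signal fields can be sketched like this (the topic names are illustrative; the behaviour follows the deprecation notes above):

```yaml
receivers:
  kafka:
    topic: legacy_topic       # deprecated [v0.123.0]; used for any signal
                              # without a per-signal topic
    logs:
      topic: my_logs_topic    # set, so it takes precedence over `topic` for logs
    # metrics and traces fall back to `topic`, i.e. legacy_topic
```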
@@ -104,14 +101,40 @@ The following settings can be optionally configured:
 - `randomization_factor`: A random factor used to calculate next backoff. Randomized interval = RetryInterval * (1 ± RandomizationFactor)
 - `max_elapsed_time`: The maximum amount of time trying to backoff before giving up. If set to 0, the retries are never stopped.
 
-Example:
+### Supported encodings
+
+The Kafka receiver supports encoding extensions, as well as the following built-in encodings:
+
+- `otlp_proto`: the payload is deserialized to `ExportTraceServiceRequest`, `ExportLogsServiceRequest` or `ExportMetricsServiceRequest` respectively.
+- `otlp_json`: the payload is deserialized to `ExportTraceServiceRequest`, `ExportLogsServiceRequest` or `ExportMetricsServiceRequest` respectively using JSON encoding.
+- `jaeger_proto`: the payload is deserialized to a single Jaeger proto `Span`.
+- `jaeger_json`: the payload is deserialized to a single Jaeger JSON Span using `jsonpb`.
+- `zipkin_proto`: the payload is deserialized into a list of Zipkin proto spans.
+- `zipkin_json`: the payload is deserialized into a list of Zipkin V2 JSON spans.
+- `zipkin_thrift`: the payload is deserialized into a list of Zipkin Thrift spans.
+- `raw`: (logs only) the payload's bytes are inserted as the body of a log record.
+- `text`: (logs only) the payload is decoded as text and inserted as the body of a log record. By default, it uses UTF-8 to decode. You can use `text_<ENCODING>`, like `text_utf-8`, `text_shift_jis`, etc., to customize this behavior.
+- `json`: (logs only) the payload is decoded as JSON and inserted as the body of a log record.
+- `azure_resource_logs`: (logs only) the payload is converted from Azure Resource Logs format to OTel format.
+
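Because encodings can now differ per signal, a single receiver can mix them, e.g. plain-text logs alongside OTLP traces (a sketch; only the keys and encoding names come from the documentation above):

```yaml
receivers:
  kafka:
    logs:
      encoding: text_utf-8   # logs-only: the payload becomes the log record body
    traces:
      encoding: otlp_proto   # the default
```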
+### Example configurations
+
+#### Minimal configuration
+
+By default, the receiver does not require any configuration. With the following configuration,
+the receiver will consume messages from the default topics from localhost:9092 using the
+`otlp_proto` encoding:
 
 ```yaml
 receivers:
   kafka:
-    protocol_version: 2.0.0
 ```
-Example of connecting to kafka using sasl and TLS:
+
+#### TLS and authentication
+
+In this example the receiver is configured to connect to Kafka using TLS for encryption,
+and SASL/SCRAM for authentication:
 
 ```yaml
 receivers:
@@ -124,39 +147,30 @@ receivers:
     tls:
       insecure: false
 ```
-Example of header extraction:
+
+#### Header extraction
+
+By default the receiver will ignore Kafka message headers. It is possible to extract
+specific headers and attach them as resource attributes to decoded data.
 
 ```yaml
 receivers:
   kafka:
-    topic: test
     header_extraction:
       extract_headers: true
       headers: ["header1", "header2"]
 ```
 
-If we feed following kafka record to `test` topic and use above configs:
-```yaml
-{
-  event: Hello,
-  headers: {
-    header1: value1,
-    header2: value2,
-  }
-}
+If we produce a Kafka message with headers "header1: value1" and "header2: value2"
+with the above configuration, the receiver will attach these headers as resource
+attributes with the prefix "kafka.header.", i.e.
+
 ```
-we will get a log record in collector similar to:
-```yaml
-{
-  ...
-  body: Hello,
-  resource: {
-    kafka.header.header1: value1,
-    kafka.header.header2: value2,
-  },
-  ...
+"resource": {
+  "attributes": {
+    "kafka.header.header1": "value1",
+    "kafka.header.header2": "value2",
+  }
 }
+...
 ```
-
-- Here you can see the kafka record header `header1` and `header2` being added to resource attribute.
-- Every **matching** kafka header key is prefixed with `kafka.header` string and attached to resource attributes.