A recurring Fluent Bit failure mode in Kubernetes: a tail input feeding an Elasticsearch output starts logging warnings like the following, and the same chunks are retried over and over.

```
[2022/03/25 07:08:23] [ warn] [engine] failed to flush chunk '1-1648192103.858183.flb', retry in 7 seconds: task_id=5, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:31] [ warn] [engine] failed to flush chunk '1-1648192110.850147571.flb', retry in 9 seconds: task_id=9, input=tail.0 > output=es.0 (out_id=0)
```

The maintainers' first response to such reports is always the same: "Can you please enable debug log level and share the log?" At debug level the engine logs the whole chunk lifecycle, for example `[task] created task=0x7f7671e38680 id=1 OK`, `[outputes.0] task_id=4 assigned to thread #1`, and `[retry] new retry created for task_id=3 attempts=1`, which is what makes the failure traceable.
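Debug verbosity is set in the service section; a minimal sketch (your deployment may set this through the Helm chart's values.yaml instead):

```
[SERVICE]
    # Default is "info"; "debug" emits the task/retry/chunk
    # lifecycle messages quoted throughout this thread.
    Log_Level    debug
```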
With debug logging on, the chunk lifecycle itself looks healthy: tasks are created and assigned to output threads, and keep-alive connections to 10.3.4.84:9200 are recycled and released normally (`[upstream] KA connection #102 to 10.3.4.84:9200 has been assigned (recycled)`). The real failure is buried in the Elasticsearch response:

```
Existing mapping for [kubernetes.labels.app] must be of type object but found [text].
```

Two caveats before digging in:

- When there are lots of messages in the request/chunk and the rejected message is at the end of the list, you may never see the cause in the fluent-bit logs at all, because the response gets cut off before the failing item.
- If you see network-related messages instead (for example "TLS error: unexpected EOF", fluent/fluent-bit#6165), this may be an issue that was already fixed in 1.8.15, so upgrade before debugging further.
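To surface the per-item Elasticsearch errors directly, recent releases of the es output offer a Trace_Error option; a sketch, with host and port taken from the logs above:

```
[OUTPUT]
    Name         es
    Match        kube.*
    Host         10.3.4.84
    Port         9200
    # Print the Elasticsearch API calls to stdout when Elasticsearch
    # returns an error, so the mapper_parsing_exception is visible.
    Trace_Error  On
```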
The wording of the symptom is not unique to Fluent Bit: fluentd reports "failed to flush the buffer", OpenShift alerts when `FluentD` or `Collector` pods show "failed to flush", and Loki, Promtail, and Graylog deployments (via Helm charts) produce similar-sounding messages, so first confirm which component is actually failing. In the Fluent Bit plus Elasticsearch case, the debug log contains the bulk-response item that names the root cause:

```
{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"MuMmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}
```

Every rejected record fails identically; only the `_id` differs. For transport-level problems you can additionally capture the traffic with tcpdump: `sudo tcpdump -i eth0 tcp port 24224 -X -s 0 -nn`.
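The conflict comes from Elasticsearch expanding dotted field names into nested objects. Two illustrative documents (values are hypothetical) show why one index cannot hold both shapes:

```
{"kubernetes": {"labels": {"app": "hello-world"}}}

{"kubernetes": {"labels": {"app": {"kubernetes": {"io/instance": "hello-world"}}}}}
```

The first document maps `kubernetes.labels.app` as text. The second is what `app.kubernetes.io/instance` expands to, which requires `kubernetes.labels.app` to be an object. Once the text mapping wins, every record carrying the dotted label is rejected with status 400.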
Note that `HTTP Status=200 URI=/_bulk` in the debug output does not mean the data was accepted: the bulk API returns 200 even when individual items inside the request are rejected with status 400. Likewise, the file-rotation churn in the log (inode events, `removing file name /var/log/containers/hello-world-...log`, keep-alive connections reassigned) is normal operation, so this is a mapping problem rather than the "spamming errors in the log about failed connections" class of issue. Once the output configuration is fixed, redeploy with the updated values.yaml file.
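If Fluent Bit was installed with Helm, applying the corrected values is a one-liner; the release name, repo, and namespace below are placeholders, not taken from the thread:

```
helm upgrade fluent-bit fluent/fluent-bit \
  --namespace logging \
  -f values.yaml    # contains the corrected [OUTPUT] settings
```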
Because the configuration sets `Retry_Limit False`, rejected chunks are retried forever: the debug log shows `[retry] new retry created for task_id=10 attempts=1`, then `[retry] re-using retry for task_id=0 attempts=2`, attempts=3, and so on, and fluentd deployments in the same state accumulate counters like `retry_time=5929`. The same pattern has been reported for log ingestion from ECS Fargate to Elastic Cloud, so it is not specific to Kubernetes. Nothing is dropped yet, but nothing is delivered either, and the retry queue keeps growing.
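The retry policy is set per output; a sketch of the trade-off:

```
[OUTPUT]
    Name         es
    Match        kube.*
    # "False" means no limit: retry forever. A finite value (e.g. 5)
    # drops the chunk after that many failed attempts instead of
    # queueing it indefinitely.
    Retry_Limit  False
```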
"}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","id":"-Mmun8BI6SaBP9l_8nZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. [2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/svclb-traefik-twmt7_kube-system_lb-port-443-ab3854479885ed2d0db7202276fdb1d2142db002b93c0c88d3d9383fc2d8068b.log, inode 34105877 Existing mapping for [kubernetes.labels.app] must be of type object but found [text]. Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104048677 file has been deleted: /var/log/containers/hello-world-hxn5d_argo_main-ce2dea5b2661227ee3931c554317a97e7b958b46d79031f1c48b840cd10b3d78.log [2022/03/24 04:19:34] [debug] [upstream] KA connection #103 to 10.3.4.84:9200 has been assigned (recycled) [2022/03/24 04:20:04] [debug] [outputes.0] task_id=1 assigned to thread #0 [2022/03/24 04:21:20] [debug] [input:tail:tail.0] inode=1772851 with offset=0 appended as /var/log/containers/hello-world-89knq_argo_wait-a7f77229883282b7aebce253b8c371dd28e0606575ded307669b43b272d9a2f4.log Fri, Mar 25 2022 3:08:22 pm | [2022/03/25 07:08:22] [debug] [out coro] cb_destroy coro_id=2 Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=3070975 removing file name /var/log/containers/hello-world-hxn5d_argo_wait-be32f13608de76af5bd4616dc826eebc306fb25eeb340049de8d3b8e5d40ba4b.log Fri, Mar 25 2022 3:08:41 pm | [2022/03/25 07:08:41] [debug] [input chunk] update output instances with new chunk size diff=657 [engine] failed to flush chunk '1-1612396545.855856569.flb', retry in 1485 seconds: task_id=143, input=forward.0 > output=tcp.0 but sometimes this is the last thing I see of the chunk. Fri, Mar 25 2022 3:08:48 pm | [2022/03/25 07:08:48] [ warn] [engine] failed to flush chunk '1-1648192103.858183.flb', retry in 30 seconds: task_id=5, input=tail.0 > output=es.0 (out_id=0) Fri, Mar 25 2022 3:08:38 pm | [2022/03/25 07:08:38] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled) Fri, Mar 25 2022 3:08:29 pm | [2022/03/25 07:08:29] [debug] [retry] new retry created for task_id=7 attempts=1 Fri, Mar 25 2022 3:08:31 pm | [2022/03/25 07:08:31] [debug] [outputes.0] HTTP Status=200 URI=/_bulk [2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/svclb-traefik-twmt7_kube-system_lb-port-80-10ce439b02864f9075c8e41c716e394a6a6cda391ae753798cde988271ff35ef.log, inode 67186751 Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104226845 file has been deleted: /var/log/containers/hello-world-dsfcz_argo_wait-3a9bd9a90cc08322e96d0b7bcc9b6aeffd7e5e6a71754073ca1092db862fcfb7.log Existing mapping for [kubernetes.labels.app] must be of type object but found [text]. The output plugins group events into chunks. outputs: | "}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"0-Mmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. [2022/03/24 04:20:36] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
In the worst case ("failed to flush chunk", fluent/fluent-bit#3220) Fluent Bit stops sending data to the output entirely while `[retry] re-using retry for task_id=2 attempts=4` keeps climbing. Memory pressure makes this worse: one reporter found, after reallocating resources, a long trail of "failed to flush the buffer" errors logged by two OOMKilled fluentd pods before they died, since unflushed chunks sit in memory and grow the footprint.
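That buffering pressure is also why the debug log contains `[input chunk] tail.0 is paused, cannot append records`: the tail input reached its buffer limit and paused ingestion rather than allocate more. A bounded-memory sketch (the 50MB figure is an assumption, not from the reports):

```
[INPUT]
    Name           tail
    Path           /var/log/containers/*.log
    Tag            kube.*
    # When buffered chunks reach this size the input pauses
    # ("tail.0 is paused, cannot append records") instead of OOMing.
    Mem_Buf_Limit  50MB
```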
A related report, "Fluentbit is stuck, flb chunks are not flushed and not sending", shows the same signature: retries pushed ever further out (`failed to flush chunk '1-1648192119.62045721.flb', retry in 11 seconds: task_id=13`) while the keep-alive connections to 9200 stay available and `/_bulk` keeps answering 200.
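When chunks pile up like this, filesystem buffering keeps them off the heap and persists them across restarts; a sketch using Fluent Bit's storage options (the path is an assumption):

```
[SERVICE]
    storage.path           /var/lib/fluent-bit/buffer
    # How many chunks may sit in memory at once; the rest wait on disk.
    storage.max_chunks_up  128

[INPUT]
    Name          tail
    Path          /var/log/containers/*.log
    Tag           kube.*
    # Buffer chunks on disk instead of memory-only.
    storage.type  filesystem
```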
The full bulk response makes the pattern explicit: top-level success, per-item failure.

```
{"took":2250,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"-uMmun8BI6SaBP9l_8nZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}, ...]}
```

The request itself completed (`took` is 2250 ms, HTTP status 200); the documents were rejected. The engine then reschedules: `failed to flush chunk '1-1648192097.600252923.flb', retry in 26 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)`.
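If you capture such a response (via Trace_Error or tcpdump), extracting just the distinct rejection reasons is quick with jq (assuming the response was saved to bulk-response.json):

```
jq '[.items[] | select(.create.status >= 400) | .create.error.reason] | unique' bulk-response.json
```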
Each retry then repeats the same cycle: `[retry] re-using retry for task_id=1 attempts=3`, a fresh `/_bulk` attempt (`[http_client] not using http_proxy for header`), another rejection, while new records keep arriving in the meantime (`[input chunk] update output instances with new chunk size diff=650`).
msg="failed to flush user" err="open /data/loki/chunks - Github Fri, Mar 25 2022 3:08:21 pm | [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=35359369 removing file name /var/log/containers/hello-world-swxx6_argo_wait-dc29bc4a400f91f349d4efd144f2a57728ea02b3c2cd527fcd268e3147e9af7d.log "}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"JOMmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Tried to remove everything and leave only in_tail and out_es. "}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"0OMmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. "}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"AeMmun8BI6SaBP9l_8rZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]. "}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"YOMnun8BI6SaBP9lLtm1","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. I'm using fluentd logging on k8s for application logging, we are handling 100M (around 400 tps) and getting this issue. Describe the bug. Fri, Mar 25 2022 3:08:40 pm | [2022/03/25 07:08:40] [debug] [input chunk] update output instances with new chunk size diff=1085 "}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"OuMmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text].
Two adjacent issues are worth knowing about. "could not pack/validate JSON response" (#1679) is the es plugin failing to parse the bulk response at all, which can happen when an undersized buffer truncates it mid-document; and while chasing #1502 one reporter connected to their Kubernetes node and ran `tune2fs -O large_dir /dev/sda` to enable the ext4 large_dir feature and lift a directory-size limit on the chunk store.
Timeout-flavored variants of the report exist as well ("eks fluent-bit to elasticsearch timeout" on Stack Overflow, where one answer claims its workaround helps in 100% of cases). But in the logs above the retries fire on schedule (`[retry] re-using retry for task_id=15 attempts=2`), connections are assigned and recycled promptly, and each attempt comes back with the same rejection, so tuning timeouts will not fix a mapping conflict.
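For the cases that genuinely are network timeouts, Fluent Bit's generic upstream networking options can be tuned per output (the values here are illustrative assumptions):

```
[OUTPUT]
    Name                 es
    Match                kube.*
    # net.* options apply to the upstream connection of any output.
    net.connect_timeout  20
    net.keepalive        on
```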
Mitigations reported in the thread have mixed results. One user on 2.0.6 says that no matter whether they set `Type _doc` or `Replace_Dots On`, they still see masses of the warning; another followed @dezhishen's suggestion to set `Write_Operation upsert`, after which the pod errored and fluent-bit did not start at all. Two diagnostics help judge whether you are making progress:

- `fluentbit_output_proc_records_total`: the total record count of all unique chunks sent by this output. If it stalls while input counters keep climbing, chunks are stuck.
- The most common reason I've seen for "failed to flush chunk" is that the batch upload queue on the ES cluster is full, so check the cluster side as well.

Nor is the problem limited to Elasticsearch outputs; one report uses the forward protocol into fluentd (config truncated in the original):

```
<source>
  type forward
  bind ::
  port 24000
</source>
<match fluent_bit>
  type .
</match>
```

The failure mode is the same wherever the chunk lands: the engine retries it until something accepts it.
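A plausible explanation for the upsert startup failure is that the es output needs a document-id source in update/upsert mode; the sketch below combines the suggested options (pairing Write_Operation with Generate_ID is my assumption, not confirmed in the thread):

```
[OUTPUT]
    Name             es
    Match            kube.*
    Host             10.3.4.84
    Port             9200
    # Rewrite dots in field names (app.kubernetes.io/instance ->
    # app_kubernetes_io/instance) so Elasticsearch stops expanding
    # them into objects under kubernetes.labels.app.
    Replace_Dots     On
    # upsert needs a document id; Generate_ID derives one by
    # hashing the record.
    Write_Operation  upsert
    Generate_ID      On
```

Even with the output fixed, an index that already holds the conflicting text mapping has to be reindexed or rolled over before the rejections stop.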