packetbeat-1.3.1-x86_64 Out Of Memory #2867

Closed
kira8565 opened this issue Oct 27, 2016 · 7 comments
kira8565 commented Oct 27, 2016

Please post all questions and issues on https://discuss.elastic.co/c/beats
before opening a Github Issue. Your questions will reach a wider audience there,
and if we confirm that there is a bug, then you can open a new issue.

For confirmed bugs, please report:

  • Version: packetbeat-1.3.1-x86_64
  • Operating System: Ubuntu-14.04
  • Steps to Reproduce:

    Here is my Packetbeat config:


################### Packetbeat Configuration Example ##########################

# This file contains an overview of various configuration settings. Please consult
# the docs at https://www.elastic.co/guide/en/beats/packetbeat/current/packetbeat-configuration.html
# for more details.

# The Packetbeat shipper works by sniffing the network traffic between your
# application components. It inserts meta-data about each transaction into
# Elasticsearch.

############################# Sniffer #########################################

# Select the network interfaces to sniff the data. You can use the "any"
# keyword to sniff on all connected interfaces.
interfaces:
  device: any

############################# Protocols #######################################
protocols:
  dns:
    # Configure the ports where to listen for DNS traffic. You can disable
    # the DNS protocol by commenting out the list of ports.
    ports: [53]

    # include_authorities controls whether or not the dns.authorities field
    # (authority resource records) is added to messages.
    # Default: false
    include_authorities: true
    # include_additionals controls whether or not the dns.additionals field
    # (additional resource records) is added to messages.
    # Default: false
    include_additionals: true

    # send_request and send_response control whether or not the stringified DNS
    # request and response message are added to the result.
    # Nearly all data about the request/response is available in the dns.*
    # fields, but this can be useful if you need visibility specifically
    # into the request or the response.
    # Default: false
    # send_request:  true
    # send_response: true

  http:
    # Configure the ports where to listen for HTTP traffic. You can disable
    # the HTTP protocol by commenting out the list of ports.
    ports: [80, 8080, 8000, 5000, 8002, 29200]

    # Uncomment the following to hide certain parameters in URL or forms attached
    # to HTTP requests. The names of the parameters are case insensitive.
    # The value of the parameters will be replaced with the 'xxxxx' string.
    # This is generally useful for avoiding storing user passwords or other
    # sensitive information.
    # Only query parameters and top level form parameters are replaced.
    # hide_keywords: ['pass', 'password', 'passwd']

  memcache:
    # Configure the ports where to listen for memcache traffic. You can disable
    # the Memcache protocol by commenting out the list of ports.
    ports: [11211]

    # Uncomment the parseunknown option to force the memcache text protocol parser
    # to accept unknown commands.
    # Note: All unknown commands MUST not contain any data parts!
    # Default: false
    # parseunknown: true

    # Update the maxvalue option to store the values - base64 encoded - in the
    # json output.
    # possible values:
    #    maxvalue: -1  # store all values (text based protocol multi-get)
    #    maxvalue: 0   # store no values at all
    #    maxvalue: N   # store up to N values
    # Default: 0
    # maxvalues: -1

    # Use maxbytespervalue to limit the number of bytes to be copied per value element.
    # Note: Values will be base64 encoded, so actual size in json document
    #       will be 4 times maxbytespervalue.
    # Default: unlimited
    # maxbytespervalue: 100

    # UDP transaction timeout in milliseconds.
    # Note: Quiet messages in UDP binary protocol will get response only in error case.
    #       The memcached analyzer will wait for udptransactiontimeout milliseconds
    #       before publishing quiet messages. Non quiet messages or quiet requests with
    #       error response will not have to wait for the timeout.
    # Default: 200
    # udptransactiontimeout: 1000

  mysql:
    # Configure the ports where to listen for MySQL traffic. You can disable
    # the MySQL protocol by commenting out the list of ports.
    ports: [3306]

  pgsql:
    # Configure the ports where to listen for Pgsql traffic. You can disable
    # the Pgsql protocol by commenting out the list of ports.
    ports: [5432]

  redis:
    # Configure the ports where to listen for Redis traffic. You can disable
    # the Redis protocol by commenting out the list of ports.
    ports: [6379]

  thrift:
    # Configure the ports where to listen for Thrift-RPC traffic. You can disable
    # the Thrift-RPC protocol by commenting out the list of ports.
    ports: [9090]

  mongodb:
    # Configure the ports where to listen for MongoDB traffic. You can disable
    # the MongoDB protocol by commenting out the list of ports.
    ports: [27017]

############################# Processes #######################################

# Configure the processes to be monitored and how to find them. If a process is
# monitored then Packetbeat attempts to use its name to fill in the `proc` and
# `client_proc` fields.
# The processes can be found by searching their command line by a given string.
#
# Process matching is optional and can be enabled by uncommenting the following
# lines.
#
procs:
  enabled: true
  monitored:
    - process: filebeat
      cmdline_grep: filebeat
#
#    - process: pgsql
#      cmdline_grep: postgres
#
#    - process: nginx
#      cmdline_grep: nginx
#
#    - process: app
#      cmdline_grep: gunicorn

###############################################################################
############################# Libbeat Config ##################################
# Base config file used by all other beats for using libbeat features

############################# Output ##########################################

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
output:

  ### Elasticsearch as output
#  elasticsearch:
    # Array of hosts to connect to.
    # Scheme and port can be left out and will be set to the default (http and 9200)
    # In case you specify an additional path, the scheme is required: http://localhost:9200/path
    # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
 #   hosts: ["localhost:9200"]

    # Optional protocol and basic auth credentials.
    #protocol: "https"
    #username: "admin"
    #password: "s3cr3t"

    # Number of workers per Elasticsearch host.
    #worker: 1

    # Optional index name. The default is "packetbeat" and generates
    # [packetbeat-]YYYY.MM.DD keys.
    #index: "packetbeat"

    # A template is used to set the mapping in Elasticsearch
    # By default template loading is disabled and no template is loaded.
    # These settings can be adjusted to load your own template or overwrite existing ones
    #template:

      # Template name. By default the template name is packetbeat.
      #name: "packetbeat"

      # Path to template file
      #path: "packetbeat.template.json"

      # Overwrite existing template
      #overwrite: false

    # Optional HTTP Path
    #path: "/elasticsearch"

    # Proxy server url
    #proxy_url: http://proxy:3128

    # The number of times a particular Elasticsearch index operation is attempted. If
    # the indexing operation doesn't succeed after this many retries, the events are
    # dropped. The default is 3.
    #max_retries: 3

    # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
    # The default is 50.
    #bulk_max_size: 50

    # Configure HTTP request timeout before failing a request to Elasticsearch.
    #timeout: 90

    # The number of seconds to wait for new events between two bulk API index requests.
    # If `bulk_max_size` is reached before this interval expires, additional bulk index
    # requests are made.
    #flush_interval: 1

    # Boolean that sets if the topology is kept in Elasticsearch. The default is
    # false. This option makes sense only for Packetbeat.
    #save_topology: false

    # The time to live in seconds for the topology information that is stored in
    # Elasticsearch. The default is 15 seconds.
    #topology_expire: 15

    # tls configuration. By default is off.
    #tls:
      # List of root certificates for HTTPS server verifications
      #certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for TLS client authentication
      #certificate: "/etc/pki/client/cert.pem"

      # Client Certificate Key
      #certificate_key: "/etc/pki/client/cert.key"

      # Controls whether the client verifies server certificates and host name.
      # If insecure is set to true, all server host names and certificates will be
      # accepted. In this mode TLS based connections are susceptible to
      # man-in-the-middle attacks. Use only for testing.
      #insecure: true

      # Configure cipher suites to be used for TLS connections
      #cipher_suites: []

      # Configure curve types for ECDHE based cipher suites
      #curve_types: []

      # Configure minimum TLS version allowed for connection to logstash
      #min_version: 1.0

      # Configure maximum TLS version allowed for connection to logstash
      #max_version: 1.2


  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["myip:myport"]

    # Number of workers per Logstash host.
    #worker: 1

    # The maximum number of events to bulk into a single batch window. The
    # default is 2048.
    #bulk_max_size: 2048

    # Set gzip compression level.
    #compression_level: 3

    # Optional load balance the events between the Logstash hosts
    #loadbalance: true

    # Optional index name. The default index name depends on each Beat.
    # For Packetbeat the default is packetbeat, for Topbeat it is topbeat,
    # and for Filebeat it is filebeat.
    #index: packetbeat

    # Optional TLS. By default is off.
    #tls:
      # List of root certificates for HTTPS server verifications
      #certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for TLS client authentication
      #certificate: "/etc/pki/client/cert.pem"

      # Client Certificate Key
      #certificate_key: "/etc/pki/client/cert.key"

      # Controls whether the client verifies server certificates and host name.
      # If insecure is set to true, all server host names and certificates will be
      # accepted. In this mode TLS based connections are susceptible to
      # man-in-the-middle attacks. Use only for testing.
      #insecure: true

      # Configure cipher suites to be used for TLS connections
      #cipher_suites: []

      # Configure curve types for ECDHE based cipher suites
      #curve_types: []


  ### File as output
  #file:
    # Path to the directory where to save the generated files. The option is mandatory.
    #path: "/tmp/packetbeat"

    # Name of the generated files. The default is `packetbeat` and it generates files: `packetbeat`, `packetbeat.1`, `packetbeat.2`, etc.
    #filename: packetbeat

    # Maximum size in kilobytes of each file. When this size is reached, the files are
    # rotated. The default value is 10 MB.
    #rotate_every_kb: 10000

    # Maximum number of files under path. When this number of files is reached, the
    # oldest file is deleted and the rest are shifted from last to first. The default
    # is 7 files.
    #number_of_files: 7


  ### Console output
  # console:
    # Pretty print json event
    #pretty: false


############################# Shipper #########################################

shipper:
  # The name of the shipper that publishes the network data. It can be used to group
  # all the transactions sent by a single shipper in the web interface.
  # If this option is not defined, the hostname is used.
  #name:

  # The tags of the shipper are included in their own field with each
  # transaction published. Tags make it easy to group servers by different
  # logical properties.
  #tags: ["service-X", "web-tier"]

  # Uncomment the following if you want to ignore transactions created
  # by the server on which the shipper is installed. This option is useful
  # to remove duplicates if shippers are installed on multiple servers.
  #ignore_outgoing: true

  # How often (in seconds) shippers are publishing their IPs to the topology map.
  # The default is 10 seconds.
  #refresh_topology_freq: 10

  # Expiration time (in seconds) of the IPs published by a shipper to the topology map.
  # All the IPs will be deleted afterwards. Note, that the value must be higher than
  # refresh_topology_freq. The default is 15 seconds.
  #topology_expire: 15

  # Internal queue size for single events in processing pipeline
  #queue_size: 1000

  # Configure local GeoIP database support.
  # If no paths are configured, geoip is disabled.
  #geoip:
    #paths:
    #  - "/usr/share/GeoIP/GeoLiteCity.dat"
    #  - "/usr/local/var/GeoIP/GeoLiteCity.dat"


############################# Logging #########################################

# There are three options for the log output: syslog, file, stderr.
# On Windows systems, the logs are by default sent to the file output; on
# all other systems, by default to syslog.
logging:

  # Send all logging output to syslog. On Windows default is false, otherwise
  # default is true.
  #to_syslog: true

  # Write all logging output to files. Beats automatically rotate files if rotateeverybytes
  # limit is reached.
  #to_files: false

  # To enable logging to files, to_files option has to be set to true
  files:
    # The directory where the log files will be written to.
    #path: /var/log/mybeat

    # The name of the files where the logs are written to.
    #name: mybeat

    # Configure log file size limit. If limit is reached, log file will be
    # automatically rotated
    rotateeverybytes: 10485760 # = 10MB

    # Number of rotated log files to keep. Oldest files will be deleted first.
    #keepfiles: 7

  # Enable debug output for selected components. To enable all selectors use ["*"]
  # Other available selectors are beat, publish, service
  # Multiple selectors can be chained.
  #selectors: [ ]

  # Sets log level. The default log level is error.
  # Available log levels are: critical, error, warning, info, debug
  #level: error

Here is the error log:


fatal error: runtime: out of memory

runtime stack:
runtime.throw(0x9610a1, 0x16)
    /usr/local/go/src/runtime/panic.go:566 +0x95
runtime.sysMap(0xc424880000, 0x757270000, 0x7fbf333e8300, 0x13606d8)
    /usr/local/go/src/runtime/mem_linux.go:219 +0x1d0
runtime.(*mheap).sysAlloc(0xe32760, 0x757270000, 0x7fbf00000001)
    /usr/local/go/src/runtime/malloc.go:407 +0x37a
runtime.(*mheap).grow(0xe32760, 0x3ab937, 0x0)
    /usr/local/go/src/runtime/mheap.go:726 +0x62
runtime.(*mheap).allocSpanLocked(0xe32760, 0x3ab937, 0x2000)
    /usr/local/go/src/runtime/mheap.go:630 +0x4f2
runtime.(*mheap).alloc_m(0xe32760, 0x3ab937, 0x7f0100000000, 0x7fbf2bffed70)
    /usr/local/go/src/runtime/mheap.go:515 +0xe0
runtime.(*mheap).alloc.func1()
    /usr/local/go/src/runtime/mheap.go:579 +0x4b
runtime.systemstack(0x7fbf2bffed78)
    /usr/local/go/src/runtime/asm_amd64.s:314 +0xab
runtime.(*mheap).alloc(0xe32760, 0x3ab937, 0x10100000000, 0xc42013f730)
    /usr/local/go/src/runtime/mheap.go:580 +0x73
runtime.largeAlloc(0x75726d610, 0xc42013f701, 0xc4213f9a80)
    /usr/local/go/src/runtime/malloc.go:774 +0x93
runtime.mallocgc.func1()
    /usr/local/go/src/runtime/malloc.go:669 +0x3e
runtime.systemstack(0xc420019500)
    /usr/local/go/src/runtime/asm_amd64.s:298 +0x79
runtime.mstart()
    /usr/local/go/src/runtime/proc.go:1079

goroutine 69 [running]:
runtime.systemstack_switch()
    /usr/local/go/src/runtime/asm_amd64.s:252 fp=0xc4213f9720 sp=0xc4213f9718
runtime.mallocgc(0x75726d610, 0x8abc20, 0x94ac01, 0x7)
    /usr/local/go/src/runtime/malloc.go:670 +0x903 fp=0xc4213f97c0 sp=0xc4213f9720
runtime.makeslice(0x8abc20, 0x75726d61, 0x75726d61, 0x1, 0x1, 0x30)
    /usr/local/go/src/runtime/slice.go:57 +0x7b fp=0xc4213f9818 sp=0xc4213f97c0
github.com/elastic/beats/packetbeat/protos/mongodb.opReplyParse(0xc4213f9990, 0xc4212dd320, 0xc4219ab980)
    /go/src/github.com/elastic/beats/packetbeat/protos/mongodb/mongodb_parser.go:94 +0x37c fp=0xc4213f9940 sp=0xc4213f9818
github.com/elastic/beats/packetbeat/protos/mongodb.mongodbMessageParser(0xc42158a5d0, 0xc4222a2044)
    /go/src/github.com/elastic/beats/packetbeat/protos/mongodb/mongodb_parser.go:55 +0x335 fp=0xc4213f99c0 sp=0xc4213f9940
github.com/elastic/beats/packetbeat/protos/mongodb.(*Mongodb).doParse(0xc420142a80, 0xc420610280, 0xc4223fb740, 0xc421a81e18, 0x0, 0x6000000000000)
    /go/src/github.com/elastic/beats/packetbeat/protos/mongodb/mongodb.go:161 +0x150 fp=0xc4213f9a80 sp=0xc4213f99c0
github.com/elastic/beats/packetbeat/protos/mongodb.(*Mongodb).Parse(0xc420142a80, 0xc4223fb740, 0xc421a81e18, 0xc420142a00, 0x876340, 0xc420610280, 0x0, 0x0)
    /go/src/github.com/elastic/beats/packetbeat/protos/mongodb/mongodb.go:110 +0xf0 fp=0xc4213f9ac0 sp=0xc4213f9a80
github.com/elastic/beats/packetbeat/protos/tcp.(*TcpStream).addPacket(0xc4213f9bd8, 0xc4223fb740, 0xc4213483a8)
    /go/src/github.com/elastic/beats/packetbeat/protos/tcp/tcp.go:109 +0x176 fp=0xc4213f9b38 sp=0xc4213f9ac0
github.com/elastic/beats/packetbeat/protos/tcp.(*Tcp).Process(0xc42120f0b0, 0xc4213483a8, 0xc4223fb740)
    /go/src/github.com/elastic/beats/packetbeat/protos/tcp/tcp.go:194 +0x386 fp=0xc4213f9c28 sp=0xc4213f9b38
github.com/elastic/beats/packetbeat/decoder.(*DecoderStruct).process(0xc421348000, 0xc4223fb740, 0x2c, 0x225, 0xdf6e60)
    /go/src/github.com/elastic/beats/packetbeat/decoder/decoder.go:183 +0x18f fp=0xc4213f9c60 sp=0xc4213f9c28
github.com/elastic/beats/packetbeat/decoder.(*DecoderStruct).DecodePacketData(0xc421348000, 0xc4222a2024, 0x245, 0x245, 0xc4213f9f00)
    /go/src/github.com/elastic/beats/packetbeat/decoder/decoder.go:101 +0x230 fp=0xc4213f9db0 sp=0xc4213f9c60
github.com/elastic/beats/packetbeat/sniffer.(*SnifferSetup).Run(0xc420134be0, 0x0, 0x0)
    /go/src/github.com/elastic/beats/packetbeat/sniffer/sniffer.go:356 +0x422 fp=0xc4213f9f38 sp=0xc4213f9db0
github.com/elastic/beats/packetbeat/beat.(*Packetbeat).Run.func1(0xc4200ae900)
    /go/src/github.com/elastic/beats/packetbeat/beat/packetbeat.go:232 +0x3e fp=0xc4213f9f88 sp=0xc4213f9f38
runtime.goexit()
    /usr/local/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4213f9f90 sp=0xc4213f9f88
created by github.com/elastic/beats/packetbeat/beat.(*Packetbeat).Run
    /go/src/github.com/elastic/beats/packetbeat/beat/packetbeat.go:238 +0x43

goroutine 1 [chan receive, 22 minutes]:
github.com/elastic/beats/packetbeat/beat.(*Packetbeat).Run(0xc4200ae900, 0xc420134320, 0xc4212ffe18, 0x1)
    /go/src/github.com/elastic/beats/packetbeat/beat/packetbeat.go:247 +0xc1
github.com/elastic/beats/libbeat/beat.(*Beat).Run(0xc420134320)
    /go/src/github.com/elastic/beats/libbeat/beat/beat.go:197 +0x196
github.com/elastic/beats/libbeat/beat.Run(0x951583, 0xa, 0x9461c0, 0x5, 0xdfeac0, 0xc4200ae900, 0xc42013f380)
    /go/src/github.com/elastic/beats/libbeat/beat/beat.go:107 +0x141
main.main()
    /go/src/github.com/elastic/beats/packetbeat/main.go:15 +0xd3

goroutine 17 [syscall, 22 minutes, locked to thread]:
runtime.goexit()
    /usr/local/go/src/runtime/asm_amd64.s:2086 +0x1

goroutine 5 [syscall, 22 minutes]:
os/signal.signal_recv(0x0)
    /usr/local/go/src/runtime/sigqueue.go:116 +0x157
os/signal.loop()
    /usr/local/go/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.1
    /usr/local/go/src/os/signal/signal_unix.go:28 +0x41

goroutine 66 [select, 22 minutes, locked to thread]:
runtime.gopark(0x99fa40, 0x0, 0x948fb9, 0x6, 0x18, 0x2)
    /usr/local/go/src/runtime/proc.go:259 +0x13a
runtime.selectgoImpl(0xc42002a730, 0x0, 0x18)
    /usr/local/go/src/runtime/select.go:423 +0x11d9
runtime.selectgo(0xc42002a730)
    /usr/local/go/src/runtime/select.go:238 +0x1c
runtime.ensureSigM.func1()
    /usr/local/go/src/runtime/signal1_unix.go:304 +0x2f3
runtime.goexit()
    /usr/local/go/src/runtime/asm_amd64.s:2086 +0x1

goroutine 7 [running]:
    goroutine running on other thread; stack unavailable
created by github.com/elastic/beats/libbeat/publisher.(*messageWorker).init
    /go/src/github.com/elastic/beats/libbeat/publisher/worker.go:55 +0x103

goroutine 8 [running]:
    goroutine running on other thread; stack unavailable
created by github.com/elastic/beats/libbeat/publisher.newBulkWorker
    /go/src/github.com/elastic/beats/libbeat/publisher/bulk.go:42 +0x221

goroutine 9 [running]:
    goroutine running on other thread; stack unavailable
created by github.com/elastic/beats/packetbeat/procs.NewProcess
    /go/src/github.com/elastic/beats/packetbeat/procs/procs.go:141 +0xee

goroutine 10 [select]:
github.com/elastic/beats/packetbeat/publish.(*PacketbeatPublisher).Start.func1(0xc42049aa40)
    /go/src/github.com/elastic/beats/packetbeat/publish/publish.go:61 +0x158
created by github.com/elastic/beats/packetbeat/publish.(*PacketbeatPublisher).Start
    /go/src/github.com/elastic/beats/packetbeat/publish/publish.go:68 +0x5c

goroutine 11 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc42049aac0, 0xc42049aa80)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 12 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc42049ab40, 0xc42049ab00)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 13 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc42049abc0, 0xc42049ab80)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 14 [chan receive, 22 minutes]:
github.com/elastic/beats/packetbeat/protos/thrift.(*Thrift).publishTransactions(0xc42005a070)
    /go/src/github.com/elastic/beats/packetbeat/protos/thrift/thrift.go:1075 +0x93
created by github.com/elastic/beats/packetbeat/protos/thrift.(*Thrift).Init
    /go/src/github.com/elastic/beats/packetbeat/protos/thrift/thrift.go:263 +0x23b

goroutine 15 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc42049ac80, 0xc42049ac40)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 16 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc42049ad00, 0xc42049acc0)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 50 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc42049ad80, 0xc42049ad40)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 51 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc42049b5c0, 0xc42049b580)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 52 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc42049b640, 0xc42049b600)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 67 [chan receive, 22 minutes]:
github.com/elastic/beats/libbeat/service.HandleSignals.func1(0xc42134c000, 0xc4211ee2c0, 0xc4209dc980)
    /go/src/github.com/elastic/beats/libbeat/service/service.go:29 +0x44
created by github.com/elastic/beats/libbeat/service.HandleSignals
    /go/src/github.com/elastic/beats/libbeat/service/service.go:32 +0x195
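
An aside on the trace: the `runtime.makeslice` length `0x75726d61` is made up entirely of printable ASCII bytes, which suggests the MongoDB parser read arbitrary packet payload as a little-endian length prefix and then tried to allocate roughly 29 GiB. A small sketch to decode it (my own analysis, not part of the original report):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// Failing allocation from the trace:
	//   runtime.makeslice(0x8abc20, 0x75726d61, 0x75726d61, ...)
	//   runtime.mallocgc(0x75726d610, ...)
	n := uint64(0x75726d61) // slice length; 0x10 == 16 bytes per element

	fmt.Printf("requested: %d elements, %.1f GiB\n",
		n, float64(n*16)/(1<<30))

	// The same value, serialized as the little-endian int32 that the
	// MongoDB wire format uses for message/document lengths:
	var b [4]byte
	binary.LittleEndian.PutUint32(b[:], uint32(n))
	fmt.Printf("as raw payload bytes: %q\n", string(b[:])) // "amru"
}
```

In other words, the bytes "amru" from some non-MongoDB payload on the sniffed port were likely interpreted as a ~1.97e9-element document length, which matches an out-of-memory crash inside `opReplyParse`.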

Additionally, I run filebeat (master) and metricbeat (master) on this machine as well, and both of those Beats were also killed by the operating system. It is strange.

@tsg tsg added the question label Oct 27, 2016

tsg commented Oct 27, 2016

A few questions:

  • How long does it take before crashing with OOM?
  • Before reaching OOM, do you see events reaching Logstash?
  • How much free memory is there on that server?
  • Can you try with the file output only (without Logstash), to check if it goes into OOM in that case?


kira8565 commented Oct 27, 2016

  • How long does it take before crashing with OOM? (I started Packetbeat at 11:08 and my log server received the last message from it at 11:36, so I think it crashed around then, but I didn't notice.)
  • Before reaching OOM, do you see events reaching Logstash? (Yes.)
  • How much free memory is there on that server? (My server has 16 GB of memory, with about 2 GB free.)
  • Can you try with the file output only (without Logstash), to check if it goes into OOM in that case? (With the file output it uses about 50 MB of memory, but with the Logstash output it uses over 100 MB. I have now restarted it to send logs to my log server again.)
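
Since the trace dies in `opReplyParse` trusting a wire-supplied length, the class of fix would be clamping the length prefix before allocating. A hypothetical sketch of such a guard (`readDocLen` and `maxDocLen` are illustrative names, not the actual Beats implementation; BSON documents are capped at 16 MB):

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// maxDocLen is an illustrative cap. BSON documents are limited to 16 MB,
// so anything larger cannot be a valid MongoDB document.
const maxDocLen = 16 * 1024 * 1024

// readDocLen reads the little-endian int32 length prefix used by the
// MongoDB wire format and rejects implausible values, instead of handing
// a garbage-controlled size straight to make().
func readDocLen(buf []byte) (int, error) {
	if len(buf) < 4 {
		return 0, errors.New("short buffer")
	}
	n := int(int32(binary.LittleEndian.Uint32(buf)))
	if n < 5 || n > maxDocLen {
		return 0, fmt.Errorf("implausible document length %d", n)
	}
	return n, nil
}

func main() {
	// Payload whose first four bytes ("amru") decode to ~1.97e9:
	// the parser bails out instead of allocating gigabytes.
	if _, err := readDocLen([]byte("amru....")); err != nil {
		fmt.Println("rejected:", err)
	}
}
```

Until something like this is in place, a workaround is to remove the mongodb protocol (or its ports) from the config if non-MongoDB traffic may appear on those ports.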


I started Packetbeat last night to send events to my log server via the Logstash output, and it crashed again. Here is my log:

(The stack trace is identical to the one in my first comment: fatal error: runtime: out of memory in mongodb.opReplyParse.)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 13 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc42049abc0, 0xc42049ab80)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 14 [chan receive, 22 minutes]:
github.com/elastic/beats/packetbeat/protos/thrift.(*Thrift).publishTransactions(0xc42005a070)
    /go/src/github.com/elastic/beats/packetbeat/protos/thrift/thrift.go:1075 +0x93
created by github.com/elastic/beats/packetbeat/protos/thrift.(*Thrift).Init
    /go/src/github.com/elastic/beats/packetbeat/protos/thrift/thrift.go:263 +0x23b

goroutine 15 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc42049ac80, 0xc42049ac40)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 16 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc42049ad00, 0xc42049acc0)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 50 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc42049ad80, 0xc42049ad40)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 51 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc42049b5c0, 0xc42049b580)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 52 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc42049b640, 0xc42049b600)
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x123
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
    /go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x97

goroutine 67 [chan receive, 22 minutes]:
github.com/elastic/beats/libbeat/service.HandleSignals.func1(0xc42134c000, 0xc4211ee2c0, 0xc4209dc980)
    /go/src/github.com/elastic/beats/libbeat/service/service.go:29 +0x44
created by github.com/elastic/beats/libbeat/service.HandleSignals
    /go/src/github.com/elastic/beats/libbeat/service/service.go:32 +0x195

@urso

urso commented Oct 28, 2016

Have you tried with packetbeat 5.0 release as well? in packetbeat 5.0 amount of events stored in memory in backpressure is configurable via queue_size (in 1.3 release it should be shipper.queue_size) and output.logstash.bulk_max_size.

Do you have some means to monitor memory usage?
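For reference, a minimal sketch of where those settings would go in packetbeat.yml (the values and the Logstash host are illustrative placeholders, not recommendations):

```yaml
# packetbeat 5.0 style (illustrative values):
queue_size: 1000          # max events buffered in memory under backpressure

output.logstash:
  hosts: ["localhost:5044"]   # placeholder host
  bulk_max_size: 512          # max events per batch sent to the output

# packetbeat 1.3 style equivalent:
# shipper:
#   queue_size: 1000
```

Smaller values bound memory growth at the cost of dropping events sooner when the output cannot keep up.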

@kira8565
Author

I think I found the problem. During the first hour, packetbeat ran very well and published about 50,000 (5w) events to my server, but in the next hour it sent nearly 1,000,000 (100w) events, all of them Mongo packets, to my log server, and then it crashed.

@kira8565
Author

kira8565 commented Nov 6, 2016

@medcl
Contributor

medcl commented Feb 6, 2017

Closing it. The problem is caused by a snowball effect, as explained in the blog post above (in Chinese): the data sent to Elasticsearch was captured by packetbeat, then the data packetbeat itself sent to Elasticsearch was re-captured by packetbeat and re-sent to Elasticsearch, again and again, each round adding the previous round's data, until OOM.
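One way to break such a feedback loop (a hedged sketch; whether bpf_filter is available and which port to exclude depend on your packetbeat version and output setup, 9200 is assumed here for an Elasticsearch output) is to exclude the beat's own output traffic from sniffing:

```yaml
interfaces:
  device: any
  # Exclude traffic on the output port (9200 assumed) so packetbeat
  # does not re-capture the events it publishes:
  bpf_filter: "not port 9200"
```

Alternatively, restrict the sniffed protocols/ports so the shipper's own connections are never matched.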

@medcl medcl closed this as completed Feb 6, 2017