How to Use Filebeat


Filebeat is a lightweight, open-source log shipper developed by Elastic as part of the Elastic Stack (formerly known as the ELK Stack). Designed to efficiently collect, forward, and centralize log data from files on your servers, Filebeat ensures that your system, application, and service logs are reliably delivered to destinations such as Elasticsearch, Logstash, or Kafka for indexing, analysis, and visualization. In today's highly distributed and dynamic infrastructure environments, where logs are critical for monitoring, troubleshooting, security auditing, and compliance, Filebeat has become an indispensable tool for DevOps teams, site reliability engineers (SREs), and security analysts.

Unlike heavier log collection agents, Filebeat operates with minimal system resource consumption. It uses a tailing mechanism to read new lines from log files in real time, stores the state of each file to avoid duplication, and includes built-in resilience features such as backpressure handling and retry logic. This makes Filebeat ideal for production environments where stability and efficiency are non-negotiable.
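Conceptually, the tail-and-checkpoint behavior can be sketched in a few lines of Python (illustrative only: the single-offset registry file below is invented, and real Filebeat also tracks inodes and handles rotation and backpressure):

```python
import json
import os

def tail_once(log_path, registry_path):
    """Read lines appended since the last run, then persist the offset.

    Loosely mimics Filebeat's harvester + registry behavior: state is
    stored on disk so the same lines are never shipped twice.
    """
    offset = 0
    if os.path.exists(registry_path):
        with open(registry_path) as f:
            offset = json.load(f).get("offset", 0)

    new_lines = []
    with open(log_path) as f:
        f.seek(offset)                 # resume where the last run stopped
        while True:
            line = f.readline()
            if not line:
                break
            new_lines.append(line.rstrip("\n"))
        offset = f.tell()

    with open(registry_path, "w") as f:
        json.dump({"offset": offset}, f)   # checkpoint the new position
    return new_lines
```

Calling this twice against a growing file returns only the newly appended lines on the second call, which is the core guarantee Filebeat's registry provides.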

This comprehensive guide will walk you through every aspect of using Filebeat, from initial installation and configuration to advanced use cases and optimization strategies. Whether you're managing a single server or a fleet of hundreds, understanding how to properly configure and operate Filebeat will significantly enhance your observability stack's reliability and performance.

Step-by-Step Guide

1. Understanding Filebeat's Role in the Data Pipeline

Before installing Filebeat, it's essential to understand its position within a typical logging architecture. Filebeat does not process or transform logs; it acts as a lightweight collector and forwarder. It reads log files from disk, applies basic filtering if configured, and sends the data to an output destination.

Common Filebeat architectures include:

  • Filebeat → Elasticsearch (direct ingestion)
  • Filebeat → Logstash → Elasticsearch (for advanced parsing and enrichment)
  • Filebeat → Kafka → Logstash → Elasticsearch (for high-throughput, decoupled pipelines)

The choice of architecture depends on your scalability needs, data transformation requirements, and network constraints. For simple use cases, direct ingestion to Elasticsearch is sufficient. For complex log formats or multi-source aggregation, integrating Logstash adds flexibility.

2. Prerequisites

Before installing Filebeat, ensure your system meets the following requirements:

  • Operating System: Linux (Ubuntu, CentOS, RHEL), macOS, or Windows Server
  • Permissions: Root or sudo access to install packages and read log files
  • Network Access: Connectivity to your target output (Elasticsearch, Logstash, or Kafka)
  • Log Files: Accessible log files with read permissions (e.g., /var/log/nginx/access.log, /var/log/syslog)

Ensure your target output service is running and accessible. For Elasticsearch, verify the HTTP endpoint (default: http://localhost:9200). For Logstash, confirm the Beats input plugin is enabled on port 5044.
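As a quick sanity check from the Filebeat host (hostnames and ports below are the defaults; substitute your own), you can probe both endpoints before installing anything:

```shell
# Probe the Elasticsearch HTTP endpoint; records the HTTP status code,
# or a fallback marker if the host is unreachable.
es_status=$(curl -s --max-time 3 -o /dev/null -w '%{http_code}' http://localhost:9200 || echo "unreachable")
echo "elasticsearch: $es_status"

# Probe the Logstash Beats port; nc exits non-zero when the port is closed.
ls_status=$( (nc -z -w 3 localhost 5044 && echo open) || echo closed )
echo "logstash 5044: $ls_status"
```

If either check fails, fix network access or service configuration before continuing.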

3. Installing Filebeat

Installation varies slightly depending on your operating system. Below are the most common methods.

On Linux (Ubuntu/Debian)

First, import the Elastic GPG key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Add the Elastic repository:

echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-8.x.list

Update the package list and install Filebeat:

sudo apt-get update && sudo apt-get install filebeat

On Linux (CentOS/RHEL)

Import the GPG key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create the repository file:

sudo tee /etc/yum.repos.d/elastic-8.x.repo <<EOF
[elastic-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

Install Filebeat:

sudo yum install filebeat

On macOS

Using Homebrew:

brew tap elastic/tap

brew install elastic/tap/filebeat

On Windows

Download the Windows ZIP file from the official downloads page. Extract it to a directory like C:\Program Files\Filebeat. Open PowerShell as Administrator and run:

cd 'C:\Program Files\Filebeat'

.\install-service-filebeat.ps1

4. Configuring Filebeat

Filebeat's configuration file is located at:

  • Linux: /etc/filebeat/filebeat.yml
  • Windows: C:\Program Files\Filebeat\filebeat.yml

Always back up the original configuration before making changes:

sudo cp /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.bak

Basic Configuration: Sending Logs to Elasticsearch

Open the configuration file in your preferred editor:

sudo nano /etc/filebeat/filebeat.yml

Locate the output.elasticsearch section and uncomment/modify it:

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  username: "elastic"
  password: "your_password"

If you're using a remote Elasticsearch cluster, replace localhost with the server's IP or hostname.

Defining Input Sources

Under the filebeat.inputs section, define which log files to monitor. Here's an example for Nginx access and error logs:

filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - /var/log/nginx/access.log
    - /var/log/nginx/error.log
  tags: ["nginx"]
  fields:
    service: web-server

Key parameters:

  • type: Use filestream (recommended for Filebeat 7.10+) instead of the deprecated log type.
  • paths: Specify the full path to log files. Use wildcards like /var/log/*.log to monitor multiple files.
  • tags: Add custom tags for easier filtering in Kibana.
  • fields: Add static key-value pairs to enrich events (e.g., environment, application name).

Configuring for Logstash

If you're using Logstash as an intermediary, disable the Elasticsearch output and enable Logstash:

output.logstash:
  hosts: ["logstash.example.com:5044"]

Ensure Logstash is configured with the Beats input plugin:

input {
  beats {
    port => 5044
  }
}

5. Enabling Modules

Filebeat comes with pre-built modules for common services like Apache, Nginx, MySQL, PostgreSQL, and system logs. These modules include predefined input configurations and Elasticsearch ingest pipelines to parse logs automatically.

To list available modules:

filebeat modules list

To enable the Nginx module:

sudo filebeat modules enable nginx

This automatically creates a configuration file at /etc/filebeat/modules.d/nginx.yml. Edit it to point to your Nginx log paths:

- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]

Repeat for other services like system logs:

sudo filebeat modules enable system

Modules reduce configuration time and improve log parsing accuracy. Always review the generated configurations to ensure paths match your environment.

6. Testing the Configuration

Before starting Filebeat, validate your configuration to avoid runtime errors:

filebeat test config

If successful, you'll see:

Config OK

Test connectivity to your output:

filebeat test output

This will show whether Filebeat can reach Elasticsearch or Logstash. If authentication fails or the host is unreachable, fix the issue before proceeding.

7. Starting and Enabling Filebeat

Start the Filebeat service:

sudo systemctl start filebeat

Enable it to start on boot:

sudo systemctl enable filebeat

Check the service status:

sudo systemctl status filebeat

On Windows, start the service via PowerShell:

Start-Service filebeat

8. Verifying Log Delivery

Once Filebeat is running, verify logs are being ingested:

  • For Elasticsearch: Visit http://localhost:9200/_cat/indices?v and look for indices named filebeat-*.
  • For Kibana: Navigate to Stack Management → Index Patterns and create an index pattern matching filebeat-*. Then go to Discover to view live log events.
  • For Logstash: Check Logstash logs at /var/log/logstash/logstash-plain.log for incoming beats events.

If no data appears, check Filebeat's internal logs:

sudo tail -f /var/log/filebeat/filebeat

Common issues include incorrect file paths, permission denied errors, or misconfigured output endpoints.

9. Advanced Configuration: Filtering and Processing

Filebeat supports basic event processing using processors. These are applied before data is sent to the output.

Example: Dropping Logs Based on Content

To exclude logs containing a specific string (e.g., healthcheck):

processors:
- drop_event:
    when:
      contains:
        message: "healthcheck"

Example: Setting the Timestamp from a Log Field

To populate the @timestamp field from a time field in the event (start_time below is a hypothetical application field; adjust the layout to your format), use the timestamp processor:

processors:
- timestamp:
    field: start_time
    layouts:
      - '2006-01-02T15:04:05Z07:00'
    timezone: "America/New_York"

Example: Parsing JSON Logs

If your application outputs JSON logs:

filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - /var/log/myapp/*.json
  parsers:
    - ndjson:
        target: ""
        overwrite_keys: true
        add_error_key: true

(With the filestream input, JSON decoding is configured via the ndjson parser; the older json.* options apply only to the deprecated log input.)

This extracts all JSON fields into the top level of the event, making them searchable in Elasticsearch.
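To generate sample NDJSON input for testing such a pipeline locally (the file name and field names below are invented for illustration), an application-style emitter looks like this:

```python
import json

# Example records an application might emit: one JSON object per line (NDJSON).
records = [
    {"level": "info", "msg": "user login", "user_id": 42},
    {"level": "error", "msg": "db timeout", "user_id": 7},
]

with open("sample.json", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Each line must parse independently for Filebeat's JSON handling to work;
# a multi-line pretty-printed object would produce parse errors instead.
with open("sample.json") as f:
    parsed = [json.loads(line) for line in f]
print(parsed[0]["msg"])  # prints "user login"
```

Point a filestream input at the resulting file to watch the fields appear at the top level of the event.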

Best Practices

1. Use Filestream Input (Not Log)

Filebeat versions 7.10 and later deprecated the log input type in favor of filestream. The new input provides better performance, improved file handling, and enhanced reliability. Always use filestream in new deployments.

2. Avoid Monitoring Large or Rapidly Rotating Logs

Filebeat is optimized for structured and semi-structured logs. Avoid monitoring extremely large files (e.g., multi-gigabyte database dumps) or logs that rotate every few seconds. These can cause high I/O and memory pressure. Use log rotation tools like logrotate to manage file sizes and frequencies.
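For reference, a minimal logrotate policy for an application log might look like this (path and cadence are examples, not recommendations):

```
# /etc/logrotate.d/myapp (hypothetical application)
/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
}
```

Rename-based rotation (the default) pairs well with Filebeat's rotation detection; copytruncate avoids renames but can lose lines written during the truncation window.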

3. Set Appropriate Harvester and Close Settings

By default, Filebeat opens a harvester (reader) for each file. Too many open files can exhaust system limits. Adjust these settings:

filebeat.inputs:
- type: filestream
  message_max_bytes: 10485760          # 10 MB per message
  close.on_state_change.inactive: 5m   # close the reader after 5 minutes of inactivity
  close.on_state_change.removed: true  # close and forget files when removed
  close.on_state_change.renamed: true  # close files when renamed (e.g., during rotation)

(The legacy close_inactive, close_removed, and close_renamed names belong to the deprecated log input; filestream uses the close.on_state_change.* forms shown here.)

These settings reduce memory usage and prevent stale file handles.

4. Use TLS for Secure Transmission

If sending logs over the network, always enable TLS encryption. For Elasticsearch:

output.elasticsearch:
  hosts: ["https://elasticsearch.example.com:9200"]
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]

For Logstash:

output.logstash:
  hosts: ["logstash.example.com:5045"]
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]

Use certificates signed by a trusted CA or generate self-signed certificates using OpenSSL for internal environments.
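For an internal environment, a self-signed CA can be generated with OpenSSL along these lines (file names and the subject CN are placeholders):

```shell
# Generate a self-signed CA certificate and private key, valid for one year,
# without a passphrase (-nodes) so services can read the key unattended.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=internal-logging-ca"

# Inspect the resulting certificate's subject to confirm it was created.
openssl x509 -in ca.crt -noout -subject
```

The resulting ca.crt is what you would reference in ssl.certificate_authorities on each Filebeat host; protect ca.key carefully.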

5. Enable Logging and Monitoring

Enable Filebeat's internal logging and metrics for troubleshooting and performance analysis:

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644

monitoring.enabled: true
monitoring.elasticsearch:
  hosts: ["http://localhost:9200"]

Monitor Filebeat's health via Kibana's Monitoring UI or by querying the .monitoring-beats-* indices.

6. Use Fields for Contextual Enrichment

Always add static fields to identify the source of logs:

fields:
  environment: production
  region: us-east-1
  application: payment-service

This allows you to filter logs by environment or service in Kibana without relying on file paths or hostnames alone.

7. Avoid Over-Indexing

Don't ship logs that aren't needed for analysis. For example, debug-level logs may be useful during development but create unnecessary storage and indexing load in production. Use log-level filters or configure your applications to output only INFO and above in production.
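If the application itself cannot be reconfigured, a drop_event processor can filter debug lines at the shipper (the message pattern below is an assumption about your log format; adjust the regular expression accordingly):

```
processors:
  - drop_event:
      when:
        regexp:
          message: "^DEBUG"
```

Dropping at the edge saves network bandwidth as well as Elasticsearch storage and indexing load.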

8. Regularly Update Filebeat

Elastic releases updates with performance improvements, bug fixes, and new features. Subscribe to Elastic's security advisories and update Filebeat regularly. Always test updates in a staging environment before deploying to production.

9. Implement Rate Limiting for High-Volume Environments

For environments generating tens of thousands of events per second, tune Filebeat's bulk settings to avoid overwhelming Elasticsearch:

output.elasticsearch:
  bulk_max_size: 50
  timeout: 90s

Adjust bulk_max_size based on your Elasticsearch clusters capacity.

10. Use Index Lifecycle Management (ILM)

Configure ILM in Elasticsearch to automatically roll over, shrink, and delete old Filebeat indices. This prevents disk space exhaustion and maintains query performance.
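On the Elasticsearch side, a policy such as the following (thresholds are illustrative) rolls indices over at 50 GB or one day and deletes them after 30 days; it can be created through the Kibana UI or the _ilm/policy API:

```
PUT _ilm/policy/filebeat-30d
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "1d" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```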

In your Filebeat configuration:

setup.ilm.enabled: true
setup.ilm.rollover_alias: "filebeat"
setup.ilm.pattern: "{now/d}-000001"

When ILM is enabled, Filebeat writes through the rollover alias and ignores any custom output.elasticsearch.index setting.

Then use Kibana's Index Lifecycle Management UI to define policies (e.g., delete after 30 days).

Tools and Resources

Official Documentation

The definitive source for Filebeat configuration and usage is the official Elastic documentation on elastic.co, which covers every input, processor, and output option in detail.

Community and Forums

Engage with the Elastic community on the official discussion forums (discuss.elastic.co) for troubleshooting and best practices.

Sample Configurations

GitHub hosts numerous open-source Filebeat configurations for common use cases; searching for filebeat.yml examples is a quick way to find starting points.

Monitoring and Visualization Tools

  • Kibana: The primary UI for visualizing Filebeat data. Use dashboards for system metrics, web server traffic, and security events.
  • Elastic Observability: Pre-built dashboards for infrastructure and application performance monitoring using Filebeat data.
  • Prometheus + Grafana: Use Filebeat's built-in metrics endpoint (http://localhost:5066, available when http.enabled is set) to expose internal metrics for scraping.

Validation and Debugging Tools

  • filebeat test config: Validates configuration syntax and settings.
  • filebeat test output: Checks connectivity to output destinations.
  • tail -f /var/log/filebeat/filebeat: Monitors Filebeat's internal logs for errors.
  • curl -XGET "http://localhost:9200/_cat/indices?v": Confirms index creation.

Automation and Infrastructure as Code

Integrate Filebeat into your infrastructure automation workflows:

  • Ansible: Use a community role or collection (several are published on Ansible Galaxy) to install and configure Filebeat across servers.
  • Terraform: Deploy Filebeat via cloud-init scripts on EC2 or GCE instances.
  • Docker: Run Filebeat in a container with mounted log volumes:
docker run -d \
  --name=filebeat \
  --user=root \
  --volume="/var/log:/var/log:ro" \
  --volume="/etc/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
  docker.elastic.co/beats/filebeat:8.12.0

Real Examples

Example 1: Monitoring Nginx Access Logs in Production

Scenario: You manage a web application serving 10,000+ requests per minute. You need to monitor traffic patterns, detect spikes, and identify malicious IPs.

Configuration:

filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - /var/log/nginx/access.log*
  tags: ["nginx", "web"]
  fields:
    service: frontend
    environment: prod

processors:
- add_fields:
    target: ''
    fields:
      log_type: access
- decode_json_fields:
    fields: ["message"]
    target: ""
    overwrite_keys: true
    add_error_key: true

output.elasticsearch:
  hosts: ["https://elasticsearch.prod.example.com:9200"]
  username: "filebeat_writer"
  password: "secure_password_123"
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]

setup.ilm.enabled: true
setup.ilm.rollover_alias: "filebeat"
setup.ilm.pattern: "{now/d}-000001"

Result: In Kibana, you create a dashboard showing top client IPs, HTTP status codes, response times, and request volume over time. You set up alerts for 4xx/5xx error spikes and blocklist IPs with excessive failed requests.

Example 2: Centralized System Logging Across 50 Servers

Scenario: You have 50 Linux servers running different services. You want to collect system logs (auth, syslog, journal) to detect unauthorized access or service failures.

Implementation:

  • Enable the system module on all servers:
sudo filebeat modules enable system

  • Configure Filebeat to send logs to a central Logstash instance:

output.logstash:
  hosts: ["logstash-central.example.com:5044"]
  ssl.enabled: true

  • In Logstash, use grok filters to parse syslog messages and enrich with server metadata.
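The grok stage might look like this minimal sketch (using the built-in SYSLOGLINE pattern; the added field is an example of server metadata enrichment, not a requirement):

```
filter {
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
  }
  mutate {
    add_field => { "collector" => "logstash-central" }  # example metadata
  }
}
```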

Result: You create a Kibana dashboard showing failed SSH attempts, sudo usage, and disk space alerts across all servers. Security teams receive automated alerts for brute-force attacks.

Example 3: Containerized Application Logs with Docker and Kubernetes

Scenario: Your microservices run in Docker containers on Kubernetes. You need to collect logs from each pod without modifying the applications.

Solution:

  • Deploy Filebeat as a DaemonSet in Kubernetes:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.12.0
        args: ["-c", "/etc/filebeat.yml", "-e"]
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: filebeat-config
        configMap:
          defaultMode: 0600
          name: filebeat-config

  • Configure Filebeat to read Docker log files (with the filestream input, JSON decoding is configured via the ndjson parser):

filebeat.inputs:
- type: filestream
  paths:
    - /var/lib/docker/containers/*/*.log
  parsers:
    - ndjson:
        target: ""
        overwrite_keys: true

processors:
- add_kubernetes_metadata:
    host: ${NODE_NAME}
    matchers:
    - logs_path:
        logs_path: "/var/lib/docker/containers/"

Result: Each container's logs are enriched with Kubernetes metadata (pod name, namespace, labels) and indexed into Elasticsearch. You can filter logs by pod, container, or namespace in Kibana.

FAQs

What is the difference between Filebeat and Logstash?

Filebeat is a lightweight log shipper designed to collect and forward logs with minimal overhead. Logstash is a full-featured data processing pipeline that can parse, filter, enrich, and transform logs. Use Filebeat for simple ingestion; use Logstash when you need complex transformations.

Can Filebeat send logs to multiple destinations?

No. Filebeat supports only one output at a time. To send logs to multiple destinations, use Logstash or Kafka as a central hub that can fan out to multiple systems.

Does Filebeat handle log rotation automatically?

Yes. Filebeat tracks the position of each log file using a registry file (/var/lib/filebeat/registry). When a file is rotated (renamed or deleted), Filebeat detects the change and begins reading the new file from the beginning.

How much memory does Filebeat use?

Filebeat typically uses less than 100 MB of RAM per instance, even when monitoring dozens of log files. Memory usage scales with the number of active harvesters and buffer sizes.

Can Filebeat parse JSON, CSV, or XML logs?

Filebeat parses JSON natively (via the ndjson parser on filestream inputs or the decode_json_fields processor). For CSV and XML, use Logstash or preprocess logs before ingestion.
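For CSV, a Logstash filter along these lines splits each line into named fields (the column names here are invented for illustration):

```
filter {
  csv {
    separator => ","
    columns => ["timestamp", "level", "component", "message"]
  }
}
```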

What happens if Elasticsearch is down?

Filebeat stores events in an in-memory queue and retries delivery with exponential backoff. If the queue fills up, Filebeat pauses reading new logs until the output becomes available again; because read offsets are persisted in the registry, it resumes where it left off. This minimizes data loss during temporary outages.
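The in-memory queue can be tuned in filebeat.yml; the values below are illustrative, not recommendations:

```
queue.mem:
  events: 4096           # maximum events held in memory
  flush.min_events: 512  # publish once this many events are buffered...
  flush.timeout: 1s      # ...or after this long, whichever comes first
```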

How do I upgrade Filebeat without losing configuration?

Back up your filebeat.yml before upgrading. Run the upgrade through your package manager (e.g., sudo apt-get install --only-upgrade filebeat), then compare the new default config with your custom settings. Most settings are preserved, but check for deprecated fields.

Is Filebeat secure?

Filebeat supports TLS encryption, authentication (username/password or API keys), and secure file permissions. Always use TLS in production and restrict access to configuration files.

Can Filebeat monitor remote log files over SSH?

No. Filebeat only reads local files. To monitor remote logs, mount the remote filesystem locally (e.g., via NFS or SSHFS), sync the files with rsync, or use a remote log collector like rsyslog to forward logs to a host where Filebeat runs.
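A minimal rsyslog forwarding rule on the remote host might look like this (the central hostname is a placeholder; @@ selects TCP, a single @ would select UDP):

```
# /etc/rsyslog.d/50-forward.conf
*.* @@logs-central.example.com:514
```

Filebeat on the central host can then tail the files rsyslog writes there.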

Why are my logs not appearing in Kibana?

Common causes: incorrect file paths, permission denied, misconfigured output, disabled inputs, or index pattern mismatch. Check Filebeat logs, test output connectivity, and verify the index pattern in Kibana matches the actual index name.

Conclusion

Filebeat is a powerful, reliable, and resource-efficient tool for log collection in modern infrastructure. Its simplicity, resilience, and seamless integration with the Elastic Stack make it the go-to choice for organizations seeking to centralize and analyze log data at scale. By following the configuration best practices outlined in this guide (using filestream inputs, enabling modules, securing transmissions, and monitoring performance) you can deploy Filebeat with confidence across any environment, from single servers to large Kubernetes clusters.

Remember: the goal of log collection is not just to store data, but to enable actionable insights. Filebeat ensures your logs are delivered accurately and consistently, laying the foundation for effective monitoring, security analysis, and operational excellence. As your infrastructure evolves, Filebeat scales with you, without complexity or overhead.

Start small, validate your setup, and gradually expand your coverage. With Filebeat, you're not just collecting logs; you're building the backbone of your observability strategy.