How to Install Logstash

Nov 6, 2025 - 10:37

Logstash is a powerful, open-source data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and sends it to your preferred destination, whether that's Elasticsearch, a database, or a data lake. As a core component of the Elastic Stack (formerly known as the ELK Stack), Logstash plays a critical role in centralized logging, real-time analytics, and observability across modern infrastructure. From web servers and cloud services to containers and IoT devices, Logstash enables organizations to collect, parse, and enrich logs at scale.

Installing Logstash correctly is the foundation of a robust data pipeline. A misconfigured or improperly installed Logstash instance can lead to data loss, performance bottlenecks, or security vulnerabilities. This guide provides a comprehensive, step-by-step walkthrough for installing Logstash on major operating systems, including Linux, macOS, and Windows, along with best practices, real-world examples, and essential tools to ensure your deployment is secure, scalable, and maintainable.

By the end of this tutorial, you will have a fully functional Logstash installation, understand how to validate its operation, and be equipped with the knowledge to troubleshoot common issues. Whether you're a DevOps engineer, system administrator, or data analyst, mastering Logstash installation is a vital skill in today's data-driven environments.

Step-by-Step Guide

Prerequisites

Before installing Logstash, ensure your system meets the following requirements:

  • Java Runtime Environment (JRE) 11 or higher: Logstash runs on the JVM. Recent Logstash releases bundle their own JDK, but if you supply your own, OpenJDK is recommended.
  • At least 2 GB of RAM: Logstash performs best with sufficient memory, especially when processing high-volume data streams.
  • Administrative or sudo privileges: installation and configuration require elevated permissions.
  • Internet access: required for downloading packages and plugins.
  • Compatible operating system: supported platforms include Linux (Ubuntu, CentOS, Debian), macOS, and Windows.

Verify your Java version by running:

java -version

If Java is not installed, follow the instructions for your OS to install OpenJDK 11 or later. For example, on Ubuntu:

sudo apt update

sudo apt install openjdk-11-jre

Installing Logstash on Linux (Ubuntu/Debian)

Logstash can be installed via APT on Ubuntu and Debian systems. The Elastic repository provides the most stable and up-to-date versions.

  1. Import the Elastic GPG key to verify package authenticity. On current Debian and Ubuntu releases apt-key is deprecated, so store the key in a dedicated keyring file:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic-keyring.gpg
  2. Add the Elastic repository to your system's package list:
echo "deb [signed-by=/usr/share/keyrings/elastic-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
  3. Update the package index:
sudo apt update
  4. Install Logstash:
sudo apt install logstash
  5. Start and enable the Logstash service to run at boot:
sudo systemctl start logstash

sudo systemctl enable logstash

  6. Verify the service status:
sudo systemctl status logstash

If Logstash is running correctly, you'll see active (running) in the output.

Installing Logstash on Linux (CentOS/RHEL)

On Red Hat-based systems like CentOS and RHEL, Logstash is installed using YUM or DNF.

  1. Import the Elastic GPG key:
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
  2. Create the Elastic repository file in /etc/yum.repos.d/:
sudo tee /etc/yum.repos.d/elastic-8.x.repo <<EOF
[elastic-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
  3. Install Logstash using DNF (RHEL 8+) or YUM (RHEL 7):
sudo dnf install logstash

Or for older systems:

sudo yum install logstash
  4. Start and enable the service:
sudo systemctl start logstash

sudo systemctl enable logstash

  5. Check the status:
sudo systemctl status logstash

Installing Logstash on macOS

On macOS, Logstash can be installed via Homebrew, the most popular package manager.

  1. Install Homebrew (if not already installed):
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  2. Install Logstash using Homebrew:
brew install logstash
  3. Start Logstash manually (it does not auto-start on macOS):
logstash -e 'input { stdin { } } output { stdout { } }'

This command starts Logstash with a minimal configuration that reads from standard input and writes output to the console, which is useful for testing.

  4. To run Logstash as a background service, create a launch daemon or use brew services:
brew services start logstash

Installing Logstash on Windows

On Windows, Logstash is distributed as a ZIP archive. Manual installation is required.

  1. Download the Logstash ZIP file from the official Elastic website: https://www.elastic.co/downloads/logstash
  2. Extract the ZIP file to a directory such as C:\logstash. Avoid paths with spaces (e.g., C:\Program Files\).
  3. Open Command Prompt as Administrator and navigate to the Logstash directory:
cd C:\logstash
  4. Run Logstash in test mode to verify the installation:
bin\logstash -e "input { stdin { } } output { stdout { } }"

If successful, you'll see Logstash start and prompt you to type input. Press Enter after typing a message to see it processed and output to the console.

  5. Run Logstash as a Windows service (optional but recommended for production). Logstash does not ship its own service installer, so use a service wrapper such as NSSM (https://nssm.cc) or a scheduled task. With NSSM, for example:
nssm install logstash C:\logstash\bin\logstash.bat

Then start the service:

net start logstash

To stop or remove the service:

net stop logstash

nssm remove logstash confirm

Configuring Your First Logstash Pipeline

Logstash operates using pipelines defined in configuration files. A pipeline consists of three components: input, filter, and output.

  1. Create a configuration file in the config directory:
sudo nano /etc/logstash/conf.d/01-simple.conf
  2. Add the following basic configuration:
input {
  stdin { }
}

filter {
  grok {
    match => { "message" => "%{WORD:Greeting}, %{WORD:Subject}!" }
  }
}

output {
  stdout { codec => rubydebug }
}

This configuration reads input from the terminal, parses it using a Grok pattern to extract two fields (Greeting and Subject), and outputs the structured data to the console.

  3. Test the configuration for syntax errors:
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

If the configuration is valid, you'll see Configuration OK.

  4. Run Logstash with your configuration:
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash
  5. Type a test message like Hello, World! and press Enter. You should see structured JSON output in the console.
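For reference, a successfully parsed event rendered by the rubydebug codec looks roughly like the following; field values such as host and @timestamp will differ on your machine:

```
{
       "message" => "Hello, World!",
      "@version" => "1",
    "@timestamp" => 2024-05-10T12:34:56.000Z,
          "host" => "my-host",
      "Greeting" => "Hello",
       "Subject" => "World"
}
```

The two Grok-extracted fields, Greeting and Subject, appear alongside the metadata Logstash adds to every event.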

Verifying Installation Success

Once Logstash is installed and configured, confirm it's working as expected:

  • Check service status: sudo systemctl status logstash
  • Review logs: sudo tail -f /var/log/logstash/logstash-plain.log
  • Test input/output with a simple pipeline as shown above
  • Ensure ports are open (default: 5044 for Beats, 9600 for monitoring)
  • Verify Java memory settings in jvm.options (default: 1GB heap)

If Logstash fails to start, common issues include:

  • Java version mismatch
  • Incorrect file permissions on config or log directories
  • Port conflicts (e.g., another service using 9600)
  • Malformed configuration files

Use the -t flag to test configurations before starting the service to avoid runtime failures.

Best Practices

Use Separate Configuration Files

Organize your Logstash pipelines into multiple configuration files within the conf.d directory. Name files numerically (e.g., 01-input.conf, 02-filter.conf, 03-output.conf) to control load order. This improves maintainability, especially in complex environments with multiple data sources.
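A typical layout might look like the sketch below (paths assume a package-based Linux install; the file names are illustrative):

```
/etc/logstash/conf.d/
├── 01-beats-input.conf    # input { beats { port => 5044 } }
├── 02-filter-nginx.conf   # grok/date/geoip filters
└── 03-es-output.conf      # output { elasticsearch { ... } }
```

Keep in mind that by default Logstash concatenates every file in conf.d into a single pipeline, so events flow through all filters unless you guard them with conditionals.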

Enable Monitoring and Metrics

Logstash includes a built-in monitoring endpoint. Enable it by adding the following to logstash.yml:

xpack.monitoring.enabled: true

xpack.monitoring.elasticsearch.hosts: ["http://localhost:9200"]

This allows you to monitor performance, throughput, and error rates via Kibana's Monitoring UI. Enable it in production to detect bottlenecks before they impact data flow.

Optimize Memory and JVM Settings

Logstash's default heap size is 1GB. For high-throughput environments, increase it by editing jvm.options located in /etc/logstash/:

-Xms2g

-Xmx2g

Ensure the system has enough physical RAM to accommodate the heap size and avoid swapping. Never set the heap size to more than 50% of available RAM.
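As a quick sanity check on Linux, you can derive that ceiling from /proc/meminfo. This is a rough sketch; in practice, round down to a whole number of gigabytes:

```shell
# Print half of total physical RAM in MB as an upper bound for the Logstash heap
awk '/MemTotal/ {printf "max heap = %dm\n", $2/1024/2}' /proc/meminfo
```

On a machine with 8 GB of RAM this prints a value around 4096m, suggesting -Xms4g / -Xmx4g as the upper limit.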

Use Filebeat or Winlogbeat for Log Collection

While Logstash can read files directly, it's more efficient and reliable to use Filebeat (Linux/macOS) or Winlogbeat (Windows) as lightweight log shippers. These agents are designed to monitor log files, handle file rotation, and send data reliably to Logstash via the Beats input plugin.

Example Filebeat configuration:

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/nginx/*.log

output.logstash:
  hosts: ["your-logstash-server:5044"]

Implement Error Handling and Dead Letter Queues

Not all log entries will parse correctly. Use the dead_letter_queue feature to capture malformed events instead of dropping them:

dead_letter_queue.enable: true

path.dead_letter_queue: "/var/lib/logstash/dead_letter_queue"

This allows you to review and reprocess failed events later, improving data integrity.

Secure Your Installation

Logstash should never be exposed directly to the internet. Use a reverse proxy (e.g., Nginx) or firewall rules to restrict access to ports 9600 (monitoring) and 5044 (Beats). Enable SSL/TLS for communication between agents and Logstash:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key => "/etc/pki/tls/private/logstash-beats.key"
  }
}

Use Version Control for Configurations

Treat Logstash configuration files as code. Store them in a Git repository with clear commit messages and CI/CD pipelines to validate syntax before deployment. This ensures consistency across environments and enables rollback if a configuration causes instability.

Regularly Update Logstash

Elastic releases updates with security patches, bug fixes, and performance improvements. Subscribe to Elastic's release notes and schedule regular updates during maintenance windows. Treat major version upgrades with particular care; these often include breaking changes that require configuration adjustments.

Monitor Resource Usage

Logstash can be CPU and memory intensive. Use tools like htop, top, or Prometheus + Grafana to monitor resource consumption. Set up alerts for sustained high CPU or memory usage to prevent service degradation.

Tools and Resources

Official Documentation

The Elastic documentation is the most authoritative source for Logstash configuration, plugins, and troubleshooting: https://www.elastic.co/guide/en/logstash/current/index.html

Logstash Plugins

Logstash supports over 200 plugins for input, filter, and output operations. Key plugins include:

  • Input: beats, file, syslog, kafka, jdbc
  • Filter: grok, mutate, date, geoip, dissect, ruby
  • Output: elasticsearch, stdout, file, s3, http, redis

Install plugins via the Logstash plugin manager:

bin/logstash-plugin install logstash-filter-grok

View installed plugins:

bin/logstash-plugin list

Configuration Validators

Always validate your configuration before restarting Logstash:

bin/logstash --path.settings /etc/logstash -t

This checks for syntax errors and missing dependencies.

Logstash Docker Images

For containerized environments, Elastic provides official Docker images:

docker pull docker.elastic.co/logstash/logstash:8.12.0

docker run -it --rm -v "$(pwd)/config:/usr/share/logstash/pipeline" docker.elastic.co/logstash/logstash:8.12.0

Use Docker Compose to integrate Logstash with Elasticsearch and Kibana in a single stack:

version: '3.8'

services:
  logstash:
    image: docker.elastic.co/logstash/logstash:8.12.0
    ports:
      - "5044:5044"
      - "9600:9600"
    volumes:
      - ./config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - elasticsearch
    environment:
      - xpack.monitoring.enabled=true
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200

Community and Support

Engage with the Logstash community for help and inspiration on the official Elastic discussion forums (discuss.elastic.co), Stack Overflow, and the elastic/logstash GitHub repository.

Monitoring and Alerting Tools

Integrate Logstash with:

  • Kibana: for visualizing metrics and logs
  • Prometheus + Grafana: for custom performance dashboards
  • ELK Stack: a full observability pipeline with Elasticsearch and Kibana

Sample Configuration Repositories

GitHub hosts numerous open-source Logstash configurations; search for topics such as "logstash-config" or browse the elastic/examples repository for curated pipelines.

Real Examples

Example 1: Parsing Nginx Access Logs

One of the most common use cases for Logstash is parsing web server logs. Here's a complete pipeline for processing Nginx access logs:

input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  grok {
    match => { "message" => "%{IPORHOST:client_ip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:bytes_sent} \"%{DATA:referrer}\" \"%{DATA:agent}\"" }
  }

  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    target => "@timestamp"
  }

  geoip {
    source => "client_ip"
  }

  mutate {
    remove_field => [ "message", "timestamp" ]
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "nginx-access-%{+YYYY.MM.dd}"
  }
}

This configuration:

  • Reads Nginx logs from the file system
  • Uses Grok to extract client IP, request method, URL, response code, and user agent
  • Converts the timestamp into a proper Elasticsearch date format
  • Enriches data with geolocation using the geoip filter
  • Sends structured data to Elasticsearch with daily indices
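For reference, a log line in Nginx's default combined format, which the Grok pattern above is written to match, looks like this (values are illustrative):

```
203.0.113.9 - alice [10/May/2024:12:34:56 +0000] "GET /index.html HTTP/1.1" 200 512 "http://example.com/" "Mozilla/5.0"
```

The pattern maps 203.0.113.9 to client_ip, GET to method, 200 to response_code, and so on.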

Example 2: Centralized Syslog Collection

Collect and normalize syslog data from multiple Linux servers:

input {
  syslog {
    port => 514
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }

    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }

    mutate {
      remove_field => [ "message", "syslog_timestamp" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
}

Configure remote systems to forward logs via rsyslog or syslog-ng to this Logstash instance on port 514. Note that binding to ports below 1024 requires root privileges, so many deployments listen on a higher port such as 5514 instead.
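On the sending hosts, a minimal rsyslog forwarding rule might look like the following; append it to /etc/rsyslog.conf or a file under /etc/rsyslog.d/, and treat the hostname as a placeholder:

```
# Forward all facilities and severities to Logstash over TCP (@@ = TCP, @ = UDP)
*.* @@logstash.example.com:514
```

Restart rsyslog afterwards (sudo systemctl restart rsyslog) for the rule to take effect.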

Example 3: Processing Application Logs in JSON Format

If your application outputs structured JSON logs (e.g., Node.js, Python Flask), you can skip parsing and use the json filter:

input {
  file {
    path => "/opt/myapp/logs/app.log"
    codec => "json"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }

  stdout { codec => rubydebug }
}

With this setup, each line in app.log must be a valid JSON object:

{"level":"info","message":"User logged in","user_id":123,"timestamp":"2024-05-10T12:34:56Z"}

Logstash automatically maps JSON fields to Elasticsearch document properties.

Example 4: Conditional Routing Based on Log Source

Route logs from different sources to different Elasticsearch indices:

input {
  file {
    path => "/var/log/nginx/access.log"
    tags => ["nginx"]
  }

  file {
    path => "/var/log/auth.log"
    tags => ["auth"]
  }
}

filter {
  if "nginx" in [tags] {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }

  if "auth" in [tags] {
    grok {
      match => { "message" => "%{SYSLOG5424SD}" }
    }
  }
}

output {
  if "nginx" in [tags] {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "nginx-access-%{+YYYY.MM.dd}"
    }
  }

  if "auth" in [tags] {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "auth-logs-%{+YYYY.MM.dd}"
    }
  }
}

This approach improves query performance and enables fine-grained access control.

FAQs

Can I install Logstash without Java?

No. Logstash is built on Java and requires a JVM (version 11 or higher) to function. Recent releases bundle a compatible JDK, so a separate system-wide Java installation is not always necessary, but Logstash cannot run without a JVM.

What's the difference between Logstash and Filebeat?

Filebeat is a lightweight log shipper designed to collect and forward logs efficiently. Logstash is a full-featured data processing pipeline that can parse, enrich, filter, and transform data. Filebeat is often used as an input source for Logstash to reduce resource usage on edge servers.

How do I upgrade Logstash to a newer version?

Backup your configuration files first. Then use your package manager to upgrade:

  • Ubuntu/Debian: sudo apt update && sudo apt upgrade logstash
  • CentOS/RHEL: sudo dnf update logstash
  • Windows: Download the new ZIP, extract, and replace the old folder (keep config files)

Always test the new version in a staging environment before deploying to production.

Why is Logstash using so much memory?

High memory usage is often due to large pipelines, insufficient heap settings, or processing high volumes of unstructured data. Optimize by:

  • Increasing the heap size in jvm.options
  • Using the pipeline.batch.size and pipeline.workers settings to tune throughput
  • Avoiding complex Grok patterns on large fields
  • Using Filebeat to offload log collection
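The pipeline settings mentioned above live in logstash.yml. The values below are illustrative starting points rather than recommendations for every workload:

```
# /etc/logstash/logstash.yml (sketch)
pipeline.workers: 4        # defaults to the number of CPU cores
pipeline.batch.size: 250   # events per worker per batch (default 125)
pipeline.batch.delay: 50   # ms to wait before flushing an undersized batch
```

Larger batches improve throughput at the cost of memory; measure with the monitoring API before and after tuning.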

Can Logstash run on a Raspberry Pi?

Yes, but with limitations. Logstash can run on ARM-based systems like Raspberry Pi, but performance will be constrained by limited RAM and CPU. It's suitable for light logging tasks, but not for high-volume environments. Consider shipping logs with Filebeat directly to Elasticsearch instead.

How do I troubleshoot Logstash not starting?

Check the following:

  • Java version: java -version
  • Configuration syntax: logstash -t
  • File permissions: Ensure Logstash can read config files and write to logs
  • Port conflicts: Use netstat -tlnp | grep 9600 (or ss -tlnp on newer systems) to check for conflicts
  • Logs: Review /var/log/logstash/logstash-plain.log for error messages

Is Logstash secure by default?

No. Logstash does not enable encryption or authentication by default. Always enable SSL/TLS for Beats input, restrict network access via firewalls, and avoid exposing monitoring ports (9600) to public networks.

Can I use Logstash without Elasticsearch?

Yes. Logstash can output to numerous destinations including files, databases (PostgreSQL, MySQL), message queues (Kafka, Redis), cloud storage (S3), and HTTP endpoints. Elasticsearch is optional but commonly used for search and visualization.

How often should I restart Logstash?

Restart Logstash only when configuration changes are made or after updates. Frequent restarts can cause data loss or delays. Use Logstash's automatic configuration reload feature, or deploy changes via rolling updates in containerized environments.
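Automatic reload is enabled in logstash.yml (or with the --config.reload.automatic command-line flag); the interval shown is illustrative:

```
config.reload.automatic: true
config.reload.interval: 3s
```

Note that some changes, such as edits to logstash.yml itself or to JVM options, still require a full restart.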

Conclusion

Installing Logstash is more than a technical task; it's the first step toward building a scalable, reliable, and insightful data pipeline. Whether you're collecting application logs, monitoring infrastructure, or analyzing security events, a properly configured Logstash instance ensures your data flows smoothly from source to destination.

In this guide, we covered installation across Linux, macOS, and Windows, provided best practices for performance and security, introduced essential tools and plugins, and demonstrated real-world use cases that reflect industry standards. You now have the knowledge to deploy Logstash confidently and troubleshoot common issues before they impact your operations.

Remember: Logstash thrives in well-organized, monitored, and version-controlled environments. Pair it with Filebeat for efficient log shipping, Elasticsearch for storage and search, and Kibana for visualization to unlock the full power of the Elastic Stack. Stay updated, test thoroughly, and prioritize data integrity at every stage.

As data volumes continue to grow and observability becomes central to system reliability, mastering Logstash installation and configuration is not just beneficial; it's essential. Start small, validate often, and scale with purpose.