Commit 3726e8f

Merge 4.9.1 into 4.10.0 (#358)
Signed-off-by: Álex Ruiz <[email protected]>
1 parent c8b33d4 commit 3726e8f

32 files changed (+1737 -119 lines)

distribution/packages/src/common/systemd/wazuh-indexer.service (+1 -1)

````diff
@@ -59,7 +59,7 @@ SendSIGKILL=no
 SuccessExitStatus=143

 # Allow a slow startup before the systemd notifier module kicks in to extend the timeout
-TimeoutStartSec=75
+TimeoutStartSec=180

 [Install]
 WantedBy=multi-user.target
````

distribution/packages/src/deb/debian/postinst (+1 -1)

````diff
@@ -17,7 +17,7 @@ product_dir=/usr/share/wazuh-indexer
 config_dir=/etc/wazuh-indexer
 data_dir=/var/lib/wazuh-indexer
 log_dir=/var/log/wazuh-indexer
-pid_dir=/var/run/wazuh-indexer
+pid_dir=/run/wazuh-indexer
 tmp_dir=/var/log/wazuh-indexer/tmp


````

distribution/packages/src/rpm/init.d/wazuh-indexer (+1 -1)

````diff
@@ -47,7 +47,7 @@ if [ -f "$OPENSEARCH_ENV_FILE" ]; then
 . "$OPENSEARCH_ENV_FILE"
 fi

-exec="$OPENSEARCH_HOME/bin/wazuh-indexer"
+exec="$OPENSEARCH_HOME/bin/opensearch"
 prog="wazuh-indexer"
 pidfile="$PID_DIR/${prog}.pid"

````

distribution/packages/src/rpm/wazuh-indexer.cicd.spec (+4 -2)

````diff
@@ -695,7 +695,9 @@ rm -fr %{buildroot}


 %changelog
-* Thu Mar 28 2024 support <[email protected]> - 4.9.0
+* Thu Aug 15 2024 support <[email protected]> - 4.9.1
+- More info: https://documentation.wazuh.com/current/release-notes/release-4-9-1.html
+* Thu Aug 15 2024 support <[email protected]> - 4.9.0
 - More info: https://documentation.wazuh.com/current/release-notes/release-4-9-0.html
 * Tue Jan 30 2024 support <[email protected]> - 4.8.1
 - More info: https://documentation.wazuh.com/current/release-notes/release-4-8-1.html
@@ -750,4 +752,4 @@ rm -fr %{buildroot}
 * Wed May 18 2022 support <[email protected]> - 4.3.1
 - More info: https://documentation.wazuh.com/current/release-notes/release-4-3-1.html
 * Thu May 05 2022 support <[email protected]> - 4.3.0
-- More info: https://documentation.wazuh.com/current/release-notes/release-4-3-0.html
+- More info: https://documentation.wazuh.com/current/release-notes/release-4-3-0.html
````

distribution/packages/src/rpm/wazuh-indexer.rpm.spec (+10 -6)

````diff
@@ -108,11 +108,13 @@ set -- "$@" "%%dir /usr/lib/systemd/system"
 set -- "$@" "%%dir /usr/lib/tmpfiles.d"
 set -- "$@" "%%dir /usr/share"
 set -- "$@" "%%dir /var"
+set -- "$@" "%%dir /var/run"
+set -- "$@" "%%dir /var/run/%{name}"
+set -- "$@" "%%dir /run"
 set -- "$@" "%%dir /var/lib"
 set -- "$@" "%%dir /var/log"
 set -- "$@" "%%dir /usr/lib/sysctl.d"
 set -- "$@" "%%dir /usr/lib/systemd"
-set -- "$@" "%%dir /usr/lib/systemd"
 set -- "$@" "%{_sysconfdir}/sysconfig/%{name}"
 set -- "$@" "%{config_dir}/log4j2.properties"
 set -- "$@" "%{config_dir}/jvm.options"
@@ -174,8 +176,8 @@ exit 0

 %post
 set -e
-chown -R %{name}.%{name} %{config_dir}
-chown -R %{name}.%{name} %{log_dir}
+chown -R %{name}:%{name} %{config_dir}
+chown -R %{name}:%{name} %{log_dir}

 # Apply PerformanceAnalyzer Settings
 chmod a+rw /tmp
@@ -232,7 +234,7 @@ exit 0
 # Service files
 %attr(0644, root, root) %{_prefix}/lib/systemd/system/%{name}.service
 %attr(0644, root, root) %{_prefix}/lib/systemd/system/%{name}-performance-analyzer.service
-%attr(0644, root, root) %{_sysconfdir}/init.d/%{name}
+%attr(0750, root, root) %{_sysconfdir}/init.d/%{name}
 %attr(0644, root, root) %config(noreplace) %{_prefix}/lib/sysctl.d/%{name}.conf
 %attr(0644, root, root) %config(noreplace) %{_prefix}/lib/tmpfiles.d/%{name}.conf

@@ -263,9 +265,11 @@ exit 0
 %attr(750, %{name}, %{name}) %{product_dir}/performance-analyzer-rca/bin/*

 %changelog
-* Wed Jun 19 2024 support <[email protected]> - 4.10.0
+* Tue Aug 20 2024 support <[email protected]> - 4.10.0
 - More info: https://documentation.wazuh.com/current/release-notes/release-4-10-0.html
-* Thu Mar 28 2024 support <[email protected]> - 4.9.0
+* Thu Aug 15 2024 support <[email protected]> - 4.9.1
+- More info: https://documentation.wazuh.com/current/release-notes/release-4-9-1.html
+* Thu Aug 15 2024 support <[email protected]> - 4.9.0
 - More info: https://documentation.wazuh.com/current/release-notes/release-4-9-0.html
 * Tue Jan 30 2024 support <[email protected]> - 4.8.1
 - More info: https://documentation.wazuh.com/current/release-notes/release-4-8-1.html
````

docker/README.md (+1 -1)

````diff
@@ -91,4 +91,4 @@ Then, start a container with:

 ```console
 docker run -it --rm wazuh-indexer:4.10.0
-```
+```
````

docker/dev/images/Dockerfile (+1 -1)

````diff
@@ -1,4 +1,4 @@
-FROM gradle:jdk21-alpine AS builder
+FROM gradle:8.7.0-jdk21-alpine AS builder
 USER gradle
 WORKDIR /home/wazuh-indexer
 COPY --chown=gradle:gradle . /home/wazuh-indexer
````

integrations/.gitignore (+2 -1)

````diff
@@ -1,2 +1,3 @@
 external
-docker/certs
+docker/certs
+docker/config
````

integrations/README.md (+5 -6)

````diff
@@ -14,14 +14,13 @@ and combines security data from AWS and a broad range of enterprise security dat

 Refer to these documents for more information about this integration:

-* [User Guide](./amazon-security-lake/README.md).
-* [Developer Guide](./amazon-security-lake/CONTRIBUTING.md).
-
+- [User Guide](./amazon-security-lake/README.md).
+- [Developer Guide](./amazon-security-lake/CONTRIBUTING.md).

 ### Other integrations

 We host development environments to support the following integrations:

-* [Splunk](./splunk/README.md).
-* [Elasticsearch](./elastic/README.md).
-* [OpenSearch](./opensearch/README.md).
+- [Splunk](./splunk/README.md).
+- [Elasticsearch](./elastic/README.md).
+- [OpenSearch](./opensearch/README.md).
````

integrations/amazon-security-lake/CONTRIBUTING.md (+8 -12)

````diff
@@ -5,41 +5,38 @@
 A demo of the integration can be started using the content of this folder and Docker. Open a terminal in the `wazuh-indexer/integrations` folder and start the environment.

 ```console
-docker compose -f ./docker/amazon-security-lake.yml up -d
+docker compose -f ./docker/compose.amazon-security-lake.yml up -d
 ```

 This Docker Compose project will bring up these services:

 - a _wazuh-indexer_ node
 - a _wazuh-dashboard_ node
 - a _logstash_ node
-- our [events generator](./tools/events-generator/README.md)
+- our [events generator](../tools/events-generator/README.md)
 - an AWS Lambda Python container.

-On the one hand, the event generator will push events constantly to the indexer, to the `wazuh-alerts-4.x-sample` index by default (refer to the [events generator](./tools/events-generator/README.md) documentation for customization options). On the other hand, Logstash will query for new data and deliver it to output configured in the pipeline, which can be one of `indexer-to-s3` or `indexer-to-file`.
+On the one hand, the event generator will push events constantly to the indexer, to the `wazuh-alerts-4.x-sample` index by default (refer to the [events generator](../tools/events-generator/README.md) documentation for customization options). On the other hand, Logstash will query for new data and deliver it to output configured in the pipeline `indexer-to-s3`. This pipeline delivers the data to an S3 bucket, from which the data is processed using a Lambda function, to finally be sent to the Amazon Security Lake bucket in Parquet format.

-The `indexer-to-s3` pipeline is the method used by the integration. This pipeline delivers the data to an S3 bucket, from which the data is processed using a Lambda function, to finally be sent to the Amazon Security Lake bucket in Parquet format.
-
-
-Attach a terminal to the container and start the integration by starting Logstash, as follows:
+The pipeline starts automatically, but if you need to start it manually, attach a terminal to the Logstash container and start the integration using the command below:

 ```console
-/usr/share/logstash/bin/logstash -f /usr/share/logstash/pipeline/indexer-to-s3.conf --path.settings /etc/logstash
+/usr/share/logstash/bin/logstash -f /usr/share/logstash/pipeline/indexer-to-s3.conf
 ```

 After 5 minutes, the first batch of data will show up in http://localhost:9444/ui/wazuh-aws-security-lake-raw. You'll need to invoke the Lambda function manually, selecting the log file to process.

 ```bash
-bash amazon-security-lake/src/invoke-lambda.sh <file>
+bash amazon-security-lake/invoke-lambda.sh <file>
 ```

-Processed data will be uploaded to http://localhost:9444/ui/wazuh-aws-security-lake-parquet. Click on any file to download it, and check it's content using `parquet-tools`. Just make sure of installing the virtual environment first, through [requirements.txt](./amazon-security-lake/).
+Processed data will be uploaded to http://localhost:9444/ui/wazuh-aws-security-lake-parquet. Click on any file to download it, and check it's content using `parquet-tools`. Just make sure of installing the virtual environment first, through [requirements.txt](./requirements.txt).

 ```bash
 parquet-tools show <parquet-file>
 ```

-If the `S3_BUCKET_OCSF` variable is set in the container running the AWS Lambda function, intermediate data in OCSF and JSON format will be written to a dedicated bucket. This is enabled by default, writing to the `wazuh-aws-security-lake-ocsf` bucket. Bucket names and additional environment variables can be configured editing the [amazon-security-lake.yml](./docker/amazon-security-lake.yml) file.
+If the `S3_BUCKET_OCSF` variable is set in the container running the AWS Lambda function, intermediate data in OCSF and JSON format will be written to a dedicated bucket. This is enabled by default, writing to the `wazuh-aws-security-lake-ocsf` bucket. Bucket names and additional environment variables can be configured editing the [compose.amazon-security-lake.yml](../docker/compose.amazon-security-lake.yml) file.

 For development or debugging purposes, you may want to enable hot-reload, test or debug on these files, by using the `--config.reload.automatic`, `--config.test_and_exit` or `--debug` flags, respectively.

@@ -56,4 +53,3 @@ See [README.md](README.md). The instructions on that section have been based on
 **Docker is required**.

 The [Makefile](./Makefile) in this folder automates the generation of a zip deployment package containing the source code and the required dependencies for the AWS Lambda function. Simply run `make` and it will generate the `wazuh_to_amazon_security_lake.zip` file. The main target runs a Docker container to install the Python3 dependencies locally, and zips the source code and the dependencies together.
-
````
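As a sketch of the manual invocation described in the CONTRIBUTING diff above: `invoke-lambda.sh` POSTs an S3 event notification to the Lambda runtime emulator at `http://localhost:9000/2015-03-31/functions/function/invocations`. The payload shape can be built as below; the bucket name and field subset are a hypothetical minimum for illustration, and the `<file>` placeholder is kept from the script, so refer to `invoke-lambda.sh` for the exact event.

```python
import json

def make_s3_event(bucket: str, key: str) -> dict:
    # Minimal S3 PUT notification: the shape a Lambda handler reads the
    # bucket and object key from. Illustrative subset of fields only.
    return {
        "Records": [
            {
                "eventSource": "aws:s3",
                "s3": {
                    "bucket": {"name": bucket},
                    "object": {"key": key},
                },
            }
        ]
    }

# POST this JSON body to the emulator endpoint, e.g. with curl:
#   curl -X POST "http://localhost:9000/2015-03-31/functions/function/invocations" -d "$PAYLOAD"
payload = json.dumps(make_s3_event("wazuh-aws-security-lake-raw", "<file>"))
```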
integrations/amazon-security-lake/aws-lambda.dockerfile (+12 -41)

````diff
@@ -1,46 +1,17 @@
-# MULTI-STAGE build
+# docker build --platform linux/amd64 --no-cache -f aws-lambda.dockerfile -t docker-image:test .
+# docker run --platform linux/amd64 -p 9000:8080 docker-image:test

-FROM python:3.9 as builder
-# Create a virtualenv for dependencies. This isolates these packages from
-# system-level packages.
-RUN python3 -m venv /env
-# Setting these environment variables are the same as running
-# source /env/bin/activate.
-ENV VIRTUAL_ENV /env
-ENV PATH /env/bin:$PATH
-# Copy the application's requirements.txt and run pip to install all
-# dependencies into the virtualenv.
-COPY requirements.txt /app/requirements.txt
-RUN pip install -r /app/requirements.txt
+# FROM public.ecr.aws/lambda/python:3.9
+FROM amazon/aws-lambda-python:3.12

+# Copy requirements.txt
+COPY requirements.aws.txt ${LAMBDA_TASK_ROOT}

-FROM python:3.9
-ENV LOGSTASH_KEYSTORE_PASS="SecretPassword"
-# Add the application source code.
-COPY --chown=logstash:logstash ./src /home/app
-# Add execution persmissions.
-RUN chmod a+x /home/app/lambda_function.py
-# Copy the application's dependencies.
-COPY --from=builder /env /env
+# Install the specified packages
+RUN pip install -r requirements.aws.txt

-# Install Logstash
-RUN apt-get update && apt-get install -y iputils-ping wget gpg apt-transport-https
-RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor -o /usr/share/keyrings/elastic-keyring.gpg && \
-    echo "deb [signed-by=/usr/share/keyrings/elastic-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-8.x.list && \
-    apt-get update && apt install -y logstash
-# Install logstash-input-opensearch plugin.
-RUN /usr/share/logstash/bin/logstash-plugin install logstash-input-opensearch
-# Copy the Logstash's ingestion pipelines.
-COPY --chown=logstash:logstash logstash/pipeline /usr/share/logstash/pipeline
-# Grant logstash ownership over its files
-RUN chown --recursive logstash:logstash /usr/share/logstash /etc/logstash /var/log/logstash /var/lib/logstash
+# Copy function code
+COPY src ${LAMBDA_TASK_ROOT}

-USER logstash
-# Copy and run the setup.sh script to create and configure a keystore for Logstash.
-COPY --chown=logstash:logstash logstash/setup.sh /usr/share/logstash/bin/setup.sh
-RUN bash /usr/share/logstash/bin/setup.sh
-
-# Disable ECS compatibility
-RUN `echo "pipeline.ecs_compatibility: disabled" >> /etc/logstash/logstash.yml`
-
-WORKDIR /home/app
+# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
+CMD [ "lambda_function.lambda_handler" ]
````

integrations/amazon-security-lake/README.md (+2 -2)

````diff
@@ -90,7 +90,7 @@ Follow the [official documentation](https://docs.aws.amazon.com/lambda/latest/dg
 - Configure the runtime to have 512 MB of memory and 30 seconds timeout.
 - Configure a trigger so every object with `.txt` extension uploaded to the S3 bucket created previously invokes the Lambda.
 ![AWS Lambda trigger](./images/asl-lambda-trigger.jpeg)
-- Use the [Makefile](./Makefile) to generate the zip package `wazuh_to_amazon_security_lake.zip`, and upload it to the S3 bucket created previously as per [these instructions](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-package.html#gettingstarted-package-zip). See [CONTRIBUTING](./CONTRIBUTING.md) for details about the Makefile.
+- Use the [Makefile](./Makefile) to generate the zip package `wazuh_to_amazon_security_lake.zip`, and upload it to the S3 bucket created previously as per [these instructions](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-package.html#gettingstarted-package-zip). See [CONTRIBUTING](./CONTRIBUTING.md) for details about the Makefile.
 - Configure the Lambda with the at least the required _Environment Variables_ below:

 | Environment variable | Required | Value |
@@ -234,7 +234,7 @@ The tables below represent how the Wazuh Security Events are mapped into the OCS
 | type_uid | Long | 200101 |
 | metadata.product.name | String | "Wazuh" |
 | metadata.product.vendor_name | String | "Wazuh, Inc." |
-| metadata.product.version | String | "4.9.0" |
+| metadata.product.version | String | "4.9.1" |
 | metadata.product.lang | String | "en" |
 | metadata.log_name | String | "Security events" |
 | metadata.log_provider | String | "Wazuh" |
````

integrations/amazon-security-lake/invoke-lambda.sh (+1 -1)

````diff
@@ -39,4 +39,4 @@ curl -X POST "http://localhost:9000/2015-03-31/functions/function/invocations" -
 }
 }
 ]
-}'
+}'
````

integrations/amazon-security-lake/logstash/pipeline/indexer-to-s3.conf (+14 -6)

````diff
@@ -27,19 +27,27 @@ output {
     s3 {
         id => "output.s3"
         access_key_id => "${AWS_ACCESS_KEY_ID}"
-        secret_access_key => "${AWS_SECRET_ACCESS_KEY}"
-        region => "${AWS_REGION}"
-        endpoint => "${AWS_ENDPOINT}"
         bucket => "${S3_BUCKET_RAW}"
         codec => "json_lines"
-        retry_count => 0
-        validate_credentials_on_root_bucket => false
+        encoding => "gzip"
+        endpoint => "${AWS_ENDPOINT}"
         prefix => "%{+YYYY}%{+MM}%{+dd}"
+        region => "${AWS_REGION}"
+        retry_count => 0
+        secret_access_key => "${AWS_SECRET_ACCESS_KEY}"
         server_side_encryption => true
         server_side_encryption_algorithm => "AES256"
+        time_file => 5
+        validate_credentials_on_root_bucket => false
         additional_settings => {
             "force_path_style" => true
         }
-        time_file => 5
+    }
+    file {
+        id => "output.file"
+        path => "/usr/share/logstash/logs/indexer-to-file-%{+YYYY-MM-dd-HH}.log"
+        file_mode => 0644
+        codec => json_lines
+        flush_interval => 30
     }
 }
````

integrations/amazon-security-lake/src/lambda_function.py (+2 -1)

````diff
@@ -2,6 +2,7 @@
 import os
 import urllib.parse
 import json
+import gzip
 import boto3
 import pyarrow as pa
 import pyarrow.parquet as pq
@@ -31,7 +32,7 @@ def get_events(bucket: str, key: str) -> list:
     logger.info(f"Reading {key}.")
     try:
         response = s3_client.get_object(Bucket=bucket, Key=key)
-        data = response['Body'].read().decode('utf-8')
+        data = gzip.decompress(response['Body'].read()).decode('utf-8')
         return data.splitlines()
     except ClientError as e:
         logger.error(
````
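The patched `get_events` above now expects gzip-compressed objects, matching the `encoding => "gzip"` option added to the Logstash S3 output in this same commit. A minimal sketch of the round trip, using hypothetical helper names (the real function also handles the S3 client and error logging):

```python
import gzip
import json

def compress_json_lines(events: list) -> bytes:
    # Approximate what Logstash's json_lines codec with gzip encoding
    # uploads to the raw S3 bucket: one JSON document per line, gzipped.
    body = "\n".join(json.dumps(event) for event in events)
    return gzip.compress(body.encode("utf-8"))

def read_events(raw_body: bytes) -> list:
    # Mirror the patched line: decompress before decoding and splitting.
    data = gzip.decompress(raw_body).decode("utf-8")
    return [json.loads(line) for line in data.splitlines()]

events = [{"agent": {"id": "001"}}, {"agent": {"id": "002"}}]
assert read_events(compress_json_lines(events)) == events
```

Without the matching `gzip.decompress` call, the Lambda would try to UTF-8-decode compressed bytes and fail, which is why the two changes land together.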

integrations/docker/.env (+20 -5)

````diff
@@ -4,9 +4,6 @@ ELASTIC_PASSWORD=elastic
 # Password for the 'kibana_system' user (at least 6 characters)
 KIBANA_PASSWORD=elastic

-# Version of Elastic products
-STACK_VERSION=8.6.2
-
 # Set the cluster name
 CLUSTER_NAME=elastic

@@ -22,8 +19,26 @@ KIBANA_PORT=5602
 # Increase or decrease based on the available host memory (in bytes)
 MEM_LIMIT=1073741824

+# Wazuh version
+WAZUH_VERSION=4.8.1
+
+# Wazuh Indexer version (Provisionally using OpenSearch)
+WAZUH_INDEXER_VERSION=2.14.0
+
+# Wazuh Dashboard version (Provisionally using OpenSearch Dashboards)
+WAZUH_DASHBOARD_VERSION=2.14.0
+
+# Wazuh certs generator version
+WAZUH_CERTS_GENERATOR_VERSION=0.0.1
+
 # OpenSearch destination cluster version
 OS_VERSION=2.14.0

-# Wazuh version
-WAZUH_VERSION=4.7.5
+# Logstash version:
+LOGSTASH_OSS_VERSION=8.9.0
+
+# Splunk version:
+SPLUNK_VERSION=9.1.4
+
+# Version of Elastic products
+STACK_VERSION=8.14.3
````
