## Wazuh indexer integrations

This folder contains integrations with third-party XDR, SIEM and cybersecurity software.
The goal is to transport Wazuh's analysis to the platform that suits your needs.

### Amazon Security Lake

Amazon Security Lake automatically centralizes security data from AWS environments, SaaS providers,
on premises, and cloud sources into a purpose-built data lake stored in your account. With Security Lake,
you can get a more complete understanding of your security data across your entire organization. You can
also improve the protection of your workloads, applications, and data. Security Lake has adopted the
Open Cybersecurity Schema Framework (OCSF), an open standard. With OCSF support, the service normalizes
and combines security data from AWS and a broad range of enterprise security data sources.

#### Development guide

A demo of the integration can be started using the content of this folder and Docker.

```console
docker compose -f ./docker/amazon-security-lake.yml up -d
```

This Docker Compose project will bring up a _wazuh-indexer_ node, a _wazuh-dashboard_ node,
a _logstash_ node, our event generator and an AWS Lambda Python container. On the one hand, the
event generator will constantly push events to the indexer, to the `wazuh-alerts-4.x-sample` index
by default (refer to the [events generator](./tools/events-generator/README.md) documentation for
customization options). On the other hand, logstash will constantly query for new data and deliver
it to the output configured in the pipeline, which can be either `indexer-to-s3` or `indexer-to-file`.

The `indexer-to-s3` pipeline is the method used by the integration. This pipeline delivers
the data to an S3 bucket, from which the data is processed by a Lambda function and finally
sent to the Amazon Security Lake bucket in Parquet format.
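
To verify that all the services are up, you can list the containers of the Compose project. This is just a sanity check using standard Docker Compose commands:

```console
docker compose -f ./docker/amazon-security-lake.yml ps
```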

<!-- TODO continue with S3 credentials setup -->

Attach a terminal to the container and start the integration by starting logstash, as follows:

```console
/usr/share/logstash/bin/logstash -f /usr/share/logstash/pipeline/indexer-to-s3.conf --path.settings /etc/logstash
```
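
In case you need a shell inside the Logstash container for this, a minimal sketch (the service name `logstash` is an assumption; check the compose file or `docker ps` for the actual name):

```console
# Service name is an assumption; verify it against the compose file.
docker compose -f ./docker/amazon-security-lake.yml exec logstash bash
```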

After 5 minutes, the first batch of data will show up in http://localhost:9444/ui/wazuh-indexer-aux-bucket.
You'll need to invoke the Lambda function manually, selecting the log file to process.

```bash
export AUX_BUCKET=wazuh-indexer-aux-bucket

bash amazon-security-lake/src/invoke-lambda.sh <file>
```
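
To browse the bucket from the command line instead of the web UI, an S3-compatible client can be pointed at the local service. A sketch, assuming the AWS CLI is installed and the S3 API is served on the same port as the UI with dummy credentials:

```bash
# Endpoint and credentials are assumptions; adjust to your local S3-compatible service.
aws --endpoint-url http://localhost:9444 s3 ls s3://wazuh-indexer-aux-bucket/
```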

Processed data will be uploaded to http://localhost:9444/ui/wazuh-indexer-amazon-security-lake-bucket. Click on any file to download it,
and check its content using `parquet-tools`. Make sure to install the virtual environment first, using [requirements.txt](./amazon-security-lake/).
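
A minimal sketch for setting up that environment (the directory layout is assumed from the repository links above):

```bash
# Create and activate a virtual environment, then install the dependencies.
cd amazon-security-lake
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```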

```bash
parquet-tools show <parquet-file>
```

Bucket names can be configured by editing the [amazon-security-lake.yml](./docker/amazon-security-lake.yml) file.

For development or debugging purposes, you may want to enable hot-reload, test, or debug these pipelines
by using the `--config.reload.automatic`, `--config.test_and_exit` or `--debug` flags, respectively.
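
For example, to run the pipeline with hot-reload enabled (the flag is standard Logstash CLI; the rest of the command is the same as above):

```console
/usr/share/logstash/bin/logstash -f /usr/share/logstash/pipeline/indexer-to-s3.conf --path.settings /etc/logstash --config.reload.automatic
```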

For production usage, follow the instructions in our documentation page about this matter.
(_when-its-done_)

As a last note, we would like to point out that we also use this Docker environment for development.

#### Deployment guide

- Create one S3 bucket to store the raw events, for example: `wazuh-security-lake-integration`
- Create a new AWS Lambda function
  - Create an IAM role with access to the S3 bucket created above.
  - Select Python 3.12 as the runtime.
  - Configure the runtime to have 512 MB of memory and a 30-second timeout (see the CLI sketch after this list).
  - Configure an S3 trigger so that every object created in the bucket with the `.txt` extension invokes the Lambda.
  - Run `make` to generate a zip deployment package, or create it manually as per the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/python-package.html#python-package-create-dependencies).
  - Upload the zip package to the bucket. Then, upload it to the Lambda from S3 as per these instructions: https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-package.html#gettingstarted-package-zip
- Create a Custom Source within Security Lake for the Wazuh Parquet files as per the following guide: https://docs.aws.amazon.com/security-lake/latest/userguide/custom-sources.html
- Set the **AWS account ID** for the Custom Source **AWS account with permission to write data**.
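
As a reference, the bucket creation and the Lambda runtime settings above could be applied with the AWS CLI along these lines. The function name `wazuh-ocsf-lambda` and the region are hypothetical; replace them with your own:

```bash
# Hypothetical bucket/function names and region; adjust to your environment.
aws s3api create-bucket --bucket wazuh-security-lake-integration --region us-east-1
aws lambda update-function-configuration \
    --function-name wazuh-ocsf-lambda \
    --memory-size 512 \
    --timeout 30
```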

<!-- TODO Configure AWS Lambda Environment Variables -->
<!-- TODO Install and configure Logstash -->

The instructions in this section are based on the following AWS tutorials and documentation.

- [Tutorial: Using an Amazon S3 trigger to create thumbnail images](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-tutorial.html)
- [Tutorial: Using an Amazon S3 trigger to invoke a Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html)
- [Working with .zip file archives for Python Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/python-package.html)
- [Best practices for working with AWS Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html)

### Other integrations

TBD