This article explains how to use Fluentd to aggregate semi-structured logs into Treasure Data (TD), which offers a Cloud Data Service.
Fluentd is an advanced open-source log collector originally developed at Treasure Data, Inc. It is specifically designed to solve the big-data log collection problem.
Treasure Data provides Cloud Data Service, which Fluentd users can use to easily store and analyze data in the cloud. Fluentd is designed to flexibly connect with many systems via plugins, but Treasure Data should be your top choice if you don't want to spend engineering resources maintaining your backend infrastructure.
This article will show you how to use Fluentd to receive data from HTTP and stream it into TD.
The figure below shows the high-level architecture:
The following software/services must be set up correctly:
- Fluentd
- TD Output Plugin
- Your Treasure Data (CDP) Services Account
For simplicity, this article will describe how to set up a one-node configuration. Please install all of the above prerequisite software/services on the same node.
You can install Fluentd via major packaging systems.
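For example, on a Debian/Ubuntu system the step might look like this (a hedged sketch assuming the official Fluentd apt repository has already been added; see the Installation section for your platform's exact steps):

$ sudo apt-get install fluent-package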
If out_td (fluent-plugin-td) is not installed yet, please install it manually. See the Plugin Management section for how to install fluent-plugin-td in your environment.
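For example, a gem-based installation might look like the following sketch (with fluent-package, use the bundled fluent-gem command; with a plain Ruby install, use gem instead):

$ sudo fluent-gem install fluent-plugin-td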
{% hint style='info' %}
If you use fluent-package, out_td (fluent-plugin-td) is bundled by default.
{% endhint %}
Next, please sign up to TD and get your apikey using the td apikey:show command:
$ td account -f
Enter your Treasure Data credentials.
Email: [email protected]
Password (typing will be hidden):
$ td apikey:show
kdfasklj218dsakfdas0983120
Let's start configuring Fluentd. If you used the deb/rpm package, Fluentd's config file is located at /etc/fluent/fluentd.conf.
For the input source, we will set up Fluentd to accept records from HTTP. The Fluentd configuration file should look like this:
<source>
@type http
port 8888
</source>
The output destination will be Treasure Data. The output configuration should look like this:
<match td.*.*>
@type tdlog
apikey YOUR_API_KEY_IS_HERE
auto_create_table
use_ssl true
<buffer>
@type file
path /var/log/fluent/buffer/td
</buffer>
</match>
The match section specifies a tag pattern (here, td.*.*) used to match incoming events. If an event's tag matches the pattern, the config inside <match>...</match> is applied (i.e. the event is routed according to that config).
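With out_tdlog, the tag also selects the destination: an event tagged td.<database>.<table> is uploaded to that database and table. Since the in_http input takes the tag from the URL path, posting to /td.testdb.www_access (as in the test below) routes the record to the www_access table in the testdb database.

After editing the configuration, restart Fluentd so the new settings take effect. A hedged sketch, assuming a systemd-managed fluent-package installation with the service name fluentd:

$ sudo systemctl restart fluentd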
To test the configuration, just post a JSON record to Fluentd. Sending a USR1 signal flushes Fluentd's buffer into TD:
$ curl -X POST -d 'json={"action":"login","user":2}' \
http://localhost:8888/td.testdb.www_access
$ kill -USR1 `cat /var/run/fluent/fluentd.pid`
Next, please use the td tables command. If the count is not zero, the data was imported successfully.
$ td tables
+----------+------------+------+-------+--------+
| Database | Table | Type | Count | Schema |
+----------+------------+------+-------+--------+
| testdb | www_access | log | 1 | |
+----------+------------+------+-------+--------+
You can now issue queries against the imported data:
$ td query -w -d testdb \
"SELECT COUNT(1) AS cnt FROM www_access"
queued...
started at 2012-04-10T23:44:41Z
2012-04-10 23:43:12,692 Stage-1 map = 0%, reduce = 0%
2012-04-10 23:43:18,766 Stage-1 map = 100%, reduce = 0%
2012-04-10 23:43:32,973 Stage-1 map = 100%, reduce = 100%
Status : success
Result :
+-----+
| cnt |
+-----+
| 1 |
+-----+
It is not advisable to send sensitive user information to the cloud. To assist with this need, out_tdlog comes with some anonymization features. For more details, see the Treasure Data plugin article.
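If you prefer to drop sensitive fields before they ever leave the node, one option is Fluentd's built-in record_transformer filter. The following is a minimal sketch (the email field name is hypothetical; this is a generic alternative, not the plugin's own anonymization feature):

<filter td.*.*>
  @type record_transformer
  # Remove the hypothetical "email" field before records are buffered and uploaded
  remove_keys email
</filter>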
Fluentd + Treasure Data gives you a data collection and analysis system in days, not months. Treasure Data is a useful solution if you do not want to spend engineering resources maintaining the backend storage and analytics infrastructure.
If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is an open-source project under Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License.