Creating Fluentd plugins

The Delphix Fluentd service includes built-in support for sending data to Splunk, but the feature can be extended to other data consumers through plugins uploaded to a Delphix engine for use by Fluentd. These plugins can be adapted from widely available options, such as those found at http://rubygems.org, and require only a few specific gem files plus a configuration file that tells Fluentd where and how to send metrics to your platform of choice.

Delphix Fluentd plugin structure

The expected file structure of a Delphix Fluentd plugin is simple: a root folder containing a properly configured fluent.conf.stg file and a /gems subfolder with the necessary .gem files. Delphix Fluentd plugins are simpler than typical Fluentd plugins because Delphix's native Fluentd integration already ships with many of the Ruby gems that standard plugins depend on. Additionally, the Delphix Fluentd service requires only the dependency gem files (i.e., the files ending with a .gem extension); any additional files that come with a downloaded plugin should be removed.
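
For illustration, here is a minimal shell sketch that assembles this layout from scratch. The directory name (dlpx-example-plugin) and the plugin gem (fluent-plugin-elasticsearch) are placeholders; substitute the gems your data consumer actually requires.

CODE
# Create the expected plugin layout: a root folder containing fluent.conf.stg
# and a gems/ subfolder holding the dependency .gem files.
mkdir -p dlpx-example-plugin/gems
cd dlpx-example-plugin/gems

# "gem fetch" downloads a .gem file into the current directory without
# installing it; repeat for each dependency gem your plugin needs.
gem fetch fluent-plugin-elasticsearch

# Place the configuration template (described below) in the plugin root.
cd ..
touch fluent.conf.stg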

Pre-installed Fluentd Ruby gems

To minimize the surface area of uploads to a given Delphix engine, all .gem files that are already present on the Delphix OS should also be removed from your plugin before upload (a shell sketch for pruning them follows the list). Here is the list of provided gems that should be removed from your /gems folder, if present:

CODE
addressable (2.8.1)
async (1.30.3)
async-http (0.59.2)
async-io (1.34.0)
async-pool (0.3.12)
aws-eventstream (1.2.0)
aws-partitions (1.650.0)
aws-sdk-core (3.164.0)
aws-sdk-kms (1.58.0)
aws-sdk-s3 (1.116.0)
aws-sdk-sqs (1.51.1)
aws-sigv4 (1.5.2)
benchmark (default: 0.1.0)
bigdecimal (default: 2.0.0)
bindata (2.4.14)
bundler (2.3.18, default: 2.1.4)
cgi (default: 0.1.0.1)
cmetrics (0.3.3)
concurrent-ruby (1.1.10)
console (1.16.2)
cool.io (1.7.1)
csv (default: 3.1.2)
date (default: 3.0.3)
delegate (default: 0.1.0)
did_you_mean (default: 1.4.0)
digest-crc (0.6.4)
digest-murmurhash (1.1.1)
etc (default: 1.1.0)
excon (0.93.1)
faraday (1.10.2)
faraday-em_http (1.0.0)
faraday-em_synchrony (1.0.0)
faraday-excon (1.1.0)
faraday-httpclient (1.0.1)
faraday-multipart (1.0.4)
faraday-net_http (1.0.1)
faraday-net_http_persistent (1.2.0)
faraday-patron (1.0.0)
faraday-rack (1.0.0)
faraday-retry (1.0.3)
faraday_middleware-aws-sigv4 (0.6.1)
fcntl (default: 1.0.0)
ffi (1.15.5)
fiber-local (1.0.0)
fiddle (default: 1.0.0)
fileutils (1.6.0, default: 1.4.1)
fluent-config-regexp-type (1.0.0)
fluent-diagtool (1.0.1)
fluent-logger (0.9.0)
fluent-plugin-calyptia-monitoring (0.1.3)
fluent-plugin-flowcounter-simple (0.1.0)
fluent-plugin-kafka (0.18.1)
fluent-plugin-metrics-cmetrics (0.1.2)
fluent-plugin-opensearch (1.0.8)
fluent-plugin-prometheus (2.0.3)
fluent-plugin-prometheus_pushgateway (0.1.0)
fluent-plugin-record-modifier (2.1.1)
fluent-plugin-rewrite-tag-filter (2.4.0)
fluent-plugin-s3 (1.7.2)
fluent-plugin-sd-dns (0.1.0)
fluent-plugin-systemd (1.0.5)
fluent-plugin-td (1.2.0)
fluent-plugin-utmpx (0.5.0)
fluent-plugin-webhdfs (1.5.0)
fluentd (1.15.3)
forwardable (default: 1.3.1)
getoptlong (default: 0.1.0)
hirb (0.7.3)
http_parser.rb (0.8.0)
httpclient (2.8.3)
io-console (default: 0.5.6)
ipaddr (default: 1.2.2)
irb (default: 1.2.6)
jmespath (1.6.1)
json (2.6.2, default: 2.3.0)
linux-utmpx (0.3.0)
logger (default: 1.4.2)
ltsv (0.1.2)
matrix (default: 0.2.0)
mini_portile2 (2.8.0)
minitest (5.13.0)
msgpack (1.6.0)
multi_json (1.15.0)
multipart-post (2.2.3)
mutex_m (default: 0.1.0)
net-pop (default: 0.1.0)
net-smtp (default: 0.1.0)
net-telnet (0.2.0)
nio4r (2.5.8)
observer (default: 0.1.0)
oj (3.13.17)
open3 (default: 0.1.0)
opensearch-api (2.0.2)
opensearch-ruby (2.0.3)
opensearch-transport (2.0.1)
openssl (default: 2.1.3)
ostruct (default: 0.2.0)
parallel (1.22.1)
power_assert (1.1.7)
prime (default: 0.1.1)
prometheus-client (2.1.0)
protocol-hpack (1.4.2)
protocol-http (0.23.12)
protocol-http1 (0.14.6)
protocol-http2 (0.14.2)
pstore (default: 0.1.0)
psych (default: 3.1.0)
public_suffix (5.0.0)
racc (default: 1.4.16)
rake (13.0.6, 13.0.1)
rdkafka (0.11.1)
rdoc (default: 6.2.1.1)
readline (default: 0.0.2)
reline (default: 0.1.5)
rexml (default: 3.2.3.1)
rss (default: 0.2.8)
ruby-kafka (1.5.0)
ruby-progressbar (1.11.0)
ruby2_keywords (0.0.5)
rubyzip (1.3.0)
sdbm (default: 1.0.0)
serverengine (2.3.0)
sigdump (0.2.4)
singleton (default: 0.1.0)
stringio (default: 0.1.0)
strptime (0.2.5)
strscan (default: 1.0.3)
systemd-journal (1.4.2)
td (0.16.9)
td-client (1.0.8)
td-logger (0.3.28)
test-unit (3.3.4)
timeout (default: 0.1.0)
timers (4.3.5)
tracer (default: 0.1.0)
traces (0.7.0)
tzinfo (2.0.5)
tzinfo-data (1.2022.5)
uri (default: 0.10.0)
webhdfs (0.10.2)
webrick (1.7.0, default: 1.6.1)
xmlrpc (0.3.0)
yajl-ruby (1.4.3)
yaml (default: 0.1.0)
zip-zip (0.3)
zlib (default: 1.1.0)
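
Pruning these gems by hand is tedious if your plugin ships many dependencies. The following shell sketch removes any .gem file whose name appears in the list above; the preinstalled-gems.txt file and the dlpx-example-plugin directory are assumptions for illustration, not files provided by Delphix.

CODE
# A minimal sketch, assuming the list above has been saved (one gem name per
# line, without version numbers) to a hypothetical file named
# preinstalled-gems.txt next to the plugin folder.
cd dlpx-example-plugin/gems
for gemfile in *.gem; do
  name="${gemfile%-*}"                        # "faraday-1.10.2.gem" -> "faraday"
  if grep -qx "$name" ../../preinstalled-gems.txt; then
    rm -v "$gemfile"                          # already present on the Delphix OS
  fi
done
cd ../..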

Note that many gems have similar names yet may still be required for your plugin to work. For example, opensearch-api.gem from the list above is distinct from opensearch-transport.gem. Version clashes can sometimes occur when duplicate gems are found during installation, and while there is often no harm in including additional gem files, it is recommended to eliminate as many non-essential gems as possible before uploading.
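
If you are unsure whether a similarly named gem is actually needed, one way to check (assuming network access to rubygems.org) is to list the dependencies your plugin gem declares; fluent-plugin-elasticsearch below is used only as an example.

CODE
# Show the dependencies declared by the plugin gem on rubygems.org.
# Any gem listed here that is NOT in the pre-installed list above must be
# shipped in your plugin's gems/ folder.
gem dependency fluent-plugin-elasticsearch --remote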

Setting up a Fluentd configuration file

Besides the gems needed to connect to a given data consumer, Fluentd requires a configuration file to know where and how to send the data it receives from Delphix. The Fluentd Configuration GUI in Delphix also uses this configuration file to determine which parameters should be displayed to the system administrator during setup. The configuration file must be named fluent.conf.stg and placed in your plugin's root folder. Each data consumer requires specific syntax to connect with Fluentd, so consult the documentation of your chosen data consumer when creating your fluent.conf.stg file. Here is an example configuration file template, which must be customized to the requirements of your chosen data consumer (Splunk, Elasticsearch, etc.):

CODE
/*
 * Copyright (c) 2023 Delphix. All rights reserved.
 */

delimiters "^", "^"

/*
 * This template is used by the Delphix management stack to auto-generate the final configuration used
 * by the internal fluent service. User-editable fields are filled in with information provided through
 * the GUI or API. Additional params specific to your needs can be substituted for my_param.
 * buffer_flush_interval is a fluentd parameter that is dynamically populated based on user input and
 * is provided here as an example.
 */

fluentConfig(my_param, buffer_flush_interval) ::= <<

<system>
  # equal to -v command line option
  log_level info
  <log>
    format json
    time_format %Y-%m-%dT%H:%M:%S.%NZ
  </log>
</system>

<source>
  @type forward
  port 24224
  bind 127.0.0.1 # only accept connections from localhost
</source>

<match delphix.events.**>
  ^commonFields(tagPrefix="delphix.events", index={^event_index^},
                flushInterval={^buffer_flush_interval^},
                retryTimeout="72h",
                totalLimitSize="10g",
                chunkLimitSize="1m", ...)^
  data_type event
</match>

<match delphix.metrics.**>
  ^commonFields(tagPrefix="delphix.metrics", index={^metrics_index^},
                flushInterval={^buffer_flush_interval^},
                retryTimeout="72h",
                totalLimitSize="10g",
                chunkLimitSize="1m", ...)^
</match>
>>

commonFields(tagPrefix, index, flushInterval, retryTimeout, totalLimitSize, chunkLimitSize) ::= <<

/* Replace with syntax specific to your data consumer of choice */
@type myDataConsumer

/* Replace with parameters specific to your data consumer */
my_param ^my_param^

source ${tag} # Filled in at runtime by fluentd

<buffer>
  @type file
  path /var/lib/fluent/^tagPrefix^.*.buffer ^! buffer path must be unique for each match section !^
  chunk_limit_size ^chunkLimitSize^
  total_limit_size ^totalLimitSize^
  flush_interval ^flushInterval^
  retry_timeout ^retryTimeout^
</buffer>
>>

Fluentd defaults to JSON formatting for its output plugins. Refer to the Fluentd documentation on including a formatter plugin in your config should you require an alternative.

Readying the plugin for upload

Here is an example of what a plugin for Elasticsearch 7 might look like after taking the above steps:

CODE
$ ls ./elasticsearch-7 

fluent.conf.stg gems

$ ls ./elasticsearch-7/gems/

elasticsearch-7.17.1.gem elasticsearch-api-7.17.1.gem elasticsearch-transport-7.17.1.gem fluent-plugin-elasticsearch-5.1.4.gem

Delphix Fluentd plugins must be packaged in a .far archive file before being uploaded to your engine. You can create a .far archive of your plugin files using a utility such as tar. Here is example tar syntax for creating a plugin archive file:

CODE
tar --create --verbose --owner=0 --group=0 --exclude-backups --exclude-vcs --one-file-system --format=gnu --file=dlpx-example-plugin.far dlpx-example-plugin/
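
As an optional check before uploading, you can list the archive contents to confirm that only fluent.conf.stg and the gems folder were included:

CODE
tar --list --verbose --file=dlpx-example-plugin.far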

Uploading plugins to Delphix

Your new Fluentd plugin .far file can be uploaded through the API or through the GUI in the Fluentd configuration wizard by clicking the plus icon next to the plugin selection dropdown menu:

Click the gray box shown in the next window to locate your plugin .far file or drag it to the box to upload it to your Delphix engine:

Limitations

The Delphix Fluentd service supports sending logs to Splunk instances by default through the built-in splunkHec plugin. Only one additional plugin can be uploaded to a given Delphix engine at a time. If logs need to be sent to another service that requires a new plugin, the existing plugin must be removed from the engine before the new one is uploaded. An existing plugin can be removed from the same window used for uploading: select it from the “Select a plugin Configuration” drop-down menu and click the trash can icon next to it:

SSL Verification is not currently supported for data consumers other than Splunk and must be disabled.

These limitations may be relaxed in a future Delphix release.
