Cloudmarker Tutorial¶
Cloudmarker is a cloud monitoring tool and framework.
Install¶
Create a virtual Python environment and install Cloudmarker in it:
python3 -m venv venv
. venv/bin/activate
pip3 install cloudmarker
Run a sanity test:
cloudmarker -n
The above command runs a mock audit with mock plugins that generate some mock data. The generated mock data can be found at /tmp/cloudmarker/. Logs from the tool are written to the standard output as well as to /tmp/cloudmarker.log. The -n or --now option tells Cloudmarker to run right now instead of waiting for a scheduled run.
Get Started¶
Cloudmarker’s behaviour is driven by configuration files written in YAML format. Cloudmarker comes with two built-in mock plugins known as MockCloud and MockEvent.
These mock plugins are useful in generating some mock data to test out
the Cloudmarker framework and familiarize oneself with how Cloudmarker
can be configured.
We will first see how to configure the MockCloud
plugin, just so
that we can quickly get started with understanding the configuration
file format without having to work out how to provide Cloudmarker access
to real clouds. We will see how to work with real clouds later in this
document. Follow these steps to get started:
Create a config file named cloudmarker.yaml in the current directory with the following content:

plugins:
  mymockcloud:
    plugin: cloudmarker.clouds.mockcloud.MockCloud
    params:
      record_count: 5
      record_types:
        - apple
        - ball
        - cat
audits:
  mymockaudit:
    clouds:
      - mymockcloud
    stores:
      - filestore
    events:
      - mockevent
    alerts:
      - filestore
run:
  - mymockaudit
Enter this command to run Cloudmarker:
cloudmarker -n
Examine the output in /tmp/cloudmarker/mymockaudit_mymockcloud.json. It should contain a JSON array with 5 objects, as defined by the record_count value in the config. The record_type fields in these objects cycle through the values "apple", "ball", and "cat", as defined by the record_types value in the config. The data we see in this output file is generated by the cloudmarker.clouds.mockcloud.MockCloud plugin defined under the mymockcloud config key.

Now examine the output in /tmp/cloudmarker/mymockaudit_mockevent.json. It should contain a JSON array with 2 objects. This data is generated by the cloudmarker.events.mockevent.MockEvent plugin referred to as mockevent. The mockevent config key is defined in the built-in base config.

Note that the mock audit files written to /tmp/cloudmarker/ have names composed of the audit key name, an underscore, and the plugin key name. These files are written by the cloudmarker.stores.filestore.FileStore plugin, which is specified in the config as filestore. The filestore config key is defined in the built-in base config.
Configuration Format¶
Let us take a closer look at the config file format in the previous section:
plugins:
mymockcloud:
plugin: cloudmarker.clouds.mockcloud.MockCloud
params:
record_count: 5
record_types:
- apple
- ball
- cat
audits:
mymockaudit:
clouds:
- mymockcloud
stores:
- filestore
events:
- mockevent
alerts:
- filestore
run:
- mymockaudit
There are three top-level config keys: plugins, audits, and run. These top-level keys and their values are input to the Cloudmarker framework. They tell the framework what to do. Let us look at each top-level key in more detail.
plugins¶
The plugins
key defines one or more plugin configs. In the above
example, we have defined only one plugin config for the
cloudmarker.clouds.mockcloud.MockCloud
plugin. A plugin is a
Python class that implements a few methods required by the Cloudmarker
framework. In this case, the MockCloud
plugin has the code to
generate some mock data for the purpose of testing other plugins.
Under the plugins key, we have one or more user-defined keys that name our plugin configs. In this example, we have defined one plugin config and chosen the name mymockcloud for it. We could name it anything. This name appears in the logs, so it is good to name it meaningfully.
Under the user-defined key for a plugin, there are at most two keys:

- plugin: Its value is the fully qualified class name of the plugin class.
- params: Its value is a mapping of key-value pairs that specify the keyword arguments to pass to the plugin class constructor. For example, see the API documentation of cloudmarker.clouds.mockcloud.MockCloud. We can see that the config key names record_count and record_types under the params config key match the parameter names of the MockCloud plugin.
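The relationship between the plugin and params keys can be sketched in Python. This is an illustrative sketch, not Cloudmarker’s actual loader code; the helper name load_plugin is made up, but the idea of importing a fully qualified class name and passing the params mapping as keyword arguments is exactly what these config keys express.

```python
import importlib


def load_plugin(plugin_config):
    """Instantiate a plugin from a config mapping (illustrative sketch).

    ``plugin_config`` is a mapping with a ``plugin`` key holding a fully
    qualified class name and an optional ``params`` mapping of keyword
    arguments for the class constructor.
    """
    # Split "pkg.module.ClassName" into module path and class name.
    module_name, _, class_name = plugin_config['plugin'].rpartition('.')
    cls = getattr(importlib.import_module(module_name), class_name)
    # Pass the params mapping as keyword arguments to the constructor.
    return cls(**plugin_config.get('params', {}))
```

For example, load_plugin({'plugin': 'fractions.Fraction', 'params': {'numerator': 3, 'denominator': 4}}) imports the standard library fractions module and calls Fraction(numerator=3, denominator=4).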
audits¶
The audits
key defines one or more audit configs. In the above
example, we have defined only one audit config to generate mock data
using the plugin defined under the mymockcloud
config key earlier.
Under the audits key, we have one or more user-defined keys that name our audit configs. In this example, we have defined one audit config named mymockaudit. This name appears in the logs, so it is good to name it meaningfully.
Under the user-defined key for an audit, there are four keys:

- clouds: Its value is a list of config keys defined under the plugins key. Each key should refer to a cloud plugin.
- stores: Its value is a list of config keys defined under the plugins key. Each key should refer to a store plugin.
- events: Its value is a list of config keys defined under the plugins key. Each key should refer to an event plugin.
- alerts: Its value is a list of config keys defined under the plugins key. Each key should refer to a store or alert plugin.
The Cloudmarker framework instantiates all the plugins in an audit and then lets the cloud plugins generate cloud records. Within an audit, records generated by each cloud plugin are then fed to each store and event plugin configured in the same audit.
Each cloud plugin typically generates data by connecting to a cloud and pulling cloud data related to the resources configured in the cloud. Each cloud plugin emits this data as Python dictionaries (which appear as JSON when written to files or to a storage/indexing system that stores JSON documents). We call each such dictionary object (or JSON document) a record.
Each event plugin then receives the data generated by the cloud plugins configured within the same audit. An event plugin checks each record for security issues or some pattern. If a security issue is found in some record or if the pattern being checked for is found, then the event plugins generate one or more events for it. These events are fed to each alert plugin in the same audit.
Each store plugin takes the data fed to it and sends it to the store destination. A store destination is typically a storage or indexing engine such as Elasticsearch, Splunk, etc.
Each alert plugin takes the events fed to it and sends them to an alerting destination. A store plugin can also function as an alert plugin and vice versa. From the framework’s perspective, there is no difference between store and alert plugin classes because they implement the same methods. The only difference is that store plugins are mentioned under the stores key in an audit config and alert plugins are mentioned under the alerts key. However, some alert plugins, such as the ones that send events as email alerts or Slack messages, make sense only as alert plugins and not as store plugins. That’s because we would not want to email the entire cloud data to email recipients or Slack users, but we might want to send just the events as notifications to email recipients or Slack users.
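Since store and alert plugins are interchangeable from the framework’s point of view, a minimal one can be sketched as below. This is an assumption-laden sketch: the write/done method pair follows the interface described in this document in spirit, but consult the Cloudmarker API documentation for the exact methods a store or alert plugin must implement.

```python
class PrintStore:
    """Minimal sketch of a store/alert-style plugin.

    The ``write``/``done`` method names are an assumption about the
    plugin interface, not taken from the Cloudmarker API reference.
    """

    def __init__(self, prefix='record'):
        self._prefix = prefix
        self._count = 0

    def write(self, record):
        # Called once for each record or event fed to this plugin.
        print(self._prefix, record)
        self._count += 1

    def done(self):
        # Called after all records or events have been fed.
        print(self._prefix, 'total:', self._count)
```

A class like this could be listed under either the stores or the alerts key of an audit, which is exactly why email- or Slack-style plugins are kept under alerts by convention rather than by any technical requirement.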
run¶
Finally, the run
key defines the audits we want to run. Its value is
a list of one or more user-defined audit keys.
Base Configuration¶
In the above examples, we defined mymockcloud
under the plugins
key but we did not define filestore
or mockevent
although we
used them under the audits
key. That’s because they are already
defined in the built-in base config. Enter this command to see the
built-in base config:
cloudmarker --print-base-config
Alternatively, see the complete built-in base config here: cloudmarker.baseconfig.
The config in cloudmarker.yaml
or any other user-specified config
files is merged with the built-in base config to arrive at the final
working config. The merging rules are described in the next section.
Cascading Configuration¶
By default, Cloudmarker looks for the following config files in the order specified:
/etc/cloudmarker.yaml
~/.cloudmarker.yaml
~/cloudmarker.yaml
cloudmarker.yaml
Note that the built-in base config is always used. If a config file in the list above is missing, it is ignored. If all config files in the list above are missing, then only the built-in base config is used.
If one or more config files are present, they are merged together with the built-in base config to arrive at the final working config. The built-in base config is loaded first. Then the config files are loaded and merged in the order specified in the list above. A config that is loaded later has higher precedence in case of conflicting values for the same key.
A custom list of config files to look for can be specified with the
-c
or --config
option. For example, the following command first
loads the built-in base config, then foo.yaml
from the current
directory, and then bar.yaml
from the current directory:
cloudmarker -n -c foo.yaml bar.yaml
This means that in case of conflicting values for the same key, the built-in base config has the lowest priority and bar.yaml has the highest priority.
Here is a precise specification of how two configs are merged:
- If a key in the first config does not exist in the second config, then the final config contains this key with its value intact.
- If a key in the second config does not exist in the first config, then the final config contains this key with its value intact.
- If a key in the first config also exists in the second config, then the final config contains this key and its value is the value found in the second config.
This means, if a key with a list value exists in the first config and the same key with another list value exists in the second config, then the final config is the key with the list value in the second config. The final config does not contain the key with both lists merged together as its value.
For example, let us assume that foo.yaml
contains this key:
run:
- mockaudit
- fooaudit
And bar.yaml
contains this key:
run:
- baraudit
Then cloudmarker -n -c foo.yaml bar.yaml
leads to the following
value for this config key:
run:
- baraudit
Note how the list value of bar.yaml
replaced the list value of
foo.yaml
while merging. In other words, when we talk about merging
of configs, only keys are merged, i.e., keys from both configs are
picked for the final working config. Values are not merged.
When there are multiple config files to be merged, the first config file is merged with the base config file, then the second config file is merged with the result of the previous merge, and so on.
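The merge behaviour described in this section can be sketched as a small Python function. This is an illustrative reimplementation, not Cloudmarker’s actual merge code. Note that mappings are merged recursively, key by key (which is what lets a user config override only the params of a base-config plugin key, as shown later in this document), while list values are replaced wholesale.

```python
def merge_configs(first, second):
    """Merge two config mappings; the second takes precedence."""
    merged = dict(first)
    for key, value in second.items():
        if (key in merged and isinstance(merged[key], dict)
                and isinstance(value, dict)):
            # Mappings are merged recursively, key by key.
            merged[key] = merge_configs(merged[key], value)
        else:
            # Any other value (including a list) replaces the old
            # value wholesale; lists are never concatenated.
            merged[key] = value
    return merged
```

For example, merge_configs({'run': ['mockaudit', 'fooaudit']}, {'run': ['baraudit']}) returns {'run': ['baraudit']}, mirroring the foo.yaml/bar.yaml example above.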
Cloud Plugins¶
AzCloud¶
To get started with a real audit, it is necessary to configure
Cloudmarker with an actual cloud such as Azure or GCP. In this section,
we see how to configure Cloudmarker for Azure with the AzCloud
plugin.
This plugin is offered by the
cloudmarker.clouds.azcloud.AzCloud
plugin class.
Perform the following steps to configure this plugin:
First, follow the how-to document at https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal to register an application in Azure Active Directory that allows Cloudmarker to access your Azure resources.
After completing the above step, create a config file named cloudmarker.yaml in the current directory with this content:

plugins:
  myazcloud:
    plugin: cloudmarker.clouds.azcloud.AzCloud
    params:
      tenant: null
      client: null
      secret: null
audits:
  myazaudit:
    clouds:
      - myazcloud
    stores:
      - filestore
    events:
      - firewallruleevent
    alerts:
      - filestore
run:
  - myazaudit
Then replace the null values for tenant, client, and secret as described below:

- tenant: This is the tenant ID obtained by following the “Get tenant ID” section of the how-to document. It is also known as the directory ID. To find this value, go to Azure Portal > Azure Active Directory > Properties > Directory ID. This value is also available in the newly created application at Azure Portal > Azure Active Directory > App Registrations > (the app) > Directory (tenant) ID.
- client: This is the application ID created in the “Get application ID and authentication key” section of the how-to document. This value is also available at Azure Portal > Azure Active Directory > App Registrations > (the app) > Application (client) ID.
- secret: This is the secret password created in the “Get application ID and authentication key” section of the how-to document. This value is available only while creating a new secret at Azure Portal > Azure Active Directory > App Registrations > (the app) > New client secret.
After setting these values, enter this command to run Cloudmarker:
cloudmarker -n
After Cloudmarker completes running, check these files in /tmp/cloudmarker/:

- myazaudit_myazcloud.json: This file contains the data obtained from the Azure cloud by the cloudmarker.clouds.azcloud.AzCloud plugin configured under the myazcloud config key.
- myazaudit_firewallruleevent.json: This file contains insecure firewall rules detected by the cloudmarker.events.firewallruleevent.FirewallRuleEvent plugin referred to with the key name firewallruleevent in the config. Note that the firewallruleevent config key is defined in the built-in base config.
AzVM¶
This is another plugin for Azure that has a narrower but deeper scope
than the AzCloud
plugin described in the previous section. It pulls
only virtual machine (VM) data with more details about each VM.
This plugin is offered by the cloudmarker.clouds.azvm.AzVM
plugin class.
Perform the following steps to use the AzVM
plugin:
As mentioned in the previous section, register an application in Azure Active Directory to allow Cloudmarker to access your Azure resources.
After completing the above step, create a config file named cloudmarker.yaml in the current directory with this content:

plugins:
  myazvm:
    plugin: cloudmarker.clouds.azvm.AzVM
    params:
      tenant: f3cfe067-d008-48f3-b026-cf0dd7409b25
      client: 6c4980e2-2652-466d-8157-853f9d0a288f
      secret: 4FAU+gYAkl96zbnlXZqu25d5iZBlDhzj0EHD8fi6HR8=
audits:
  myazaudit:
    clouds:
      - myazvm
    stores:
      - filestore
    events:
      - firewallruleevent
      - azvmosdiskencryptionevent
      - azvmdatadiskencryptionevent
    alerts:
      - filestore
run:
  - myazaudit
Then replace the example values for tenant, client, and secret with actual values as described in the previous section.

After setting these values, enter this command to run Cloudmarker:
cloudmarker -n
After Cloudmarker completes running, check the generated files in /tmp/cloudmarker/.
Let us discuss how AzCloud
and AzVM
are different.
AzCloud
pulls data at subscription level. It first connects to Azure
with the specified credentials, then queries for all subscriptions it
has access to, and then loops over each subscription and makes one API
call per subscription per resource type to pull all resources of that
type. It pulls data related to virtual machines (VMs), application
gateways, load balancers, network interface controllers (NICs), network
security groups (NSGs), etc.
AzVM, on the other hand, pulls data at the virtual machine level. It makes one API call per VM, so it makes more API calls. It retrieves only VM data, but the VM data it gets is more detailed. For example, this plugin also obtains the power state, the disk encryption status, etc. of each VM. This level of detail cannot be obtained by the AzCloud plugin.
To understand the difference between the two plugins better, consider an
environment where there are 5 subscriptions such that each subscription
has exactly 20 VMs and 50 NSGs. So, there are a total of 100 VMs and 250
NSGs. AzCloud
would make only 5 API calls to pull data
for all 100 VMs and another 5 API calls to pull data for all NSGs. On
the other hand, AzVM
would make 100 API calls to pull data for all
VMs. AzVM
cannot pull data for NSGs or any other type of resources.
However, the VM data obtained by AzVM contains the power state, disk encryption status, and other detailed information. AzCloud does not pull such detailed information.
In general, AzCloud runs faster due to fewer API calls and is usually sufficient for most cloud monitoring use cases.
AzVM
is necessary only for advanced use cases such as monitoring
whether a particular VM is running or stopped, if its disks are
encrypted or not, etc.
Note in the config above that the event plugins cloudmarker.events.azvmosdiskencryptionevent.AzVMOSDiskEncryptionEvent and cloudmarker.events.azvmdatadiskencryptionevent.AzVMDataDiskEncryptionEvent, referred to with the built-in base config keys azvmosdiskencryptionevent and azvmdatadiskencryptionevent, can be used with AzVM. These plugins work only with AzVM records and generate events if unencrypted OS disks or data disks are found. They ignore records generated by any other cloud plugins.
GCPCloud¶
Follow these steps to get started with auditing a GCP cloud environment.
Follow the steps at https://cloud.google.com/iam/docs/creating-managing-service-account-keys to create a service account key using the GCP Console and download it as a file named keyfile.json.

Then create a config file named cloudmarker.yaml with this content:

plugins:
  mygcpcloud:
    plugin: cloudmarker.clouds.gcpcloud.GCPCloud
    params:
      key_file_path: keyfile.json
      zone: null
audits:
  mygcpaudit:
    clouds:
      - mygcpcloud
    stores:
      - filestore
    events:
      - firewallruleevent
    alerts:
      - filestore
run:
  - mygcpaudit
Replace the value of the zone key with the name of the zone in which your resources reside. The zone name can be found in GCP Console > (select project) > Compute Engine. An example of a zone name is us-east1-b. Note: We are aware that the requirement to provide a specific zone name in the config makes this plugin less flexible. This will be fixed in the next release; the fix will allow the plugin to discover resources in all zones automatically.

Now enter this command to run Cloudmarker:
cloudmarker -n
Now examine these files generated by Cloudmarker at /tmp/cloudmarker/:

- mygcpaudit_mygcpcloud.json: This file contains the data obtained from the GCP cloud by the cloudmarker.clouds.gcpcloud.GCPCloud plugin configured under the mygcpcloud config key.
- mygcpaudit_firewallruleevent.json: This file contains insecure firewall rules detected by the cloudmarker.events.firewallruleevent.FirewallRuleEvent plugin referred to with the key name firewallruleevent in the built-in base config.
MockCloud¶
The MockCloud plugin has already been discussed in the Get Started section. A config key named mockcloud already configures this plugin in the built-in base config as follows:
plugins:
mockcloud:
plugin: cloudmarker.clouds.mockcloud.MockCloud
There are no parameters specified for this plugin in the built-in base
config because this plugin class already has default keyword parameters.
See cloudmarker.clouds.mockcloud.MockCloud
for the keyword
parameters and their default values. By default, it generates 10 mock records such that record['ext']['record_type'] alternates between 'foo' and 'bar', where record represents each JSON object generated by this plugin.
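Based on the description above (and the record['raw']['data'] counter mentioned later in the MockEvent section), the shape of the generated records can be sketched like this. The exact field layout is an assumption drawn from this document, so check the MockCloud API documentation for the authoritative format.

```python
def generate_mock_records(record_count=10, record_types=('foo', 'bar')):
    # Sketch of MockCloud's output as described in this document:
    # record_type cycles through record_types and raw.data increments
    # with each record.
    records = []
    for i in range(record_count):
        records.append({
            'raw': {'data': i},
            'ext': {'record_type': record_types[i % len(record_types)]},
        })
    return records
```

With the defaults, this yields 10 records whose record_type values alternate foo, bar, foo, bar, and so on.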
To override the default behaviour and, say, generate 20 records with record types that cycle through ‘foo’, ‘bar’, and ‘baz’, we can override the mockcloud config key defined in the built-in base config. To do so, create a file named cloudmarker.yaml with the following content only:
plugins:
mockcloud:
params:
record_count: 20
record_types:
- foo
- bar
- baz
Then run Cloudmarker with this command:
cloudmarker -n
Note that we did not specify the plugin
key under mockcloud
here
because that is already available in the base config (see
cloudmarker.baseconfig
). Similarly, we did not define audits
or run
config keys here because they are also defined in the base
config. We only defined what we needed to override in the base config.
Event Plugins¶
The event plugins have been discussed in the sections for cloud plugins
above. Here is how the config keys for these plugins have been defined
in the base config (see cloudmarker.baseconfig
):
plugins:
...
firewallruleevent:
plugin: cloudmarker.events.firewallruleevent.FirewallRuleEvent
azvmosdiskencryptionevent:
plugin: cloudmarker.events.azvmosdiskencryptionevent.AzVMOSDiskEncryptionEvent
azvmdatadiskencryptionevent:
plugin: cloudmarker.events.azvmdatadiskencryptionevent.AzVMDataDiskEncryptionEvent
The ellipsis (...) in this example denotes content omitted for the sake of brevity.
FirewallRuleEvent¶
The FirewallRuleEvent
plugin can be used with both AzCloud
and
GCPCloud
plugins. It looks for firewall rules that expose sensitive
ports to the entire Internet and generates events for them.
This plugin is offered by the
cloudmarker.events.firewallruleevent.FirewallRuleEvent
plugin
class.
By default, it monitors for insecure exposure of a fixed set of TCP
ports. If that’s okay for you, there is no need to define this plugin
explicitly in the config file. The built-in base config key
firewallruleevent
can be used as is. However, if there is a need for
monitoring a custom set of ports, then it can be overridden. Here is an
example configuration that monitors for insecure exposure of ports 22
and 3389 in Azure cloud:
plugins:
myazcloud:
plugin: cloudmarker.clouds.azcloud.AzCloud
params:
tenant: null
client: null
secret: null
firewallruleevent:
params:
ports:
- 22
- 3389
audits:
myazaudit:
clouds:
- myazcloud
stores:
- filestore
events:
- firewallruleevent
alerts:
- filestore
run:
- myazaudit
Remember to replace the null
values in the config above with actual
values before using this config.
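The kind of check FirewallRuleEvent performs can be illustrated with a toy function. The record shape below (source and port keys) is invented purely for illustration and does not reflect the real firewall rule records; see the FirewallRuleEvent API documentation for the actual fields it inspects.

```python
SENSITIVE_PORTS = frozenset({22, 3389})


def is_insecure_rule(rule, ports=SENSITIVE_PORTS):
    """Return True if a rule exposes a sensitive port to the Internet.

    The ``source``/``port`` keys here are hypothetical; real firewall
    rule records have a cloud-specific structure.
    """
    # A source of 0.0.0.0/0 means the rule matches the entire Internet.
    exposed_to_all = rule.get('source') == '0.0.0.0/0'
    return exposed_to_all and rule.get('port') in ports
```

An event plugin built on a check like this would emit one event per offending rule, and those events would then be fed to the alert plugins in the same audit.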
AzVMOSDiskEncryptionEvent¶
The AzVMOSDiskEncryptionEvent plugin can be used with the AzVM plugin. It looks for unencrypted OS disks attached to Azure virtual machines.
This plugin is offered by the
cloudmarker.events.azvmosdiskencryptionevent.AzVMOSDiskEncryptionEvent
plugin class.
An example usage of this plugin is available in the AzVM section.
Since it only checks whether disks are encrypted or not (a binary decision), it does not accept any parameters that can be configured in the config file. Therefore, it is recommended to use the built-in base config key named azvmosdiskencryptionevent for this plugin.
AzVMDataDiskEncryptionEvent¶
The AzVMDataDiskEncryptionEvent plugin can be used with the AzVM plugin. It looks for unencrypted data disks attached to Azure virtual machines.
This plugin is offered by the
cloudmarker.events.azvmdatadiskencryptionevent.AzVMDataDiskEncryptionEvent
plugin class.
An example usage of this plugin is available in the AzVM section.
Since it only checks whether disks are encrypted or not (a binary decision), it does not accept any parameters that can be configured in the config file. Therefore, it is recommended to use the built-in base config key named azvmdatadiskencryptionevent for this plugin.
MockEvent¶
The MockEvent plugin can be used with the MockCloud plugin. The MockCloud plugin generates data such that record['raw']['data'] has an integer value that increments in each record, where record represents each record generated by MockCloud. The MockEvent plugin checks the value of record['raw']['data'] in each input record and generates an event if this value is a multiple of some number (3 by default).
This plugin is offered by the
cloudmarker.events.mockevent.MockEvent
plugin class.
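The check described above can be sketched in a few lines of Python. This is an illustration of the behaviour, not the plugin’s actual code:

```python
def generate_mock_events(records, n=3):
    # Emit an event for each record whose raw.data value is a multiple
    # of n (3 by default), mirroring the MockEvent behaviour described
    # in this document.
    return [record for record in records
            if record['raw']['data'] % n == 0]
```

Assuming the raw.data counter starts at 0, the 5 default Get Started records yield 2 events (data values 0 and 3) and 10 records yield 4 events (0, 3, 6, 9), which matches the event counts seen elsewhere in this tutorial.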
We use MockCloud
and MockEvent
plugins together to test out the
store and alert plugins.
In case we want the MockEvent plugin to look for multiples of some other number, say, 5, we can override the built-in base config as follows:
plugins:
mockevent:
params:
n: 5
Store Plugins¶
FileStore¶
We have been using the FileStore
plugin already in the examples
above. This plugin is good for quick testing because we can see the
cloud data records and events written locally to a file that we can
easily inspect.
This plugin is offered by the
cloudmarker.stores.filestore.FileStore
plugin class.
By default, it writes the output files to the /tmp/cloudmarker/
directory. Here is how it can be configured to write the output files to
another directory, say, ~/cloudmarker
:
plugins:
filestore:
params:
path: ~/cloudmarker
On running Cloudmarker with this config, we would see that the output files are written to ~/cloudmarker, i.e., $HOME/cloudmarker, or in other words, the cloudmarker directory under the home directory. Yes, FileStore performs tilde expansion to expand a path beginning with ~ to the user’s home directory, as documented at os.path.expanduser().
EsStore¶
The EsStore
plugin can be used to send cloud data as well as
events to an Elasticsearch cluster.
This plugin is offered by the
cloudmarker.stores.esstore.EsStore
plugin class.
In this section, we will use a Docker image of Elasticsearch to quickly get started with configuring this plugin. Here are the steps to set up a Docker container for Elasticsearch:
Enter the following command to run a Docker container with Elasticsearch instance:
docker run -p 9200:9200 -p 9300:9300 \
    -e 'discovery.type=single-node' \
    docker.elastic.co/elasticsearch/elasticsearch:7.0.1
Ensure that Elasticsearch is able to index documents:
curl -H 'Content-Type: application/json' \
    -X PUT http://localhost:9200/foo/foo/1?pretty \
    -d '{"a": "apple", "b": "ball"}'
Double-check that the document was indexed:
curl http://localhost:9200/foo/_search?pretty
Now that Elasticsearch is running in a Docker container and indexing data, configure Cloudmarker to send data and events to it with the following steps:
Create cloudmarker.yaml with the following content to configure Cloudmarker to send mock cloud records and mock events to Elasticsearch:

audits:
  mockaudit:
    stores:
      - filestore
      - esstore
    alerts:
      - filestore
      - esstore
The above example is a very minimal config. It works because the esstore plugin config key is defined in the built-in base config and it sends data to a locally running Elasticsearch by default. Here is what a more elaborate config would look like:

plugins:
  esstore:
    params:
      host: localhost
      port: 9200
      index: cloudmarker
audits:
  mockaudit:
    stores:
      - filestore
      - esstore
    alerts:
      - filestore
      - esstore
Run Cloudmarker:
cloudmarker -n
Confirm that mock cloud records and mock events are indexed in Elasticsearch:
curl http://localhost:9200/cloudmarker/_search?pretty
MongoDBStore¶
The MongoDBStore
plugin can be used to send cloud data as well as
events to a MongoDB collection.
This plugin is offered by the
cloudmarker.stores.mongodbstore.MongoDBStore
plugin class.
In this section, we will use a Docker image of MongoDB to quickly get started with configuring this plugin. Here are the steps to set up a Docker container for MongoDB:
Enter the following commands to run a Docker container with MongoDB instance:
docker rm mongo; docker run --name mongo -p 27017:27017 mongo
Ensure that we can insert data into MongoDB:
docker exec -it mongo mongo foo --eval 'db.bar.insert({"a": "apple"})'
Double-check that the data was inserted:
docker exec -it mongo mongo foo --eval 'db.bar.find()'
Now that MongoDB is running in a Docker container, configure Cloudmarker to send data to it with these steps:
Create cloudmarker.yaml with the following content to configure Cloudmarker to send mock cloud records and mock events to MongoDB:

audits:
  mockaudit:
    stores:
      - filestore
      - mongodbstore
    alerts:
      - filestore
      - mongodbstore
The above example is a very minimal config. It works because the mongodbstore plugin config key is defined in the built-in base config and it sends data to a locally running MongoDB by default. Here is what a more elaborate config would look like:

plugins:
  mongodbstore:
    params:
      host: localhost
      port: 27017
      db: cloudmarker
      collection: cloudmarker
      username: null
      password: null
audits:
  mockaudit:
    stores:
      - filestore
      - mongodbstore
    alerts:
      - filestore
      - mongodbstore

If the MongoDB instance requires user authentication, then the username and password config keys should be set to the appropriate values.

Run Cloudmarker:
cloudmarker -n
Confirm that mock cloud records and mock events are inserted into MongoDB:
docker exec -it mongo mongo cloudmarker --eval 'db.cloudmarker.find()'
SplunkHECStore¶
The SplunkHECStore
plugin can be used to send cloud data as well as
events to a Splunk HTTP Event Collector (HEC).
This plugin is offered by the
cloudmarker.stores.splunkhecstore.SplunkHECStore
plugin class.
In this section, we will use a Docker image of Splunk to quickly get started with configuring this plugin. Here are the steps to set up a Docker container for Splunk:
Enter the following command to run a Docker container with Splunk instance with HTTP Event Collector (HEC):
docker run -e 'SPLUNK_START_ARGS=--accept-license' \
    -e 'SPLUNK_PASSWORD=admin123' \
    -e 'SPLUNK_HEC_TOKEN=token123' \
    -p 8000:8000 -p 8088:8088 splunk/splunk
Ensure that Splunk HEC is able to receive events:
curl -k https://localhost:8088/services/collector/event \
    -H 'Authorization: Splunk token123' \
    -d '{"event": "hello, world"}'
To double-check that Splunk received the event, visit http://localhost:8000/ with a web browser.
Then log into Splunk with the username admin and the password specified in the docker command in step 1 above.

Click on “Search & Reporting” on the left sidebar.

In the search box, enter * (asterisk) and click the search button. One event with the string hello, world should appear in the results.
Now that Splunk is running in a Docker container and accepting events via HEC, configure Cloudmarker to send data and events to it with the following steps:
Create cloudmarker.yaml with the following content to configure Cloudmarker to send mock cloud records and mock events to Splunk:

plugins:
  splunkstore:
    plugin: cloudmarker.stores.splunkhecstore.SplunkHECStore
    params:
      uri: https://localhost:8088/services/collector
      token: token123
      index: main
      ca_cert: false
audits:
  mockaudit:
    stores:
      - filestore
      - splunkstore
Run Cloudmarker with this configuration:
cloudmarker -n
Visit http://localhost:8000/ with a web browser.
Then log into Splunk with the username admin and the password specified in the docker command in step 1 above.

Click on “Search & Reporting” on the left sidebar.
In the search box, enter * (asterisk) and click the search button. There should be many new events now.
In the search box, enter the following query to see the mock cloud records:
index=main com.record_type=mock
There should be 10 records in the results.
In the search box, enter the following query to see the mock events:
index=main com.record_type=mock_event
There should be 4 events in the results.
In the search box, enter the following query to see the event description fields in a table format:
index=main com.record_type=mock_event | table com.description
Alert Plugins¶
All of the store plugins discussed above can also be used as alert plugins. Additionally, there are a few plugins that are specialized as alert plugins only and do not serve very well as store plugins. Only these plugins are discussed in this section.
EmailAlert¶
The EmailAlert
plugin can be used to send events to email recipients
via SMTP.
This plugin is offered by the
cloudmarker.alerts.emailalert.EmailAlert
plugin class.
The EmailAlert parameters are the same as those of the cloudmarker.util.send_email() function, so read its API documentation to learn about the parameters this plugin accepts.
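While the actual sending is handled by cloudmarker.util.send_email(), the gist of composing such an alert message can be sketched with the standard library. The build_alert_email helper below is hypothetical and not part of Cloudmarker; it only illustrates how the from_addr, to_addrs, and subject parameters map onto an email message.

```python
from email.mime.text import MIMEText


def build_alert_email(from_addr, to_addrs, subject, body):
    # Compose a plain-text alert message; an alert plugin would then
    # hand a message like this to an SMTP client for delivery.
    msg = MIMEText(body)
    msg['From'] = from_addr
    msg['To'] = ', '.join(to_addrs)
    msg['Subject'] = subject
    return msg
```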
Perform the following steps to configure Cloudmarker to send mock events as email alerts:
Create a config file named cloudmarker.yaml in the current directory with the following content:

plugins:
  emailalert:
    plugin: cloudmarker.alerts.emailalert.EmailAlert
    params:
      from_addr: Cloudmarker <cloudmarker@example.com>
      to_addrs:
        - user1@example.com
        - user2@example.com
      subject: Cloudmarker Alert
      host: smtp.example.com
audits:
  mockaudit:
    alerts:
      - filestore
      - emailalert
Set the values of from_addr and to_addrs appropriately.

If authentication is required, add the username and password parameters. See the cloudmarker.util.send_email() documentation for details.

If the SMTP host does not support SSL, then add the ssl_mode parameter and set its value to starttls if the SMTP host supports STARTTLS. If the SMTP host supports neither SSL nor STARTTLS, set its value to disable.

If the SMTP host is listening on a non-standard port, then set the port parameter to an integer value representing the expected port number. If the SMTP host is listening on a standard port, there is no need to set this parameter. It has a default value of 0, which automatically selects the appropriate port based on the value of the ssl_mode parameter.

Run Cloudmarker with this configuration:
cloudmarker -n
Check the configured recipients’ inboxes to confirm that the email alerts have been received.
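The interaction between ssl_mode and the default port value of 0 described above can be sketched roughly as follows. This is an illustrative assumption about the behaviour, not Cloudmarker's actual code; the helper name resolve_smtp_port is hypothetical, while the port numbers themselves are the standard SMTP ports.

```python
# Standard SMTP ports: 465 for implicit SSL (SMTPS), 587 for the
# submission port typically used with STARTTLS, 25 for plain SMTP.
_DEFAULT_PORTS = {'ssl': 465, 'starttls': 587, 'disable': 25}

def resolve_smtp_port(ssl_mode, port=0):
    """Return the port to connect to; 0 means derive it from ssl_mode."""
    if port != 0:
        return port  # an explicitly configured non-standard port wins
    return _DEFAULT_PORTS[ssl_mode]

print(resolve_smtp_port('ssl'))            # 465
print(resolve_smtp_port('starttls'))       # 587
print(resolve_smtp_port('disable', 2525))  # 2525
```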
SlackAlert¶
The SlackAlert
plugin can be used to send events to Slack users via
a Slack bot.
This plugin is offered by the
cloudmarker.alerts.slackalert.SlackAlert
plugin class.
Perform the following steps to configure Cloudmarker to send mock events as alerts via Slack:
Create a config file named cloudmarker.yaml in the current directory with the following content:

plugins:
  slackalert:
    plugin: cloudmarker.alerts.slackalert.SlackAlert
    params:
      bot_user_token: null
      to:
        - user1@example.com
        - user2@example.com
      text: Attention - Cloudmarker Alert

audits:
  mockaudit:
    alerts:
      - filestore
      - slackalert
Change the value of the bot_user_token key from null to the actual token of the Slack bot in the config file.
Change the value of the to key from the example users to actual Slack users.
Now, enter this command to run Cloudmarker:
cloudmarker -n
The mock events will be sent to the configured Slack users as JSON snippets.
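As a rough sketch, an alert message of this shape could be assembled for the Slack Web API's chat.postMessage method like this. The helper name, the channel ID, and the event fields below are illustrative assumptions; the actual SlackAlert plugin's formatting may differ.

```python
import json

def build_slack_payload(channel, text, event):
    """Append the event to the alert text as a pretty-printed JSON snippet."""
    snippet = json.dumps(event, indent=2, sort_keys=True)
    return {'channel': channel, 'text': text + '\n' + snippet}

payload = build_slack_payload(
    'U012ABCDEF',  # hypothetical Slack user ID, e.g. resolved from an email
    'Attention - Cloudmarker Alert',
    {'com': {'record_type': 'mock_event', 'description': 'mock alert'}},
)
print(payload['text'])
```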
Framework¶
Schedule¶
In the built-in base config (see cloudmarker.baseconfig
),
there is a schedule
config key that specifies the local time (in
24-hour notation) at which Cloudmarker should start running audits every
day. This schedule is honoured when Cloudmarker is run without the
-n
or --now
option as follows:
cloudmarker
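The daily scheduling behaviour described above amounts to computing how long to wait until the next occurrence of the configured local time. Here is a minimal sketch of that computation, assuming a schedule in 24-hour "HH:MM" notation; the helper name seconds_until is hypothetical, not Cloudmarker's actual scheduler.

```python
from datetime import datetime, timedelta

def seconds_until(schedule, now):
    """Seconds from `now` until the next occurrence of `schedule` (HH:MM)."""
    hour, minute = (int(part) for part in schedule.split(':'))
    next_run = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if next_run <= now:
        next_run += timedelta(days=1)  # today's slot has passed; run tomorrow
    return (next_run - now).total_seconds()

now = datetime(2019, 1, 1, 12, 0, 0)
print(seconds_until('13:00', now))  # 3600.0 (one hour from now)
print(seconds_until('11:00', now))  # 82800.0 (23 hours, i.e. tomorrow)
```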
Logger¶
In the built-in base config (see cloudmarker.baseconfig
),
there is a logger
config key that specifies an elaborate logging
configuration. This can be overridden in a config file to customize the
logger. For example, by default, the log files are written to
/tmp/cloudmarker.log
. If we want to override this location to, say,
log/cloudmarker.log
, we can define a config file named
cloudmarker.yaml
like this:
logger:
  handlers:
    file:
      filename: log/cloudmarker.log
To test this configuration, enter these commands:
mkdir -p log
cloudmarker -n
cat log/cloudmarker.log
To see the default logger
config, see cloudmarker.baseconfig
.
To understand more about what each of the config keys under logger
mean, see the Python standard library logging documentation:
Configuration dictionary schema.
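Since the logger config key follows the standard configuration dictionary schema, it is ultimately handed to Python's logging.config.dictConfig(). The following self-contained sketch shows a minimal configuration of that shape writing to a file; the handler and formatter names here are illustrative, not Cloudmarker's defaults.

```python
import logging
import logging.config
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), 'cloudmarker.log')

# A minimal dictConfig-style configuration with one file handler.
logging.config.dictConfig({
    'version': 1,
    'formatters': {
        'simple': {'format': '%(levelname)s %(message)s'},
    },
    'handlers': {
        'file': {
            'class': 'logging.FileHandler',
            'formatter': 'simple',
            'filename': log_path,
        },
    },
    'root': {'level': 'INFO', 'handlers': ['file']},
})

logging.getLogger(__name__).info('audit started')

with open(log_path) as f:
    print(f.read().strip())  # INFO audit started
```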
Email¶
When Cloudmarker is made to run in scheduled mode, it could be useful to
get email notifications about when the audits start and the audits stop.
The email configuration for such audit emails can be specified under a
config key named email
. Note that this should be a top-level key in
the config file, i.e., it should be at the same level as the audits
and run
keys.
The value for the email
config key should be similar to the value of
params
key of an email alert. See the EmailAlert section for more
details on this. Here is an example:
email:
  from_addr: Cloudmarker <cloudmarker@example.com>
  to_addrs:
    - user1@example.com
    - user2@example.com
  subject: Cloudmarker Alert
  host: smtp.example.com
With this configuration, Cloudmarker sends four types of emails:
- An email when all configured audits begin.
- An email when all configured audits end.
- An email when each configured audit begins.
- An email when each configured audit ends.
Therefore, if there are 3 audits configured under the audits
config key, then a total of 8 emails are sent: 1 begin audits email, 1
end audits email, 3 begin audit emails (one for each audit), and 3 end
audit emails (one for each audit).
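The arithmetic above generalizes to a simple formula: one begin/end pair for the whole run plus one begin/end pair per configured audit. The helper name below is illustrative.

```python
def total_audit_emails(audit_count):
    """Total audit notification emails for a run with `audit_count` audits."""
    return 2 + 2 * audit_count  # begin/end of run + begin/end per audit

print(total_audit_emails(3))  # 8
```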