September 7, 2021
The NoSQL ecosystem consists of a plethora of databases that support all kinds of crucial business applications. NoSQL databases also offer greater flexibility and scalability than traditional SQL databases. The top three NoSQL systems – Redis, MongoDB, and Cassandra – are all well established, and each one is built to solve specific problems.
In this article, we will explain the basic characteristics of Redis, MongoDB, and Cassandra databases. In addition, we will show you how to collect and monitor the logs that these databases produce using Sumo Logic’s unified logs and metrics (ULM) offering and related apps.
In order to set up monitoring agents for Sumo Logic, you need to install the following tools:
Telegraf: This is a server agent with a plugin-based architecture that sends metrics and events from databases and servers to external systems. We will use Telegraf to capture performance metrics and events and send them to a Hosted Collector.
Sumo Logic Installed Collector: An Installed Collector is a program installed on a local machine that can collect logs and metrics from different sources and send them to the Sumo Logic service. You can download the software for your machine here.
Redis is the leading key-value, in-memory database. It can be used in many ways, including as a cache, a job queue, or a session store. A single instance of Redis consists of a binary that runs on a single main thread of execution, spawning extra background threads for out-of-band tasks such as asynchronous deletes and flushing data to disk. Redis stores most of its data in memory, with an optional persistence layer for disaster recovery.
Redis can be configured to run with high availability using Redis Sentinel and with high scalability using Redis Cluster. Each instance of Redis keeps track of its own statistics and can be configured to store events in a log file.
Redis exposes metrics and stats that provide insight into its overall performance. The complete list of metrics and stats that Redis offers is available on the official INFO command page. Versions 5 and higher offer some additional commands for monitoring, including the following (a short redis-cli example follows this list):
Latency Doctor: This is used for performing latency monitoring and for querying additional data like event data, deviation, and the average period between spikes. Latency monitoring is disabled by default, and you must set the config value as follows in order to use it: CONFIG SET latency-monitor-threshold <ms>. Once you’ve set the config, you need to leave it for a while so that it can collect some useful latency statistics. Then, you can run it again to review any latency spikes.
Memory Stats: This command displays metrics and values related to memory usage. You can use this command to display additional stats that are not reported by the INFO command.
Memory Malloc-Stats: If you compile Redis using the jemalloc allocator, you can use this command to return its internal statistics.
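As a quick reference, these commands might be run from redis-cli as follows (the 100 ms threshold is just an example value):
# Enable latency monitoring with an example threshold of 100 ms
redis-cli CONFIG SET latency-monitor-threshold 100
# After some time has passed, review the latency analysis and raw spike data
redis-cli LATENCY DOCTOR
redis-cli LATENCY LATEST
# Detailed memory statistics beyond what INFO reports
redis-cli MEMORY STATS
# Internal allocator statistics (only if Redis was compiled with jemalloc)
redis-cli MEMORY MALLOC-STATS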
While querying stats with the redis-cli command is useful, it’s much better to use a tool that automatically collects metrics and logs for you in order to achieve more comprehensive monitoring. By collecting all metrics from all registered Redis instances and clusters, then aggregating them into a single view, you can obtain a more coherent picture of their general health.
The steps for configuring and monitoring the state of a Redis database cluster are as follows:
Follow these guidelines for collecting logs and metrics. There are two different ways to do this, depending on whether or not you use K8s. First, you will need to configure some fields in Sumo Logic. You can do this by clicking the Manage Data -> Logs option in the sidebar:
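The exact field names depend on the version of the app, but based on the tags used in the Telegraf configuration shown later in this article, they would typically include:
component
environment
db_system
db_cluster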
Once you’ve added the fields, you will need to configure a Hosted Collector or an Installed Collector. A Hosted Collector will use Telegraf to collect metrics and forward them to Sumo Logic. Alternatively, an Installed Collector will collect and send server logs and events from a file. Here is an example of the telegraf.conf for a Hosted Collector:
# Read Redis's basic status information
[[inputs.redis]]
servers = ["tcp://localhost:6379"]
namepass = ["redis"]
fieldpass = ["blocked_clients", "clients", "cluster_enabled", "cmdstat_calls", "connected_slaves", "evicted_keys", "expired_keys", "instantaneous_ops_per_sec", "keyspace_hitrate", "keyspace_hits", "keyspace_misses", "master_repl_offset", "maxmemory", "mem_fragmentation_bytes", "mem_fragmentation_ratio", "rdb_changes_since_last_save", "rejected_connections", "slave_repl_offset", "total_commands_processed", "total_net_input_bytes", "total_net_output_bytes", "tracking_total_keys", "uptime", "used_cpu_sys", "used_cpu_user", "used_memory", "used_memory_overhead", "used_memory_rss", "used_memory_startup"]
[inputs.redis.tags]
environment="prod"
component="database"
db_system="redis"
db_cluster="redis_prod_cluster01"
[[outputs.sumologic]]
url = "<URL from Hosted Collector>"
data_format = "prometheus"
[outputs.sumologic.tagpass]
db_cluster=["redis_prod_cluster01"]
In the example above, we only specified one server in the server list, but you can add as many servers as you have in your cluster.
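Before pointing Telegraf at Sumo Logic, you can optionally run it once in test mode to confirm that the Redis input is working (the config path below is just an example):
telegraf --config /etc/telegraf/telegraf.conf --test
Test mode gathers each input once and prints the resulting metrics to standard output without sending anything to the configured outputs.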
For an Installed Collector, you will need to make sure that you specify the log file name in the redis.conf as follows:
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile "redis-log"
Once everything is complete, you should be able to see your Collectors under the Collection tab:
Now you can install the Redis ULM app from the app catalog. This will allow you to see a general overview of the Redis server status, error logs, and database metrics. You can learn how to install the Redis ULM app as well as a Redis monitor (which will allow you to trigger alerts and email notifications in case of abnormal spikes) by following the instructions on this page.
Now that we’ve shown you how to monitor Redis instances, let’s take a look at monitoring MongoDB clusters.
MongoDB is a document database designed to support JSON-like documents, horizontal scaling, and an easy development process. Since it’s easy to set up, MongoDB is part of many popular stacks, such as MERN and MEAN.
In terms of monitoring, MongoDB offers several strategies for capturing and submitting metrics and logs concerning its status. For one, you can use the diagnostic commands to retrieve valuable information about the cluster, including the following (a mongo shell example follows this list):
dbStats: This command returns storage statistics for a given database.
serverStatus: This command returns a document that provides an overview of the database's state.
top: This command returns usage statistics for each collection.
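For reference, here is how these diagnostic commands might be run from the mongo shell:
// Storage statistics for the current database
db.runCommand({ dbStats: 1 })
// Overview of the server's state
db.runCommand({ serverStatus: 1 })
// Per-collection usage statistics (an admin command)
db.adminCommand({ top: 1 })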
MongoDB Cloud provides a free monitoring dashboard by default. But if you are working with different database systems, it makes more sense to consolidate them into a single pane of glass view with Sumo Logic apps.
The steps for configuring and monitoring the state of a MongoDB database cluster are essentially the same as we described above for Redis:
Follow these guidelines for collecting logs and metrics with MongoDB. There are two different ways to do this, depending on whether or not you use K8s.
Once you’ve added the fields, you will need to configure a Hosted Collector or an Installed Collector. A Hosted Collector will use Telegraf to collect metrics and forward them to Sumo Logic. Alternatively, an Installed Collector will collect and send server logs and events from a file. Here is an example of the telegraf.conf for a Hosted Collector:
[[inputs.mongodb]]
servers = ["mongodb://127.0.0.1:27017"]
gather_perdb_stats = true
gather_col_stats = true
[inputs.mongodb.tags]
environment="prod"
component="database"
db_system="mongodb"
db_cluster="mongodb_on_premise"
[[outputs.sumologic]]
url = "<URL OF HOSTED COLLECTOR"
data_format = "prometheus"
Note that the process is a bit different when setting up monitoring using MongoDB Atlas. In order to collect logs, you will have to configure Sumo Logic’s MongoDB Atlas Collector in Amazon Web Services (AWS) using the AWS Lambda service or by running a cron job. This process is beyond the scope of our article, but you can see what’s involved by visiting this page.
Once everything is finished, you should be able to see your Collectors by navigating to the Collection tab:
Now you’re ready to install the MongoDB ULM app from the app catalog page. This will provide a general overview of the MongoDB server status, error logs, and database metrics. To install the MongoDB ULM app, just follow the steps on this page. If you don’t have access to the app from the catalog, you can use the included alerts monitor by downloading this JSON configuration.
Then, you need to import it via the Alerts -> Add -> Import option:
Next, select the MongoDB monitor and enable the alerts one by one:
Now that we’ve shown you how to monitor MongoDB and Redis instances, let’s move on to our final topic: monitoring Cassandra clusters.
Apache Cassandra is an open source NoSQL database management system that was originally developed at Facebook and open sourced in 2008. It’s designed to handle vast amounts of data, ensure high availability, and prevent a single point of failure.
Cassandra is a column-oriented (wide-column) DBMS, meaning that it stores values by column rather than by row (which is how they are stored in a traditional SQL database). For example, take a look at this sample SQL table:
Table: Vehicles
Id | Make/Model | # Wheels | # Doors | Type
1  | Ford Focus | 4        | 4       | Sedan
2  | Tesla M    | 4        | 2       | Sports
3  | Tesla S    | 4        | 4       | Sedan
In Cassandra, the same information would look like this:
Ford Focus:1,Tesla M:2,Tesla S:3
4:1,4:2,4:3
4:1,2:2,4:3
Sedan:1,Sports:2,Sedan:3
Cassandra can also compress columns in which every row holds the same value. For example, the # Wheels column above would become:
4:1,2,3
Here, rows 1, 2, and 3 all share the same value (4), so it only needs to be stored once.
To monitor Cassandra clusters, you can follow the same steps described above with a few additions:
Install and configure the Jolokia agent on each node. By default, Cassandra metrics are managed using the Dropwizard Metrics library, which creates a registry of metrics (like counters and gauges) and reports them via JMX. In order to consume these JMX metrics, you have to install an agent that exposes them to the Collectors. Since Jolokia acts as a JMX-HTTP bridge, it allows you to view the metrics in your browser. Cassandra distributions do not include Jolokia by default, so you will need to install and configure it when you start a new node. For a typical setup, you’ll download the Jolokia JVM agent into the Cassandra lib folder and restart the Cassandra node with the following JVM_OPTS:
JOLOKIA_VERSION=1.6.2
JOLOKIA_HOST=0.0.0.0
JVM_OPTS="$JVM_OPTS -javaagent:/var/lib/cassandra/jolokia-jvm-${JOLOKIA_VERSION}-agent.jar=port=8778,host=${JOLOKIA_HOST}"
Once you have Jolokia configured, you can follow this guide to learn how to collect logs and metrics for Cassandra. There are two different ways to do this, depending on whether or not you use K8s.
After you’ve added the fields, you will need to configure a Hosted Collector or an Installed Collector, just as you would for Redis or MongoDB. A Hosted Collector will use Telegraf to collect metrics and forward them to Sumo Logic. Alternatively, an Installed Collector will collect and send server logs and events from a file. Here is an example of the telegraf.conf for a Hosted Collector:
[[inputs.jolokia2_agent]]
urls = ["http://0.0.0.0:8778/jolokia"]
name_prefix = "cassandra_"
[inputs.jolokia2_agent.tags]
environment="prod"
component="database"
db_system="cassandra"
db_cluster="cassandra_on_premise"
dc = "IDC1"
[[outputs.sumologic]]
url = "<URL OF HOSTED COLLECTOR"
data_format = "prometheus"
[outputs.sumologic.tagpass]
db_cluster=["cassandra_on_premise"]
Once everything is complete, you should be able to see your Collectors under the Collection tab:
Finally, you can follow these steps to install the Cassandra ULM app from the app catalog page.
In order to set up database monitoring, you will have to configure the Collectors to receive the logs and metrics and then dispatch them to your Sumo Logic service. After that, you will have dedicated and detailed dashboards that continuously monitor and provide actionable insights concerning your database’s health and operational activities.
Not only does Sumo Logic provide an excellent solution for all three databases described above, but its ULM offering and apps also integrate with SQL database systems (such as PostgreSQL and MySQL) that you may use to cover traditional workloads. In addition, Sumo Logic integrates seamlessly with Kubernetes, so you can capture logs from any containerized environment with ease.