Photo of Rodrigo Pastrana, Architect, LexisNexis Risk Solutions Group

Rodrigo Pastrana (Architect, LexisNexis Risk Solutions Group) has been a member of the HPCC Systems core technology team for nine years and a member of the LexisNexis Risk Solutions Group team for just over a decade. Rodrigo focuses on platform integration and plug-in development. He is the principal developer of WsSQL, the HPCC Systems JDBC connector, the HPCC Systems Java APIs library and tools, the Spark-HPCC plugin and connector, and the Dynamic ESDL component. He has close to twenty years of experience in the design, research and development of state-of-the-art technology, including IBM’s embedded text-to-speech and voice recognition products and Eclipse’s device development environment. Rodrigo holds an MS and BS in Computer Engineering from the University of Florida and during his professional career has filed more than ten patent disclosures through the USPTO.

Log visualizations help identify, track and predict important events and trends on HPCC Systems clusters by spotting interesting patterns and giving you visual clues which are easier to interpret than reading through the log file itself. Log visualization integration with ECL Watch using ELK (Elasticsearch, Logstash and Kibana) has been available on our bare metal HPCC Systems Platform since HPCC Systems 7.x.x, and you can find out more about it by reading HPCC Systems log visualizations using ELK, by Rodrigo Pastrana.

In this blog, Rodrigo expands on this feature, focusing on a simple mechanism to process Cloud Native HPCC Systems platform component-level logs via Elastic Stack.

As HPCC Systems® continues its journey to the cloud, new challenges are presented by the containerized mode of operation. One major challenge is the persistence and availability of application-level logs. Following the most widely accepted containerized methodologies, HPCC Systems component log information is routed to the standard output streams rather than to local files, as in previous versions of HPCC Systems. In a Kubernetes environment, the Docker container engine redirects the streams to a logging driver, which Kubernetes configures to write to a file in JSON format; those logs are exposed by Kubernetes via the aptly named logs command. Although the logs are temporarily available, they can be difficult to analyze and monitor in this format. It is also important to understand that these logs are ephemeral in nature and may be lost if the pod is evicted, the container crashes, the node dies, etc.
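For reference, while a pod is alive you can view its ephemeral log stream directly with the aptly named command mentioned above (the pod name below is a placeholder; substitute one reported by kubectl get pods):

kubectl get pods
kubectl logs <hpcc-component-pod-name>

Adding the --previous flag to kubectl logs retrieves the output of the prior container instance, if one exists, which can help after a crash, but none of this addresses the longer-term persistence problem described above.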

Starting in HPCC Systems 8.0.0, a mechanism is provided to facilitate the deployment of a lightweight Elastic Stack as a simple solution for the logs on-cloud processing challenges.

A Simple Solution

This feature provides casual users with a lightweight yet full log processing solution and attempts to minimize the setup steps requiring manual intervention. It creates a local dependency on the Elastic Stack Helm charts for Elasticsearch, Filebeat and Kibana. Users are responsible for fulfilling this local dependency; however, Helm provides convenient commands to automatically pull the appropriate dependencies.

Automatic Dependency Update Mechanisms

Helm provides two convenient mechanisms to automatically pull the appropriate dependencies: a CLI command, which can be executed before deploying the Helm chart, and a parameter, which can be passed into the Helm install command.

Note: The Helm deployment process validates the declared dependencies even if the feature is disabled.

Using the Helm Dependency Update Command

The Helm dependency update command pulls the appropriate dependency packages and places them in a sub-folder named charts:

helm dependency update <location of Helm chart>

Using the Helm Install Dependency Update Parameter

An alternative to using the Helm dependency update command is to simply provide the --dependency-update argument in the Helm install command:

helm install <name> <Helm chart location> <other parameters> --dependency-update
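For example, using the chart location referenced later in this post (the release name myelk is arbitrary), the dependency pull and the install collapse into a single command:

helm install myelk HPCC-Platform/helm/managed/logging/elastic --dependency-update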

Deployed Elastic Stack Components

The managed Elastic Stack instance is composed of several components which work together, in this case, to persist and expose the HPCC Systems component logs.

  • Filebeat - Harvests the log entries of interest, aggregates metadata information and forwards them to a target index within the local Elasticsearch instance.

  • Elasticsearch - Provides persistence, indexing, and querying capabilities.
    Note: The Elasticsearch chart declares a Persistent Volume Claim (PVC), which is used to persist data related to its indexes and, indirectly, the HPCC Systems component logs. PVCs can, by nature, outlive the HPCC Systems and Elastic deployments. It is up to you to manage the PVC(s) appropriately, which includes deleting them when they are no longer needed (see the example after this list).

  • Kibana - Provides a very useful UI layer that allows users to explore, query, and visualize the log data. More information is available in the official Kibana Guide.
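As noted above, cleaning up the PVC(s) declared by the Elasticsearch chart is your responsibility. A minimal sketch of that cleanup (the PVC name below is illustrative; use the name reported by the first command, and only delete it once the retained log data is truly no longer needed):

kubectl get pvc
kubectl delete pvc <elasticsearch-master-pvc-name>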

Getting Started

The elastic4hpcclogs chart can be found in the HPCC Systems GitHub repository (HPCC-Platform/helm/managed/logging/elastic).

Before deploying this chart, be sure to pull the appropriate Elastic Stack chart using the Helm dependency update command:

helm dependency update HPCC-Platform/helm/managed/logging/elastic

Once this command has completed successfully, the Elasticsearch, Filebeat, and Kibana chart packages will be located in a subfolder called charts.
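If you want to confirm the pull succeeded, a simple directory listing should show the downloaded chart archives (the exact file names depend on the chart versions pulled):

ls HPCC-Platform/helm/managed/logging/elastic/charts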

Deploying Elasticsearch and Kibana in the same context as your HPCC Systems cluster

The first step is to deploy the Elasticsearch, Filebeat and Kibana chart packages using the following command:

helm install myelk HPCC-Platform/helm/managed/logging/elastic

On completion, the following message is displayed:

Thank you for installing elastic4hpcclogs.
A lightweight Elastic Search instance for HPCC component log processing.

This deployment varies slightly from defaults set by Elastic, please review the effective values.


PLEASE NOTE: Elastic Search declares PVC(s) which might require explicit manual removal when no longer needed.

Confirming the Elastic pods are ready

To do this, use the following command:

kubectl get pods

This displays the following information showing the running pods:

...
elasticsearch-master-0                    1/1     Running            0          
myelk-filebeat-6wd2g                      1/1     Running            0          
myelk-kibana-68688b4d4d-d489b             1/1     Running            0          
...

Confirming the Elastic services are ready

To do this, use the following command:

kubectl get svc

This displays the following confirmation information:

...
elasticsearch-master            ClusterIP      10.109.50.54    <none>        9200/TCP,9300/TCP   68m
elasticsearch-master-headless   ClusterIP      None            <none>        9200/TCP,9300/TCP   68m
myelk-kibana                    ClusterIP      10.104.96.242   <none>        5601/TCP            68m
...

Exposing the Kibana port

It may be necessary to expose Kibana’s port in order to interact with the service.

To set up port forwarding in a dedicated shell, use the following command:

kubectl port-forward service/myelk-kibana 5601:5601
Forwarding from 127.0.0.1:5601 -> 5601
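As an optional sanity check (assuming curl is available on your machine), you can confirm that something is responding on the forwarded port before opening a browser:

curl -I http://localhost:5601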

Confirming the Kibana Service is Accessible

To do this, direct your browser to localhost:5601 and the following screen will be displayed:

Screenshot confirming the Kibana Service is Accessible

Viewing HPCC Systems Component Logs Via Kibana

Once enabled and running, it is possible to explore and query HPCC Systems component logs from the Kibana UI. Kibana index patterns are required to explore Elasticsearch data from the Kibana user interface. Elastic provides a detailed explanation of the index pattern concept, with details about how to create appropriate patterns. The following examples help to demonstrate the functionality using the out-of-the-box implementation.

Navigating to the Index Patterns Section

  1. Select the Hamburger Icon
  2. Scroll down to Stack Management and select Data
  3. Select Index Management
  4. Select Kibana and select the Index Patterns menu item provided

Creating an Index Pattern

From the Index Patterns section:

  1. Click the Create Index Pattern button to display the Create Index Pattern Dialog
  2. Using the dialog, add a name pattern that matches the target index(es), for example filebeat*
    In this example, filebeat* is chosen because the index(es) created for the HPCC Systems log data follow this naming convention: filebeat-xxxx
  3. Select Next Step
  4. Select @timestamp as the time field
  5. Select Create index pattern

Discovering Log Data

  1. Navigate to the Discover section by selecting the Hamburger Icon, then Kibana, then Discover
  2. From the Discover section, select filebeat* from the index pattern drop-down list

The following screen is displayed:

Screenshot showing how to discover log data

Filtering Out Noise

The filebeat* index pattern may contain many entries, many of which may be of little interest. Even those entries which are of interest may contain many metadata columns with little or repetitive information.

At this point, you could simply dive in and perform freehand queries in the query bar provided at the top of the Discover section. The query bar accepts the Kibana Query Language (KQL), which boasts a simple and flexible syntax and even attempts to provide autocomplete guidance.

However, it might be helpful, as a first step, to filter out all entries not originating from HPCC Systems components (the majority of the index contents might not be related to HPCC Systems component log data).
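For example, a KQL expression along these lines, typed into the query bar, restricts the view to entries produced by HPCC Systems container images (this assumes the default Filebeat field names shown in the next section; adjust the value to match the image names in your deployment):

container.image.name : *hpccsystems*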

Building Filter Conditions Based on Container Name

 

Screenshot showing the container.image.name distribution

From the left panel you can choose the columns of interest and specific filter conditions, for example:

  • Choosing + on the hpccsystems-related value ensures that only log entries stemming from the running hpccsystems images are displayed.
  • Hovering over the container.image.name entry and choosing the Add button presents a view based on the newly added container.image.name column.
  • Adding the actual log entry content - From the available list on the left panel, hover over the message entry and click the Add button. HPCC Systems component log messages should now be visible.
  • Adding a source HPCC Systems component name column - This allows you to group the log entries by HPCC Systems component. One way to accomplish this is by adding the kubernetes.container.name column. From the available list on the left panel, hover over the kubernetes.container.name entry and click the Add button. The source HPCC Systems component names should now be visible.

Screenshot showing the log view after filtering out the noise

Configuration of Elastic Stack Components

You may need or want to customize the Elastic Stack components. The Elastic component chart values can be overridden as part of the elastic4hpcclogs deployment command, for example:

helm install myelk HPCC-Platform/helm/managed/logging/elastic --set elasticsearch.minimumMasterNodes=2 --set elasticsearch.replicas=2
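Alternatively, the same overrides can be collected in a custom values file and supplied with Helm's -f option (the file name below is arbitrary and its contents are up to you):

helm install myelk HPCC-Platform/helm/managed/logging/elastic -f my-elastic-overrides.yaml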

See the Elastic Stack GitHub repository for the complete list of all Filebeat, Elasticsearch, Logstash and Kibana options with descriptions.

Celebrating 10 Years of the HPCC Systems Open Source Project

June 15, 2021 marks the 10th anniversary of HPCC Systems as an open source offering in the big data analytics market. Our 10 Year Anniversary Podcast Series showcases members of our HPCC Systems Community commemorating this milestone event. 

Rodrigo Pastrana is featured in the video below, alongside our LexisNexis Risk Solutions Group colleague James McMullan (Sr Software Engineer). Both Rodrigo and James have developed many connectors and plugins for working with datasets using other open source projects. Their work has not only helped expand our ecosystem, but has also opened the door to new opportunities for creating additional interfaces to accommodate any data size or format.

Find out about these connectors, plugins and plans for the future as the HPCC Systems Platform becomes cloud native.

Click the image to watch the video hosted on the HPCC Systems YouTube Channel.

Image showing the video opening shot

 

See more videos in this series by visiting our 10 Year Anniversary Podcast Series Wiki