Performance considerations while building high-performance big data analytics systems

Entrepreneurs and businesses of all sizes rely heavily on Big Data and Analytics systems, which provide crucial insights into the business and aid top management in making critical decisions.

These systems have become an integral part of any organization's IT infrastructure. However, processing such huge volumes of data means these systems often face performance challenges.

If these systems fail to deliver useful insights in a timely manner, they lose their relevance to the business.

Through this article, we hope to lay down practical guidelines for building a high-performance big data analytics system.

Understand Big Data and Its Characteristics

Big Data has been a fascinating term in the IT industry, and it has changed the way people look at data and its role in any industry.

Big Data systems have these five main characteristics:

Volume

It refers to the huge amount of data generated every second. Such data sets are difficult to analyze using traditional database technology.

Variety

The various data types that prevail today constitute the variety of Big Data. An estimated 80% of the data in today's world is unstructured and cannot be stored in a relational database.

Velocity

It refers to the speed at which new data is generated and moves around.

Veracity

Veracity is the trustworthiness or reliability of data. Big data comes in many forms, which makes controlling its quality and accuracy less manageable.

Value

Unless structured and unstructured data can be turned into something valuable for the organization, it is deemed worthless.

These characteristics are collectively called the 5 V’s of data.

The Building Blocks of a Big Data System

A big data system comprises various functional blocks. It acquires data from miscellaneous sources, pre-processes that data (e.g. cleansing, validation), stores it, processes and analyzes the stored data (e.g. predictive analytics, generating recommendations for online usage), and lastly presents and visualizes the summarized and aggregated results.

Figure 1. Components of a Big Data System

Let’s discuss these components briefly.

Diverse Data Sources

A big data system analyzes data from online web applications, batch feeds and uploads, live streams, sensors and many more. With such diverse sources, the formats and protocols of the data vary widely.

For example, some data arrives in SOAP or XML format over HTTP from online web applications, some feeds come in CSV format, and many devices communicate over the MQTT protocol.
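
For illustration, here is a minimal sketch (using only Python's standard library) of normalizing two of the formats mentioned above, XML and CSV, into one common record shape; the "reading" field name is an assumption for the example:

```python
# A minimal sketch of normalizing an XML feed and a CSV feed into one
# common record shape; the "reading" field name is illustrative.
import csv
import io
import xml.etree.ElementTree as ET

def from_xml(payload):
    root = ET.fromstring(payload)
    return [{"source": "xml", "reading": float(node.text)}
            for node in root.iter("reading")]

def from_csv(payload):
    reader = csv.DictReader(io.StringIO(payload))
    return [{"source": "csv", "reading": float(row["reading"])}
            for row in reader]

xml_feed = "<data><reading>42.5</reading><reading>43.1</reading></data>"
csv_feed = "reading\n40.0\n41.2\n"

# Downstream acquisition code sees one record shape regardless of source.
records = from_xml(xml_feed) + from_csv(csv_feed)
print(records)
```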

Data Acquisition

Before the data can be processed, it needs to be acquired from the diverse sources, then parsed, cleansed, validated and stored in suitable formats.

Storage

After the data is acquired, cleansed and transformed into the required format, it is stored in persistent storage where it can be processed and analyzed.

Data Processing and Analyzing

After cleansing, the stored data is de-normalized and correlated across different data sets. The results are then aggregated over predefined time intervals using predictive analytics, machine-learning algorithms and so on.
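
As a small illustration of interval-based aggregation, the pandas sketch below rolls daily records up into weekly totals; the column names are assumptions for the example:

```python
# A small sketch of interval-based aggregation with pandas; the column
# names ("timestamp", "store_id", "sales") are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.date_range("2023-01-01", periods=6, freq="D"),
    "store_id": [1, 1, 2, 2, 1, 2],
    "sales": [100.0, 120.0, 90.0, 95.0, 130.0, 88.0],
})

# Roll daily records up into weekly buckets per store, the kind of
# precomputed summary the visualization layer will later read.
weekly = (
    df.set_index("timestamp")
      .groupby("store_id")
      .resample("W")["sales"]
      .sum()
      .reset_index()
)
print(weekly)
```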

Visualization and Presentation

Visualization of the data output is the last step in the data flow. It includes:

  • Reading from the precomputed aggregated results
  • Presenting them as tables or charts for easy interpretation and understanding

Performance Considerations Regarding Data Acquisition

During data acquisition, data from diverse sources enters the big data system. The performance of this component decides how much data the system can receive.

The following considerations help ensure a high-performing data acquisition component:

  • Use Message-Oriented Middleware (MOM) to deal effectively with the asynchronous nature of data transfer.
  • Pull data in bulk when pulling it directly from an external source.
  • Use appropriate parsers while parsing feed files.
  • Use built-in or out-of-the-box validation solutions.
  • Use built-in libraries and frameworks when parsers, validations, etc. aren't running in a server environment.
  • Identify and filter out invalid data as early as possible.
  • Some systems store invalid data in error tables; keep this in mind when sizing the database and storage.
  • If valid source data needs to be cleansed, do not do it record by record; do it in bulk (see the sketch after this list).
  • Determine what constitutes a unique record before de-duplicating.
  • Transform the received data into the required formats.
  • Use built-in transformers instead of building something from scratch for parsing the data.
  • Transformation is the most complex step of data acquisition, and parallelizing it is the most important part.
  • Once you are through the data acquisition process, you will need to store the processed data in persistent storage such as an RDBMS, a NoSQL database, or a distributed file system like Hadoop. You might need a combination of such solutions depending on your requirements.
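
Here is a minimal sketch of a few of these points: filtering invalid records early, routing them to an error table and inserting valid records in bulk. The table names and the validity rule are hypothetical.

```python
# Sketch of bulk acquisition: validate early, route invalid records to
# an error table, and insert valid records in bulk rather than one by one.
# The table names and the validity rule are hypothetical.
import sqlite3

def is_valid(record):
    # Filter out invalid data as early as possible (hypothetical rule).
    return record.get("id") is not None and record.get("value", -1) >= 0

def acquire(records):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER, value REAL)")
    conn.execute("CREATE TABLE events_errors (raw TEXT)")

    valid = [r for r in records if is_valid(r)]
    invalid = [r for r in records if not is_valid(r)]

    # Bulk insert instead of record-by-record round trips.
    conn.executemany("INSERT INTO events (id, value) VALUES (:id, :value)",
                     valid)
    # Invalid rows land in an error table; size your storage for it.
    conn.executemany("INSERT INTO events_errors (raw) VALUES (?)",
                     [(repr(r),) for r in invalid])
    conn.commit()
    return conn

conn = acquire([{"id": 1, "value": 3.5}, {"id": None, "value": 2.0}])
print(conn.execute("SELECT COUNT(*) FROM events").fetchone())  # (1,)
```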

Performance Considerations Regarding Storage

We will discuss some of the most important guidelines that should be considered for storing the processed data:

  • Data modelling plays an important role in storage performance.
  • Data redundancy and disk capacity also play a part when considering storage performance.
  • For storing and processing huge amounts of data, most big data systems use NoSQL databases.
  • Different NoSQL databases have different capabilities: some are good for faster updates, others for faster reads.
  • Some database storages are column-oriented, others row-oriented; select the database based on your requirements.
  • While selecting a database, also consider the level of replication, consistency and so on.
  • Some NoSQL databases don't have built-in support for joins, sorts, aggregations, filters, indexes, and so on.
  • Level of compaction, size of buffer pools, timeouts, and caching are further properties of NoSQL databases that impact performance.
  • Sharding and partitioning are other very important capabilities of these databases. Careful sharding configuration is crucial to system performance, so handle it carefully (see the sketch after this list).
  • For improved performance, you should use Storage Area Network (SAN) based storage.
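
To illustrate why the shard key matters, the sketch below routes records with a stable hash so keys spread evenly across shards; the four-shard setup is an assumption, not any specific database's behaviour:

```python
# A rough sketch of hash-based shard routing; the four-shard cluster and
# the key choice are assumptions, not any specific database's behaviour.
import hashlib
from collections import Counter

NUM_SHARDS = 4

def shard_for(key):
    # A stable hash spreads keys evenly across shards and avoids the
    # hotspots that a monotonically increasing key (e.g. a timestamp
    # prefix) would create on a single shard.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Distribution check over a batch of synthetic user IDs.
counts = Counter(shard_for(f"user-{i}") for i in range(10_000))
print(dict(counts))  # roughly 2,500 keys per shard
```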

Performance Considerations Regarding Data Processing

Data and analytical processing are the core of any big data system. Bulk processing such as aggregation, summarization, forecasting and other logic is performed at this stage.

  • Always select a data processing framework after a thorough evaluation of the frameworks and your other requirements.
  • The selection depends on requirements such as real-time stream processing versus batch processing; some frameworks use an in-memory model while others use disk- or file-based processing.
  • Analyze the data before allocating it to individual jobs; the burden grows with the size of the data, and splitting it into chunks that are too small adds scheduling overhead of its own.
  • Keep an eye on the size of data transfers.
  • Design your system so it can merge the results of real-time stream events with the output of batch analytical processes.
  • Design the system to handle re-processing of the same data set in case of an error or exception during the initial processing.
  • Store the final output in a format/model based on the expected end results, e.g. weekly aggregates for a business that requires aggregated output in weekly time-series intervals.
  • Prefer lazy evaluation of big data queries; it is a helpful feature because data is not pulled unless it is required (see the sketch after this list).
  • Keep a check on performance using the tools provided by the different frameworks.
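
As an example of lazy evaluation, the PySpark sketch below builds the whole execution plan first and reads data only when the final action runs; the input path and column names are assumptions:

```python
# A sketch of lazy evaluation in PySpark: the transformations only build
# an execution plan; data is pulled when an action (show) finally runs.
# The input path and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lazy-demo").getOrCreate()

# Nothing is read yet; these lines only describe the computation.
events = spark.read.option("header", True).csv("/data/events.csv")
weekly = (
    events
    .withColumn("event_time", F.to_timestamp("event_time"))
    .withColumn("week", F.date_trunc("week", F.col("event_time")))
    .groupBy("week")
    .agg(F.sum(F.col("amount").cast("double")).alias("total"))
)

# Only this action triggers the read, the shuffle and the aggregation.
weekly.show()
spark.stop()
```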

Performance Considerations Regarding Visualization

A carefully designed big data system provides deep-dive analysis of the data, giving valuable insights. Good visualization helps users by providing a thorough, drilled-down view of the data.

Such visualization demands are not met by traditional BI and reporting tools.

  • Always make sure that the visualization layer displays data from the final summarized output tables.
  • Avoid reading raw data directly from the visualization layer; this keeps data transfer to a minimum and avoids heavy processing while viewing reports.
  • Caching works well in the visualization tool and has a strong impact on the overall performance of the visualization layer (see the sketch after this list).
  • Materialized views are another good option for improving performance.
  • Visualization tools allow multiple ways to read data and offer configurations that increase the number of threads for handling requests.
  • Some tools also allow incremental data retrieval, minimizing data transfer and speeding up the whole visualization.
  • Most visualization tools and frameworks use Scalable Vector Graphics (SVG), which can seriously impact performance when used for complex layouts.
  • Plan sufficient resources such as CPUs, memory, disk storage and network bandwidth.
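
Here is a tiny sketch of the caching idea, serving repeated reads of a precomputed summary table from an in-process cache; the table name and schema are hypothetical:

```python
# A tiny sketch of caching reads from a precomputed summary table so the
# visualization layer never touches raw data; the table is hypothetical.
import sqlite3
from functools import lru_cache

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE weekly_summary (week TEXT, total REAL)")
conn.executemany("INSERT INTO weekly_summary VALUES (?, ?)",
                 [("2023-W01", 410.0), ("2023-W02", 385.5)])

@lru_cache(maxsize=128)
def weekly_totals():
    # Repeated dashboard refreshes hit this cache, not the database.
    rows = conn.execute(
        "SELECT week, total FROM weekly_summary ORDER BY week").fetchall()
    return tuple(rows)

print(weekly_totals())  # first call queries the summary table
print(weekly_totals())  # later calls are served from the cache
```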

Impact of Big Data Security on Performance

Security requirements are an integral part of any IT system, and they also have a major impact on the performance of a big data system.

  • Perform proper authorization and authentication of data coming from diverse sources at the entry level, while keeping the system flexible enough to accept data from trusted sources.
  • After this early-stage authentication, avoid authenticating the data again.
  • Make sure to create frameworks that will support other mechanisms like PKI-based solutions or Kerberos.
  • Data is often compressed before being sent to the big data system; decreasing the size of the transferred data makes the transfer faster but slows down the overall process, since the compressed data must be uncompressed on arrival. Many compression algorithms and formats are available with different CPU requirements, so choose them carefully (see the sketch after this list).
  • Limit the use of encryption logic and algorithms to sensitive and confidential fields.
  • Based on your organizational requirements, you might need extra storage for maintaining logs of the various updates, which lengthens the process somewhat.
  • Security features built into infrastructure components such as the OS and database typically perform better than custom security solutions.
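
The compression trade-off can be measured directly; the sketch below compares gzip levels on a synthetic payload to show the size-versus-CPU balance:

```python
# A sketch of the compression trade-off: higher gzip levels shrink the
# transfer but cost more CPU time. The payload is synthetic.
import gzip
import time

payload = b"sensor_reading,2023-01-01T00:00:00,42.5\n" * 50_000

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = gzip.compress(payload, compresslevel=level)
    elapsed = time.perf_counter() - start
    print(f"level={level} size={len(compressed):>8} bytes "
          f"time={elapsed * 1000:.1f} ms")
```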

Conclusion

There are multiple reasons why Big Data and Analytics systems are complex. We have presented some guidelines that can be used to create a system that best suits your organization's requirements.

To meet the requirements of a complex Big Data and Analytics system, make sure performance is designed in from the very beginning.

RapidOps as a Big Data Service Provider

RapidOps has been working with many startups, giving them a strong alternative to their traditional BI and analytics platforms and a unique perspective on their company's data at multiple levels, with various visualization options. If you are looking for a digital partner that understands your Big Data and Analytics needs, you are on the right track.

Contact RapidOps Now!
