23 Aug

CH-BenCHmark for MS SQL Server – HTAP benchmarking

There aren’t many benchmarks that let you test your system against a hybrid OLTP and OLAP workload. CH-BenCHmark fills that gap by combining TPC-C and TPC-H. You can download the source from the linked site, or you can use something like OLTPBench (a collection of benchmarks). However, the modified TPC-H queries are not written for SQL Server. In this article, I will add the modified CH-BenCHmark OLAP queries for SQL Server.
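To give a flavour of the dialect changes involved (this is not the article’s final query set; the schema follows the standard CH-BenCHmark layout, and pyodbc plus the connection string are assumptions for illustration), here is roughly how a CH-BenCHmark-style Q1 can be run against SQL Server once the date literal is written in ISO form:

```python
# Illustration only: run a CH-BenCHmark-style Q1 against SQL Server via pyodbc.
# Connection string, database name and literal format are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=chbenchmark;Trusted_Connection=yes;"
)

# TPC-H Q1 analogue over the TPC-C order_line table; the date literal is
# written as an ISO string, which SQL Server parses unambiguously.
q1 = """
SELECT   ol_number,
         SUM(ol_quantity) AS sum_qty,
         SUM(ol_amount)   AS sum_amount,
         AVG(ol_quantity) AS avg_qty,
         AVG(ol_amount)   AS avg_amount,
         COUNT(*)         AS count_order
FROM     order_line
WHERE    ol_delivery_d > '2007-01-02T00:00:00'
GROUP BY ol_number
ORDER BY ol_number;
"""

for row in conn.cursor().execute(q1):
    print(row)
```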

Read More
19 Jul

NetworkX visualization with Graphviz (Example)

If you have been trying to produce a nice-looking graph visualization with NetworkX, you are probably exhausted by now. After all, NetworkX only provides basic functionality for graph visualization; its main goal is to enable graph analysis. For anything beyond basic visualization, it’s advisable to use a separate, specialized library. In my case, I chose Graphviz. It’s simple to get an attractive visualization of a NetworkX graph with Graphviz. I take a gradual start, but you may skip directly to “NetworkX with Graphviz”.
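As a minimal sketch of the approach (the graph, node names, and styling attributes below are made up for illustration; it assumes pygraphviz and a Graphviz installation are available):

```python
# Minimal sketch: render a NetworkX graph with Graphviz via pygraphviz.
# Requires: networkx, pygraphviz, and a Graphviz installation.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([("load", "clean"), ("clean", "train"), ("clean", "evaluate")])

# Convert to a Graphviz AGraph and set a few drawing attributes.
A = nx.nx_agraph.to_agraph(G)
A.graph_attr.update(rankdir="LR")   # left-to-right layout
A.node_attr.update(shape="box", style="filled", fillcolor="lightblue")

A.layout(prog="dot")   # run the 'dot' layout engine
A.draw("graph.png")    # write the rendered image
```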

Read More
18 Jul

Index Physical Structure Example; Multi-column Non-Clustered Index with Includes

Structure of a non-clustered multi-column index with include columns.

This article demonstrates the physical design of a multi-column non-clustered index with include columns. Many examples on the internet only demonstrate the simplest version of an index, with a single column. This article gives a proper view of an index with multiple columns through a simple example. Furthermore, you can see how the include columns are stored only at the leaf level of the tree.

Here we use a simple table ‘People’ with 6 columns (ID, First Name, Last Name, Age, Sex, Address). We assume a clustered index has already been created on the ID column (it makes almost no difference if there is no clustered index, as explained at the end). Now we are going to create the non-clustered index as defined below.
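The exact definition is not reproduced in this excerpt; as a stand-in, a definition along the following lines matches the structure discussed (the choice of key columns and include columns here is an assumption for illustration, executed via pyodbc):

```python
# Hypothetical example definition of the multi-column index with includes.
# Key/include column choices and connection details are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=TestDB;Trusted_Connection=yes;", autocommit=True
)

create_index = """
CREATE NONCLUSTERED INDEX IX_People_LastName_FirstName
ON dbo.People (LastName, FirstName)   -- key columns: present at every level of the B-tree
INCLUDE (Age, Address);               -- include columns: stored at the leaf level only
"""
conn.cursor().execute(create_index)
```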

The diagram below shows the structure of this non-clustered index.

Structure of a non-clustered multi-column index with include columns.
Read More
27 May

PRIVATE: A Privacy-preserving Data Analysis Language.

This is a new project I have been working on since early last year. The motivation behind this project is to build a programming language that allows users to analyze private data without exposing sensitive information. Many data analysis languages on the market today (R, Python, MATLAB, etc.) assume direct access to the data. PRIVATE, on the other hand, performs a privacy calculation to make sure only non-sensitive information is released to the user.

More Information:

This is the tutorial series by Simon Dennis, Founder of PRIVATE

Contribute to PRIVATE: GitHub

31 Mar

Microsoft SQL Server 2016 Database with IMDB 2013 Dataset

Recently I wanted to run the JOB benchmark for an experiment. This benchmark uses an IMDB dataset published in 2013. Initially, I had some trouble running the benchmark, as it was designed for a PostgreSQL database, and the dataset was created on a UNIX system, which can cause issues when used on a Windows system. So I decided to share the exact steps you need to take in order to create a Microsoft SQL Server database with the IMDB dataset. All the scripts used in the project can be found in this Git repo.
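The full steps are in the post and the linked repo; as one example of the UNIX-vs-Windows issue, the CSV files use LF line endings, which you can handle by giving BULK INSERT an explicit row terminator. A sketch of that idea (the file path, table name, and CSV options are placeholders, not the exact commands from the repo):

```python
# Sketch: load one IMDB CSV file into SQL Server, handling UNIX (LF) line endings.
# Path, table name and CSV options are placeholders for illustration.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=imdb;Trusted_Connection=yes;", autocommit=True
)

bulk_insert = r"""
BULK INSERT dbo.title
FROM 'C:\imdb\title.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '0x0a',   -- LF only, since the files were produced on UNIX
    TABLOCK
);
"""
conn.cursor().execute(bulk_insert)
```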

Read More
27 Jan

What-if Analysis with SQL server (Hypothetical Indexes) – Using python

If you are a database administrator or a developer working with a transactional database, you might have come across this problem:

“Is it worthy to build that index?”

The exact answer to that question is only known once you build the index. Luckily, however, SQL Server provides functionality to check workload performance under hypothetical indexes (without actually creating them).

You can find more information about hypothetical indexes here.

I will just provide you with simple Python code that will help you with hypothetical index creation. The example code is composed of three parts:

  1. Index creation
  2. Enabling the index (unlike normal indexes, hypothetical indexes need to be enabled before use)
  3. Executing the query under the hypothetical index

Index creation
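The original snippet is not reproduced in this excerpt; here is a minimal sketch of the idea (pyodbc, the connection string, and the table/column names are assumptions; WITH STATISTICS_ONLY is the undocumented option that makes the index hypothetical):

```python
# Part 1: create a hypothetical index (statistics only, no physical B-tree).
# Connection string and table/column names are assumptions for illustration.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=TestDB;Trusted_Connection=yes;", autocommit=True
)
cursor = conn.cursor()

cursor.execute("""
    CREATE NONCLUSTERED INDEX hyp_ix_orders_custid
    ON dbo.Orders (CustomerID)
    WITH STATISTICS_ONLY = -1;   -- undocumented: create as hypothetical
""")
```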

Enabling the indexes
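Enabling is done with the undocumented DBCC AUTOPILOT command. Continuing the sketch above; the parameter meanings are based on community write-ups rather than official documentation, so treat them as assumptions:

```python
# Part 2: enable the hypothetical index for "what-if" analysis in this session.
# Look up the ids DBCC AUTOPILOT expects from the system catalog.
row = cursor.execute("""
    SELECT DB_ID() AS db_id, i.object_id AS object_id, i.index_id AS index_id
    FROM sys.indexes AS i
    WHERE i.name = 'hyp_ix_orders_custid';
""").fetchone()

# The first argument (0) is commonly described as "add this nonclustered
# index to the autopilot set" for the current connection.
cursor.execute(f"DBCC AUTOPILOT(0, {row.db_id}, {row.object_id}, {row.index_id});")
```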

Executing the query
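Finally, with SET AUTOPILOT ON, SQL Server returns an estimated plan that takes the enabled hypothetical indexes into account (again undocumented behaviour; the query below is a placeholder):

```python
# Part 3: with AUTOPILOT on, the query is not executed; instead SQL Server
# returns an estimated plan "as if" the hypothetical index existed.
cursor.execute("SET AUTOPILOT ON;")
cursor.execute("SELECT * FROM dbo.Orders WHERE CustomerID = 42;")
estimated_plan = cursor.fetchone()[0]
cursor.execute("SET AUTOPILOT OFF;")

print(estimated_plan)   # check whether the optimizer chose hyp_ix_orders_custid
```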


07 Oct

Evolution of a Data Platform

Being a startup feels “great”. Startup culture is filled with positive energy to get things done. In this process of getting things done, one thing we miss is proper design of the data platform. It is understandable that people start with a simple data platform and evolve it over time. Starting with the perfect data platform is less practical when we consider the cost involved and the lack of domain knowledge in the initial stages. We should all admit that a proper data platform costs a lot, which is sometimes not efficient for a startup. My personal opinion is to start small and evolve over time. Here we will talk about common problems that we faced in a startup data platform.

Lacking Scalability

Scalability issues have an impact on several fronts. Startup systems are not built to scale until the end of time. Sometimes they become impossible to scale; sometimes scaling requires so much additional effort that a separate team is needed to work on scaling the data platform. Sometimes scaling involves a large and rapidly increasing cost. Sometimes scaling increases the overall system complexity and reduces maintainability. If I summarize the main impact areas of scalability costs, they are as follows:

  • Being impossible to scale
  • High cost of scaling
  • Increasing manual effort needed to scale
  • Increased system complexity while scaling
  • Reduced system maintainability

A proper data platform design should address the above concerns. It should be scalable beyond the foreseeable future, and while scaling it should minimize additional cost, avoid additional complexity, and involve minimal or no manual effort.

Read More

10 Aug

Performance evaluation between different Druid roll-up levels

Introduction

In most datasets with a large number of events, going through individual events is less important; most of the use cases are around summarized data. Druid summarizes this raw data at ingestion time using a process referred to as “roll-up”. The roll-up level is the finest granularity of the stored data, and you will only be able to query down to that granularity. However, there are some scenarios where more granular data is important, and keeping more granular data comes at a cost. We did a small experiment to identify how different roll-up levels affect performance.

Rolling up data can dramatically reduce the size of data that needs to be stored (up to a factor of 100). Druid will roll up data as it is ingested to minimize the amount of raw data that needs to be stored. This storage reduction does come at a cost; as we roll up data, we lose the ability to query individual events. Phrased another way, the rollup granularity is the minimum granularity you will be able to explore data at and events are floored to this granularity. Hence, Druid ingestion specs define this granularity as the queryGranularity of the data. The lowest supported queryGranularity is millisecond. (http://druid.io)
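For reference, the roll-up level is set through the granularitySpec of the ingestion spec; a minimal sketch of the relevant portion (the values are illustrative, not the ones used in this experiment):

```python
# Relevant portion of a Druid ingestion spec, expressed as a Python dict.
# Values are illustrative; the experiment varies queryGranularity.
granularity_spec = {
    "type": "uniform",
    "segmentGranularity": "DAY",   # how data is partitioned into segments
    "queryGranularity": "HOUR",    # roll-up level: finest granularity you can query at
    "rollup": True,                # set to False to keep individual events
}
```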

Dataset and Setup

We chose a CSV dataset with 150M+ records containing sales data spanning two years. The CSV file was around 6 GB in physical size. This is a narrow dataset with 3 dimensions and 2 metrics. We had 2 servers where all the components were deployed:

m4.large – Coordinator, Broker, and Overlord nodes
r3.large – MiddleManager and Historical nodes

Read More

25 Sep

Application Aware Sharding for a MySQL Database

Introduction

When a MySQL database gets too large, it becomes extremely hard to maintain and eventually reaches physical limits. The maintainability issues we normally come across include:

  1. Taking too much time to ALTER a table
  2. Becoming impossible to take dumps
  3. Having to move tables to separate disks from time to time
  4. And so on.

If you are looking at database sharding as a solution, these maintainability issues have probably exhausted you. In this article, I am going to share my personal experience of sharding a large MySQL database in real life. With hands-on experience on this large project, I am aware of a lot of things related to sharding, as well as the organization-level issues, concerns, and limitations you face when going ahead with such a project. This is an attempt to go through the project end to end, so anyone who is about to do something similar can benefit from my experience.
Before getting into the article, I must share several things with you.

  1. After this project, do not expect any performance gain on your day-to-day queries. It might actually add overhead to your queries, as your data will be scattered among multiple places. However, it will greatly improve maintainability and will make sure that your data architecture survives in the fullness of time.
  2. What I am going to talk about is application-aware sharding. There are a few other ways to do sharding in the data layer that are seamless to the applications; however, at the time of writing I did not find any tool that supports MySQL sharding at the DB level. Existing options are either not reliable or add a large overhead on queries.
  3. Sharding itself might not be the full answer to the problem. Sharding might need to be accompanied by several sub-projects that supplement it to reach its potential. This is further explained in the next section (“Problem and Solution”). I will be concentrating on the sharding project and will not explain much about the other supplementary projects.
  4. You must be in real need of this project and must have identified it as the only way to solve the problem before proceeding, because what you are going to start is a big project.
  5. I have skipped some sensitive information as it relates to the company I am working for.

Problem and Solution

Problem: Data in our MySQL database was growing at a rapid rate, which led to many maintainability issues. As the amount of data grew and the number of queries per second increased, there was even a possibility of downtime. We needed to make the data volume in MySQL roughly constant and keep it at a maintainable level.
Solution: We thought of partitioning the data and, at the same time, purging old data to keep the size of data in one database somewhat constant. Therefore, two supplementary projects were carried out alongside the sharding project: one to purge old data (the purging project) and one for master data management (the MDM project, which moved a selected set of data to a central location). Data that could not be sharded for various reasons, and data global to the company, was moved out in the MDM project.
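To make “application aware” concrete: the application itself picks the shard, typically by mapping a shard key to a database connection. A much-simplified sketch of that idea (the shard map, key choice, and connection details are illustrative, not our production design):

```python
# Simplified illustration of application-aware shard routing: the application
# maps a shard key (here, customer_id) to one of N MySQL databases.
import pymysql

SHARDS = [
    {"host": "shard0.db.internal", "db": "app_shard0"},
    {"host": "shard1.db.internal", "db": "app_shard1"},
    {"host": "shard2.db.internal", "db": "app_shard2"},
]

def connection_for(customer_id: int) -> pymysql.connections.Connection:
    shard = SHARDS[customer_id % len(SHARDS)]   # simple modulo routing
    return pymysql.connect(host=shard["host"], user="app",
                           password="secret", database=shard["db"])

# All queries for a given customer go to that customer's shard.
conn = connection_for(customer_id=1042)
with conn.cursor() as cur:
    cur.execute("SELECT * FROM orders WHERE customer_id = %s", (1042,))
    rows = cur.fetchall()
```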

Sharding architecture

Read More

04 Apr

Redis Sorted Object Set (Sorted Hashes)

We all know that the best way to store an object in Redis is through Redis hashes, and Redis sorted sets are used to keep a set of values sorted by a given score. What if we need a sorted set of objects? That is a “sorted set of objects”, which Redis does not support by default. We came up with a simple hybrid data structure that allows sorted objects, using Redis hashes and Redis sorted sets.
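A minimal sketch of the hybrid (using the redis-py 3+ API; key names and scores are illustrative): each object lives in a hash, and a sorted set holds the hash keys with the scores that drive the ordering.

```python
# Sketch of a "sorted object set": Redis hashes hold the objects,
# and a sorted set holds the hash keys ordered by a score.
import redis

r = redis.Redis()

def add_user(user_id: str, name: str, age: int, score: float) -> None:
    key = f"user:{user_id}"
    r.hset(key, mapping={"name": name, "age": age})   # object fields in a hash
    r.zadd("users:by_score", {key: score})            # ordering in a sorted set

def users_sorted() -> list:
    # Walk the sorted set in score order, then fetch each object from its hash.
    return [r.hgetall(key) for key in r.zrange("users:by_score", 0, -1)]

add_user("1", "Alice", 34, score=120.5)
add_user("2", "Bob", 29, score=87.0)
print(users_sorted())   # objects returned in score order
```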

Read More
