Tuesday, February 16, 2016

What are the big data tools currently used in the market?

Big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate.

Traditional SQL databases are used for storing and retrieving structured data; which tool to use depends on the use case. In contrast, the NoSQL landscape covers in-memory caches, full-text search engines, real-time streaming platforms, graph databases, and more.

Cassandra: An open source distributed database management system originally developed at Facebook and designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure.

Redis: An open source (BSD licensed), in-memory data structure store, used as database, cache and message broker.

CouchBase: An open-source, distributed NoSQL document-oriented database that is optimized for interactive applications.

CouchDB: An open-source document-oriented NoSQL database that uses JSON to store data.

MongoDB: A popular cross-platform document-oriented database.

Elasticsearch: A distributed RESTful search engine built for the cloud.

Hazelcast: An open source in-memory data grid based on Java.

EHCache : A widely used open source Java distributed cache for general purpose caching, Java EE and light-weight containers.

Hadoop : An open-source software framework written in Java for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware.

HBase : An open source, non-relational, distributed database modeled after Google's BigTable, written in Java and runs on top of HDFS.

Apache Spark : An open source cluster computing framework.

Memcached : A general-purpose distributed memory caching system.

Apache Hive : Provides an SQL-like layer on top of Hadoop.

Apache Kafka : A high-throughput, distributed, publish-subscribe messaging system originally developed at LinkedIn.

Akka: A toolkit and runtime for building highly concurrent, distributed, and resilient message-driven applications on the JVM.

Neo4j: An open-source graph database implemented in Java.

Solr: An open source enterprise search platform, written in Java, from the Apache Lucene project.

Apache Storm: An open source distributed realtime computation system.

Oracle Coherence: An in-memory data grid solution that enables organizations to predictably scale mission-critical applications by providing fast access to frequently used data.

Titan: A scalable graph database optimized for storing and querying graphs containing hundreds of billions of vertices and edges distributed across a multi-machine cluster.

Amazon DynamoDB: A fast and flexible fully managed NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale.

Amazon Kinesis: A platform for streaming data in real time on AWS.

Datomic: A fully transactional, cloud-ready, distributed database written in Clojure.


Monday, February 8, 2016

What is a bigfile tablespace in Oracle?

A bigfile tablespace is a tablespace with a single, very large data file of up to 4G blocks. By comparison, a traditional (smallfile) tablespace can contain multiple data files, but each file cannot be as large.

Some of its benefits are as under:

  • A Bigfile tablespace with 8K/16K/32K blocks can contain a 32/64/128 TB data file. 
  • The maximum number of data files in an Oracle database is limited (to about 64K files). 
  • A Bigfile tablespace can significantly improve the storage capacity of the database.
  • Bigfile tablespaces reduce the number of data files for a database; with this, the DB_FILES initialization parameter and the MAXDATAFILES clause of the CREATE DATABASE and CREATE CONTROLFILE statements can be lowered by the DBA to reduce the amount of SGA space required for data file information and the size of the control file.
  • Bigfile tablespaces simplify database management by providing data file transparency instead of requiring multiple data files to be handled individually.
  • Bigfile tablespaces are supported only for locally managed tablespaces with automatic segment space management, with three exceptions: locally managed undo tablespaces, temporary tablespaces, and the SYSTEM tablespace.
  • It is advised to use bigfile tablespaces in databases where Automatic Storage Management (ASM) is configured, or another logical volume manager that supports striping or RAID and dynamically extensible logical volumes.
  • Avoid creating bigfile tablespaces on a system that does not support striping, because of negative implications for parallel query execution and RMAN backup parallelization.


If the default tablespace type was set to BIGFILE at the time of database creation, you need not specify the keyword BIGFILE in the CREATE TABLESPACE statement; a bigfile tablespace is created by default.

CREATE BIGFILE TABLESPACE tbs_bigtable_ex1
DATAFILE '/u02/oracle/data/tbsbig01.dbf' SIZE 80G;
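
Because a bigfile tablespace has only a single data file, routine space maintenance can be done at the tablespace level instead of per data file. A small sketch reusing the tablespace created above (the sizes shown are purely illustrative):

ALTER TABLESPACE tbs_bigtable_ex1 RESIZE 100G;
ALTER TABLESPACE tbs_bigtable_ex1 AUTOEXTEND ON NEXT 20G;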

If the default tablespace type was set to BIGFILE at database creation but you want to create a traditional (smallfile) tablespace, use a CREATE SMALLFILE TABLESPACE statement to override the default tablespace type set when creating the database.

CREATE SMALLFILE TABLESPACE tbs_smalltable_ex1
DATAFILE '/u02/oracle/data/tbssmall01.dbf' SIZE 80G;
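
To check which tablespaces in an existing database were created as bigfile tablespaces, the BIGFILE column of DBA_TABLESPACES can be queried, for example:

SELECT tablespace_name, bigfile
FROM dba_tablespaces
ORDER BY tablespace_name;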

Friday, February 5, 2016

What are Surrogate Keys, Primary Keys and Candidate Keys? Where are they used?

A surrogate key is any column or set of columns that can be declared as the primary key instead of a real or natural key. Sometimes there can be several natural keys that could be declared as the primary key, and these are all called candidate keys. So a surrogate key can itself be considered a candidate key.

A surrogate key is an alternative to a natural primary key and allows duplication in the natural data of records. It is an immutable attribute (or set of attributes) generated specifically and solely to uniquely identify a row, which is not the case with a natural or primary key.

A table could actually have more than one surrogate key, although this would be unusual. A natural key, by contrast, is an immutable set of attributes that uniquely identifies a row and occurs naturally with the row itself. The most common type of surrogate key is an incrementing integer, such as an auto_increment column in MySQL, a sequence in Oracle, or an identity column in SQL Server (see the sketch below).
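
A minimal sketch of a sequence-generated surrogate key in Oracle (the table, sequence and column names are made up for illustration); the natural/candidate key is kept as a unique column, while the surrogate key becomes the primary key:

CREATE SEQUENCE customer_seq START WITH 1 INCREMENT BY 1;

CREATE TABLE customer (
  customer_id    NUMBER PRIMARY KEY,            -- surrogate key
  customer_code  VARCHAR2(20) NOT NULL UNIQUE,  -- natural / candidate key
  customer_name  VARCHAR2(100)
);

INSERT INTO customer (customer_id, customer_code, customer_name)
VALUES (customer_seq.NEXTVAL, 'C-1001', 'Sample Customer');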

A primary key and a surrogate key serve the same purpose, but a surrogate key is a system-generated numeric (integer) value that identifies each row uniquely, usually with a defined increment for each new row in the table.

A surrogate key does not have any business meaning in the value it holds, whereas a natural primary key carries significant business value.

OLTP databases are kept in normalised form, whereas data warehouses (DWHs) are kept in de-normalised form, because a DWH maintains historical data for analysis. To remain de-normalised, duplication is allowed, and when data is inserted into the DWH a surrogate key (a new serial-number column) is introduced so that duplicated natural data can still be identified.

A Surrogate key in a data warehouse is more than just a substitute for a natural key. In a data warehouse, a surrogate key is a necessary generalization of the natural production key and is one of the basic elements of data warehouse design. Surrogate Key is the solution for critical column problems.

Ex. A customer purchases different items from stores at different locations. Here we have to maintain historical data, and the surrogate key identifies each row introduced into the data warehouse so that history can be kept. Another example: a single mobile number can be reassigned to another person if it has not been in use for more than one year; this is possible because the subscriber record is identified by a surrogate key rather than by the number itself.
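
A hedged sketch of the customer example in a warehouse dimension (all names, dates and values are illustrative): because the surrogate key, not the natural key, identifies the row, the same customer can appear more than once so that history is preserved.

CREATE TABLE dim_customer (
  customer_sk     NUMBER PRIMARY KEY,   -- surrogate key assigned by the ETL process
  customer_id     VARCHAR2(20),         -- natural key from the source OLTP system
  store_location  VARCHAR2(50),
  effective_from  DATE,
  effective_to    DATE
);

-- the same natural key stored twice to keep history after the customer changes location
INSERT INTO dim_customer VALUES (1, 'C-1001', 'Pune',   DATE '2014-01-01', DATE '2015-06-30');
INSERT INTO dim_customer VALUES (2, 'C-1001', 'Mumbai', DATE '2015-07-01', NULL);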

Differences between B*Tree and Bitmapped Indexes.


B*Tree indexes:

  • They maintain the sort order of the data, making it easy to look up range data.
  • With multicolumn indexes, we can use the leading-edge columns to resolve a query, even if that query doesn't reference all columns of the index.
  • By design, they automatically stay balanced.
  • Performance remains relatively constant for any query.
  • Reverse and unique variants can also be specified (see the sketch after this list).
  • Recommended for OLTP databases.
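
A small sketch of the unique and reverse options mentioned in the list above (the table and column names are hypothetical):

CREATE UNIQUE INDEX emp_email_uk ON emp (email);           -- unique B*Tree index
CREATE INDEX emp_id_rev_ix ON emp (employee_id) REVERSE;   -- reverse-key B*Tree index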

Bitmapped indexes:

  • Use them to index columns that contain a relatively small number of distinct values (a creation sketch follows this list).
  • Very compact, using less space.
  • Designed for query-intensive databases.
  • Not good for range scans.
  • Available only in Enterprise Edition.
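
A small sketch of creating a bitmap index on a low-cardinality column (hypothetical table and column names):

-- gender has only a handful of distinct values, a classic bitmap candidate
CREATE BITMAP INDEX emp_gender_bix ON emp (gender);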

What are NoSQL Databases? Why do we use NoSQL Databases?

A Not Only SQL (NoSQL) database is an approach to managing data and designing databases that is largely suited to huge sets of distributed data.

Driven by the newer concepts of big data and cloud computing, NoSQL covers a number of technologies and architectures that deliver performance and scalability that traditional relational databases cannot. NoSQL databases are mainly used when companies and enterprises need to access and analyze large amounts of unstructured data, or data stored across multiple virtual servers in the cloud.

There is no specific definition of what NoSQL is, but we can describe it as:
  • Not using the relational model
  • Running well on clusters
  • Mostly open source
  • Built for 21st-century web estates
  • Schema-less

There are mainly four types of NoSQL databases (data stores) in the market:

Key Value Databases:

Key-value databases are the simplest data stores. In such a data store, we can put a value for a specific key, get the value for a specific key, or delete a specific key. The key is the primary access path, which gives easy scalability and great performance.
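
As a rough analogy only (this is not how key-value stores are implemented), the put/get/delete operations can be pictured in SQL terms as a two-column table that is accessed purely by key; the key and value names below are made up:

CREATE TABLE kv_store (
  k  VARCHAR2(200) PRIMARY KEY,
  v  VARCHAR2(4000)
);

-- put: insert or overwrite the value for a key
MERGE INTO kv_store t
USING (SELECT 'user:42' AS k, 'Alice' AS v FROM dual) s
ON (t.k = s.k)
WHEN MATCHED THEN UPDATE SET t.v = s.v
WHEN NOT MATCHED THEN INSERT (k, v) VALUES (s.k, s.v);

-- get: read the value for a key
SELECT v FROM kv_store WHERE k = 'user:42';

-- delete: remove the key
DELETE FROM kv_store WHERE k = 'user:42';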

Document Databases:

As the name suggests, documents are the focus of such a database. The documents stored in and retrieved from the data store can be in BSON, XML, JSON, etc. The documents are usually similar to each other and form a hierarchical, tree-like, self-describing data structure consisting of scalar values, maps, and collections.

Column family data stores:

A column-family data store has rows with a number of columns associated with a row key. Each row is a bunch of related data that can be accessed together.

Graph databases:

A graph database stores nodes (entities) and the relationships between those nodes.


Why do we choose NoSQL Databases?

  • Distributed computing
  • Lower cost, because most are open source
  • High scalability
  • Schema flexibility
  • Handles unstructured and semi-structured data
  • No complex relationships (or very few)

Wednesday, December 2, 2015

Difference between B*Tree and Bitmap Index in Oracle.

B*Tree and bitmap indexes are structurally very different, but functionally they serve the same purpose: retrieving rows faster and avoiding a full-table scan. Both index types are used mainly for performance tuning, which results in retrieving data quite fast.

B*Tree Index vs. Bitmap Index

  • B*Tree: a type of index that uses a balanced tree structure for efficient record retrieval. Bitmap: a type of index that uses a string of bits to quickly locate rows in a table.
  • B*Tree: stores key data in ascending or descending order and is very useful for OLTP. Bitmap: normally used to index low-cardinality columns in a data warehouse environment; useful for decision support systems.
  • B*Tree: used most when the cardinality is high, and it is the default index type. Bitmap: used where there is a lot of duplicate data in the indexed column (for example, Gender).
  • B*Tree: created without any specific keyword, e.g. create index person_region on person (region); Bitmap: created with the "bitmap" keyword, e.g. create bitmap index person_region on person (region);
  • B*Tree: typically created on columns that contain mostly unique values. Bitmap: generally consumes less space, as it is stored in a highly compressed index format.
  • B*Tree: very useful for speeding up searches in OLTP applications, where you work with very small data sets at a time and most queries filter by ID. Bitmap: best used on a table with low insert/update/delete (DML) activity; updating a bitmap index takes a lot of resources, so bitmap indexes suit largely read-only tables and tables that are batch-updated nightly.
  • B*Tree: has index nodes (sized by the data block size) arranged in a tree. Bitmap: internally consists of four columns: the index value, the start and end rowids of the range it covers, and the bitmap itself.
  • B*Tree: stores the index value and the physical rowid of the row; the index values are arranged in the leaves. Bitmap: can be pictured as a two-dimensional array of zero and one (bit) values.
  • B*Tree: all lower values are placed on the left side and higher values on the right side. Bitmap: a single index block can cover a few thousand rows, and when you update the indexed column, Oracle takes an exclusive lock on that index slot for the duration of the transaction.
  • B*Tree: a regular index leaf block covers maybe a few hundred table rows, so an update affects just the actual row being updated, because each slot in the index covers only a single row; the index is made of branch nodes (holding the prefix key value along with a link to the leaf node) and leaf nodes (holding the indexed value and the rowid). Bitmap: each slot covers a range of rowids, so more table rows are locked, and the chance of two different processes colliding (and deadlocking) when doing bulk updates is increased.
B*Tree indexes are a good choice for most uses:
  • They maintain the sort order of the data, making it easy to look up range data.
  • With multicolumn indexes, you can use the leading-edge columns to resolve a query, even if that query doesn't reference all columns in the index.
  • They automatically stay balanced.
  • Performance remains relatively constant.
  • Reverse and unique variants can also be specified.
  • They are very fast when you are selecting just a very small subset of the index data.
  • They work better when you have a lot of distinct indexed values.
  • Combining several B*Tree indexes can be done, but simpler approaches are often more efficient.
  • They are not useful when there are few distinct values for the indexed data, or when you want to get a large subset of the data.
  • Each B*Tree index imposes a small penalty when inserting or updating values in the indexed table. This can be a problem if you have a lot of indexes on a very busy table.
A bitmap index is used in the below scenarios:
  • It is a more specialized index variant.
  • Use it to index columns that contain a relatively small number of distinct values.
  • Bitmap indexes are compact, saving space.
  • They were designed for query-intensive databases, so they are not recommended for OLTP databases.
  • They are not good for range scans.
  • They are available only in Enterprise Edition.
  • They should not be created on transaction tables to which data is continuously being added, because they are very inefficient when inserting or updating values.
  • They encode indexed values as bitmaps and so are very space-efficient.
  • The optimizer can combine several bitmap indexes very easily, which allows efficient execution of complex filters in queries.
  • They are mostly used in data warehouse applications, where the database is read-only except for the ETL processes and you usually need to execute complex queries against a star schema; bitmap indexes can speed up filtering based on conditions in your dimension tables, which do not usually have too many distinct values.
  • Bitmap indexes are not appropriate for tables that have lots of single-row DML operations (inserts), and especially concurrent single-row DML operations. Deadlock situations can result from concurrent inserts, as the two-session sketch below shows (open two windows, one for Session 1 and one for Session 2).
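
A hedged sketch of that two-session test (the table, column and values are made up): each session inserts a row with a different value of the bitmap-indexed column and does not commit; because each bitmap entry locks a whole range of rows, the second round of inserts ends in a deadlock.

-- one-time setup
CREATE TABLE orders_demo (id NUMBER, status VARCHAR2(10));
CREATE BITMAP INDEX orders_demo_status_bix ON orders_demo (status);

-- Session 1
INSERT INTO orders_demo VALUES (1, 'NEW');    -- no commit

-- Session 2
INSERT INTO orders_demo VALUES (2, 'PAID');   -- no commit

-- Session 1
INSERT INTO orders_demo VALUES (3, 'PAID');   -- blocks, waiting on Session 2

-- Session 2
INSERT INTO orders_demo VALUES (4, 'NEW');    -- ORA-00060: deadlock detected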

Monday, November 30, 2015

SQL execution steps in Oracle.

Below are the steps involved when a SQL statement executes in Oracle.
Syntax Checking Phase
  • Whether all required keywords are present ("select ... from", etc.).
  • A semantic check against the data dictionary of the database.
  • Whether the referenced table names are spelled correctly and exist in the data dictionary.

Parsing Phase
  • A parse call does not necessarily return an error if the statement is not syntactically correct; such errors may only surface later, at execution.
  • The cost-based optimizer (CBO) creates the possible ways of executing the query, costs each of them, and generates the query execution plan with the lowest cost.
  • This step touches the database client/server cache, the SGA, the database data files, etc., that is, all the places from which data can be retrieved.
  • This is where the table - tablespace - data file translation occurs.
  • Once the execution plan is created, it is stored in the shared pool (library cache) to facilitate re-execution. There are two types of parses:

Hard parse :
  • A new SQL statement must be parsed from scratch. 
  • Parsing can be a very expensive operation that takes a lot of resources to execute, when there is no previously parsed version of the SQL to reuse.
Soft parse :
  • A reused SQL statement where the only unique feature is the host (bind) variable values.
  • The best-case scenario is an execute-to-parse ratio close to 100%, which indicates that the application is fully using bind (host) variables.
  • In effect, the SQL is parsed once and executed many times, as sketched below.
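
A small SQL*Plus-style sketch of the idea (the table and column names are hypothetical): the statement text stays identical across executions and only the bind variable value changes, so repeated runs can be soft parsed.

VARIABLE v_region VARCHAR2(30)

EXEC :v_region := 'WEST';
SELECT * FROM person WHERE region = :v_region;

EXEC :v_region := 'EAST';
SELECT * FROM person WHERE region = :v_region;   -- same SQL text, reuses the parsed cursor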

Bind Phase
  • In this phase, the values for the statement's bind variables are supplied for the chosen lowest-cost execution plan.
  • Once the execution plan is created, Oracle gathers the parameters from the client/application for the execution, so that the addresses of the program/host/bind variables are known to Oracle.

Execute Phase
  • During the execute phase, Oracle executes the statement, reports any errors, and, if everything is normal, produces the result set. Unless the SQL statement being executed is a query, this is the last step of its execution.

Define Phase
  • The Oracle OCI interface makes the addresses of the output variables known to the Oracle process, so that the fetch call knows where to put the output values.

Fetch Phase
  • During the fetch phase, Oracle returns the result set rows to the application.

Once more, the define and fetch phases are relevant for queries only. The Oracle OCI interface module contains calls to facilitate each of these phases.
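
One way to observe the parse-versus-execute behaviour described above is to query the V$SQL view for the statement in question; a hedged sketch (the filter text is illustrative):

SELECT sql_text, parse_calls, executions
FROM v$sql
WHERE sql_text LIKE 'SELECT * FROM person%';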