Thursday, July 18, 2019

When do HARD PARSE and SOFT PARSE occur, and how can we avoid them?


When any new SQL statement arrives:
  • Oracle first searches the library cache for a matching parent cursor.
  • If no parent cursor is found, a HARD PARSE occurs.
  • If a parent cursor is found but none of its existing child cursors can be reused by this call (reuse depends on the bind variable sizes, the optimizer settings and the NLS settings), a HARD PARSE occurs as well.
  • If a parent cursor is found and an existing child cursor built with a compatible execution environment can be reused, a SOFT PARSE occurs.
Essentially, the parent cursor contains only the SQL statement text, while each child cursor contains an execution plan and the environment it was compiled for.
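
You can see this split in the data dictionary: V$SQLAREA shows one row per parent cursor, while V$SQL shows one row per child cursor. A minimal sketch (the LIKE pattern is only a placeholder for your own statement text):

-- One row per parent cursor (distinct SQL text)
SELECT SQL_ID, VERSION_COUNT, EXECUTIONS FROM V$SQLAREA WHERE SQL_TEXT LIKE 'SELECT /* demo */%';

-- One row per child cursor (execution plan / environment)
SELECT SQL_ID, CHILD_NUMBER, PLAN_HASH_VALUE, EXECUTIONS FROM V$SQL WHERE SQL_TEXT LIKE 'SELECT /* demo */%';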

By using bind variables we can avoid unnecessary HARD PARSEs in the database.
If bind variables are not used, every distinct literal value produces a different SQL text, so the statements are hard parsed again and again.
This has a major impact on server performance: we see high wait times during SQL execution and a higher overall parsing cost.
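
For example, as a minimal sketch in SQL*Plus (the EMP table and EMP_ID column are just placeholders), the literal version below creates a new parent cursor for every value, while the bind-variable version shares one cursor:

-- Literals: each distinct value is a new SQL text, so each execution is hard parsed
SELECT ENAME FROM EMP WHERE EMP_ID = 100;
SELECT ENAME FROM EMP WHERE EMP_ID = 101;

-- Bind variable: one shared SQL text, so repeated executions can soft parse
VARIABLE V_EMP_ID NUMBER
EXEC :V_EMP_ID := 100
SELECT ENAME FROM EMP WHERE EMP_ID = :V_EMP_ID;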

Saturday, July 6, 2019

Difference between Hadoop 2.x and Hadoop 3.x

The main differences, feature by feature:

Java Version
  • Hadoop 2.x: Java 7 is the minimum supported runtime.
  • Hadoop 3.x: Java 8 is the minimum supported runtime.

Fault Tolerance
  • Hadoop 2.x: Achieved through block replication.
  • Hadoop 3.x: Achieved through erasure coding.

Storage
  • Hadoop 2.x: Data reliability relies on replication (replication factor 3 by default), which multiplies disk usage. For example, a file of 3 blocks occupies 3*3 = 9 blocks on disk, a storage overhead of 6/3 * 100 = 200%.
  • Hadoop 3.x: Data reliability relies on erasure coding. Blocks are not replicated; instead HDFS computes parity blocks for groups of data blocks, and whenever file blocks get corrupted the framework recreates them from the remaining data blocks plus the parity blocks. Storage overhead drops to around 50%: with the default Reed-Solomon (6,3) policy, 6 data blocks need only 3 parity blocks, i.e. 9 blocks in total, an overhead of 3/6 * 100 = 50%.

YARN Timeline Service
  • Hadoop 2.x: Timeline Service v1 runs into scalability issues as data grows; it has a single writer and a single storage instance, so it does not scale beyond small clusters.
  • Hadoop 3.x: Timeline Service v2 provides better scalability, reliability and usability, with scalable back-end storage and a distributed writer architecture.

Heap Size Management
  • Hadoop 2.x: Daemon heap sizes are configured through HADOOP_HEAPSIZE.
  • Hadoop 3.x: Daemon heap sizes can be auto-tuned based on the memory of the host. HADOOP_HEAPSIZE is deprecated in favour of HADOOP_HEAPSIZE_MAX and HADOOP_HEAPSIZE_MIN, which accept units (a plain number is treated as megabytes) and are set in hadoop-env.sh. Map and Reduce task heap sizes can also be derived automatically from the configured task memory.

Standby NameNode
  • Hadoop 2.x: Supports only one Standby NameNode, so the cluster tolerates the failure of a single NameNode.
  • Hadoop 3.x: Supports two or more Standby NameNodes; only one NameNode is active and the others remain in standby.

Containers
  • Hadoop 2.x: Works on the principle of guaranteed containers. A container starts running immediately because there is a guarantee that the resources are available, but this has drawbacks:
      Feedback delays: when a container finishes execution it notifies the RM; the AM is only notified once the RM schedules a new container on that node, and only then can the AM start it. These notification hops introduce delays.
      Allocated vs. utilized resources: the resources the RM allocates to a container can be under-utilized. For example, the RM may allocate a 4 GB container of which only 2 GB is used, which reduces effective resource utilization.
  • Hadoop 3.x: Implements opportunistic containers in addition to guaranteed ones. Opportunistic containers are queued at the node if resources are not available, and they have lower priority than guaranteed containers, so the scheduler preempts them when resources are needed for guaranteed containers.

Port Numbers for Multiple Services
  • Hadoop 2.x: Several default service ports fall within the Linux ephemeral port range (32768-61000), which can cause Hadoop services to fail at startup when a port is already in use.
  • Hadoop 3.x: These defaults are moved out of the ephemeral range, which changes the NameNode, Secondary NameNode and DataNode port numbers:
      NameNode ports: 50470 –> 9871, 50070 –> 9870, 8020 –> 9820
      Secondary NameNode ports: 50091 –> 9869, 50090 –> 9868
      DataNode ports: 50020 –> 9867, 50010 –> 9866, 50475 –> 9865, 50075 –> 9864

Intra-DataNode Disk Balancer
  • Hadoop 2.x: A single DataNode manages many disks, and these disks can fill unevenly, especially when disks are added or replaced. The HDFS balancer only balances data between DataNodes and cannot handle skew between the disks inside one DataNode.
  • Hadoop 3.x: New intra-DataNode balancing handles this situation. It is invoked through the hdfs diskbalancer CLI and is enabled by setting dfs.disk.balancer.enabled=true on all DataNodes.

File System Support
  • Hadoop 2.x: Local file system, HDFS (the default), FTP file system, Amazon S3 (Simple Storage Service) file system, Windows Azure Storage Blobs (WASB) file system, and other distributed file systems.
  • Hadoop 3.x: Supports all of the above as well as the Microsoft Azure Data Lake file system and the Aliyun OSS object storage file system.

Scalability
  • Hadoop 2.x: Clusters can scale up to about 10,000 nodes.
  • Hadoop 3.x: Clusters can scale beyond 10,000 nodes.

Thursday, September 6, 2018

What are Edge Nodes or Gateway Nodes in Hadoop?

Edge nodes are the interface between the Hadoop cluster and the outside network, through which Hadoop users can store files in the cluster. Because an edge node acts as a gateway to the cluster, it is sometimes referred to as a gateway node.

Commonly, edge nodes are used to run cluster administration tools and client applications. Edge nodes are kept separate from the cluster nodes that run HDFS, MapReduce and other components, mainly to keep the cluster's computing resources separated from the outside world.

Running edge nodes within the cluster allows centralized management of the Hadoop client configurations, which reduces the administration effort needed to keep the configuration files on the cluster nodes up to date.
Hadoop itself provides only limited security, even if the cluster operates in a LAN or WAN behind a firewall, so you may want to consider a cluster-specific firewall to fully protect the cluster's non-public data.

Sunday, August 12, 2018

How to change a non-partitioned table into a partitioned table in Oracle along with indexes?

There are two ways to change a non-partitioned table into a partitioned table.

1. We can use the Oracle Data Pump (expdp/impdp) utilities: export the existing non-partitioned table, recreate table T with the desired partitioning, and then import only the data back into it with TABLE_EXISTS_ACTION=APPEND.

CREATE OR REPLACE DIRECTORY TEST_DIR AS '/oracle/expimpdp/';
GRANT READ, WRITE ON DIRECTORY TEST_DIR TO MY_USER;

EXPDP MY_USER/MY_PWD@MYDB TABLES=T DIRECTORY=TEST_DIR PARALLEL=5 INCLUDE=TABLE_DATA,INDEX COMPRESSION=ALL DUMPFILE=T_DUMP.DMP LOGFILE=EXPDP_T_DUMP.LOG

IMPDP MY_USER/MY_PWD@MYDB TABLES=T DIRECTORY=TEST_DIR PARALLEL=5 CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND DUMPFILE=T_DUMP.DMP LOGFILE=IMPDP_T_DUMP.LOG

2. We can use ALTER TABLE ... MODIFY with the ONLINE keyword and the optional UPDATE INDEXES clause (available from Oracle 12.2 onwards), as below.

ALTER TABLE EMP_PART_CONVERT MODIFY
  PARTITION BY RANGE (EMPLOYEE_ID) INTERVAL (100)
  (
    PARTITION P1 VALUES LESS THAN (100),
    PARTITION P2 VALUES LESS THAN (500)
  )
  ONLINE
  UPDATE INDEXES
  (
    IDX1_SALARY LOCAL,
    IDX2_EMP_ID GLOBAL PARTITION BY RANGE (EMPLOYEE_ID)
      (PARTITION IP1 VALUES LESS THAN (MAXVALUE))
  );

Please note the following when using the UPDATE INDEXES clause; you can verify the outcome with the dictionary queries shown after this list:

  • This clause can be used to change the partitioning state of the indexes and the storage properties of the indexes being converted.
  • Indexes are maintained for both the online and the offline conversion to a partitioned table.
  • This clause cannot change the columns on which the original indexes are defined.
  • This clause cannot change the uniqueness property of the index.
  • The conversion operation cannot be performed if there are domain indexes.
  • During the conversion, all bitmap indexes become local partitioned indexes by default.
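
After the conversion you can check the result from the data dictionary. A minimal sketch, reusing the EMP_PART_CONVERT table from the example above:

-- Partitions created for the converted table
SELECT TABLE_NAME, PARTITION_NAME, HIGH_VALUE FROM USER_TAB_PARTITIONS WHERE TABLE_NAME = 'EMP_PART_CONVERT';

-- Which of its indexes ended up partitioned
SELECT INDEX_NAME, PARTITIONED, STATUS FROM USER_INDEXES WHERE TABLE_NAME = 'EMP_PART_CONVERT';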



How to change a partitioned table into a non-partitioned table in Oracle along with data and indexes?

There are three main ways to change a partitioned table into a non-partitioned table.

1. We can use the Oracle Data Pump (expdp/impdp) utilities with the option PARTITION_OPTIONS=MERGE, which combines all partitions into a single non-partitioned table during import.

CREATE OR REPLACE DIRECTORY TEST_DIR AS '/oracle/expimpdp/';
GRANT READ, WRITE ON DIRECTORY TEST_DIR TO MY_USER;

EXPDP MY_USER/MY_PWD@MYDB TABLES=T DIRECTORY=TEST_DIR PARALLEL=5 INCLUDE=TABLE_DATA,INDEX COMPRESSION=ALL DUMPFILE=T_DUMP.DMP LOGFILE=EXPDP_T_DUMP.LOG

IMPDP MY_USER/MY_PWD@MYDB tables=T DIRECTORY=TEST_DIR PARALLEL=5 INCLUDE=TABLE_DATA,INDEX CONTENT=ALL PARTITION_OPTIONS=MERGE DUMPFILE=T_DUMP.DMP LOGFILE=IMPDP_T_DUMP.LOG

2. A simpler method is ALTER TABLE ... MERGE PARTITIONS, which collapses several partitions into one (merging a range of partitions in one statement requires Oracle 12.2 or later). Note that the table remains a partitioned table, just with a single partition.

ALTER TABLE T1 MERGE PARTITIONS P1 TO P6 INTO PARTITION P0;

3. Create a temporary copy of the table along with its data, drop the original table, rename the temporary table to the original name, and recreate the indexes (assuming the table does not hold a huge amount of data).

CREATE TABLE T_TEMP AS
SELECT * FROM T;

DROP TABLE T;

RENAME T_TEMP TO T;

CREATE INDEX IDX_T ON T(ID);

Sunday, July 29, 2018

What is the difference between TLS & SSL?

SSL (Secure Sockets Layer) and TLS  (Transport Layer Security) are both cryptographic protocols that provide authentication and data encryption between servers, machines and applications operating over a network.

  • SSL is used to transmit information privately, to provide message integrity, and to guarantee the server's identity.
  • SSL works mainly by using public/private key encryption on the data.
  • SSL was originally developed by Netscape and first appeared publicly in 1995 as SSL 2.0; SSL 1.0 was never released to the public.
  • In 1996, SSL 2.0 was replaced by SSL 3.0 after a number of vulnerabilities were found in it.

Over the years, new versions of the protocols have been released to address vulnerabilities and to support stronger, more secure cipher suites and algorithms. The Internet Engineering Task Force (IETF) created TLS as the successor to SSL.

  • In 1999, TLS 1.0 was introduced as the new version of SSL. Most deployments currently use TLS 1.2; TLS 1.3 is still in draft at the time of writing.
  • Currently, all major browsers support TLS 1.0 by default and may optionally support TLS 1.1 and 1.2.
  • The TLS protocol aims primarily to provide privacy and data integrity between two or more communicating computer applications.

Today, Hypertext Transfer Protocol Secure (HTTPS) is an application-specific combination of the Hypertext Transfer Protocol (HTTP) with SSL/TLS. HTTPS is used to provide encrypted communication with, and secure identification of, a web server.

In addition to HTTPS, SSL/TLS can be used to secure other application-specific protocols such as FTP, SMTP, NNTP, etc.


What is the difference between the bad file and the discard file in SQL*Loader?

In Oracle SQL*Loader, the bad file and the discard file both contain rows that did not make it into the table, but the rows are rejected for different reasons:

Bad file: The bad file contains records that were rejected because of errors, such as invalid data for a column's datatype, failed type conversions, or violated integrity constraints.


Discard file: The discard file contains rows that were discarded because they did not satisfy the filtering criteria (WHEN clauses) written in the SQL*Loader control file.
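
As a minimal sketch (the EMP table, the emp.dat data file and the column layout are only placeholders), a control file that names both files and filters rows with a WHEN clause could look like this:

LOAD DATA
INFILE 'emp.dat'
BADFILE 'emp.bad'
DISCARDFILE 'emp.dsc'
INTO TABLE EMP
WHEN (DEPTNO = '10')
FIELDS TERMINATED BY ','
(EMPNO, ENAME, DEPTNO)

Rows that fail the WHEN condition go to emp.dsc (the discard file), while rows that raise Oracle errors, for example a non-numeric EMPNO, go to emp.bad (the bad file).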