Wednesday, May 5, 2021

Difference between Cloud Storage and HDFS

Feature-by-feature comparison:

I/O variance
  • Cloud Storage: Higher variance. It can be mitigated by using caching and read replicas, or by considering Cloud Bigtable or Cloud Datastore for such workloads.
  • HDFS: Lower variance.

Support for file appends or truncates
  • Cloud Storage: No; objects can only be overwritten, not appended to or truncated.
  • HDFS: Yes.

POSIX compliance
  • Cloud Storage: Not compliant.
  • HDFS: Not fully compliant.

File system information
  • Cloud Storage: No file system metadata is exposed.
  • HDFS: "hdfs fsck / -files -blocks" exposes all directory and block information in HDFS.

Request latency
  • Cloud Storage: Greater round-trip latency.
  • HDFS: Lower latency.

Cost
  • Cloud Storage: 57% lower total cost of ownership.
  • HDFS: Much higher compared to Cloud Storage.

Separation between Cloud Storage and HDFS
  • Cloud Storage: Multiple clusters can access the same location; cloud workloads access it using gs:// URIs (see the sketch after this table); much better performance on Persistent Disk.
  • HDFS: Single-cluster access using hdfs:// from within the cluster; lower performance due to low-grade HDDs.

Interoperability
  • Cloud Storage: Seamless connectivity between Spark and Hadoop instances; exported files can be imported into BigQuery; easier to manage Cloud Dataflow tasks.
  • HDFS: Standalone cluster-instance operations; manual access provisioning between instances and storage.

Storage management
  • Cloud Storage: Less overhead for the operations team.
  • HDFS: Much more overhead for the operations team.

Startup time
  • Cloud Storage: Quicker job startup during job execution.
  • HDFS: Takes significant time to start up jobs.

Security
  • Cloud Storage: Built-in cloud security with access managed through IAM.
  • HDFS: Manual security and access setup; rotating security keys is an overhead for the operations team.
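To illustrate the gs:// vs hdfs:// access pattern referenced in the table, here is a minimal shell sketch. The bucket and path names are hypothetical, and it assumes a cluster (for example Dataproc-style) with the Cloud Storage connector installed so that Hadoop tooling can read gs:// paths.

## list and copy data on the cluster-local HDFS:
hdfs dfs -ls hdfs:///user/myuser/input/
hdfs dfs -put local_file.csv hdfs:///user/myuser/input/

## the same operations against Cloud Storage (bucket name is hypothetical):
hadoop fs -ls gs://my-example-bucket/input/
hadoop fs -put local_file.csv gs://my-example-bucket/input/

## Cloud Storage can also be managed from outside the cluster with gsutil:
gsutil ls gs://my-example-bucket/input/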

Saturday, August 24, 2019

Create SSH keys and copy them to the remote server

Step 1: Create public and private keys using ssh-keygen on the local host

## make sure the local .ssh directory exists with the correct permissions
mkdir -p $HOME/.ssh
chmod 0700 $HOME/.ssh

## generate a 4096-bit RSA key pair with a custom file name and comment
ssh-keygen -t rsa -b 4096 -f ~/.ssh/newremoteweb.key -C "My new remote web key"

## or generate a default key pair (~/.ssh/id_rsa) with an empty passphrase
ssh-keygen -t rsa -P ""


Step 2: Copy the public key to remote-host using ssh-copy-id

ssh-copy-id -i ~/.ssh/id_rsa.pub remote-host
ssh-copy-id -i ~/.ssh/id_rsa.pub remote-user@remote-host

## scp alternative (note: this overwrites any existing authorized_keys on the remote host)
scp $HOME/.ssh/id_rsa.pub remote-user@remote-host:~/.ssh/authorized_keys


Adding the public key on the remote server manually

## First create .ssh directory on server
ssh remote-user@remote-host "umask 077; test -d .ssh || mkdir .ssh"

## cat the local id_rsa.pub file and pipe it over ssh to append the public key on the remote server
cat $HOME/.ssh/id_rsa.pub | ssh remote-user@remote-host "cat >> .ssh/authorized_keys"

Now try logging into the machine, with "ssh 'remote-host'", and check in:

.ssh/authorized_keys

Step 3: Log in to remote-host without entering a password

ssh remote-host

It should connect without prompting for a password, which confirms the setup works.
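If you want to be certain that the key (and not some cached credential) was used, you can force non-interactive, key-only authentication. A minimal sketch, using the same remote-host placeholder as above:

## fails instead of falling back to a password prompt; prints "key auth ok" only if key-based login works
ssh -o BatchMode=yes -o PasswordAuthentication=no remote-host 'echo key auth ok'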

Some important points to consider:

$HOME/.ssh/id_rsa – contains your private key.
$HOME/.ssh/id_rsa.pub – contains your public key.


Restarting the SSH daemon/service on the server

## on centos/rhel/fedora - older version
sudo service sshd reload

## on centos/rhel/fedora - latest (systemd)
sudo systemctl reload sshd

## on debian/ubuntu - older version
sudo /etc/init.d/ssh reload

## on debian/ubuntu - latest
sudo systemctl reload ssh

## common method to reload sshd
sudo kill -HUP `cat /var/run/sshd.pid`
or
sudo kill -HUP $(cat /var/run/sshd.pid)
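Whichever method is used, it is worth validating the sshd configuration before reloading, so that a typo does not lock you out. A minimal check:

## test the sshd configuration file for syntax errors (use the full path, e.g. /usr/sbin/sshd, if it is not on the PATH)
sudo sshd -t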


## list all public key parameters of all identities from the ssh-agent:
ssh-add -L

## deleting all private keys from the ssh-agent:
ssh-add -D

## kill the ssh agent, which is running:
kill $SSH_AGENT_PID
trap "kill $SSH_AGENT_PID" 0


Thursday, July 18, 2019

When do HARD PARSE and SOFT PARSE occur, and how can we avoid them?


When any new SQL statement arrives, Oracle first searches the library cache for a matching parent cursor (based on the SQL text):
  • If no parent cursor is found, a HARD PARSE occurs.
  • If a parent cursor is found but none of its existing child cursors can be reused by this call (they depend on the bind variable sizes, optimizer settings and NLS settings of the session), a HARD PARSE also occurs and a new child cursor is built.
  • If a parent cursor is found and an existing child cursor with a compatible execution plan can be reused, a SOFT PARSE occurs.
Essentially, the parent cursor holds only the SQL statement text, while each child cursor holds an execution plan and the environment it was compiled under.

By using bind variables we can avoid unnecessary HARD PARSEs in the database.
If bind variables are not used, every distinct literal value produces a new SQL text, so every such statement is hard parsed.
This has a major impact on server performance: higher CPU usage, library cache contention, longer wait times during SQL execution and a higher overall cost of the SQL.
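A minimal SQL*Plus sketch of the difference, using the classic SCOTT.EMP demo table (the table and column names are only illustrative):

-- Literals: each distinct value is a different SQL text, so each statement is hard parsed.
SELECT ename FROM emp WHERE empno = 7369;
SELECT ename FROM emp WHERE empno = 7499;

-- Bind variable: one shared parent/child cursor; after the first execution only soft parses occur.
VARIABLE v_empno NUMBER
EXEC :v_empno := 7369
SELECT ename FROM emp WHERE empno = :v_empno;

EXEC :v_empno := 7499
SELECT ename FROM emp WHERE empno = :v_empno;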

Saturday, July 6, 2019

Difference between Hadoop 2.x and Hadoop 3.x

Feature-by-feature comparison:

Java version
  • Hadoop 2.x: Java 7 is the minimum supported runtime.
  • Hadoop 3.x: Java 8 is the minimum supported runtime.

Fault tolerance
  • Hadoop 2.x: Achieved with replication.
  • Hadoop 3.x: Achieved with erasure coding.
Storage
  • Hadoop 2.x: Uses replication (default factor 3) for data reliability, which greatly increases disk usage. For example, a file of 6 blocks occupies 6 * 3 = 18 blocks, so the storage overhead is 12/6 * 100 = 200%.
  • Hadoop 3.x: Uses erasure coding for data reliability. Blocks are not replicated; instead HDFS computes parity blocks for each group of data blocks, and whenever file blocks are lost or corrupted the framework reconstructs them from the remaining data and parity blocks. Storage overhead drops to about 50%. For example, with a Reed-Solomon (6,3) policy a file of 6 blocks occupies 6 data blocks plus 3 parity blocks = 9 blocks, so the overhead is 3/6 * 100 = 50%.
YARN Timeline Service
  • Hadoop 2.x: Timeline Service v.1 has scalability issues as data grows; it uses a single writer and storage instance and does not scale well beyond small clusters.
  • Hadoop 3.x: Timeline Service v.2 provides better scalability, reliability and usability, with scalable back-end storage and a distributed writer architecture.
Heap size management
  • Hadoop 2.x: Daemon heap sizes need to be configured through HADOOP_HEAPSIZE.
  • Hadoop 3.x: There are new ways to configure map and reduce daemon heap sizes, including auto-tuning based on the memory of the host. HADOOP_HEAPSIZE and the internal JAVA_HEAP_MAX variable are no longer used; they are replaced by HADOOP_HEAPSIZE_MAX and HADOOP_HEAPSIZE_MIN (values are interpreted in MB by default, and units can now be specified). The old fixed default can be restored by configuring HADOOP_HEAPSIZE_MAX explicitly in hadoop-env.sh.
Standby NameNode
  • Hadoop 2.x: Supports only one Standby NameNode, so the cluster tolerates the failure of a single NameNode.
  • Hadoop 3.x: Supports two or more Standby NameNodes; only one NameNode is active and the others remain in standby.
Containers
  • Hadoop 2.x: Works on the principle of guaranteed containers: a container starts running immediately, because there is a guarantee that the resources will be available. This has drawbacks:
      - Feedback delays: once a container finishes execution it notifies the RM; when the RM schedules a new container on that node the AM gets notified, and only then does the AM start the new container. Hence there is a delay in the notifications to the RM and the AM.
      - Allocated vs. utilized resources: the resources the RM allocates to a container can be under-utilized. For example, the RM may allocate a 4 GB container of which only 2 GB is used, which reduces effective resource utilization.
  • Hadoop 3.x: Implements opportunistic containers in addition to guaranteed containers. Opportunistic containers are queued at the node if resources are not available and have lower priority than guaranteed containers, so they are preempted when necessary to make room for guaranteed containers.
Port numbers for multiple services
  • Hadoop 2.x: Several default service ports fall inside the Linux ephemeral port range (32768-61000), which can cause Hadoop services to fail at startup when a port is already in use.
  • Hadoop 3.x: These default ports have been moved out of the ephemeral range, which changes the NameNode, Secondary NameNode and DataNode port numbers:
      NameNode ports: 50470 –> 9871, 50070 –> 9870, 8020 –> 9820
      Secondary NameNode ports: 50091 –> 9869, 50090 –> 9868
      DataNode ports: 50020 –> 9867, 50010 –> 9866, 50475 –> 9865, 50075 –> 9864
Data load balancing within a DataNode
  • Hadoop 2.x: A single DataNode manages many disks. The disks fill up evenly during normal writes, but adding or replacing disks leads to significant skew within a DataNode. The HDFS balancer in Hadoop 2.x balances data between DataNodes, not between the disks inside one, so it cannot handle this situation.
  • Hadoop 3.x: New intra-DataNode balancing functionality handles this situation. It is invoked through the hdfs diskbalancer CLI (see the sketch after this table) and is enabled by setting dfs.disk.balancer.enabled=true on all DataNodes.
File system support
  • Hadoop 2.x: Local file systems, HDFS (the default FS), the FTP file system, the Amazon S3 file system, the Windows Azure Storage Blobs (WASB) file system and other distributed file systems.
  • Hadoop 3.x: Supports all of the above, plus the Microsoft Azure Data Lake file system and the Aliyun Object Storage System.
Scalability
  • Hadoop 2.x: Clusters can scale up to about 10,000 nodes.
  • Hadoop 3.x: Clusters can scale beyond 10,000 nodes.
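As referenced in the data load balancing row above, here is a minimal shell sketch of the Hadoop 3.x intra-DataNode disk balancer. The hostname is a placeholder, and it assumes dfs.disk.balancer.enabled=true is already set on the DataNodes.

## generate a balancing plan for one DataNode; the command prints where the plan.json file is written
hdfs diskbalancer -plan datanode1.example.com

## execute the generated plan on that DataNode (use the path printed by the -plan step)
hdfs diskbalancer -execute /system/diskbalancer/<date>/datanode1.example.com.plan.json

## check the status of the running plan
hdfs diskbalancer -query datanode1.example.com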

Thursday, September 6, 2018

What are Edge Nodes or Gateway Nodes in Hadoop?

Edge nodes are the interface between the Hadoop cluster and the outside network, through which Hadoop users can load files into the cluster. Because an edge node acts as a gateway to the cluster, it is sometimes referred to as a gateway node as well.

Commonly, edge nodes are used to run cluster administration tools and client applications. Edge nodes are kept separate from the cluster nodes that run HDFS, MapReduce and other components, mainly to keep the cluster's computing resources separate from the outside world.

Edge nodes running within the cluster allow centralized management of all the Hadoop configuration on the cluster nodes, which reduces the administration effort cluster administrators need to update the config files.
Hadoop itself provides only limited security, even if your Hadoop cluster operates in a LAN or WAN behind a security firewall, so you may want to consider a cluster-specific firewall to fully protect the non-public data in the Hadoop cluster.

Sunday, August 12, 2018

How to change a non-partitioned table into a partitioned table in oracle along with indexes?

There are two ways to change a non-partitioned table into a partitioned table.

1. We can use the Oracle Data Pump (expdp/impdp) utilities with the option PARTITION_OPTIONS=DEPARTITION.

CREATE OR REPLACE DIRECTORY TEST_DIR AS '/oracle/expimpdp/';
GRANT READ, WRITE ON DIRECTORY TEST_DIR TO MY_USER;

EXPDP MY_USER/MY_PWD@MYDB TABLES=T DIRECTORY=TEST_DIR PARALLEL=5 INCLUDE=TABLE_DATA,INDEX COMPRESSION=ALL DUMPFILE=T_DUMP.DMP LOGFILE=EXPDP_T_DUMP.LOG

IMPDP MY_USER/MY_PWD@MYDB TABLES=T DIRECTORY=TEST_DIR PARALLEL=5 INCLUDE=TABLE_DATA,INDEX CONTENT=ALL PARTITION_OPTIONS=DEPARTITION DUMPFILE=T_DUMP.DMP LOGFILE=IMPDP_T_DUMP.LOG

2. We can use ALTER TABLE ... MODIFY with the ONLINE keyword and the optional UPDATE INDEXES clause (available from Oracle 12c Release 2), as below.

ALTER TABLE EMP_PART_CONVERT MODIFY
  PARTITION BY RANGE (employee_id) INTERVAL (100)
  (
    PARTITION P1 VALUES LESS THAN (100),
    PARTITION P2 VALUES LESS THAN (500)
  )
  ONLINE
  UPDATE INDEXES
  (
    IDX1_SALARY LOCAL,
    IDX2_EMP_ID GLOBAL PARTITION BY RANGE (employee_id)
      ( PARTITION IP1 VALUES LESS THAN (MAXVALUE) )
  );

Please note the following when using the UPDATE INDEXES clause:

  • This clause can be used to change the partitioning state of indexes and the storage properties of the indexes being converted.
  • Indexes are maintained during both online and offline conversion to a partitioned table.
  • This clause cannot change the columns on which the original indexes are defined.
  • This clause cannot change the uniqueness property of an index.
  • The conversion operation cannot be performed if there are domain indexes on the table.
  • During conversion, all bitmap indexes become local partitioned indexes by default.
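After the conversion, one way to confirm the result is to query the data dictionary. A minimal sketch, using the EMP_PART_CONVERT table and indexes from the example above:

-- check that the table is now partitioned and how
SELECT table_name, partitioning_type
  FROM user_part_tables
 WHERE table_name = 'EMP_PART_CONVERT';

-- check which of its indexes ended up partitioned
SELECT index_name, partitioned, uniqueness
  FROM user_indexes
 WHERE table_name = 'EMP_PART_CONVERT';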



How to change a partitioned table into a non-partitioned table in oracle along with data and indexes?

There are three main ways to change a partitioned table into a non-partitioned table.

1. We can use the Oracle Data Pump (expdp/impdp) utilities with the option PARTITION_OPTIONS=MERGE, which combines all partitions into a single non-partitioned table on import.

CREATE OR REPLACE DIRECTORY TEST_DIR AS '/oracle/expimpdp/';
GRANT READ, WRITE ON DIRECTORY TEST_DIR TO MY_USER;

EXPDP MY_USER/MY_PWD@MYDB TABLES=T DIRECTORY=TEST_DIR PARALLEL=5 INCLUDE=TABLE_DATA,INDEX COMPRESSION=ALL DUMPFILE=T_DUMP.DMP LOGFILE=EXPDP_T_DUMP.LOG

IMPDP MY_USER/MY_PWD@MYDB TABLES=T DIRECTORY=TEST_DIR PARALLEL=5 INCLUDE=TABLE_DATA,INDEX CONTENT=ALL PARTITION_OPTIONS=MERGE DUMPFILE=T_DUMP.DMP LOGFILE=IMPDP_T_DUMP.LOG

2. A simpler method is ALTER TABLE ... MERGE PARTITIONS, which merges all the partitions into one. (Note that the result is still a partitioned table, just with a single partition.)
 
ALTER TABLE T1 MERGE PARTITIONS P1 TO P6 INTO PARTITION P0;

3. Create a temporary copy of the table along with its data, drop the original table, rename the temporary table to the original name, and recreate the indexes. (This assumes the table does not contain a huge amount of data.)

CREATE TABLE T_TEMP AS
SELECT * FROM T;

DROP TABLE T;

RENAME T_TEMP TO T;

CREATE INDEX IDX_T ON T(ID);
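As a final check, the data dictionary can confirm that the rebuilt table is no longer partitioned (a minimal sketch using the table name from the example above):

-- PARTITIONED should now be 'NO' for the rebuilt table
SELECT table_name, partitioned
  FROM user_tables
 WHERE table_name = 'T';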