Thursday, March 26, 2015

What is the difference between InnoDB & MyISAM in MySQL?

InnoDB 1.1 combines improved reliability and performance with usability enhancements compared to InnoDB 1.0.

Since MySQL 5.5, InnoDB is the default storage engine, instead of MyISAM.

A CREATE TABLE statement without an ENGINE= clause therefore creates an InnoDB table.
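
For example, the statements below create one table with the default engine and one with MyISAM explicitly (the table and column names are illustrative):

CREATE TABLE emp_innodb (
  empno INT PRIMARY KEY,
  ename VARCHAR(50)
);                        -- no ENGINE= clause: uses the default engine (InnoDB from MySQL 5.5)

CREATE TABLE emp_myisam (
  empno INT PRIMARY KEY,
  ename VARCHAR(50)
) ENGINE = MyISAM;        -- explicitly requests the MyISAM engine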

  • It supports DML operations, follows the ACID properties, can commit and roll back transactions, and provides crash recovery to protect user data.
  • It supports row-level locking and Oracle-style consistent reads to increase multi-user concurrency and performance.
  • InnoDB tables store data on disk in a way that optimizes queries based on primary keys.
  • Each InnoDB table has a primary key index, called the clustered index, which organizes the data to minimize I/O.
  • InnoDB also supports FOREIGN KEY constraints to prevent inconsistencies across different tables (see the sketch after this list).
  • You can freely mix InnoDB tables with tables from other MySQL storage engines; for example, you can join data from InnoDB and MEMORY tables in a single query.
  • It is designed for CPU efficiency and maximum performance when processing large volumes of data.
  • It uses B-tree indexes only; it does not support T-tree or hash indexes.
  • It can store up to 64TB in a single table.
  • It supports compressed data and encrypted data, as well as replication and foreign keys.
  • It uses both data caches and index caches.
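
As a minimal sketch of the foreign key support mentioned above (the table and column names are illustrative, not from any particular schema):

CREATE TABLE dept (
  deptid INT PRIMARY KEY,
  dname  VARCHAR(50)
) ENGINE = InnoDB;

CREATE TABLE emp (
  empno  INT PRIMARY KEY,
  ename  VARCHAR(50),
  deptid INT,
  FOREIGN KEY (deptid) REFERENCES dept (deptid)  -- enforced by InnoDB; MyISAM would silently ignore it
) ENGINE = InnoDB;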

Compared to InnoDB, MyISAM tables have the following characteristics:

  • All data values are stored with the low byte first, which makes the data independent of the machine and operating system; this is one reason MyISAM has been widely used on mainstream machines.
  • All numeric key values are stored with the high byte first to allow better index compression.
  • A MyISAM table can hold up to (2^32)^2 rows. The maximum number of indexes per table is 64, and each index can cover up to 16 columns. The maximum key length is 1000 bytes (changeable by changing the source and recompiling); keys longer than 250 bytes use a larger key block size than the default of 1024 bytes.
  • When rows are inserted in sorted order (as with an AUTO_INCREMENT column), the index tree is split so that the high node contains only one key, which improves space utilization in the index tree.
  • Dynamic-sized rows are supported and are much less fragmented when mixing deletes with updates and inserts; this is done automatically by combining adjacent deleted blocks and by extending blocks when the next block is deleted.
  • MyISAM supports concurrent inserts: if a table has no free blocks in the middle of the data file, new rows can be inserted while other threads are reading from the table.
  • The data file and index file can be placed in different directories on different physical devices for more speed, using the DATA DIRECTORY and INDEX DIRECTORY table options of CREATE TABLE (see the sketch after this list).
  • Large files (up to 63-bit file length, around 256TB) are supported.
  • BLOB and TEXT columns can be indexed, and NULL values are allowed in indexed columns (taking 0 to 1 bytes per key).
  • Each character column can have a different character set.
  • It supports a true VARCHAR type: a VARCHAR column starts with a length stored in one or two bytes, and tables with VARCHAR columns may have fixed or dynamic row length.
  • It does not support row-level locking or transactions.
  • It uses B-tree indexes only; it does not support T-tree or hash indexes.
  • It supports compressed data and encrypted data, as well as replication, but not foreign keys.
  • It supports full-text search indexes, but not clustered indexes.
  • It uses index caches while storing data, but not data caches.
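
A minimal sketch of the DATA DIRECTORY/INDEX DIRECTORY options mentioned above (the paths are illustrative and must exist and be writable by the MySQL server):

CREATE TABLE emp_archive (
  empno INT,
  ename VARCHAR(50)
) ENGINE = MyISAM
  DATA DIRECTORY  = '/disk2/mysql/data',
  INDEX DIRECTORY = '/disk3/mysql/index';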


Sunday, March 8, 2015

PRAGMA RESTRICT_REFERENCES for restricting package side effects.

PRAGMA RESTRICT_REFERENCES is an instruction (hint) to the compiler that is processed at compile time, not at runtime. It is used to control the side effects of PL/SQL subprograms, which must follow certain rules in terms of transaction control and security.


PRAGMA RESTRICT_REFERENCES is mainly used in PL/SQL packages; it asserts that a subprogram does not read or write database tables or package variables.

PRAGMA RESTRICT_REFERENCES can appear only in a package specification or object type specification. Typically, this pragma is specified for functions. If a function calls procedures, specify the pragma for those procedures also.

From Oracle 11g, PRAGMA RESTRICT_REFERENCES is deprecated; Oracle recommends using DETERMINISTIC and PARALLEL_ENABLE instead (a sketch appears at the end of this topic).

  • RNDS – Reads No Database State: the function does not read or query database tables.
  • RNPS – Reads No Package State: the function does not read or reference package variables.
  • WNDS – Writes No Database State: the function does not modify database tables.
  • WNPS – Writes No Package State: the function does not modify package variables.
  • TRUST – the function can be trusted not to violate one or more rules; used mainly when C or Java routines are called from PL/SQL.

Without the pragma, the package compiles successfully, but a violation may only show up as a runtime error, and developers then need to change the code:

ORA-14551: cannot perform a DML operation inside a query

With the pragma in place, the violation is caught at compile time instead, so developers must fix the code before the package body will compile:

PLS-00452: Subprogram 'send_emp_alloc' violates its associated pragma
  
CREATE OR REPLACE PACKAGE pkg_transfergoods
IS
function send_emp_alloc(p_empno integer) return number;    
PRAGMA restrict_references(send_emp_alloc, RNDS, RNPS, WNDS, WNPS);
END pkg_transfergoods;
/
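
A hypothetical package body like the one below (the emp_alloc_log table is invented purely for illustration) would then fail at compile time with PLS-00452, because the function performs an INSERT while the pragma asserts WNDS:

CREATE OR REPLACE PACKAGE BODY pkg_transfergoods
IS
  FUNCTION send_emp_alloc(p_empno INTEGER) RETURN NUMBER
  IS
  BEGIN
    INSERT INTO emp_alloc_log (empno) VALUES (p_empno);  -- violates WNDS
    RETURN 1;
  END send_emp_alloc;
END pkg_transfergoods;
/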

To apply PRAGMA RESTRICT_REFERENCES to all functions and subprograms in a package, use the keyword DEFAULT in place of a subprogram name. In the example below it asserts WNDS (Writes No Database State) for every subprogram in the package, i.e. none of them may modify database tables.

CREATE OR REPLACE PACKAGE pkg_emp_alloc
IS
function send_emp_alloc(p_empno integer) return number;
function get_emp_name(p_empno integer) return varchar2;
PRAGMA restrict_references(DEFAULT, WNDS);
END pkg_emp_alloc;
/
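
Regarding the deprecation note above, here is a minimal sketch of declaring DETERMINISTIC and PARALLEL_ENABLE directly on a function instead of using the pragma (the function name and logic are illustrative):

CREATE OR REPLACE FUNCTION calc_bonus(p_sal NUMBER)
RETURN NUMBER
DETERMINISTIC
PARALLEL_ENABLE
IS
BEGIN
  RETURN p_sal * 0.10;  -- pure computation: no table or package state is read or written
END calc_bonus;
/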


What are the major differences between traditional Import/Export and Data Pump Import/Export?


  • Traditional EXP/IMP run as separate client utilities, whereas Data Pump EXPDP/IMPDP run as Data Pump jobs: database server processes read and write the dump files through database directory objects, which enforces a security model for DBAs.
  • Traditional EXP/IMP use a single data stream; EXPDP/IMPDP can take advantage of the server's parallel processes to read or write multiple data streams simultaneously.
  • EXP/IMP have no real interactive command-line mode, whereas EXPDP/IMPDP have a powerful interactive command-line mode that allows the user to monitor and control a running job.
  • EXP/IMP cannot be disconnected and reconnected; once started, they must run to completion. Data Pump jobs run entirely on the server, so you can detach, reconnect, pause and resume them.
  • EXP/IMP cannot transfer data directly between two databases; Data Pump can pass data between two databases over a database link, without creating a dump file on disk.
  • EXP/IMP use a sequential path access method, whereas Data Pump uses the direct path access method.
  • EXP/IMP operate on a single file; Data Pump can operate on a group of files.
  • EXP/IMP can access files on both server and client, whereas Data Pump exports/imports files only through database directory objects (see the sketch after this list).
  • Traditional EXP/IMP support sequential media such as tape drives; Data Pump does not support exporting to or importing from sequential media.
  • EXP/IMP work byte by byte, whereas Data Pump works block by block; Data Pump dump files can also be compressed and encrypted.
  • EXP/IMP do not support XML schemas and XMLType data; Data Pump does.
  • EXP/IMP work with all Oracle versions, whereas Data Pump works with Oracle 10g and later.
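
A minimal sketch of the directory-object security model and the parallel/network features mentioned above (the directory path, schema names, passwords and database link name are all illustrative):

-- As a DBA: create a directory object and grant access to it
CREATE DIRECTORY dp_dir AS '/u01/app/oracle/dpdump';
GRANT READ, WRITE ON DIRECTORY dp_dir TO scott;

-- At the OS prompt: parallel schema export through the directory object
expdp scott/tiger DIRECTORY=dp_dir DUMPFILE=scott_%U.dmp LOGFILE=scott_exp.log SCHEMAS=scott PARALLEL=4

-- At the OS prompt: import over a database link, without a dump file on disk
impdp scott/tiger NETWORK_LINK=remote_db SCHEMAS=scott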


Sunday, October 19, 2014

What are Materialized Views & Materialized View Logs?

A materialized view is a database object that, like a table, contains the results of a query. Materialized views are typically local copies of data located at a remote site, or summary tables based on aggregations of a table's data. Materialized views that store data based on remote tables are also referred to as snapshots.

A materialized view can query tables, views, and other materialized views. Collectively these are called master tables (a replication term) or detail tables (a data warehouse term).

For replication purposes, materialized views allow you to maintain read-only copies of remote data on your local server. To make the local copies updatable, you must use Oracle's Advanced Replication features. You can select data from a materialized view just as you would from a local table or view.

To support fast refresh, materialized views rely on materialized view logs (snapshot logs), which must be created on the master tables:

CREATE MATERIALIZED VIEW LOG ON emp;

Primary key materialized views are created with the WITH PRIMARY KEY clause:

CREATE MATERIALIZED VIEW mv_emp_pk
REFRESH FAST START WITH SYSDATE 
NEXT  SYSDATE + 1/48
WITH PRIMARY KEY 
AS SELECT * FROM emp@remote;

Rowid materialized views are created with the WITH ROWID clause:

CREATE MATERIALIZED VIEW mv_emp_rowid
REFRESH WITH ROWID 
AS SELECT * FROM emp@remote;

Subquery materialized views are created using a subquery:

CREATE MATERIALIZED VIEW  mv_empdept
AS SELECT * FROM emp@remote e
WHERE EXISTS
     (SELECT * FROM dept@remote d
     WHERE e.deptid = d.deptid);

For data warehousing purposes, materialized views are commonly aggregate views, single-table aggregate views, and join views.

CREATE MATERIALIZED VIEW  mv_empdept_dwh
AS SELECT a.* FROM emp@remote a, dept@remote b
WHERE a.deptid = b.deptid
and a.deptid < 1000;

COMPLETE refreshes by recalculating the defining query of the materialized view.

FAST refreshes by incrementally applying the changes recorded in the materialized view logs. For local materialized views, Oracle chooses the refresh method that the optimizer estimates to be most efficient.

FORCE attempts a FAST refresh first; if that is not possible, it does a COMPLETE refresh. The refresh methods considered are log-based FAST and COMPLETE.
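
A refresh can also be requested manually through the DBMS_MVIEW package (a minimal sketch using the mv_emp_pk view created above; 'C', 'F' and '?' request COMPLETE, FAST and FORCE respectively):

BEGIN
  DBMS_MVIEW.REFRESH('mv_emp_pk', method => 'C');  -- complete refresh
  DBMS_MVIEW.REFRESH('mv_emp_pk', method => 'F');  -- fast (incremental) refresh
  DBMS_MVIEW.REFRESH('mv_emp_pk', method => '?');  -- force: fast if possible, else complete
END;
/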


NULLS LAST & NULLS FIRST in Select Statement.

In modern DBMSs such as Oracle, PostgreSQL, DB2, Firebird, Apache Derby, HSQLDB, H2, etc., you can specify NULLS LAST or NULLS FIRST in the ORDER BY clause of a SELECT statement.

These clauses are used when the output must be sorted on a column that contains NULL values; they control where the NULL rows appear:

NULLS LAST - sorts the NULL values to the end of the result set (Oracle's default for ascending order).
NULLS FIRST - sorts the NULL values to the beginning of the result set (Oracle's default for descending order).

select * from emp order by emp_name DESC NULLS LAST; 

select * from emp order by emp_name DESC NULLS FIRST; 
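
The clauses combine with ASC as well; for example, the query below overrides Oracle's ascending default (NULLS LAST) to show the NULL names first:

select * from emp order by emp_name ASC NULLS FIRST;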

What are Global Temporary Tables in Oracle?


Global Temporary Tables (GTTs), introduced in Oracle 8i, are essentially private tables: data inserted by a session is stored for, and visible to, that session only.
They are widely used in large applications that make extensive use of temporary data storage, preserving data either for the whole session or just for the current transaction.

CREATE GLOBAL TEMPORARY TABLE GTT_TEMP  
(
AAAA  Number,
BBBB  Varchar2(10),
CCCC  Number
)
ON COMMIT [DELETE | PRESERVE] ROWS;

ON COMMIT DELETE ROWS: sets the life of the data contained in the table to a single TRANSACTION.
The data is automatically removed after each COMMIT or ROLLBACK; such tables are transaction-specific temporary tables.

ON COMMIT PRESERVE ROWS: restricts the life of the data to a single SESSION.
The data is preserved in the table for the duration of the session only; such tables are session-specific temporary tables.
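
For example, assuming GTT_TEMP was created with ON COMMIT DELETE ROWS, the inserted rows disappear as soon as the transaction ends (a minimal sketch):

INSERT INTO gtt_temp (aaaa, bbbb, cccc) VALUES (1, 'TEST', 100);

SELECT COUNT(*) FROM gtt_temp;   -- returns 1 within the same transaction

COMMIT;

SELECT COUNT(*) FROM gtt_temp;   -- returns 0: the rows were removed at COMMIT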

Below are the major features that make Global Temporary Tables useful in large applications:


  • If the TRUNCATE statement is issued against a temporary table, only the session-specific data is truncated; there is no effect on the data of other sessions.
  • Data in temporary tables is automatically deleted at the end of the database session, even if it ends abnormally.
  • Indexes can be created on temporary tables; the content and scope of an index are the same as those of the table data (per session).
  • Views can be created against temporary tables and combinations of temporary and permanent tables.
  • Triggers can also be associated with such tables.
  • The Export and Import utilities can be used to transfer the table definitions, but no data rows are processed.
  • Table statistics of temporary tables are common to all sessions.
  • Foreign key constraints are not applicable to temporary tables.
  • Global Temporary Tables cannot be partitioned.
  • Data in Global Temporary Tables is stored in temp segments in the temporary tablespace only.
  • In Oracle 8i, 9i and 10g, a TABLESPACE could not be defined for a GTT, i.e. GTT segments were created in the user's default temporary tablespace.
  • From Oracle 11g onward, GTT segments can be created in other temporary tablespaces too, but the tablespace must be a temporary tablespace, not any other kind (see the sketch after this list).
  • Use of the LOB_STORAGE_CLAUSE, LOGGING/NOLOGGING, MONITORING/NOMONITORING and LOB_INDEX_CLAUSE is restricted in a GTT definition.
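
A minimal sketch of assigning a GTT to a specific temporary tablespace from 11g onward (my_temp_ts is an assumed temporary tablespace name):

CREATE GLOBAL TEMPORARY TABLE gtt_temp_ts
(
  aaaa NUMBER
)
ON COMMIT PRESERVE ROWS
TABLESPACE my_temp_ts;   -- must be a TEMPORARY tablespace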


PIVOT & UNPIVOT functions in Oracle 11g

The PIVOT function was introduced in Oracle 11g to produce query output in a matrix (cross-tab) style, similar to a pivot table in Excel (xls/xlsx).

In Oracle 10g, a grouped query returns one row per department and job:

select deptid,jobid,sum(sal) from emp
group by deptid,jobid
order by deptid,jobid;

To get the output of that query in Excel-pivot style, DECODE can be used:

select deptid, 
sum(decode(jobid, 101, sal)) "Analyst",
sum(decode(jobid, 102, sal)) "Programmer",
sum(decode(jobid, 103, sal)) "DBA",
sum(decode(jobid, 104, sal)) "Developer"
from emp
group by deptid
order by deptid;

In Oracle 11g, the same output can be produced with the PIVOT clause:

select * from (select deptid, jobid, sal from emp)
PIVOT (sum(sal) FOR jobid
IN (
101 AS "Analyst",
102 AS "Programmer",
103 AS "DBA",
104 AS "Developer"
)) order by deptid;

In the same way, the UNPIVOT clause turns pivoted columns back into rows; applied to the pivoted result above, it reproduces the shape of the first (grouped) query:

select * from (
  select * from (select deptid, jobid, sal from emp)
  PIVOT (sum(sal) FOR jobid IN (101 AS "Analyst", 102 AS "Programmer", 103 AS "DBA", 104 AS "Developer"))
)
UNPIVOT (sal FOR jobid
IN ("Analyst" AS 101, "Programmer" AS 102, "DBA" AS 103, "Developer" AS 104))
order by deptid;