Wednesday, December 2, 2015

Difference between B*Tree and Bitmap Index in Oracle.

A B*Tree index and a Bitmap index are structured very differently, but functionally they serve the same purpose: retrieving rows faster and avoiding full-table scans. These indexes are used mainly for performance tuning, which results in retrieving data quite fast.

B*Tree Index vs Bitmap Index

B*Tree index: A type of index that uses a balanced tree structure for efficient record retrieval.
Bitmap index: A type of index that uses a string of bits to quickly locate rows in a table.

B*Tree index: Stores key data in ascending or descending order and is very useful for OLTP.
Bitmap index: Normally used to index low-cardinality columns in a data warehouse environment; useful for Decision Support Systems.

B*Tree index: Used most when the cardinality is high; it is the default index type.
Bitmap index: Used where there is a lot of duplicate data in the indexed column (for example, Gender).

B*Tree index: Does not include any specific keyword while creating it:
create index person_region on person (region);
Bitmap index: Includes the "bitmap" keyword while creating it:
create bitmap index person_region on person (region);

B*Tree index: Created on columns that contain very unique values.
Bitmap index: Generally consumes less space, as it is stored in the database in a highly compressed index format.

B*Tree index: Very useful for speeding up searches in OLTP applications, where you work with very small data sets at a time and most queries filter by ID.
Bitmap index: Used on tables with low insert/update/delete (DML) activity. Updating a bitmap index takes a lot of resources, so bitmap indexes are best for largely read-only tables and tables that are batch updated nightly.

B*Tree index: Has index nodes (based on data block size) arranged in a tree form.
Bitmap index: Internally consists of 4 columns: the index value, the start rowid and end rowid of the covered range, and the bitmap itself.

B*Tree index: Stores the index value and the physical rowid of the row; the index values are arranged in the form of leaves.
Bitmap index: Looks like a two-dimensional array with zero and one (bit) values.

B*Tree index: All the lower values are placed on the left side and higher values on the right side.
Bitmap index: A single bitmap entry can cover a few thousand rows in a single block. When you update the indexed column, Oracle takes an exclusive lock on that index slot for the duration of the transaction.

B*Tree index: A regular B*Tree index covers maybe a few hundred table rows in a single index leaf block, and an update affects just the actual row being updated, because each slot in the index covers only a single row. It is made of branch nodes and leaf nodes: branch nodes hold a prefix key value along with the link to the leaf node, and the leaf node in turn contains the indexed value and the rowid.
Bitmap index: Each slot covers a range of rowids, so more table rows are locked. The chances of two different processes colliding (and deadlocking) when they are doing bulk updates are increased.
A B*Tree index is a good choice for most uses:
  • It maintains the sort order of the data, making it easy to look up range data.
  • Multicolumn indexes: you can use the leading-edge columns to resolve a query, even if that query doesn't reference all columns in the index (see the sketch after this list).
  • They automatically stay balanced.
  • Performance remains relatively constant.
  • You can also specify reverse and unique indexes.
  • They are very fast when you are selecting just a very small subset of the index data.
  • They work better when you have a lot of distinct indexed values.
  • Combining several B*Tree indexes can be done, but simpler approaches are often more efficient.
  • They are not useful when there are few distinct values for the indexed data, or when you want to get a large subset of the data.
  • Each B*Tree index imposes a small penalty when inserting/updating values in the indexed table. This can be a problem if you have a lot of indexes on a very busy table.
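A minimal sketch of the leading-edge behaviour of a multicolumn B*Tree index, reusing the person table from above (the city column and the literal values are only assumed examples):

create index person_region_city on person (region, city);

-- can use the index: the leading column (region) appears in the predicate
select * from person where region = 'WEST';

-- can use the full index: both columns appear in the predicate
select * from person where region = 'WEST' and city = 'PUNE';

-- normally cannot use the index efficiently, because the leading column is missing
-- (unless the optimizer chooses an index skip scan)
select * from person where city = 'PUNE';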
A Bitmap index is used in the scenarios below:
  • It is a more specialized index variant.
  • Use it to index columns that contain a relatively small number of distinct values.
  • They are compact, saving space.
  • They were designed for query-intensive databases, so they are not recommended for OLTP databases.
  • They are not good for range scans.
  • They are available only in Enterprise Edition.
  • They should not be created on transaction tables to which data is continuously being added, because they are very inefficient when inserting/updating values.
  • They encode indexed values as bitmaps and so are very space efficient.
  • Database optimizers can combine several bitmap indexes very easily, which allows for efficient execution of complex filters in queries.
  • They are mostly used in data warehouse applications, where the database is read only except for the ETL processes and you usually need to execute complex queries against a star schema; bitmap indexes can speed up filtering based on conditions in your dimension tables, which do not usually have too many distinct values.
  • Bitmap indexes are not appropriate for tables that have lots of single-row DML operations (inserts), and especially concurrent single-row DML operations. Deadlock situations can result from concurrent inserts, as the following sketch shows: open two windows, one for Session 1 and one for Session 2.
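A minimal sketch of such a deadlock, assuming a sales table with a bitmap index on region (all names and values are illustrative):

create table sales (id number, region varchar2(10));
create bitmap index sales_region_bix on sales (region);

-- Session 1
insert into sales values (1, 'EAST');   -- locks the bitmap chunk covering 'EAST'

-- Session 2
insert into sales values (2, 'WEST');   -- locks the bitmap chunk covering 'WEST'

-- Session 1
insert into sales values (3, 'WEST');   -- waits for Session 2

-- Session 2
insert into sales values (4, 'EAST');   -- waits for Session 1; one session can then fail with ORA-00060 (deadlock detected)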

Monday, November 30, 2015

SQL execution steps in Oracle.

Below are the steps involved when a SQL statement executes in Oracle.
Syntax Checking Phase
  • Checks whether all the keywords are present ("select . . . from", etc.).
  • Performs a semantic check against the data dictionary of the database.
  • Checks whether the table names used are spelled correctly and are present in the data dictionary.

Parsing Phase
  • The parse call itself may not return an error even if the statement is not syntactically correct; such errors can also surface later, at execution time.
  • All possible ways of executing the query are costed by the CBO, and the execution plan with the lowest cost is identified and generated.
  • This step uses the database client/server cache, the SGA, the database datafiles, etc. - all the possible places from which data can come.
  • This is where the table - tablespace - datafile translation occurs.
  • Once the execution plan is created, it is stored in the shared pool (library cache) to facilitate re-execution. There are two types of parses:

Hard parse :
  • A new SQL statement must be parsed from scratch.
  • Parsing can be a very expensive operation that takes a lot of resources to execute when there is no previously parsed version of the SQL to reuse.
Soft parse :
  • A reused SQL statement where the only unique feature is the host (bind) variables.
  • The best-case scenario is an execute-to-parse ratio of 100%, which would indicate that the application is fully using bind or host variables.
  • The idea is to parse the SQL once and execute it many times.
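A small sketch of the difference (the emp table and emp_id column are only placeholders):

-- every execution has different literal text, so each statement is parsed from scratch (hard parse)
select * from emp where emp_id = 101;
select * from emp where emp_id = 102;

-- the statement text stays identical and only the bind value changes,
-- so the parsed form in the library cache can be reused (soft parse)
select * from emp where emp_id = :emp_id;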

Bind Phase
  • In this phase, the bind (host) variable values are attached to the lowest-cost execution plan chosen above.
  • Once the execution plan is created, Oracle gathers the parameters from the client/application for the execution; the addresses of the program/host/bind variables are made known to Oracle.

Execute Phase
  • During the execute phase, Oracle executes the statement, reports any possible errors and, if everything is normal, produces the result set. For statements other than queries, this is the last step of the execution.

Define Phase
  • The Oracle OCI interface defines the addresses of the output variables and makes them known to the Oracle process, so that the fetch call knows where to put the output values.

Fetch Phase
  • During the fetch phase, Oracle returns the result set to the application.

Once more, the define and fetch phases are relevant for queries only. The Oracle OCI interface module contains calls to facilitate each of those phases.
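For illustration, here is roughly how these phases map onto a simple JDBC client; the connection details, table and column names below are only placeholders, and the define step is handled internally by the driver:

import java.sql.*;

public class FetchEmp {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
             // parse phase: the statement text is prepared and a plan is looked up or created
             PreparedStatement ps = con.prepareStatement(
                 "SELECT ename FROM emp WHERE empno = ?")) {
            ps.setInt(1, 7788);                       // bind phase: supply the host variable value
            try (ResultSet rs = ps.executeQuery()) {  // execute phase
                while (rs.next()) {                   // fetch phase: rows are returned to the client
                    System.out.println(rs.getString("ename"));
                }
            }
        }
    }
}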

Friday, November 27, 2015

Unix - AWK Command by examples. Part 2

In this part, we will learn on AWK Built in Variables:

FS - Input field separator variable:

By default, AWK assumes that fields in a file are separated by space.
If the fields in the file are separated by any other column separator, we can use the FS variable for that.

Example 5. awk 'BEGIN {FS=":"} {print $2}' test_file
OR
awk -F: '{print $2}' test_file

output will be as below.

39 AAAA
15 BBBB
39 EEEE
39 GGGG

We can also use the -F option to change the column separator directly on the command line. For example, to report /etc/passwd entries with an empty password field (here /etc/passwd is the assumed input file):

awk -F: '{if ($2 == "") print $1 ": no password!"}' /etc/passwd

We can also easily switch from one column separator to another within the same awk script, as below.

#!/bin/awk -f
{
    if ($1 == "#START") {
        FS=":";
    } else if ($1 == "#STOP") {
        FS=" ";
    } else {
        # print the Roman number in column 3
        print $3
    }
}


OFS - Output field separator variable: 

Example 6. awk '{print $4,$5}' test_file

When we use a print statement in awk with comma-separated fields, it uses the default output column separator (a space) while displaying the result.

The output of this command will be
unixrun 0
unixrun 175
unixrun 430
unixrun 428

We can change this default column separator by using the OFS variable as below:

awk 'BEGIN {OFS=":"} {print $4,$5}' test_file

output will be as below.

unixrun:0
unixrun:175
unixrun:430
unixrun:428

If I run the same command in this way (without the comma between $4 and $5), it will not work the same as earlier; no column separator is shown in the output, because $4$5 concatenates the two fields into one string.

awk 'BEGIN {OFS=":"} {print $4$5}' test_file

unixrun0
unixrun175
unixrun430
unixrun428

NF - Number of fields variable:

The NF variable gives the number of fields in the current line of a file.

Example 7. awk '{print NF}' test_file

This will display the number of columns in each row, output as below.

9
9
9
9

NR - number of records variable: 

The NR variable gives the current line number, or the total number of lines in the file when read in the END block.

Example 8. awk '{print NR}' test_file

This will display the line numbers starting from 1.

1
2
3
4

Example 9. awk 'END {print NR}' test_file

This will display the total number of lines in the file; here the output will be:

4

FILENAME - The Current Filename Variable

The last variable known to regular AWK is "FILENAME", which tells you the name of the file.

#!/bin/awk -f
# reports which file is being read
BEGIN {
    f="";
}
{
    if (f != FILENAME) {
        print "reading", FILENAME;
        f=FILENAME;
    }
    print;
}


Unix - AWK Command by examples. Part 1

The AWK command is one of the most powerful tools in Unix, used for processing the rows and columns in a file. AWK has built-in string functions and associative arrays and supports many features of the C language. We can also convert AWK scripts into Perl scripts using the a2p utility, an added advantage of AWK.

awk 'BEGIN {start_action} {action} END {stop_action}' file

The actions in the begin block are performed before processing the file.
The actions in the end block are performed after processing the file.
The rest of the actions are performed while processing the file.

Create a file test_file with the following data. This file can be easily created using the output of ls -l.
-rw-rw-rw- 1 unixrun unixrun   0 Nov  26 21:39 AAAA
-rw-rw-rw- 1 unixrun unixrun 175 Nov  26 21:15 BBBB
-rw-rw-rw- 1 unixrun unixrun 430 Nov  26 21:39 EEEE
-rw-rw-rw- 1 unixrun unixrun 428 Nov  26 21:39 GGGG

Example 1. awk '{print $1}' test_file

$1, $2, $3, ... represent the first, second, third column, ... in a row respectively.
Here, the awk command will print the first column of each row, as shown below.

-rw-rw-rw-
-rw-rw-rw-
-rw-rw-rw-
-rw-rw-rw-

The variable "$0" refers to the entire line that AWK reads in. That is, if you had nine fields in a line,
print $0
is similar to
print $1, $2, $3, $4, $5, $6, $7, $8, $9

To print the 2nd and 5th columns in a file, use awk '{print $2,$5}' test_file

Example 2. awk 'BEGIN {cnt=0} {cnt=cnt+5+$3} END {print cnt}' test_file

This will print the sum of the values computed as (value of the 3rd column + 5).
In the BEGIN block the cnt variable is assigned the value 0.
In the main block, the value (3rd column + 5) is added to the cnt variable.
This is repeated for every row in the file.
When all the rows are processed, the cnt variable holds the sum of the values as per our formula (value of the 3rd column + 5).
Finally, the result is printed in the END block.

We can perform the above steps in another way also - by creating command file.

vi summation.awk
#!/usr/bin/awk -f
BEGIN {cnt=0}
{cnt=cnt+$3+5}
END {print cnt}
:wq

This will create a file named summation.awk; it can be executed using the awk command as below:

awk -f summation.awk test_file

This will run the script in the summation.awk file and display the sum as per our formula (3rd column value + 5, summed over all rows) in test_file.

Example 3. awk '{ if($9 == "time") print $0;}' test_file

The above command is used to find a particular string in a specific column of a file.
Here, it searches for the string "time" in the 9th column (the file name) of the file and prints the entire line in the output.

-rw-r--r-- 1 unixrun unixrun 43 Nov  26 21:39 time

Example 4. awk 'BEGIN { for(i=1;i<=4;i++) print "Cube of", i, "is",i*i*i; }'

This will print the cube of the numbers from 1 to 4. The output of the command is:

Cube of 1 is 1
Cube of 2 is 8
Cube of 3 is 27
Cube of 4 is 64

Monday, November 2, 2015

Constructors in Java.

A constructor in Java is a special type of method that is used to initialize an object. A constructor is invoked at the time of object creation. It constructs the values, i.e. provides data for the object, which is why it is known as a constructor.

A constructor's name must be the same as its class name, and it must have no explicit return type.

There are two types of constructors:
1. Default constructor (non-parameterized constructor)
A default constructor provides the default values to the object, like 0, null etc., depending on the type (see the sketch after this list).

2. Parameterized constructor
A constructor that has parameters is known as a parameterized constructor.
A parameterized constructor is used to provide different values to distinct objects.
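A minimal sketch of both constructor types (the Bike class and its fields are only illustrative):

class Bike
{
    int id;
    String name;
    Bike() {
        // default (non-parameterized) constructor: fields keep their default values, id = 0 and name = null
    }
    Bike(int i, String n) {
        // parameterized constructor: each object gets its own values
        id = i;
        name = n;
    }
    public static void main(String args[]) {
        Bike b1 = new Bike();
        Bike b2 = new Bike(1, "Pulsar");
        System.out.println(b1.id + " " + b1.name);   // prints: 0 null
        System.out.println(b2.id + " " + b2.name);   // prints: 1 Pulsar
    }
}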



Constructor overloading is used in Java when a class has a number of constructors that differ in their parameter lists. The compiler differentiates these constructors by taking into account the number of parameters in the list and their types.

class Car
{
    int id;
    String carname;
    String brand;
    int qty;
    Car(int i, String n) {
        id = i;
        carname = n;
    }
    Car(int i, String n, String a) {
        id = i;
        carname = n;
        brand = a;
    }
    Car(int i, String n, String a, int t) {
        id = i;
        carname = n;
        brand = a;
        qty = t;
    }
    void display()
    {
        System.out.println(id + " " + carname + " " + brand + " " + qty);
    }
    public static void main(String args[]) {
        Car s1 = new Car(101, "Alto");
        Car s2 = new Car(102, "Swift", "Maruti", 25);
        Car s3 = new Car(103, "Swift", "Maruti");
        s1.display();
        s2.display();
        s3.display();
    }
}

Output:

101 Alto null 0
102 Swift Maruti 25
103 Swift Maruti 0

Wednesday, October 14, 2015

Operator Precedence in Java.

Java has well-defined rules that specify the order in which the operators in an expression are evaluated when the expression has several operators.
Below is the precedence order used in Java coding (level 15 binds the tightest, level 1 the loosest).

Level - Operators - Description - Associativity
Level 1  - =  +=  -=  *=  /=  %=  - assignment; addition, subtraction, multiplication, division and modulus assignment - Right
Level 2  - ? :  - ternary conditional - Right
Level 3  - ||  - logical OR - Left
Level 4  - &&  - logical AND - Left
Level 5  - |  - bitwise inclusive OR - Left
Level 6  - ^  - bitwise exclusive OR - Left
Level 7  - &  - bitwise AND - Left
Level 8  - ==  !=  - relational is equal to, is not equal to - Left
Level 9  - <  <=  >  >=  instanceof  - relational less than, less than or equal, greater than, greater than or equal; type comparison (objects only) - Left
Level 10 - <<  >>  >>>  - bitwise left shift, right shift with sign extension, right shift with zero extension - Left
Level 11 - +  -  - addition, subtraction - Left
Level 12 - *  /  %  - multiplication, division, modulus - Left
Level 13 - ++  --  +  -  !  ~  ( type )  - unary pre-increment, pre-decrement, plus, minus, logical negation, bitwise complement, type cast - Right
Level 14 - ++  --  - unary post-increment, post-decrement - Right
Level 15 - ()  []  .  - parentheses, array subscript, member selection - Left
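A small sketch illustrating a few of these rules (the values used are arbitrary):

public class PrecedenceDemo {
    public static void main(String[] args) {
        int a = 2 + 3 * 4;                 // * binds tighter than +, so a = 14
        int b = (2 + 3) * 4;               // parentheses force the addition first, so b = 20
        int c = 10;
        int d = -c + 5;                    // unary minus is applied before the addition, so d = -5
        boolean e = 1 + 1 == 2 && 3 > 2;   // arithmetic, then relational, then &&, so e = true
        System.out.println(a + " " + b + " " + d + " " + e);   // prints: 14 20 -5 true
    }
}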

How to improve efficiency while working in Unix/Linux?

There are various flavors of Unix/Linux currently available in the market. Below are some techniques which can be helpful to you while working in Unix/Linux.

1. Using history command 


It is a Unix/Linux command used to get the list of commands you have run in the shell.

You can use a command such as history | grep "mykeyword" to get the desired list of commands which you have used recently.
HISTSIZE - a Unix/Linux environment variable which specifies how many commands the history can store.

2. Using pipe (|) instead of two commands


Here, the history and grep commands are combined so that the output of history is filtered for the "myword" keyword used in earlier command executions.

3. Use of ! and !! commands


The ! command is used to re-execute a previously used command in Unix/Linux, whether it executed successfully or not.

eg. 
In Unix, after doing ls -lrt myword.txt, if I need to open the file myword.txt I can use a command like vi !$. Here !$ is the last argument of the previous command.
In Linux, if you have the vim editor installed, you can use it like !vim, which re-runs the last command starting with vim.

This method saves 30-35% of your time in critical situations.

Depending on the Unix/Linux flavor and the shell, SSH clients like Putty, Multi-Tabbed Putty, etc. sometimes also require some settings, after which you can recall previous commands with the keyboard Up and Down arrows.

The !! command is used to execute the very last command you executed and is extremely fast to use. It also works on shells in which the Up and Down arrows do not work.


4. Use CTRL+R


It is used for searching the history and repeating the last matching command.

To use this method, press CTRL+R and type "myword", which was used in an earlier command. Unix/Linux will find "myword" and, once the search result is displayed, you just need to press "Enter" to execute that command.

5. Use of aliases


You can define aliases for the commands you use frequently in your bash profile - the .profile or .bashrc file. You can find this file by using the ls -a command in your home directory.

eg. 
"l" which finds all files. 
"l." which finds all hidden files. 
"ls" which finds all files of using option - ls -lrt.
"cdh" which finds all files of using option - cd /home/myyword directory.

6. Using regular expressions in grep and find, and common commands like cd

  
grep and find are two of the best commands provided by Unix/Linux, mainly used to search a file or a directory for certain words, e.g. ERROR or Warning.
eg.
grep "error" *.tmp; grep "warning" *.tmp - using two separate grep commands to find errors and warnings individually.
egrep "error|warning" *.tmp - instead of using two grep commands to find errors and warnings individually, a single command does both.
cd -  - to switch back to the previous directory in Unix/Linux.

7. Use of CTRL+Z, fg and bg commands


Use CTRL+Z to suspend the process currently running in the foreground.

Use fg and bg to resume a suspended command in the foreground or in the background.
eg.
fg 1  - bring background job number 1 to the foreground.
fg 2  - bring background job number 2 to the foreground.
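A small sketch of this workflow (sleep 600 stands in for any long-running command; the exact job-control messages vary by shell):

$ sleep 600              # a long-running command
^Z                       # CTRL+Z suspends it
[1]+  Stopped            sleep 600
$ bg %1                  # resume job 1 in the background
$ jobs                   # list the current jobs
$ fg %1                  # bring job 1 back to the foreground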

Lastly, you have to minimize the key strokes or increase your typing speed in Unix/Linux to get output quicker. You have to learn more and more commands, with their different options and usage; this will reduce your thinking time and increase your productivity.


Some techniques might work in one flavor of Unix/Linux, but the same might not work for you in another.

Friday, October 2, 2015

What is the difference between sleep() and wait() in Java?

sleep() 
  • It is a blocking operation which pauses the current thread for the specified number of milliseconds while keeping hold of any locks on shared objects it has acquired.
  • It is most commonly used for polling during execution.
  • It is used to check for certain results after a specific interval during process execution.
wait() 
  • It simply pauses the current thread until the given number of milliseconds has elapsed or until it receives a notification from another thread, without keeping hold of the lock on the shared object.
  • It is generally used in multi-threaded applications, where it interacts with notify() / notifyAll() to achieve synchronization during process execution.
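A minimal sketch of the difference (the SharedQueue class and its members are only illustrative): wait() releases the lock on the shared object until notify() is called, while Thread.sleep() pauses the current thread and keeps any locks it holds.

class SharedQueue {
    private final Object lock = new Object();
    private volatile int item = -1;

    void produce(int value) {
        synchronized (lock) {
            item = value;
            lock.notify();               // wakes up a thread blocked in wait()
        }
    }

    int consume() throws InterruptedException {
        synchronized (lock) {
            while (item == -1) {
                lock.wait();             // releases 'lock' while waiting for a notification
            }
            return item;
        }
    }

    void pollForItem() throws InterruptedException {
        while (item == -1) {
            Thread.sleep(100);           // just pauses the current thread; any held locks stay held
        }
    }
}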

Friday, March 27, 2015

find command in UNIX/LINUX.

Here, are some find Command examples in Unix/Linux.

To run  last executed find command :
!find will repeat the last find command executed by the user.

root@testenv ~/java : !find
./emp.java
./dept.java

"." says starts finding from current directory and include all sub directory and
"/" says starts finding from root directory 

To find files named test.java:
find -name "test.java"

To find files named anything other than test.java in the test directory:
find /test/ -not -name "test.java"

To find files which have been modified/accessed/changed less than (or more than) a given number of days or minutes ago:

-mtime is used to search files based on last modification time in days.
-atime is used to search files based on last accessed time in days.
-ctime is used to search files based on last changed time in days.

+ sign is used to search for greater than.
- sign is used to search for less than.
without sign is used for exact.

(all the files modified exact 1 day)
find . -mtime 1 

(find all the files modified less than 1 day)
find . -mtime -1

(find all the files modified more than 1 day)
find . -mtime +1

We can use the same with -atime and -ctime for last-accessed and last-changed files.

-mmin is used to search files based on last modification time in minutes.
-amin is used to search files based on last accessed time in minutes.
-cmin is used to search files based on last changed time in minutes.

(find all the files created exact 15 min)
find . -cmin 15

(find all the files created less than 15 min)
find . -cmin -15

(find all the files created more than 15 min – means all other files)
find . -cmin +15

To find all the files and directories which holds the 777 permission:
The -perm option is used to find files based upon permissions.

(find all the files with file permission 777 - rwx for owner, group and others)
find -perm 777

(find all the files with file permission 750 - rwx for owner, r-x for group and no permission for others)
find -perm 750

(find all the files with file permission 444 - r-- for owner, group and others)
find -perm 444

(find all the files with file permission 644 - rw- for owner and r-- for group and others)
find -perm 644

To do a case-insensitive search:
-iname - by default, find searches are case sensitive; -iname ignores case. It is handy when looking for error and exception files in log directories.
find . -iname "error" -print

To delete temporary files:
The -delete option removes the files matched by the search criteria.
find . -name "*.tmp" -delete

Use -print0 (paired with xargs -0) to avoid problems with white space in the path.
find . -name "*.tmp" -print | xargs rm -f
find . -name "*.tmp" -print0 | xargs -0 rm -f

To find all java files which contain the word "Error", with or without sorted output:
find . -name "*.java" -print | xargs grep "Error"
find . -name "*.java" -print | xargs grep "Error" | sort

To find files only in the current directory, without searching sub-directories:

While using the find command I may require only the files, directories and links in the current directory. The -maxdepth option limits how deep find descends, and -type restricts the result to files (f), directories (d) or links (l):
find . -maxdepth 1 -type f -name "*.java"
find . -maxdepth 1 -type d -name "*.java"
find . -maxdepth 1 -type l -name "*.java"

The same search can be extended to two sub-directory levels by specifying the depth of find.

-mindepth  to start matching only from the given depth level in the directory tree.
-maxdepth  to stop descending below the given depth level in the directory tree.

find . -maxdepth 2 -type f -name "*.java"
find . -maxdepth 2 -type d -name "*.java"
find . -maxdepth 2 -type l -name "*.java"

find . -mindepth 2 -maxdepth 10 -type f -name "*.java"

(The -newer option, with a reference file as its argument, can additionally be added to restrict the result to files newer than that file.)

To find files based on size.
The -size option finds files based upon a certain size. This finds all files in the current directory and sub-directories greater than a given size; here the size is 1MB = 1048576 bytes:

find . -size +1048576c -exec ls -l {} \;
find . -size +10M -exec ls -l {} \;

A -c after the number means the size is specified in bytes.
To find files using a range of file sizes, a minus or plus sign can be used before the number.
- (minus sign) means less than
+ (plus sign) means greater than

If I want to find all the files within a range of 5 MB to 10 MB:
find . -size +5242880c -size -10485760c -print
find . -size +5M -size -10M -print

To find files 30 days older and above 10 MB size.
You can use the -mtime and -size options to find files which are 30 days old and greater than 10MB in size:
find . -mtime +30 -size +10485760c -exec ls -l {} \;
find . -mtime +30 -size +10M -exec ls -l {} \;


To find all of the symbolic links in your home directory and print the files your symbolic links point to:
find . -type l -print | xargs ls -ld | awk '{print $NF}'

To find all empty files from the server.
find . -empty

To find the largest and the smallest file in the current directory:

(largest file)
find . -type f -exec ls -s {} \; | sort -n -r | head -1

(smallest file)
find . -type f -exec ls -s {} \; | sort -n -r | tail -1
find . -type f -exec ls -s {} \; | sort -n | head -1

To find all hidden files in a directory:
find . -type f -name ".*"

find -print and find are the same, since -print is the default action of the find command.
find -print0 should be used to avoid any issue with white space in a file name or path while forwarding output to xargs; also use xargs -0 along with find -print0.

find has an option called -delete which can be used in place of -exec rm {} \;

Thursday, March 26, 2015

What is the difference between InnoDB & MYISAM in MYSQL?

InnoDB 1.1 combines greater reliability and performance with usability enhancements compared to InnoDB 1.0.

Since MySQL 5.5, InnoDB is the default storage engine, instead of MyISAM.

A CREATE TABLE statement without an ENGINE= clause creates an InnoDB table.
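A small sketch of the ENGINE= clause (the table and column names are only examples):

CREATE TABLE orders_innodb (
    id INT PRIMARY KEY,
    note VARCHAR(100)
) ENGINE=InnoDB;

CREATE TABLE orders_myisam (
    id INT PRIMARY KEY,
    note VARCHAR(100)
) ENGINE=MyISAM;

-- with no ENGINE= clause, MySQL 5.5 and later creates an InnoDB table by default
SHOW TABLE STATUS LIKE 'orders_innodb';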

It supports DML operations, follows the ACID properties, can commit and roll back long transactions, and supports crash recovery to protect user data.
It supports row-level locking and Oracle-style consistent reads to increase multi-user concurrency and performance.
InnoDB tables arrange your data on disk to optimize queries based on primary keys.
Each InnoDB table has a primary key index, called the clustered index, which organizes the data to minimize I/O.
InnoDB also supports FOREIGN KEY constraints to prevent inconsistencies across different tables.
You can freely mix InnoDB tables with tables from other MySQL storage engines. We can join data from InnoDB and MEMORY tables in a single query.
It is designed for CPU efficiency and maximum performance when processing large data volumes.
It uses B-tree indexes only; it does not support T-tree indexes or hash indexes.
It can store up to 64TB in a table and supports row-level locking.
It can store compressed data and encrypted data, along with replication support and foreign key support.
It uses data caches and index caches while storing data.

Compared to InnoDB, MyISAM tables have the following characteristics:

MyISAM tables are stored with the low byte first, which makes the data machine- and operating-system-independent. Because of that, they are widely used on mainstream machines.
All numeric key values are stored with the high byte first for better index compression.
It can store (2^32)^2 rows in a MyISAM table; the maximum number of indexes per MyISAM table is 64, and a maximum of 16 columns can take part in one index. The maximum key length is 1000 bytes, which can be changed by changing the source and recompiling; for keys longer than 250 bytes, a key block size larger than the default 1024 bytes is used.
It improves space utilization in the index tree: when rows are inserted in sorted order (as with an AUTO_INCREMENT column), the index tree is split in such a manner that the high node contains only one key.
It supports dynamic-sized rows; tables are much less fragmented when mixing deletes with updates and inserts, because deleted blocks are automatically combined with adjacent ones and blocks are extended if the next block is deleted.
MyISAM supports concurrent inserts: if a table has no free blocks in the middle of the data file, you can INSERT new rows into it while other threads are reading from the table.
We can store the data file and the index file in different directories on different physical devices to get more speed, using the DATA DIRECTORY and INDEX DIRECTORY table options of CREATE TABLE.
Large files - up to 63-bit file length and 256 TB in size - are supported.
BLOB and TEXT columns can be indexed, and NULL values can be inserted in indexed columns; a NULL takes 0 to 1 bytes per key.
Each character column can have a different character set.
There is support for a true VARCHAR type: a VARCHAR column starts with a length stored in one or two bytes, and MyISAM tables with VARCHAR columns may have fixed or dynamic row length.
It does not support row-level locking or transactions.
It uses B-tree indexes only; it does not support T-tree indexes or hash indexes.
It can store compressed data and encrypted data, along with replication support, but there is no foreign key support.
It supports full-text search indexes, but not clustered indexes or hash indexes.
It uses index caches while storing data, but does not use data caches.
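A minimal sketch of two of these MyISAM features, a FULLTEXT index and separate data/index directories (the table name and the paths are only examples):

CREATE TABLE articles (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    body TEXT,
    FULLTEXT (body)
) ENGINE=MyISAM
  DATA DIRECTORY = '/data/mysql_data'
  INDEX DIRECTORY = '/data/mysql_index';

-- full-text search against the indexed column
SELECT id FROM articles WHERE MATCH(body) AGAINST('performance');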