Important Topics :
1) Class
2) Object
3) Constructor
4) Member variable or Instance variable
5) Private Constructor
6) Static :
a) Static Variable
b) Static Constructor
c) Static Objects
d) Static Class
7) Access modifiers :
a) Public
b) Protected
c) Internal (C#)
d) Protected Internal (C#)
e) Private
8) Encapsulation
9) Abstraction
10) Inheritance
11) Polymorphism
12) Data hiding
13) Overloading
14) Overriding
15) Abstract (class, method)
16) Virtual
17) Interface
Q)- Difference between static variables and instance variables?
Q)- Difference between Abstraction and Encapsulation?
Q)- Difference between Inheritance and Containership (composition)?
Q)- Difference between data hiding and data abstraction?
Q)- Difference between Overloading and Overriding?
Q)- Difference between Abstract class and interface?
------------------------------------------------------SQL-------------------------------------------------
------------part 1
Database Normalization :
Normalization is the process of organizing data to minimize data redundancy, which in turn ensures data consistency.
Data redundancy problems :
1. disk space wastage
2. data inconsistency
3. DML queries can become slow
ex. Employee(ID, Name, Gender, Salary, DeptID, DeptHead, DeptLocation)
Suppose there are 6000 employees in the IT department. If the head of the IT department changes, we need to update 6000 rows in the Employee table; otherwise the data becomes inconsistent.
First Normal form :
A table is said to be in 1NF if:
1. Data in each column is atomic - no multiple values separated by commas.
2. The table does not contain any repeating column groups.
3. Each record is identified uniquely using a primary key.
ex. DeptName|Employee - we should not enter "rahul,raman,saini" in a single Employee column.
DeptName|Employee1|Employee2|Employee3 - repeating column groups; if a new employee joins, we need to modify the table structure.
2NF :
1. Meets all the conditions of 1NF
2. Move redundant data to a separate table
3. Create relationships between those tables using foreign keys
3NF :
1. Meets all the conditions of 1NF and 2NF
2. Does not contain columns that are not fully dependent on the primary key
Employee(ID, Name, Gender, Salary, AnnualSalary, DeptID)
Here AnnualSalary is not dependent on the primary key - it is derivable from Salary (a transitive dependency), so it should not be stored in the table.
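The employee/department example above, decomposed into 3NF, might look like this (table and column names are illustrative):

```sql
-- Department data moves to its own table; Employee references it by key.
CREATE TABLE tblDepartment
(
    DeptID INT PRIMARY KEY,
    DeptName NVARCHAR(50),
    DeptHead NVARCHAR(50),
    DeptLocation NVARCHAR(50)
);

CREATE TABLE tblEmployee
(
    ID INT PRIMARY KEY,
    Name NVARCHAR(50),
    Gender NVARCHAR(10),
    Salary DECIMAL(10, 2), -- AnnualSalary is derived from Salary, so it is not stored
    DeptID INT FOREIGN KEY REFERENCES tblDepartment(DeptID)
);
```

Now a change of department head is a single-row update on tblDepartment instead of thousands of updates on tblEmployee.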
PIVOT IN SQL :
PIVOT is a SQL Server operator that can be used to turn unique values from one column into multiple columns in the output, thereby effectively rotating a table.
tblProductSale(salesagent,salescountry,salesamount)
output : result(salesagent,country1,country2,country3)
select salesagent, [india], [US], [UK]
from tblProductSale
PIVOT
(
    sum(salesamount)
    for salescountry in ([india],[US],[UK])
) as pivottable
if table tblProductSale has id column
tblProductSale(id,salesagent,salescountry,salesamount)
then the above pivot query will not give the expected result, because the id column ends up in the implicit group by.
select salesagent, [india], [US], [UK]
from
(
    -- derived table: select only the columns needed, excluding id
    select salesagent, salescountry, salesamount from tblProductSale
) as sourceTable
PIVOT
(
    sum(salesamount)
    for salescountry in ([india],[US],[UK])
) as pivottable
this will give expected result
Transaction :
A transaction is a group of commands that change the data stored in the database. A transaction is treated as a single unit: it ensures that either all of the commands succeed, or none of them do. If one of the commands in the transaction fails, all of the commands fail, and any data that was modified in the database is rolled back. In this way transactions maintain the integrity of data in the database.
begin transaction
    process database commands
    check for errors
    if an error occurred
        rollback the transaction
    else
        commit the transaction
The default isolation level is read committed, which means the database reads only committed data. So if a transaction is uncommitted or still running, you can't read the rows it has modified.
set transaction isolation level read uncommitted
Now you can read uncommitted data.
begin try
    begin transaction
        -- database commands here
    commit transaction
end try
begin catch
    rollback transaction
end catch
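A fuller sketch of the pattern above, assuming a hypothetical tblAccount table:

```sql
begin try
    begin transaction
        -- transfer 100 from account 1 to account 2;
        -- both updates succeed or both are rolled back
        update tblAccount set balance = balance - 100 where id = 1;
        update tblAccount set balance = balance + 100 where id = 2;
    commit transaction
end try
begin catch
    if @@TRANCOUNT > 0
        rollback transaction;
    -- surface the error details to the caller
    declare @msg nvarchar(2048) = ERROR_MESSAGE();
    raiserror(@msg, 16, 1);
end catch
```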
SUBQUERY VS INNERJOIN
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS -- CLEARS QUERY CACHE
GO
DBCC FREEPROCCACHE -- CLEAR EXECUTION PLAN CACHE
According to MSDN, in most cases there is usually no performance difference between queries that use subqueries and equivalent queries using joins.
According to MSDN, in some cases where existence must be checked, a join produces better performance; otherwise, the nested query must be processed for each result of the outer query. In such cases, a join approach yields better results.
Everything depends on the execution plan. Rather than going by theory, turn on client statistics and the actual execution plan to see the performance of each option, and then make a decision.
select id, name, [desc] from tblProduct
where not exists (select * from tblProductSales where tblProductSales.productid = tblProduct.id)
Cursor in sql :
If we need to process data row by row, a cursor can be used, but cursors are very bad for performance and should be avoided. A cursor can often be replaced with set-based operations using joins.
A cursor is a pointer to a row.
declare @productid int
declare @name varchar(100)

declare productcursor cursor for
    select id, name from tblProduct where id <= 1000

open productcursor
fetch next from productcursor into @productid, @name
while (@@FETCH_STATUS = 0)
begin
    -- process the current row here
    fetch next from productcursor into @productid, @name
end
close productcursor
deallocate productcursor
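As noted above, the row-by-row loop can usually be replaced with a single set-based statement. For example, a hypothetical per-row discount could be applied with one UPDATE joined to another table instead of a cursor (column names are illustrative):

```sql
-- set-based replacement for a cursor loop: update every matching
-- row in one statement instead of fetching rows one at a time
update p
set p.price = p.price * 0.9
from tblProduct p
join tblProductSales s on s.productid = p.id
where p.id <= 1000;
```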
Types of cursors :
1. Forward only
2. Static
3. Keyset
4. Dynamic
Concurrency problems in sql :
1.Dirty read
2.Lost Updates
3.Nonrepeatable reads
4.Phantom reads
Transaction isolation level in sql :
1.read uncommitted
2.read committed
3.repeatable read
4.snapshot
5.serializable
Isolation level  | Dirty read | Lost update | Nonrepeatable read | Phantom read
Read uncommitted | Yes        | Yes         | Yes                | Yes
Read committed   | No         | Yes         | Yes                | Yes
Repeatable read  | No         | No          | No                 | Yes
Snapshot         | No         | No          | No                 | No
Serializable     | No         | No          | No                 | No
Dirty read : A dirty read happens when one transaction is permitted to read data that has been modified by another transaction that has not yet been committed.
In most cases this would not cause a problem. However, if the first transaction is rolled back after the second reads the data, the second transaction has dirty data that does not exist anymore.
Read uncommitted is the only isolation level that has the dirty read side effect; it is the least restrictive of all the isolation levels. Another option to read dirty data is to use the NOLOCK table hint.
set transaction isolation level read uncommitted
select * from tblInventory where id = 1
The above query is equivalent to
select * from tblInventory with (nolock) where id = 1
Lost update problem :
The lost update problem happens when 2 transactions read and then update the same data.
--transaction 1
begin transaction
    declare @iteminstock int
    select @iteminstock = iteminstock from tblInventory where id = 1
    waitfor delay '00:00:10'
    set @iteminstock = @iteminstock - 1
    update tblInventory set iteminstock = @iteminstock where id = 1
    print @iteminstock
commit transaction
--transaction 2
begin transaction
    declare @iteminstock int
    select @iteminstock = iteminstock from tblInventory where id = 1
    waitfor delay '00:00:01'
    set @iteminstock = @iteminstock - 1
    update tblInventory set iteminstock = @iteminstock where id = 1
    print @iteminstock
commit transaction
Because transaction 1 waits longer, it commits last and overwrites the change made by transaction 2, so the table ends up with 9 in stock rather than the expected 8 (initially the table had 10).
If you run both transactions at the repeatable read isolation level, one of them will be chosen as a deadlock victim and rolled back with a 1205 error, so the lost update is prevented.
Non repeatable read :
non repeatable read happens when one transaction reads the same data twice and another transaction updates that data in between the first and second read of transaction 1.
Phantom read :
A phantom read happens when one transaction executes a query twice and gets a different number of rows in the result set each time. This happens when a second transaction inserts a new row that matches the where clause of the query executed by the first transaction.
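A two-session sketch of a non-repeatable read, assuming the tblInventory table used earlier with 10 items in stock initially:

```sql
-- Session 1: reads the same row twice with a delay in between
begin transaction
    select iteminstock from tblInventory where id = 1  -- sees the original value
    waitfor delay '00:00:10'
    select iteminstock from tblInventory where id = 1  -- sees a different value: non-repeatable read
commit transaction

-- Session 2: runs during session 1's delay
update tblInventory set iteminstock = 9 where id = 1
```

Under read committed, session 1 sees two different values; at repeatable read or higher, session 2's update is blocked until session 1 commits.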
repeatable read vs serializable isolation :
Repeatable read prevents only the non-repeatable read problem. The repeatable read isolation level ensures that data one transaction has read is prevented from being updated or deleted by any other transaction, but it does not prevent new rows from being inserted by other transactions, resulting in the phantom read concurrency problem.
serializable prevents both non-repeatable read and phantom read problems.
The serializable isolation level ensures that the data one transaction has read is prevented from being updated or deleted by any other transaction, and it also prevents new rows from being inserted by other transactions, so this isolation level prevents both non-repeatable read and phantom read problems.
snapshot isolation :
difference b/w serializable and snapshot isolation level :
Serializable isolation is implemented by acquiring locks, which means the resources are locked for the duration of the current transaction. This isolation level doesn't have any concurrency side effects, but at the cost of a significant reduction in concurrency.
Snapshot isolation doesn't acquire locks; it maintains row versioning in tempdb. Since snapshot isolation does not lock resources, it can significantly increase the number of concurrent transactions while providing the same level of data consistency as serializable isolation does.
read commited snapshot isolation level :
This is not a different isolation level; it is a different way of implementing the read committed isolation level. One problem with read committed is that it blocks a transaction that is trying to read data that another transaction is updating at the same time.
To use READ_COMMITTED_SNAPSHOT, enable it at the database level:
alter database SampleDB set READ_COMMITTED_SNAPSHOT on
Difference between snapshot isolation and read committed snapshot :
Read committed snapshot                                   | Snapshot isolation
No update conflicts (second writer completes after first) | Vulnerable to update conflicts
Works with existing applications without changes          | Application changes may be required
Can be used with distributed transactions                 | Cannot be used with distributed transactions
Provides statement-level read consistency                 | Provides transaction-level read consistency
In snapshot isolation, a read statement always returns the last value committed before the transaction started; if another transaction updates the value after your transaction starts, you will not see that new value for the duration of your transaction. In read committed snapshot, each statement sees the most recently committed value - statement-level consistency.
---------------------------------------part2
Count_Big
Schema binding option : you can't change the underlying objects in a way that could affect a schema-bound function, stored procedure or view.
-----------------------------------
Deterministic and nondeterministic function
Encryption function : use the WITH ENCRYPTION keyword before the AS keyword in the function definition.
schema binding function :
Scenario - we have a Student table; create a function that returns the name of a student given an id. Now delete the table and execute the function: you will get an 'Invalid object name' error.
Having a function that references a table while still being allowed to delete that table is not good practice.
If you use schema binding in your function, you can't change the underlying objects in any way that would affect the function.
When using schema binding, the referenced objects (tables) must use two-part names, like dbo.TableName.
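A sketch of the scenario above (table and function names are illustrative):

```sql
-- With SCHEMABINDING, dropping or altering dbo.Student in a way
-- that affects this function is rejected by SQL Server.
create function dbo.fn_GetStudentName(@id int)
returns nvarchar(50)
with schemabinding
as
begin
    declare @name nvarchar(50)
    select @name = StudentName from dbo.Student where StudentID = @id
    return @name
end
```

With this in place, DROP TABLE dbo.Student fails with an error saying the table is referenced by a schema-bound object.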
Deterministic functions always return the same result any time they are called with a specific set of input values, given the same state of the database.
square(), power(), sum(), avg() and count() - all aggregate functions are deterministic.
Nondeterministic functions may return a different result each time they are called with a specific set of input values, even if the database state they access remains the same.
GetDate() and Current_Timestamp are nondeterministic.
--------------------------------
Views :
Used when you want to provide row- and column-level security.
Used when you want to provide aggregate data instead of detailed data.
Views are just compiled and saved SQL statements.
Used to provide abstraction over table data.
Updatable view...................
By default a view doesn't store any data, so when we insert, update or delete through a view, it makes changes in the base tables, and a view based on multiple tables may not update correctly. To correctly update such a view, instead of triggers are used.
An indexed view in SQL Server is similar to a materialized view in Oracle: it stores data physically.
An indexed view must be created with the SCHEMABINDING option.
If an aggregate function in the select list references an expression, and there is a possibility of that expression becoming NULL, a replacement value should be specified.
If GROUP BY is specified, the view's select list must contain a COUNT_BIG(*) expression.
The base tables in the view should be referenced with two-part names, like dbo.TableName.
After creating an indexed view, it does not store data until you create a unique clustered index on it:
create unique clustered index <index name> on <view name>(<column name>)
After creating the index, the view's data is stored physically.
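Putting the rules above together, a sketch of an indexed view (view, table and column names are illustrative):

```sql
create view dbo.vw_SalesByProduct
with schemabinding
as
select productid,
       sum(isnull(salesamount, 0)) as totalsales, -- NULL-able expression needs a replacement value
       count_big(*) as salescount                 -- required because GROUP BY is used
from dbo.tblProductSales                          -- two-part name is mandatory
group by productid
go

-- the view stores data physically only after this index is created
create unique clustered index IX_vw_SalesByProduct
on dbo.vw_SalesByProduct(productid)
```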
Limitation of view :
You can't pass parameters to a view; a table-valued function is an excellent replacement for a parameterized view.
Rules and defaults cannot be associated with views.
ORDER BY is invalid in a view unless TOP or FOR XML is also specified.
A view cannot be based on temporary tables.
No outer joins or full joins are allowed in an indexed view; use inner joins.
-----------------------------------------------------
Triggers in sql server
-----------------------------------------------------
in sql there are 3 types of triggers.
1. DML triggers
2. DDL triggers
3. Login triggers
DML triggers are fired automatically in response to DML events.
DML triggers can be classified into 2 types:
1. After triggers (also called For triggers)
2. Instead of triggers
An after trigger fires after the triggering action: insert, update and delete statements cause an after trigger to fire after the respective statement completes.
An instead of trigger fires instead of the triggering action: insert, update and delete statements cause an instead of trigger to fire in place of the respective statement's execution.
Update after trigger : you can use both the inserted and deleted magic tables in an update after trigger.
create trigger <name>
on <table name>
for update
as
begin
    select * from deleted
    select * from inserted
end
Here the deleted table contains the old data and the inserted table contains the new data.
Instead of trigger :
If you insert/update through a view and the statement would modify multiple tables, SQL Server throws an error like "view is not updatable because the modification affects multiple base tables". If the insert/update affects only one table it will execute, but it may misbehave, e.g. affect other rows by changing the master table.
This can be handled with an instead of insert trigger.
create trigger <name>
on <table name>
instead of insert
as
begin
    select * from deleted
    select * from inserted
end
When you insert/update through the view/table, the trigger fires instead of the insert operation, so you have to re-specify the insert operation inside the trigger.
An example of where an instead of insert/update trigger can be used is a view over the following query:
alter view v_student
as
select studentid,studentName,gender from dbo.testStudent
inner join dbo.genderMaster on dbo.testStudent.genderID=genderMaster.genderID
If you want to insert/update through this view you have to use an instead of trigger; otherwise you can't.
Use return and raiserror('statement terminated', 16, 1) if a problem occurs; otherwise execute the remaining statements.
ex :
create trigger <name>
on <table name>
instead of insert
as
begin
    declare @deptid int
    select @deptid = deptid
    from tbldepartment
    join inserted on inserted.deptname = tbldepartment.deptname

    if (@deptid is null)
    begin
        raiserror('invalid department name', 16, 1)
        return
    end

    insert into tblemployee(id, name, gender, deptid)
    select id, name, gender, @deptid
    from inserted
end
create trigger <name>
on <table name>
instead of update
as
begin
    if (update(id)) -- update() returns true if the id column is being updated
    begin
        raiserror('id cannot be changed', 16, 1)
        return
    end

    if (update(deptName))
    begin
        declare @deptid int
        select @deptid = deptid
        from tbldepartment
        join inserted on inserted.deptname = tbldepartment.deptname

        if (@deptid is null)
        begin
            raiserror('invalid department name', 16, 1)
            return
        end
        -- re-specify the update against the base tables here
    end
end
You can't delete through a view if the delete affects multiple tables.
------------------------------------------------------------------------------------------------------
Derived tables and common table expressions :
Suppose you want the department name and the number of employees in each department, with the condition that the number of employees is less than 5.
You have several options: create a view, a CTE, a derived table, a table variable or a temp table.
A global temp table is destroyed when the last session referencing the table is closed, and it is visible to other sessions as well.
A local temp table is visible only to the current session, and is shared between nested stored procedures.
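A sketch of the scenario above, once as a CTE and once as a derived table (table and column names are illustrative):

```sql
-- CTE version
with DeptCounts as
(
    select d.deptname, count(e.id) as totalemployees
    from tbldepartment d
    join tblemployee e on e.deptid = d.deptid
    group by d.deptname
)
select deptname, totalemployees
from DeptCounts
where totalemployees < 5;

-- equivalent derived-table version
select deptname, totalemployees
from
(
    select d.deptname, count(e.id) as totalemployees
    from tbldepartment d
    join tblemployee e on e.deptid = d.deptid
    group by d.deptname
) as DeptCounts
where totalemployees < 5;
```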
-------------------------part 3
SQL Deadlock :
In a database, a deadlock occurs when two or more processes each have a resource locked, and each process requests a lock on a resource that another process has already locked. Neither transaction can move forward, as each one is waiting for the other to release its lock.
When a deadlock occurs, SQL Server chooses one of the processes as the deadlock victim and rolls it back, so the other process can move forward.
How sql server detects deadlocks :
The lock monitor thread in SQL Server runs every 5 seconds by default to detect deadlocks. If the lock monitor thread finds deadlocks, the deadlock detection interval drops from 5 seconds to as low as 100 milliseconds, depending on the frequency of deadlocks. If the lock monitor thread stops finding deadlocks, the database engine increases the interval between searches back to 5 seconds.
What happens when a deadlock is detected :
When a deadlock is detected, the database engine ends the deadlock by choosing one of the threads as the deadlock victim. The victim's transaction is rolled back and a 1205 error is returned to the application. Rolling back the victim's transaction releases all locks held by that transaction, which allows the other transactions to become unblocked and move forward.
What is deadlock_priority :
By default, SQL Server chooses as the deadlock victim the transaction that is least expensive to roll back. However, a user can specify the priority of a session in a deadlock situation using the SET DEADLOCK_PRIORITY statement; the session with the lowest deadlock priority is chosen as the deadlock victim.
Example : SET DEADLOCK_PRIORITY NORMAL
DEADLOCK_PRIORITY
1. The default is Normal
2. Can be set to Low, Normal or High
3. Can also be set to an integer value in the range -10 to 10
Low: -5
Normal: 0
High: 5
What is the deadlock victim selection criteria :
1. If the DEADLOCK_PRIORITY values differ, the session with the lowest priority is selected as the victim.
2. If both sessions have the same priority, the transaction that is least expensive to roll back is selected as the victim.
3. If both sessions have the same deadlock priority and the same cost, a victim is chosen randomly.
Logging deadlock in sql server :
When a deadlock occurs, SQL Server chooses one of the transactions as the deadlock victim and rolls it back. There are several ways in SQL Server to track down the queries that are causing deadlocks; one option is to use trace flag 1222 to write the deadlock information to the SQL Server error log.
--set sql server trace flag 1222
DBCC Traceon(1222,-1)
--Check the status of the trace flag
DBCC TraceStatus(1222,-1)
--Turn off the trace flag
DBCC Traceoff(1222,-1)
--To read the error log
execute sp_readerrorlog
The -1 parameter indicates that the trace flag must be set at the global level. If you omit the -1 parameter, the trace flag is set only at the session level.
The deadlock information in the error log has three sections:
Deadlock victim : the ID of the process that was selected as the deadlock victim and killed by SQL Server.
Process list : the list of processes that participated in the deadlock.
Resource list : the list of resources (objects) owned by the processes involved in the deadlock.
Capture deadlock in sql profiler :
catch deadlock using try/catch in sql :
begin catch
    if (Error_Number() = 1205)
    begin
        select 'Deadlock occurred'
    end
end catch
Handling deadlock in ado.net :
catch (SqlException ex)
{
    if (ex.Number == 1205)
    {
        Label1.Text = "Deadlock, transaction failed";
    }
}
How to find blocking query in sql server or query causing deadlock :
The DBCC OPENTRAN command displays only the oldest active transaction; it does not show all open transactions.
A SQL Server process can be killed using
1. SQL Server Activity Monitor
2. The KILL <process_id> command
What happens when you kill a session :
All the work the transaction has done will be rolled back; the database must be put back in the state it was in before the transaction started.
-------------------------------part 4
Except operator :
The EXCEPT operator returns unique rows from the left query that aren't in the right query's results.
The number of columns, the order of the columns, and the data types of the columns must be the same, or at least compatible.
select id,name from table1
except
select id,name from table2
If you want to use ORDER BY, place it after the second query; it is not allowed after the first query.
The EXCEPT operator expects the same number of columns in both queries, whereas NOT IN compares a single column from the outer query with a single column from the subquery.
EXCEPT filters duplicates and returns only DISTINCT rows from the left query that aren't in the right query's results, whereas NOT IN does not filter duplicates.
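One more difference worth knowing: NOT IN behaves unexpectedly when the subquery returns a NULL, whereas EXCEPT treats NULLs as equal. A small sketch (table names illustrative):

```sql
-- if table2.id contains a NULL, this returns no rows at all,
-- because "id <> NULL" evaluates to unknown for every row
select id from table1
where id not in (select id from table2);

-- EXCEPT treats two NULLs as equal, so it still returns
-- the table1 rows that are absent from table2
select id from table1
except
select id from table2;
```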
Intersect vs innerjoin vs union :
The INTERSECT operator retrieves the records common to both the left and right queries of the intersect operator.
1.introduced in sql 2005
2.the number and the order of the columns must be same in both the queries
3.the data types must be same or at least compatible.
select id,name from table1
intersect
select id,name from table2
INTERSECT filters duplicates and returns only DISTINCT rows that are common between the left and right query, whereas an inner join does not filter duplicates.
To make an inner join behave like the intersect operator, use the DISTINCT keyword.
Inner joins treat two NULLs as two different values. So if you are joining two tables on nullable columns and both tables have NULLs in the joining column, the inner join will not include those rows in the result set, whereas INTERSECT treats two NULLs as the same value and returns all matching rows.
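The inner-join equivalent of the INTERSECT query above, assuming the same table1/table2:

```sql
-- DISTINCT makes the join behave like INTERSECT for non-NULL values;
-- rows where the joining columns are NULL in both tables are still missed
select distinct t1.id, t1.name
from table1 t1
inner join table2 t2
    on t1.id = t2.id and t1.name = t2.name;
```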
Cross apply and outer apply :
The APPLY operator is used to join a table to a table-valued function.
CROSS APPLY returns only matching rows (like an inner join).
OUTER APPLY returns matching + non-matching rows (like a left outer join).
A table-valued function cannot be used in an ordinary join when its parameter comes from the other table:
select d.deptid, e.name, e.gender, e.salary
from department d
inner join fn_GetEmployeeByDepartmentId(d.deptid) as e
    on d.deptid = e.deptid
This gives an error, so we use APPLY instead. No join condition is needed, because the function is called once for each row of the outer table:
select d.deptid, e.name, e.gender, e.salary
from department d
cross apply fn_GetEmployeeByDepartmentId(d.deptid) as e
In place of a left join we can use OUTER APPLY.
select d.deptid, e.name, e.gender, e.salary
from department d
left join fn_GetEmployeeByDepartmentId(d.deptid) as e
    on d.deptid = e.deptid
-- becomes
select d.deptid, e.name, e.gender, e.salary
from department d
outer apply fn_GetEmployeeByDepartmentId(d.deptid) as e
DDL trigger :
DDL triggers fire in response to DDL events - CREATE, ALTER and DROP (table, function, index, stored procedure etc).
Certain system stored procedures that perform DDL-like operations can also fire DDL triggers.
Example - the sp_rename system stored procedure.
use of ddl trigger :
1. If you want to execute some code in response to a specific DDL event
2. To prevent certain changes to your database schema
3. To audit the changes that users are making to the database structure
create trigger [trigger_name]
on [scope (server|database)]
for [eventType1, eventType2, ...]
as
begin
    -- trigger body
end
create trigger testtrigger
on database
for create_table, alter_table, drop_table
as
begin
    print 'You just created, modified or deleted a table'
end
create trigger testtrigger
on database
for create_table, alter_table, drop_table
as
begin
    rollback
    print 'You cannot create, modify or delete tables'
end
These reside under database --> Programmability --> Database Triggers.
server scope trigger
create trigger testtrigger
on all server
for create_table, alter_table, drop_table
as
begin
    rollback
    print 'You cannot create, modify or delete tables'
end
These reside under server --> Server Objects --> Triggers.
To disable : disable trigger <triggername> on database|all server
Trigger execution level :
Server-scoped triggers always fire before any database-scoped triggers.
Using the sp_settriggerorder stored procedure, you can set the execution order of server-scoped or database-scoped triggers.
We can use the EVENTDATA() function if we want to log changes using a DDL trigger.
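A sketch of logging with EVENTDATA(), assuming a hypothetical audit table:

```sql
-- EVENTDATA() returns an XML document describing the DDL event
create table dbo.DDLAudit
(
    AuditID int identity primary key,
    EventTime datetime default getdate(),
    EventDetails xml
);
go

create trigger tr_AuditDDL
on database
for create_table, alter_table, drop_table
as
begin
    insert into dbo.DDLAudit (EventDetails)
    values (EVENTDATA());
end
```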
Logon trigger :
Logon triggers fire in response to a LOGON event. They fire after the authentication phase of logging in finishes, but before the user session is actually established.
logon triggers can be used for
1. Tracking login activity
2. Restricting logins to SQL Server
3. Limiting the number of sessions for a specific login
select is_user_process, original_login_name, *
from sys.dm_exec_sessions
order by login_time desc
The above query shows the number of logins per user. Suppose you want to restrict a user to no more than 3 sessions using a trigger:
create trigger tr_auditLogin
on all server
for logon
as
begin
    declare @loginname nvarchar(100)
    set @loginname = Original_Login() -- name of the login attempting to connect
    if ((select count(*)
         from sys.dm_exec_sessions
         where is_user_process = 1
           and original_login_name = @loginname) > 3)
    begin
        print 'Fourth connection attempt blocked'
        rollback;
    end
end
The above message can be read using the sp_readerrorlog stored procedure.
where vs having :
The WHERE clause filters rows before aggregate calculations are performed, whereas the HAVING clause filters groups after the aggregate calculations are performed. Prefer WHERE for conditions that don't involve aggregates, since filtering early means less data to group and aggregate.
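A sketch showing both clauses in one query (table and column names are illustrative):

```sql
-- WHERE filters individual rows before grouping;
-- HAVING filters the groups after aggregation
select deptname, sum(salary) as totalsalary
from tblemployee
where gender = 'Male'          -- row-level filter, applied first
group by deptname
having sum(salary) > 100000;   -- group-level filter, applied last
```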
rollup and cube :
ROLLUP is used to perform aggregate operations on multiple levels in a hierarchy.
Retrieve salary by country along with a grand total :
select country, sum(salary) as totalsalary
from employees
group by rollup(country)
If we want to group salary by country and gender, and also compute the subtotal at the country level plus a grand total :
select country, gender, sum(salary) as totalsalary
from employees
group by rollup(country, gender)
using groupings :
select country, gender, sum(salary) as totalsalary
from employees
group by GROUPING SETS
(
(country,gender),
(country),
()
)
Cube :
CUBE() in SQL Server produces the result set by generating all combinations of the columns specified in GROUP BY CUBE().
ex. Write a query to retrieve the sum of salary grouped by all combinations of the 2 columns, as well as the grand total.
select country, gender, sum(salary) as totalsalary
from employees
group by cube(country, gender)
select country, gender, sum(salary) as totalsalary
from employees
group by GROUPING SETS
(
(country,gender),
(country),
(gender),
()
)
difference b/w cube and rollups :
cube generates a result set that shows aggregates for all combinations of values in the selected columns, where as rollup generates a result set that shows aggregates for a hierarchy of values in the selected columns.
rollup(continent,country,city)
continent,country,city
continent,country
continent
()
cube(continent,country,city)
continent,country,city
continent,country
continent,city
continent
country,city
country
city
()
Grouping function in sql :
GROUPING(column) indicates whether the column in a GROUP BY list is aggregated or not: GROUPING returns 1 when the column's values are aggregated (rolled up) in that row, and 0 when they are not.
select
    case when GROUPING(continent) = 1 then 'All' else continent end as continent,
    case when GROUPING(country) = 1 then 'All' else country end as country,
    case when GROUPING(city) = 1 then 'All' else city end as city,
    sum(saleamount) as totalsale
from sales
group by rollup(continent, country, city)
Instead of GROUPING we could use ISNULL, but only if the columns themselves never contain NULL data.
Over clause in sql :
The OVER clause combined with PARTITION BY is used to break up data into partitions.
The specified function operates on each partition separately.
syntax : function() over(partition by col1, col2, ...)
function can be any of follows :
count(),avg(),sum(),max(),row_number(), rank(), dense_rank() etc
Row_Number() :
1. Returns the sequential number of a row, starting at 1
2. The order by clause is required
3. The partition by clause is optional
4. When the data is partitioned, the row number is reset to 1 when the partition changes
The CTE below deletes duplicate rows (same id), keeping one copy of each :
with EmployeeCTE as
(
select *, ROW_NUMBER() over(partition by id order by id) as rownumber
from employee
)
delete from employeeCTE where rownumber>1
rank and dense_rank function :
1. Return a rank starting at 1, based on the ordering of rows imposed by the order by clause
2. The order by clause is required
3. The partition by clause is optional
4. When the data is partitioned, the rank is reset to 1 when the partition changes
The rank function skips rankings when there is a tie, whereas dense_rank does not.
Example : if you have 2 rows tied at rank 1 and 5 rows in total
rank() : 1, 1, 3, 4, 5
dense_rank() : 1, 1, 2, 3, 4
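A sketch of both functions side by side (table and column names are illustrative):

```sql
-- with two employees tied on the top salary, rank() skips a rank
-- after the tie while dense_rank() does not
select name, salary,
       rank() over (order by salary desc) as [rank],
       dense_rank() over (order by salary desc) as denserank
from tblemployee;
```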