Errors and Error Handling

Errors can be encountered when issuing SQL statements in a routine such as a stored procedure. In SQL Server 2005 and later, you can handle them programmatically using TRY and CATCH. The TRY block is where you put the SQL statements you want to execute. The CATCH block is where control is transferred if an error occurs.
You can call different functions in the CATCH block to find out what went wrong, and where. Here is a list of those functions:
·         ERROR_NUMBER()
·         ERROR_MESSAGE()
·         ERROR_SEVERITY()
·         ERROR_LINE()
·         ERROR_PROCEDURE()
·         ERROR_STATE()
To indicate that an error occurred in code, you used to call the RAISERROR() function, passing it the error message, severity, and state. This would not terminate the batch you are in. In SQL Server 2012 and later, you can instead use THROW. You give it the error number, message, and state. Note that these values are not enclosed in parentheses for a THROW. The severity for a THROW is always 16. THROW terminates the batch. The statement directly before a THROW must end in a semicolon.
There is a special version of THROW that has no parameters/values. It can only be issued in the CATCH block. It will effectively re-raise the error to the caller.
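Here is a minimal sketch of how these pieces fit together, using a hypothetical dbo.Orders table:

    BEGIN TRY
        -- Statements that might fail go here.
        UPDATE dbo.Orders SET Quantity = Quantity - 1 WHERE OrderID = 42;
    END TRY
    BEGIN CATCH
        -- Inspect the error details with the functions listed above.
        SELECT ERROR_NUMBER()    AS ErrorNumber,
               ERROR_MESSAGE()   AS ErrorMessage,
               ERROR_SEVERITY()  AS ErrorSeverity,
               ERROR_LINE()      AS ErrorLine,
               ERROR_PROCEDURE() AS ErrorProcedure,
               ERROR_STATE()     AS ErrorState;
        -- To raise a brand new error instead: THROW 50001, 'Order not found.', 1;
        THROW;   -- the parameterless THROW re-raises the original error to the caller
    END CATCH;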

Locks

Data can be locked in multiple modes when transactions are occurring in other sessions. One mode is the shared lock, which is taken when a SELECT statement reads data. Another is the exclusive lock, taken when another transaction is changing data. In this second scenario, other changes to the locked data are blocked, and other sessions must wait until the first transaction ends. If more than one session locks multiple different items, a deadlock can occur in which neither session can acquire the locks it needs in order to complete.
There are settings that control how the database responds to locking. These are called isolation levels, and they are set on a per-session basis. The default level is read committed: other sessions can only read data once changes are committed to the database. Another level is read uncommitted, where other sessions can read changes made in another session's open transaction. There is some danger here, as those changes could later be rolled back.
There is another isolation level called read committed snapshot, also known as RCSI. It is a variation of read committed: other sessions read values from a snapshot of the data as it was before the modifying transaction began. As an aside, this is the default level in Azure SQL Database.
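A quick sketch of how these levels are set (the database name MyDb is just a placeholder):

    -- Per-session isolation level:
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    -- ...and back to the default:
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

    -- Read committed snapshot is enabled at the database level:
    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;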

Transactions

A transaction is an atomic unit of work. All of the DML operations within a transaction either succeed or fail as a group. The most important transactions deal with changes from INSERT/UPDATE/DELETE, but even a SELECT statement runs as part of a transaction, albeit a read-only one.
There are three modes of transactions. The first is autocommit, which is the default: each DML statement's changes are made permanent via an implied COMMIT. The second mode is implicit: you issue a series of DML statements, followed by a COMMIT statement. The third mode is explicit: you start the transaction with BEGIN TRAN, perform the DML, and end it with COMMIT TRAN.
If you do not wish to COMMIT the changes in a transaction, you can issue a ROLLBACK command, which will undo any changes from the transaction. Transactions can be nested, but a single ROLLBACK will undo all transactions open in the session. You can check @@TRANCOUNT to find out how deeply nested you are; a value of 0 means you are not in a transaction. There is also an XACT_STATE() function that returns 0 if there is no transaction, 1 if there is an active, uncommitted transaction, and -1 if the transaction is uncommittable after a fatal error.
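Here is a small sketch of an explicit transaction, assuming a hypothetical dbo.Accounts table:

    BEGIN TRAN;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
        UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;

        SELECT @@TRANCOUNT  AS OpenTransactions,   -- 1 while inside the transaction
               XACT_STATE() AS TransactionState;   -- 1 = active, uncommitted transaction
    COMMIT TRAN;   -- or ROLLBACK TRAN to undo both updates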

Sequences, MERGE, and OUTPUT

Database sequences are objects that generate sequential values, somewhat like IDENTITY values. However, a sequence is not tied to a single table. The default data type for a sequence is BIGINT. A sequence can cycle when it goes past its maximum value; if you plan on that happening, make sure to set the MINVALUE property of the sequence. You can also cache a set of sequence values if you will be using many of them in a batch operation.
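A sketch of creating and using a sequence (the sequence name is illustrative):

    CREATE SEQUENCE dbo.OrderNumberSeq
        AS INT
        START WITH 1
        INCREMENT BY 1
        MINVALUE 1
        MAXVALUE 999999
        CYCLE        -- wrap around to MINVALUE after MAXVALUE
        CACHE 50;    -- pre-allocate 50 values at a time

    SELECT NEXT VALUE FOR dbo.OrderNumberSeq AS NextOrderNumber;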
The MERGE statement combines INSERT, UPDATE, and DELETE into a single statement. It is useful for synchronizing a TARGET table with a SOURCE table. You define rules for each matching scenario between the two: if a source row has no match in the target, you will normally perform an INSERT; if there is a match, you will normally perform an UPDATE; and if a target row has no match in the source, you can optionally DELETE it. All of these can be handled by one MERGE statement.
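A sketch of a MERGE covering all three scenarios, assuming hypothetical dbo.CustomerTarget and dbo.CustomerSource tables:

    MERGE dbo.CustomerTarget AS tgt
    USING dbo.CustomerSource AS src
        ON tgt.CustomerID = src.CustomerID
    WHEN MATCHED THEN
        UPDATE SET tgt.CustomerName = src.CustomerName
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerID, CustomerName)
        VALUES (src.CustomerID, src.CustomerName)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;   -- MERGE must be terminated with a semicolon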
OUTPUT allows you to return information about the rows affected by a DML statement. The format is similar to a SELECT list, and you use the INSERTED and DELETED prefixes to specify which values you are after. This is a useful technique when an IDENTITY value is being generated during an insert and you want to know what value was created.
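For example, a sketch that captures a generated IDENTITY value, assuming a hypothetical dbo.Customers table with an IDENTITY column CustomerID:

    INSERT INTO dbo.Customers (CustomerName)
    OUTPUT INSERTED.CustomerID, INSERTED.CustomerName   -- returns the new key alongside the data
    VALUES ('Acme Corp');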

Identity Columns

IDENTITY is a property that can be applied to at most one column in a table. It causes key values to be produced automatically for new rows added to the table. The column must use an integer type, or a DECIMAL/NUMERIC type with scale 0 (i.e. whole numbers without decimals). Unlike sequences, an IDENTITY will not cycle. By default, IDENTITY values start at 1 and increment by 1 for each new row.
You cannot update an IDENTITY column value. An IDENTITY column normally gets its value generated automatically when a row is inserted into the table. But if you turn on the SET IDENTITY_INSERT option for the table, you can supply a value of your choosing for the IDENTITY column.
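A sketch of overriding the generated value (same hypothetical dbo.Customers table):

    SET IDENTITY_INSERT dbo.Customers ON;

    INSERT INTO dbo.Customers (CustomerID, CustomerName)   -- the IDENTITY column must be listed explicitly
    VALUES (1000, 'Manually keyed row');

    SET IDENTITY_INSERT dbo.Customers OFF;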
The SCOPE_IDENTITY() function returns the last identity value produced in your session and scope, while @@IDENTITY gives you the last identity value for your entire session. In situations such as a ROLLBACK, there can be gaps in the values of an IDENTITY column.
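For example, again using the hypothetical dbo.Customers table:

    INSERT INTO dbo.Customers (CustomerName) VALUES ('New customer');

    SELECT SCOPE_IDENTITY() AS LastIdentityInScope,    -- current session and scope
           @@IDENTITY       AS LastIdentityInSession;  -- entire session, including triggers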

Data Manipulation Language

When you issue an INSERT command, it is best to explicitly specify the column names of the table you are inserting into. However, if you omit those target column names, you must ensure that the order of the values you are inserting matches the order in which the columns were defined in the table.
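For example, assuming a hypothetical dbo.Cities table with exactly two columns, CityName and Region, defined in that order:

    -- Preferred: explicit column list.
    INSERT INTO dbo.Cities (CityName, Region)
    VALUES ('Chicago', 'Midwest');

    -- Without a column list, the values must follow the column order of the table definition.
    INSERT INTO dbo.Cities
    VALUES ('Chicago', 'Midwest');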
There is a way to create a table based on the result of a SQL query: the SELECT INTO statement (see the sketch after this list). It carries over IDENTITY values (and the IDENTITY property) from the source table. However, it does not copy any of the following properties of the source table:
·         Indexes
·         Triggers
·         Permissions
·         Constraints
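A sketch of SELECT INTO with hypothetical table names:

    SELECT CustomerID, CustomerName
    INTO   dbo.CustomerArchive        -- this new table is created by the statement
    FROM   dbo.Customers
    WHERE  IsActive = 0;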
A simple UPDATE statement normally deals with a single table that receives changes. However, you can perform an UPDATE on a JOIN of multiple tables; the only caveat is that just one of those tables can actually receive the changes. You can also use a CTE in your UPDATE statement if it makes your life easier.
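A sketch of an UPDATE over a join, again with hypothetical table names (only dbo.Orders receives changes):

    UPDATE o
    SET    o.CustomerName = c.CustomerName
    FROM   dbo.Orders    AS o
    JOIN   dbo.Customers AS c
           ON c.CustomerID = o.CustomerID
    WHERE  c.IsActive = 1;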
TRUNCATE is an operation related to DELETE, but there are some differences. It is much faster than a DELETE, and it reseeds the IDENTITY column if there is one. TRUNCATE will not work if the table you are truncating is referenced by a foreign key. TRUNCATE affects all rows of the table; you cannot truncate just a portion of a table the way DELETE can.
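For example, on a hypothetical staging table:

    TRUNCATE TABLE dbo.StagingCustomers;   -- removes every row and reseeds any IDENTITY column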

Views and Synonyms

One limitation of database views is that you cannot pass parameters to a view, so you cannot dynamically control the rows it returns; you can only filter in the SELECT statement that queries the view. An alternative is an inline table-valued function: a function that returns a row set and can accept parameters that affect the results. While it can accept parameters, an inline function has no body of its own, so it does not have local variables.
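A sketch of an inline table-valued function used as a parameterized alternative to a view (all names are illustrative):

    CREATE FUNCTION dbo.fn_CustomersByCity (@City NVARCHAR(50))
    RETURNS TABLE
    AS
    RETURN
        SELECT CustomerID, CustomerName, City
        FROM   dbo.Customers
        WHERE  City = @City;
    GO

    -- Query it like a table:
    SELECT * FROM dbo.fn_CustomersByCity(N'Chicago');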
Jumping to another topic, database synonyms are objects that reference other objects. You generate a synonym with SQL like this:
            CREATE SYNONYM dbo.my_syn FOR my_table;
The object you reference does not need to exist yet when you create the synonym; it gets resolved when the synonym is accessed. The referenced object can be a table, view, function, or procedure. However, a synonym cannot refer to another synonym.
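Once created, the synonym is used exactly like the object it points to:

            SELECT * FROM dbo.my_syn;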