Dynamic Management Views

Dynamic Management Views (DMVs) allow you to inspect SQL Server internals. These views live in the sys schema. Here are some examples of system views in that schema:
  • sys.objects
  • sys.tables
  • sys.dm_exec_query_stats
  • sys.indexes
The DMVs can help identify performance characteristics. For example, sys.dm_db_index_usage_stats tells you how many seeks and scans have been performed against each index. Another useful view is sys.dm_db_missing_index_details, which describes indexes that do not exist but would help query performance.
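For instance, a query along these lines (joining to sys.indexes just to get the index names) shows the usage counts per index in the current database, and a second query lists what the missing-index DMV has recorded:

    -- Seek/scan/lookup/update counts per index in the current database
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           i.name                   AS index_name,
           s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM sys.dm_db_index_usage_stats AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id AND i.index_id = s.index_id
    WHERE s.database_id = DB_ID();

    -- Indexes the optimizer would have liked to use but that do not exist
    SELECT database_id, object_id, equality_columns, inequality_columns, included_columns
    FROM sys.dm_db_missing_index_details;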

Transactions

A transaction is a group of database statements that either complete as a single atomic unit or are all undone (rolled back). Transactions can start implicitly: when you issue a single statement on its own, a transaction begins before the statement and ends after it. Transactions can also be explicit, where you decide when the transaction begins and when it commits or rolls back.

If you have started a transaction but not committed it, other sessions cannot change or query the data you are modifying. This is enforced by locks that are applied when you make changes. Transactions can be nested, and you can tell how deep you are by checking @@TRANCOUNT. This value is 0 when no explicit transaction is open, 1 inside the first level of transaction, 2 for the second level, and so on. Each COMMIT decrements it by one; only the outermost COMMIT (the one that brings it back to 0) actually commits the work, while a ROLLBACK undoes every level and resets it to 0 immediately.
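A minimal sketch of how @@TRANCOUNT moves as transactions nest:

    SELECT @@TRANCOUNT;          -- 0: no explicit transaction open

    BEGIN TRANSACTION;
        SELECT @@TRANCOUNT;      -- 1
        BEGIN TRANSACTION;
            SELECT @@TRANCOUNT;  -- 2
        COMMIT;                  -- back to 1; nothing is durable yet
    COMMIT;                      -- back to 0; only now is the work committed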

Locks are obviously taken when you INSERT, UPDATE, or DELETE. However, locks can also be taken when you SELECT data: the data being read is protected from change until the read has finished. The discussion so far describes the default behavior in SQL Server, that is, how things work when the isolation level is set to the default READ COMMITTED level. Other levels can be set on a per-session basis.
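For example, changing the level for the current session looks like this:

    -- Allow dirty reads on this session (readers no longer wait behind writers)
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    -- Return to the default behavior described above
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;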

Temporal Tables

SQL Server allows you to create temporal tables. This functionality is new as of SQL Server 2016. These tables store the latest version of the data, but they also store prior versions of the data along with when those versions were valid. Such a table needs a primary key. It also requires two datetime2 period columns (commonly named ValidFrom and ValidTo) declared together in a PERIOD FOR SYSTEM_TIME definition.
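A sketch of the DDL, with made-up table, column, and history-table names:

    CREATE TABLE dbo.Product
    (
        ProductId  INT           NOT NULL PRIMARY KEY,
        Price      DECIMAL(10,2) NOT NULL,
        ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));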

You cannot drop a temporal table while it is performing the versioning; you first need to turn versioning off before the drop. It is also possible to alter existing tables to become temporal. When you do, you must provide default values for the new period columns (ValidFrom and ValidTo) so that existing rows can be populated.
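Turning versioning off before a drop, and converting an existing table, might look like this (names are illustrative):

    -- Stop versioning so the table (and its history table) can be dropped
    ALTER TABLE dbo.Product SET (SYSTEM_VERSIONING = OFF);
    DROP TABLE dbo.Product;
    DROP TABLE dbo.ProductHistory;

    -- Make an existing table temporal; the new period columns need defaults
    ALTER TABLE dbo.Customer
        ADD ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL
                CONSTRAINT DF_Customer_ValidFrom DEFAULT SYSUTCDATETIME(),
            ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL
                CONSTRAINT DF_Customer_ValidTo   DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),
            PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);

    ALTER TABLE dbo.Customer SET (SYSTEM_VERSIONING = ON);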

Once you have versioning on, you can query the values the table held during a given timeframe. This lets you look back at the values that columns had previously, which effectively acts as an auditing system for changes in column values.
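For example, against the made-up table above:

    -- Every version of each row that was valid at some point in the window
    SELECT ProductId, Price, ValidFrom, ValidTo
    FROM dbo.Product
        FOR SYSTEM_TIME BETWEEN '2016-01-01' AND '2016-06-30';

    -- The rows exactly as they stood at one instant
    SELECT ProductId, Price
    FROM dbo.Product
        FOR SYSTEM_TIME AS OF '2016-03-15T12:00:00';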

Using JSON in SQL Server

If you want to extract a single value out of a JSON string, you can use the JSON_VALUE() function. You pass it a JSON variable to interrogate plus a path to the value of interest, and the output is a single scalar value. Note that key names in the path are case sensitive, and array elements use zero-based indexes. By default (lax mode), errors return NULL rather than being raised.
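A sketch with made-up JSON content:

    DECLARE @json NVARCHAR(MAX) = N'{"name": "Anna", "pets": ["cat", "dog"]}';

    SELECT JSON_VALUE(@json, '$.name');     -- Anna
    SELECT JSON_VALUE(@json, '$.pets[1]');  -- dog (arrays are zero-based)
    SELECT JSON_VALUE(@json, '$.Name');     -- NULL in the default lax mode: keys are case sensitive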

If you want to extract an object or array from JSON text, you use the JSON_QUERY() function. You pass it a JSON variable plus a path in single quotes; the dollar sign represents the root. The output is a JSON fragment (the object or array itself), not a scalar value.
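Continuing the made-up example:

    DECLARE @json NVARCHAR(MAX) = N'{"name": "Anna", "pets": ["cat", "dog"]}';

    SELECT JSON_QUERY(@json, '$.pets');  -- the JSON fragment ["cat", "dog"]
    SELECT JSON_QUERY(@json, '$');       -- $ is the root: the whole object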

You can also make changes to a JSON object using the JSON_MODIFY() function. If you want to convert JSON into a SQL table of rows and columns, use the OPENJSON() function. Finally, to go the other way around and produce JSON from a database table, you use FOR JSON PATH, which is kind of like FOR XML PATH.
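Sketches of all three, with made-up names:

    DECLARE @json NVARCHAR(MAX) = N'{"name": "Anna", "pets": ["cat", "dog"]}';

    -- Change a value inside the JSON text
    SELECT JSON_MODIFY(@json, '$.name', 'Beth');

    -- Shred the JSON into rows and columns
    SELECT *
    FROM OPENJSON(@json, '$.pets');

    -- Produce JSON from a table
    SELECT ProductId, Price
    FROM dbo.Product
    FOR JSON PATH;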

Introduction to JSON

JSON stands for JavaScript Object Notation. It is a text format for objects. It is similar to XML but simpler, and it is also shorter. It is hierarchical and self-describing, which makes JSON easier to write than XML. It has built-in support for arrays, which are denoted by square brackets.

A JSON object is a set of key/value pairs enclosed in curly braces and separated by commas. Key/value pairs can be nested. Keys are strings enclosed in double quotation marks, and string values are quoted as well. Speaking of strings, in SQL Server JSON text is stored in the NVARCHAR data type; there is no dedicated JSON type.
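For instance, a small nested object held in an NVARCHAR variable (content is made up):

    DECLARE @person NVARCHAR(MAX) = N'{
        "name": "Anna",
        "address": { "city": "Oslo", "zip": "0150" },
        "pets": ["cat", "dog"]
    }';

    -- ISJSON returns 1 when the string is valid JSON
    SELECT ISJSON(@person);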

JSON is supported natively in SQL Server 2016 or later. There are no questions on JSON in Microsoft Exam 70-461. However, this topic will be tested in Exam 70-761, which I plan to take. Guess I better get SQL Server 2016 installed (that requires an operating system upgrade for me though).

Bulk Operations

Sometimes you need to Extract/Transform/Load data en masse. In these scenarios, you will use bulk operations for performance. There is a Bulk Copy Program called bcp that can import and export data from/to a file. It is not part of SQL Server itself; instead, it is installed alongside it, and you call it from the command prompt.
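An export and re-import from the command prompt might look like this (server, database, table, and file names are made up):

    rem Export dbo.Product to a character-format file using a trusted (Windows) connection
    bcp AdventureWorks.dbo.Product out C:\data\product.dat -c -T -S localhost

    rem Import the same file into a staging table
    bcp AdventureWorks.dbo.ProductStaging in C:\data\product.dat -c -T -S localhost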

Within SQL Server, you can insert data from a file using BULK INSERT. There are a lot of options you can specify, such as the size of the batch for transactions and the name of a file to log errors to. The input file can be in CSV format, or in some other custom format defined by a format file you supply.
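A sketch of loading a CSV file, with made-up table and file names:

    BULK INSERT dbo.ProductStaging
    FROM 'C:\data\product.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        BATCHSIZE       = 10000,                          -- commit in batches of this many rows
        ERRORFILE       = 'C:\data\product_errors.log',   -- rows that fail go here
        FIRSTROW        = 2                               -- skip the header row
    );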

Alternatively, you can use the OPENROWSET() function to bulk read data from a file by specifying the BULK option along with the data file. This bulk read can be done over a network, and under the hood it uses OLE DB to get the data. Errors can be logged to a file you specify, and the data file can be CSV or a custom layout you define in a separate format file.
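For example, reading a file through a format file that describes its columns (paths are made up):

    SELECT *
    FROM OPENROWSET(
            BULK 'C:\data\product.csv',
            FORMATFILE = 'C:\data\product.fmt',
            ERRORFILE  = 'C:\data\product_errors.log') AS rows;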

XML Overview and Data Type

XML is an extensible markup syntax. A document has to meet certain rules to qualify as well-formed XML. Elements are written as names enclosed in angle brackets (< >), and elements can be nested within each other. A well-formed XML document has exactly one root element, and elements can have attributes.

Variables can be of type XML. You assign XML that is in string form to such a variable and can then retrieve data from it; by default, the whole document comes back as a single row and column. XML data can also be stored in a table column if that column is of type XML.
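For example, with made-up content:

    -- Assign a string of well-formed XML to an XML variable
    DECLARE @x XML = N'<customers>
                         <customer id="1"><name>Anna</name></customer>
                         <customer id="2"><name>Beth</name></customer>
                       </customers>';

    SELECT @x;   -- the whole document comes back as one row, one column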

There are multiple ways to retrieve data from a regular table into XML output format. The easiest is to SELECT the data FOR XML RAW. That by default produces elements that are named "row". The data in each element is a row from the table you are selecting from. There are options to change the name of the element, make the source columns be output elements, and so on.
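For example (table and column names are made up):

    -- Each row becomes a <row> element with the columns as attributes
    SELECT ProductId, Price
    FROM dbo.Product
    FOR XML RAW;

    -- Rename the element and emit the columns as sub-elements instead
    SELECT ProductId, Price
    FROM dbo.Product
    FOR XML RAW ('Product'), ELEMENTS;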

Another way to retrieve data is to SELECT the data FOR XML AUTO. The first table selected will generate the outer elements. If you SELECT another table in the query, those records become nested elements in the output.
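For example, with made-up tables:

    -- Customer becomes the outer element; the joined Order rows nest inside it
    SELECT c.CustomerId, c.Name, o.OrderId, o.OrderDate
    FROM dbo.Customer AS c
    JOIN dbo.[Order] AS o
      ON o.CustomerId = c.CustomerId
    FOR XML AUTO;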

If you don't like elements and sub-elements, you can use FOR XML PATH. This gives you control over which source data becomes elements, and which becomes attributes in the output.
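For example (names are made up):

    -- '@Id' makes ProductId an attribute, 'Price' stays an element,
    -- and ROOT wraps the output in a single root element
    SELECT ProductId AS [@Id],
           Price     AS [Price]
    FROM dbo.Product
    FOR XML PATH ('Product'), ROOT ('Products');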

There is one more way to retrieve data into XML output: FOR XML EXPLICIT. This requires you to select specific items to control the output, such as the root element tag name. It is not a common technique for producing XML data.