Microsoft SQL Server
Databases
A database consists of a collection of tables that contain data and other objects, such as views, indexes, stored procedures, and triggers, defined to support activities performed with the data. The data stored in a database is usually related to a particular subject or process, such as inventory information for a manufacturing warehouse.
SQL Server can support many databases. Each database can store data that is either interrelated with or unrelated to the data in other databases. For example, a server can have one database that stores personnel data and another that stores product-related data. Alternatively, one database can store current customer order data, and another related database can store historical customer orders used for yearly reporting.
Before you create a database, it is important to understand the parts of a database and how to design these parts to ensure that the database performs well after it is implemented.
Parts of a Database
A database consists of a collection of tables that stores a specific set of structured data. A table contains a collection of rows (referred to as records or tuples) and columns (referred to as attributes). Each column in the table is designed to store a certain type of information (for example, dates, names, dollar amounts, or numbers). Tables have several types of controls (constraints, rules, triggers, defaults, and customized user data types) that ensure the validity of the data. Tables can have indexes similar to those in books that allow rows to be found quickly. Declarative referential integrity (DRI) constraints can be added to the tables to ensure that interrelated data in different tables remains consistent. A database can also store procedures that use Transact-SQL programming code to perform operations with the data in the database, such as storing views that provide customized access to table data.
For example, you create a database named MyCoDB to manage the data in your company. In the MyCoDB database, you create a table named Employees to store information about each employee, and the table contains columns named EmpId, LastName, FirstName, Dept, and Title. To ensure that no two employees share the same EmpId and that the Dept column contains only valid numbers for the departments in your company, you must add constraints to the table. Because you want to be able to quickly find the data for an employee based on the employee ID or last name, you define indexes. You will have to add a row of data to the Employees table for each employee, so you create a procedure named AddEmployee, which is customized to accept the data values for a new employee and performs the operation of adding the row to the Employees table. You may need a departmental summary of employees, in which case you define a view called DeptEmps that combines data from the Departments and Employees tables and produces the output. This illustration shows the parts of the MyCoDB database that is created.
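The scenario above can be sketched in Transact-SQL. The table, procedure, and view names come from the example; the column data types, the Departments table's columns, and the view definition are assumptions for illustration only:

```sql
-- Hypothetical schema for the MyCoDB example; column types are assumed.
CREATE TABLE Employees
(
    EmpId     int         NOT NULL CONSTRAINT PK_Employees PRIMARY KEY,
    LastName  varchar(40) NOT NULL,
    FirstName varchar(30) NOT NULL,
    Dept      int         NOT NULL
        CONSTRAINT FK_Employees_Departments REFERENCES Departments (DeptId),
    Title     varchar(50) NULL
)

-- Index to support lookups by last name (EmpId is covered by the primary key).
CREATE INDEX IX_Employees_LastName ON Employees (LastName)
GO

-- Procedure that encapsulates adding a new employee.
CREATE PROCEDURE AddEmployee
    @EmpId int, @LastName varchar(40), @FirstName varchar(30),
    @Dept int, @Title varchar(50)
AS
INSERT INTO Employees (EmpId, LastName, FirstName, Dept, Title)
VALUES (@EmpId, @LastName, @FirstName, @Dept, @Title)
GO

-- Departmental summary view combining Departments and Employees.
CREATE VIEW DeptEmps
AS
SELECT d.DeptName, e.EmpId, e.LastName, e.FirstName
FROM Departments d JOIN Employees e ON d.DeptId = e.Dept
GO
```

The PRIMARY KEY constraint enforces unique employee IDs, and the FOREIGN KEY constraint restricts Dept to valid department numbers, as the example requires.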
Creating a Database
To create a database, determine the name of the database, its owner (the user who creates the database), its size, and the files and filegroups used to store it.
Before creating a database, consider that:
- Permission to create a database defaults to members of the sysadmin and dbcreator fixed server roles, although permissions can be granted to other users.
- The user who creates the database becomes the owner of the database.
- A maximum of 32,767 databases can be created on a server.
- The name of the database must follow the rules for identifiers.
Three types of files are used to store a database:
- Primary data file
This file contains the startup information for the database and is also used to store data. Every database has one primary data file.
- Secondary data files
These files hold all the data that does not fit in the primary data file. Databases do not need secondary data files if the primary file is large enough to hold all the data in the database. Some databases may be large enough to need multiple secondary data files, or they may use secondary files on separate disk drives to spread the data across multiple disks.
- Transaction log files
These files hold the log information used to recover the database. There must be at least one transaction log file for each database, although there may be more than one. The minimum size for a log file is 512 kilobytes (KB).
Important: Microsoft® SQL Server™ 2000 data and transaction log files must not be placed on compressed file systems or a remote network drive, such as a shared network directory.
When a database is created, all the files that comprise the database are filled with zeros to overwrite any existing data left on the disk by previously deleted files. Although this means that the files take longer to create, this action prevents the operating system from having to fill the files with zeros when data is written to the files for the first time during usual database operations. This improves the performance of day-to-day operations.
It is recommended that you specify a maximum size to which the file is permitted to grow. This prevents the file from growing, as data is added, until disk space is exhausted. To specify a maximum size for the file, use the MAXSIZE parameter of the CREATE DATABASE statement or the Restrict filegrowth (MB) option when using the Properties dialog box in SQL Server Enterprise Manager to create the database.
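For example, a database with explicit maximum file sizes might be created as follows. The file names, paths, and sizes here are illustrative only:

```sql
-- Create a database with capped data and log file growth.
CREATE DATABASE MyCoDB
ON PRIMARY
(   NAME = MyCoDB_Data,
    FILENAME = 'C:\MSSQL\Data\MyCoDB_Data.mdf',
    SIZE = 10MB,
    MAXSIZE = 100MB,     -- the data file may not grow beyond 100 MB
    FILEGROWTH = 5MB )
LOG ON
(   NAME = MyCoDB_Log,
    FILENAME = 'C:\MSSQL\Data\MyCoDB_Log.ldf',
    SIZE = 5MB,
    MAXSIZE = 25MB,      -- the log file may not grow beyond 25 MB
    FILEGROWTH = 5MB )
```

Without MAXSIZE, a file grows until the disk is full; with it, growth stops at the stated limit.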
After you create a database, it is recommended that you create a backup of the master database.
Fundamentals of SQL Server 2000 Architecture
Microsoft SQL Server™ is a family of products that meet the data storage requirements of the largest data processing systems and commercial Web sites, yet at the same time can provide easy-to-use data storage services to an individual or small business.
The data storage needs of a modern corporation or government organization are very complex. Some examples are:
- Online Transaction Processing (OLTP) systems must be capable of handling thousands of orders placed at the same time.
- Increasing numbers of corporations are implementing large Web sites as a mechanism for their customers to enter orders, contact the service department, get information about products, and for many other tasks that previously required contact with employees. These sites require data storage that is secure, yet tightly integrated with the Web.
- Organizations are implementing off-the-shelf software packages for critical services such as human resources planning, manufacturing resources planning, and inventory control. These systems require databases capable of storing large amounts of data and supporting large numbers of users.
- Organizations have many users who must continue working when they do not have access to the network. Examples are mobile disconnected users, such as traveling sales representatives or regional inspectors. These users must synchronize the data on a notebook or laptop with the current data in the corporate system, disconnect from the network, record the results of their work while in the field, and then finally reconnect with the corporate network and merge the results of their fieldwork into the corporate data store.
- Managers and marketing personnel need increasingly sophisticated analysis of trends recorded in corporate data. They need robust Online Analytical Processing (OLAP) systems that are easily built from OLTP data and that support sophisticated data analysis.
- Independent Software Vendors (ISVs) must be able to distribute data storage capabilities with applications targeted at individuals or small workgroups. This means the data storage mechanism must be transparent to the users who purchase the application. This requires a data storage system that can be configured by the application and then tunes itself automatically, so that users do not need to dedicate database administrators to constantly monitor and tune the application.
Features of SQL Server 2000
- Internet Integration.
The SQL Server 2000 database engine includes integrated XML support. It also has the scalability, availability, and security features required to operate as the data storage component of the largest Web sites. The SQL Server 2000 programming model is integrated with the Windows DNA architecture for developing Web applications, and SQL Server 2000 supports features such as English Query and the Microsoft Search Service to incorporate user-friendly queries and powerful search capabilities in Web applications.
- Scalability and Availability.
The same database engine can be used across platforms ranging from laptop computers running Microsoft Windows® 98 through large, multiprocessor servers running Microsoft Windows 2000 Datacenter Server. SQL Server 2000 Enterprise Edition supports features such as federated servers, indexed views, and large memory support that allow it to scale to the performance levels required by the largest Web sites.
- Enterprise-Level Database Features.
The SQL Server 2000 relational database engine supports the features required by demanding data processing environments. The database engine protects data integrity while minimizing the overhead of managing thousands of users concurrently modifying the database. SQL Server 2000 distributed queries allow you to reference data from multiple sources as if it were part of a SQL Server 2000 database, while the distributed transaction support protects the integrity of any updates of the distributed data. Replication also allows you to maintain multiple copies of data, while ensuring that the separate copies remain synchronized. You can replicate a set of data to multiple, mobile, disconnected users, have them work autonomously, and then merge their modifications back to the publisher.
- Ease of installation, deployment, and use.
SQL Server 2000 includes a set of administrative and development tools that improve the process of installing, deploying, managing, and using SQL Server across several sites. SQL Server 2000 also supports a standards-based programming model integrated with the Windows DNA, making the use of SQL Server databases and data warehouses a seamless part of building powerful and scalable systems. These features allow you to rapidly deliver SQL Server applications that customers can implement with a minimum of installation and administrative overhead.
- Data warehousing.
SQL Server 2000 includes tools for extracting and analyzing summary data for online analytical processing. SQL Server also includes tools for visually designing databases and analyzing data using English-based questions.
Data Warehousing
Microsoft® SQL Server™ 2000 includes several components you can use to build data warehouses that effectively support your decision support processing needs.
Data Warehousing Framework
The Data Warehousing Framework is a set of components and APIs that implement the data warehousing features of SQL Server 2000. It provides a common interface to be used by various components seeking to build and use a data warehouse or data mart.
Data Transformation Services
Data Transformation Services (DTS) provides a set of services used to build a data warehouse or data mart. Decision support systems analyze data to find trends of interest to the database users. Online transaction processing databases store large numbers of records covering the details of each transaction, and online analytical processing (OLAP) systems aggregate and summarize the information to speed analysis of the trends exhibited in the data.
DTS offers support for extracting data from heterogeneous OLE DB data sources and the summarizing or aggregating of data to build a data warehouse.
Online Analytical Processing Support
Microsoft SQL Server 2000 Analysis Services allows you to build flexible, powerful business intelligence applications for Web sites and large enterprise systems.
Microsoft SQL Server 2000 Analysis Services provides OLAP processing capabilities against heterogeneous OLE DB data sources. It has efficient algorithms for defining and building multidimensional cubes that can be referenced by applications using the OLE DB 2.0 OLAP extensions or the Microsoft ActiveX® Data Objects Multidimensional extensions. Analysis Services is an excellent tool for multidimensional analysis of data in SQL Server 2000 databases.
Analysis Services supports multidimensional queries against cubes whose dimensions contain hundreds of millions of members. You can control cube security down to the level of cells and members. You can create custom rollup functions that tailor the types of aggregations and processing that can be performed in multidimensional cubes.
Data Mining Support
Data mining allows you to define models containing grouping and predictive rules that can be applied to data in either a relational database or multidimensional OLAP cubes. These predictive models are then used to automatically perform sophisticated analysis of the data to find trends that help you identify new opportunities and choose the ones that have a winning outcome. SQL Server 2000 Analysis Services includes support for data mining models, including API support of the OLE DB for Data Mining specification. Through the OLE DB for Data Mining API, Analysis Services supports integration with third-party data mining providers.
English Query
English Query provides a way to define the entities and relationships in a SQL Server 2000 database. Given this definition, an application can use an Automation API to pass English Query a string containing a natural-language question about the data in the database. English Query returns a SQL statement that the application can use to extract the necessary data.
Meta Data Services
SQL Server 2000 includes Microsoft Meta Data Services, which consists of a set of Microsoft ActiveX® interfaces and information models that define database schema and data transformations as defined by the Microsoft Data Warehousing Framework. A goal of the Microsoft Data Warehousing Framework is to provide meaningful integration of multiple products through shared meta data. It combines business and technical meta data to provide an industry standard method for storing the schema of production data sources and destinations.
Meta Data Services is the preferred means of storing DTS packages in a data warehousing scenario because it is the only method of providing data lineage for packages. DTS also uses Meta Data Services storage to allow transformations, queries, and ActiveX scripts to be reused by heterogeneous applications.
Relational Database Components
The database component of Microsoft® SQL Server™ 2000 is a Structured Query Language (SQL)–based, scalable, relational database with integrated Extensible Markup Language (XML) support for Internet applications. Each of the following terms describes a fundamental part of the architecture of the SQL Server 2000 database component:
Relational Database
Although there are different ways to organize data in a database, relational databases are one of the most effective. Relational database systems are an application of mathematical set theory to the problem of effectively organizing data. In a relational database, data is collected into tables (called relations in relational theory).
A table represents some class of objects that are important to an organization. For example, a company may have a database with a table for employees, another table for customers, and another for stores. Each table is built of columns and rows (called attributes and tuples in relational theory). Each column represents some attribute of the object represented by the table. For example, an Employee table would typically have columns for attributes such as first name, last name, employee ID, department, pay grade, and job title. Each row represents an instance of the object represented by the table. For example, one row in the Employee table represents the employee who has employee ID 12345.
When organizing data into tables, you can usually find many different ways to define tables. Relational database theory defines a process called normalization, which ensures that the set of tables you define will organize your data effectively.
Scalable
SQL Server supports a wide range of users accessing it at the same time. An instance of SQL Server 2000 includes the files that make up a set of databases and a copy of the DBMS software. Applications running on separate computers use a SQL Server 2000 communications component to transmit commands over a network to the SQL Server 2000 instance. When an application connects to an instance of SQL Server 2000, it can reference any of the databases in that instance that the user is authorized to access. The communication component also allows communication between an instance of SQL Server 2000 and an application running on the same computer. You can run multiple instances of SQL Server 2000 on a single computer.
SQL Server is designed to support the traffic of the largest Web sites or enterprise data processing systems. Instances of SQL Server running on large, multiprocessor servers are capable of supporting connections to thousands of users at the same time. The data in SQL Server tables can be partitioned across multiple servers, so that several multiprocessor computers can cooperate to support the database processing requirements of extremely large systems. These groups of database servers are called federations.
Although SQL Server is designed to work as the data storage engine for thousands of concurrent users who connect over a network, it is also capable of working as a stand-alone database directly on the same computer as an application. The scalability and ease-of-use features of SQL Server 2000 allow it to work efficiently on a single computer without consuming too many resources or requiring administrative work by the stand-alone user. The same features allow SQL Server to dynamically acquire the resources required to support thousands of users, while minimizing database administration and tuning. The SQL Server 2000 relational database engine dynamically tunes itself to acquire or free the appropriate computer resources required to support a varying load of users accessing an instance of SQL Server at any specific time. The SQL Server 2000 relational database engine has features to prevent the logical problems that occur if a user tries to read or modify data currently used by others.
Structured Query Language
To work with data in a database, you have to use a set of commands and statements (language) defined by the DBMS software. Several different languages can be used with relational databases; the most common is SQL. The American National Standards Institute (ANSI) and the International Standards Organization (ISO) define software standards, including standards for the SQL language. SQL Server 2000 supports the Entry Level of SQL-92, the SQL standard published by ANSI and ISO in 1992. The dialect of SQL supported by Microsoft SQL Server is called Transact-SQL (T-SQL). T-SQL is the primary language used by Microsoft SQL Server applications.
Extensible Markup Language
XML is the emerging Internet standard for data. XML is a set of tags that can be used to define the structure of a hypertext document. XML documents can be easily transformed into Hypertext Markup Language (HTML), which is the most important language for displaying Web pages.
Although most SQL statements return their results in a relational, or tabular, result set, the SQL Server 2000 database component supports a FOR XML clause that returns results as an XML document. SQL Server 2000 also supports XPath queries from Internet and intranet applications. XML documents can be added to SQL Server databases, and the OPENXML clause can be used to expose data from an XML document as a relational result set.
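The FOR XML and OPENXML mechanisms described above can be sketched as follows. The table and element names are illustrative; sp_xml_preparedocument and sp_xml_removedocument are the system procedures SQL Server 2000 uses to parse and release an XML document:

```sql
-- Return a result set as XML instead of a tabular rowset
-- (assumes an Employees table with these columns exists).
SELECT EmpId, LastName
FROM Employees
FOR XML AUTO

-- Expose an XML document as a relational rowset with OPENXML.
DECLARE @hDoc int
EXEC sp_xml_preparedocument @hDoc OUTPUT,
    N'<root><Employee EmpId="1" LastName="Smith"/></root>'

SELECT EmpId, LastName
FROM OPENXML (@hDoc, '/root/Employee', 1)
     WITH (EmpId int, LastName varchar(40))

EXEC sp_xml_removedocument @hDoc
```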
Advantages of SQL Server 2000 as a Database Server
Microsoft SQL Server 2000 is capable of supplying the database services needed by extremely large systems. Large servers may have thousands of users connected to an instance of SQL Server 2000 at the same time. SQL Server 2000 has full protection for these environments, with safeguards that prevent problems, such as having multiple users trying to update the same piece of data at the same time. SQL Server 2000 also allocates the available resources effectively, such as memory, network bandwidth, and disk I/O, among the multiple users.
Extremely large Internet sites can partition their data across multiple servers, spreading the processing load across many computers, and allowing the site to serve thousands of concurrent users.
Multiple instances of SQL Server 2000 can be run on a single computer. For example, an organization that provides database services to many other organizations can run a separate instance of SQL Server 2000 for each customer organization, all on one computer. This isolates the data for each customer organization, while allowing the service organization to reduce costs by only having to administer one server computer.
SQL Server 2000 applications can run on the same computer as SQL Server 2000. The application connects to SQL Server 2000 using Windows Interprocess Communications (IPC) components, such as shared memory, instead of a network. This allows SQL Server 2000 to be used on small systems where an application must store its data locally.
The illustration shows an instance of SQL Server 2000 operating as the database server for both a large Web site and a legacy client/server system.
The largest Web sites and enterprise-level data processing systems often generate more database processing than can be supported on a single computer. In these large systems, the database services are supplied by a group of database servers that form a database services tier. SQL Server 2000 does not support a load-balancing form of clustering for building a database services tier, but it does support a mechanism that can be used to partition data across a group of autonomous servers. Although each server is administered individually, the servers cooperate to spread the database-processing load across the group. A group of autonomous servers that share a workload is called a federation of servers.
Desktop Database Systems
Although SQL Server 2000 works effectively as a powerful database server, the same database engine can also be used in applications that need stand-alone databases stored locally on the client. SQL Server 2000 can configure itself dynamically to run efficiently with the resources available on a client desktop or laptop computer, without the need to dedicate a database administrator to each client. Application vendors can also embed SQL Server 2000 as the data storage component of their applications.
When clients use local SQL Server 2000 databases, applications connect to local instances of the database engine in much the same way they connect across the network to a database engine running on a remote server. The primary difference is that local connections are made through local IPCs such as shared memory, and remote connections must go through a network.
The illustration shows using SQL Server 2000 in a desktop database system.
Tabular Data Stream Protocol
SQL Server uses an application-level protocol called Tabular Data Stream (TDS) for communication between client applications and SQL Server. The TDS packets are encapsulated in the packets built for the protocol stack used by the Net-Libraries. For example, if you are using the TCP/IP Sockets Net-Library, then the TDS packets are encapsulated in the TCP/IP packets of the underlying protocol.
The contents of the packets that send result sets back to the application depend on whether FOR XML is specified in the Transact-SQL statement transmitted to the database engine:
- If FOR XML is not specified, the database engine sends a relational result set back to the application. The TDS packets contain the rows of the result set, with each row comprised of one or more columns, as specified in the select list of the SELECT statement.
- If FOR XML is specified, the database engine streams an XML document back to the application. The XML document is formatted in the TDS packets as if it were a single, long Unicode value, with each packet being approximately 4 KB in size.
You can configure the SQL Server packet size, which is the size of the TDS packets. The size of the TDS packets defaults to 4 KB on most clients (DB-Library applications default to 512 bytes), which testing has shown to be the optimal TDS packet size in almost all scenarios. The size of the TDS packets can be larger than the size of the packets in the underlying protocol. If this is the case, the protocol stack on the sending computer automatically disassembles the TDS packets into units that fit into the protocol packets, and the protocol stack on the receiving computer reassembles the TDS packets.
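The server-wide default packet size can be changed through the network packet size server option; the sketch below sets it back to the documented default. Treat the exact option name and behavior as something to verify against your server's documentation, since clients can also override the packet size in their own connection properties:

```sql
-- Set the server-wide default TDS packet size to 4096 bytes
-- (the documented default). Individual clients may still request
-- a different size when they connect.
EXEC sp_configure 'network packet size', 4096
RECONFIGURE
```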
Distributed Transactions
The Microsoft Distributed Transaction Coordinator (MS DTC) is a transaction manager that allows client applications to include several different sources of data in one transaction. MS DTC coordinates committing the distributed transaction across all the servers enlisted in the transaction.
An installation of Microsoft® SQL Server™ can participate in a distributed transaction by:
- Calling stored procedures on remote servers running SQL Server.
- Automatically or explicitly promoting the local transaction to a distributed transaction and enlisting remote servers in the transaction.
- Making distributed updates that update data on multiple OLE DB data sources.
If these OLE DB data sources support the OLE DB distributed transaction interface, SQL Server can also enlist them in the distributed transaction.
The MS DTC service coordinates the proper completion of the distributed transaction to ensure that either all of the updates on all the servers are made permanent or, in the case of errors, all are rolled back.
SQL Server applications can also call MS DTC directly to start a distributed transaction explicitly. One or more servers running SQL Server can then be instructed to enlist in the distributed transaction and coordinate the proper completion of the transaction with MS DTC.
The data in a Microsoft® SQL Server™ 2000 database is organized into several different objects. These objects are what a user can see when they connect to the database.
In SQL Server 2000, these components are defined as objects:
Constraints
Defaults
Indexes
Keys
Stored procedures
Tables
Triggers
User-defined data types
User-defined functions
Views
Data Types and Table Structures
All the data in Microsoft® SQL Server™ 2000 databases is contained in objects called tables. Each table represents some type of object meaningful to the users. For example, in a school database you would find tables such as a class table, an instructor table, and a student table.
SQL Server tables have two main components:
- Columns
Each column represents some attribute of the object modeled by the table, such as a parts table having columns for ID, color, and weight.
- Rows
Each row represents an individual occurrence of the object modeled by the table. For example, the parts table would have one row for each part carried by the company.
Data Types
Because each column represents one attribute of an object, the data in each occurrence of the column is similar. One of the properties of a column is called its data type, which defines the type of data the column can hold. SQL Server has several base data types that can be specified for columns:
bigint
binary
bit
char
datetime
decimal
float
image
int
money
nchar
ntext
numeric
nvarchar
real
smalldatetime
smallint
smallmoney
sql_variant
sysname
text
timestamp
tinyint
uniqueidentifier
varbinary
varchar
SQL Server also supports a table base data type, which can be used to store the result set of an SQL statement. The table data type cannot be used for columns in a table; it can only be used for Transact-SQL variables and the return values of user-defined functions.
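For example, a table variable can hold a small intermediate result set. The variable and column names below are invented for illustration:

```sql
-- A table variable holding a small result set; usable wherever a
-- Transact-SQL variable is allowed, but not as a column data type.
DECLARE @top_depts TABLE
(
    dept_id   int,
    dept_name varchar(30)
)

INSERT INTO @top_depts VALUES (1, 'Engineering')
INSERT INTO @top_depts VALUES (2, 'Sales')

SELECT dept_name FROM @top_depts ORDER BY dept_id
```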
Users can also create their own user-defined data types, for example:
-- Create a birthday data type that allows nulls.
EXEC sp_addtype birthday, datetime, 'NULL'
GO
-- Create a table using the new data type.
CREATE TABLE employee
(emp_id char(5),
emp_first_name char(30),
emp_last_name char(40),
emp_birthday birthday)
A user-defined data type makes a table structure more meaningful to programmers and helps ensure that columns holding similar classes of data have the same base data type.
SQL Server provides several data type synonyms to help support SQL-92 data type names not included as base data types, such as national character and character varying. When a synonym is specified in a CREATE TABLE statement, the column is assigned the base data type associated with the synonym. For more information, see Data Type Synonyms.
A domain is the set of all allowable values in a column. It includes not only the concept of enforcing data types, but also the values allowed in the column. For example, a part color domain would include both the data type, such as char(6), and the character strings allowed in the column, such as Red, Blue, Green, Yellow, Brown, Black, White, Teal, Grey, and Silver. Domain values can be enforced through mechanisms such as CHECK constraints and triggers.
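The part color domain above can be enforced with a CHECK constraint. The table and column names here are invented for illustration; the data type and allowed values come from the example:

```sql
-- Enforce the part color domain: the data type limits length,
-- and the CHECK constraint limits the allowed values.
CREATE TABLE parts
(
    part_id    int     NOT NULL PRIMARY KEY,
    part_color char(6) NOT NULL
        CONSTRAINT CK_parts_color
        CHECK (part_color IN ('Red', 'Blue', 'Green', 'Yellow', 'Brown',
                              'Black', 'White', 'Teal', 'Grey', 'Silver'))
)
```

An INSERT that supplies a value outside this list, such as 'Purple', fails with a constraint violation instead of storing invalid data.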
When a column has been assigned a data type, all values placed into the column must be of that data type. SQL statements can specify that values of different data types be used as the source value only if SQL Server can implicitly convert the source value data type to the data type of the column. For example, SQL Server supports the implicit conversion of int values to decimal; therefore, SQL statements can specify int values as the value to be assigned to a decimal column.
The SQL Server 2000 sql_variant data type is a special data type that allows you to store values of multiple base data types in the same column. For example, you can store nchar values, int values, and decimal values in the same column.
Null Values
Columns can either accept or reject null values. NULL is a special value in databases that represents the concept of an unknown value. NULL is not the same as a blank character or 0. Blank is actually a valid character, and 0 is a valid number. NULL simply represents the idea that the value is not known. NULL is also different from a zero-length string. If a column definition contains the NOT NULL clause, you cannot insert rows having the value NULL in that column. If the column definition has only the NULL keyword, it accepts NULL values.
Allowing NULL values in a column can increase the complexity of any logical comparisons using the column. The SQL-92 standard states that any comparison against a NULL value does not evaluate to TRUE or FALSE, it evaluates to UNKNOWN. This introduces three-value logic to comparison operators, which can be difficult to manage correctly.
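The three-valued logic can be sketched with the employee table defined earlier in this section. This sketch assumes the default ANSI_NULLS ON behavior, under which a comparison with NULL evaluates to UNKNOWN:

```sql
-- Under ANSI_NULLS ON, "= NULL" evaluates to UNKNOWN, never TRUE,
-- so this query returns no rows even if emp_birthday contains NULLs:
SELECT emp_id FROM employee WHERE emp_birthday = NULL

-- Use IS NULL to test for unknown values instead:
SELECT emp_id FROM employee WHERE emp_birthday IS NULL
```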
System Tables
SQL Server stores the data defining the configuration of the server and all its tables in a special set of tables known as system tables. Users should not query or update the system tables directly unless there is no other way to get the data required by the application. Only SQL Server should reference the system tables in response to administration commands issued by users. The system tables can change from version to version; applications referencing system tables directly may have to be rewritten before they can be upgraded to a newer version of SQL Server with a different version of the system tables. SQL Server exposes most of the information from the system tables through other means.
Temporary Tables
SQL Server supports temporary tables. These tables have names that start with a number sign (#). If a temporary table is not dropped when a user disconnects, SQL Server automatically drops the temporary table. Temporary tables are not stored in the current database; they are stored in the tempdb system database.
There are two types of temporary tables:
- Local temporary tables
The names of these tables begin with one number sign (#). These tables are visible only to the connection that created them.
- Global temporary tables
The names of these tables begin with two number signs (##). These tables are visible to all connections. If the tables are not dropped explicitly before the connection that created them disconnects, they are dropped as soon as all other tasks stop referencing them. No new tasks can reference a global temporary table after the connection that created it disconnects. The association between a task and a table is always dropped when the current statement completes executing; therefore, global temporary tables are usually dropped soon after the connection that created them disconnects.
Many traditional uses of temporary tables can now be replaced with variables that have the table data type.
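For example, a local temporary table and its table-variable replacement might look like this (the table and column names are illustrative):

```sql
-- Local temporary table, stored in tempdb and dropped
-- automatically when the connection closes:
CREATE TABLE #RecentOrders (order_id int, order_date datetime)

-- Equivalent table variable, scoped to the current batch,
-- stored procedure, or function:
DECLARE @RecentOrders TABLE (order_id int, order_date datetime)
```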
Working with Tables
Users work with the data in tables using data manipulation language (DML) SQL statements:
-- Get a list of all employees named Smith:
SELECT emp_first_name, emp_last_name
FROM employee
WHERE emp_last_name = 'Smith'
-- Delete an employee who quit:
DELETE employee
WHERE emp_id = 'OP123'
-- Add a new employee:
INSERT INTO employee
VALUES ( 'OP456', 'Dean', 'Straight', '01/01/1960')
-- Change an employee name:
UPDATE employee
SET emp_last_name = 'Smith'
WHERE emp_id = 'OP456'
SQL Views
A view can be thought of as either a virtual table or a stored query. The data accessible through a view is not stored in the database as a distinct object. What is stored in the database is a SELECT statement. The result set of the SELECT statement forms the virtual table returned by the view. A user can use this virtual table by referencing the view name in Transact-SQL statements the same way a table is referenced. A view is used to do any or all of these functions:
- Restrict a user to specific rows in a table.
For example, allow an employee to see only the rows recording his or her work in a labor-tracking table.
- Restrict a user to specific columns.
For example, allow employees who do not work in payroll to see the name, office, work phone, and department columns in an employee table, but do not allow them to see any columns with salary information or personal information.
- Join columns from multiple tables so that they look like a single table.
- Aggregate information instead of supplying details.
For example, present the sum of a column, or the maximum or minimum value from a column.
Views are created by defining the SELECT statement that retrieves the data to be presented by the view. The data tables referenced by the SELECT statement are known as the base tables for the view. In this example, titleview in the pubs database is a view that selects data from three base tables to present a virtual table of commonly needed data:
CREATE VIEW titleview
AS
SELECT title, au_ord, au_lname, price, ytd_sales, pub_id
FROM authors AS a
JOIN titleauthor AS ta ON (a.au_id = ta.au_id)
JOIN titles AS t ON (t.title_id = ta.title_id)
You can then reference titleview in statements in the same way you would reference a table:
SELECT *
FROM titleview
A view can reference another view. For example, titleview presents information that is useful for managers, but a company typically discloses year-to-date figures only in quarterly or annual financial statements. A view can be built that selects all the titleview columns except au_ord and ytd_sales. This new view can be used by customers to get lists of available books without seeing the financial information:
CREATE VIEW Cust_titleview
AS
SELECT title, au_lname, price, pub_id
FROM titleview
Views can be used to partition data across multiple databases or instances of Microsoft® SQL Server™ 2000. Partitioned views can be used to distribute database processing across a group of servers. The group of servers has the same performance benefits as a cluster of servers, and can be used to support the processing needs of the largest Web sites or corporate data centers. An original table is subdivided into several member tables, each of which has a subset of the rows from the original table. Each member table can be placed in databases on separate servers. Each server also gets a partitioned view. The partitioned view uses the Transact-SQL UNION operator to combine the results of selects against all the member tables into a single result set that behaves exactly like a copy of the full original table. For example, a table is partitioned across three servers. On the first server you define a partitioned view similar to this:
CREATE VIEW PartitionedView AS
SELECT *
FROM MyDatabase.dbo.PartitionTable1
UNION ALL
SELECT *
FROM Server2.MyDatabase.dbo.PartitionTable2
UNION ALL
SELECT *
FROM Server3.MyDatabase.dbo.PartitionTable3
You define similar partitioned views on each of the other servers. With these three views, any Transact-SQL statements on any of the three servers that reference PartitionedView will see the same behavior as from the original table. It is as if a copy of the original table exists on each server, when in fact there is only one member table and a partitioned view on each server. Views in all versions of SQL Server are updatable (can be the target of UPDATE, DELETE, or INSERT statements), as long as the modification affects only one of the base tables referenced by the view, for example:
-- Increase the prices for publisher '0736' by 10%.
UPDATE titleview
SET price = price * 1.10
WHERE pub_id = '0736'
GO
SQL Server 2000 supports more complex types of INSERT, UPDATE, and DELETE statements that reference views. INSTEAD OF triggers can be defined on a view to specify the individual updates that must be performed against the base tables to support the INSERT, UPDATE, or DELETE statement. Also, partitioned views support INSERT, UPDATE, and DELETE statements that modify multiple member tables referenced by the view.
Indexed views are a SQL Server 2000 feature that greatly improves the performance of complex views of the type usually found in data warehouses or other decision support systems.
Views are called virtual tables because the result set of a view is not usually saved in the database. The result set for a view is dynamically incorporated into the logic of the statement, and the result set is built dynamically at run time.
SQL Stored Procedures
A stored procedure is a group of Transact-SQL statements compiled into a single execution plan.
Microsoft® SQL Server™ 2000 stored procedures return data in four ways:
- Output parameters, which can return either data (such as an integer or character value) or a cursor variable (cursors are result sets that can be retrieved one row at a time).
- Return codes, which are always an integer value.
- A result set for each SELECT statement contained in the stored procedure or any other stored procedures called by the stored procedure.
- A global cursor that can be referenced outside the stored procedure.
Stored procedures assist in achieving a consistent implementation of logic across applications. The SQL statements and logic needed to perform a commonly performed task can be designed, coded, and tested once in a stored procedure. Each application needing to perform that task can then simply execute the stored procedure. Coding business logic into a single stored procedure also offers a single point of control for ensuring that business rules are correctly enforced.
Stored procedures can also improve performance. Many tasks are implemented as a series of SQL statements. Conditional logic applied to the results of the first SQL statements determines which subsequent SQL statements are executed. If these SQL statements and conditional logic are written into a stored procedure, they become part of a single execution plan on the server. The results do not have to be returned to the client to have the conditional logic applied; all of the work is done on the server. The IF statement in this example shows embedding conditional logic in a procedure to keep from sending a result set to the application:
IF (@QuantityOrdered < (SELECT QuantityOnHand
FROM Inventory
WHERE PartID = @PartOrdered) )
BEGIN
-- SQL statements to update tables and process order.
END
ELSE
BEGIN
-- SELECT statement to retrieve the IDs of alternate items
-- to suggest as replacements to the customer.
END
Applications do not need to transmit all of the SQL statements in the procedure; they have to transmit only an EXECUTE or CALL statement containing the name of the procedure and the values of the parameters.
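For example, instead of sending the entire conditional batch shown above, an application might send only a call like this (the procedure name is hypothetical; the parameters match the earlier example):

```sql
EXEC dbo.ProcessOrder
    @PartOrdered = 'P123',
    @QuantityOrdered = 5
```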
Stored procedures can also shield users from needing to know the details of the tables in the database. If a set of stored procedures supports all of the business functions users need to perform, users never need to access the tables directly; they can just execute the stored procedures that model the business processes with which they are familiar.
An illustration of this use of stored procedures is the SQL Server system stored procedures used to insulate users from the system tables. SQL Server includes a set of system stored procedures whose names usually start with sp_. These system stored procedures support all of the administrative tasks required to run a SQL Server system. You can administer a SQL Server system using the Transact-SQL administration-related statements (such as CREATE TABLE) or the system stored procedures, and never need to directly update the system tables.
Stored Procedures and Execution Plans
In SQL Server version 6.5 and earlier, stored procedures were a way to partially precompile an execution plan. At the time the stored procedure was created, a partially compiled execution plan was stored in a system table. Executing a stored procedure was more efficient than executing an SQL statement because SQL Server did not have to compile an execution plan completely; it only had to finish optimizing the stored plan for the procedure. Also, the fully compiled execution plan for the stored procedure was retained in the SQL Server procedure cache, meaning that subsequent executions of the stored procedure could use the precompiled execution plan.
SQL Server 2000 and SQL Server version 7.0 incorporate a number of changes to statement processing that extend many of the performance benefits of stored procedures to all SQL statements. SQL Server 2000 and SQL Server 7.0 do not save a partially compiled plan for stored procedures when they are created. A stored procedure is compiled at execution time, like any other Transact-SQL statement. SQL Server 2000 and SQL Server 7.0 retain execution plans for all SQL statements in the procedure cache, not just stored procedure execution plans. The database engine uses an efficient algorithm for comparing new Transact-SQL statements with the Transact-SQL statements of existing execution plans. If the database engine determines that a new Transact-SQL statement matches the Transact-SQL statement of an existing execution plan, it reuses the plan. This reduces the relative performance benefit of precompiling stored procedures by extending execution plan reuse to all SQL statements.
Temporary Stored Procedures
SQL Server 2000 also supports temporary stored procedures that, like temporary tables, are dropped automatically when you disconnect. Temporary stored procedures are stored in tempdb and are useful when connected to earlier versions of SQL Server. Temporary stored procedures can be used when an application builds dynamic Transact-SQL statements that are executed several times. Rather than have the Transact-SQL statements recompiled each time, you can create a temporary stored procedure that is compiled on the first execution, and then execute the precompiled plan multiple times. Heavy use of temporary stored procedures, however, can lead to contention on the system tables in tempdb.
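A minimal sketch of a local temporary stored procedure, using the pubs sample database (the procedure name is illustrative):

```sql
-- The # prefix makes this a local temporary procedure in tempdb;
-- it is dropped automatically when the connection closes.
CREATE PROCEDURE #TitlesAbovePrice @price money
AS
SELECT title_id, title, price
FROM pubs.dbo.titles
WHERE price > @price
GO

-- Subsequent executions reuse the compiled plan:
EXEC #TitlesAbovePrice $15.00
```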
Two features of SQL Server 2000 and SQL Server 7.0 eliminate the need for using temporary stored procedures:
- Execution plans from prior SQL statements can be reused. This is especially powerful when coupled with the use of the new sp_executesql system stored procedure.
- The prepare/execute model of OLE DB and ODBC is supported natively, without using any stored procedures.
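For example, sp_executesql parameterizes a statement so its execution plan can be reused across calls with different values:

```sql
-- The statement text stays constant, so the cached plan is
-- matched and reused; only the parameter value changes.
EXEC sp_executesql
    N'SELECT title_id, title, price
      FROM pubs.dbo.titles
      WHERE price > @price',
    N'@price money',
    @price = $15.00
```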
Stored Procedure Example
This simple stored procedure example illustrates three ways stored procedures can return data:
- It first issues a SELECT statement that returns a result set summarizing the order activity for the stores in the sales table.
- It then issues a SELECT statement that fills an output parameter.
- Finally, it has a RETURN statement with a SELECT statement that returns an integer. Return codes are generally used to pass back error-checking information. This procedure runs without errors, so the return code is used instead to illustrate how return codes are filled.
USE Northwind
GO
DROP PROCEDURE OrderSummary
GO
CREATE PROCEDURE OrderSummary @MaxQuantity INT OUTPUT AS
-- SELECT to return a result set summarizing
-- employee sales.
SELECT Ord.EmployeeID, SummSales = SUM(OrDet.UnitPrice * OrDet.Quantity)
FROM Orders AS Ord
JOIN [Order Details] AS OrDet ON (Ord.OrderID = OrDet.OrderID)
GROUP BY Ord.EmployeeID
ORDER BY Ord.EmployeeID
-- SELECT to fill the output parameter with the
-- maximum quantity from Order Details.
SELECT @MaxQuantity = MAX(Quantity) FROM [Order Details]
-- Return the number of all items ordered.
RETURN (SELECT SUM(Quantity) FROM [Order Details])
GO
-- Test the stored procedure.
-- DECLARE variables to hold the return code
-- and output parameter.
DECLARE @OrderSum INT
DECLARE @LargestOrder INT
-- Execute the procedure, which returns
-- the result set from the first SELECT.
EXEC @OrderSum = OrderSummary @MaxQuantity = @LargestOrder OUTPUT
-- Use the return code and output parameter.
PRINT 'The size of the largest single order was: ' +
CONVERT(CHAR(6), @LargestOrder)
PRINT 'The sum of the quantities ordered was: ' +
CONVERT(CHAR(6), @OrderSum)
GO
The output from running this sample is:
EmployeeID SummSales
----------- --------------------------
1 202,143.71
2 177,749.26
3 213,051.30
4 250,187.45
5 75,567.75
6 78,198.10
7 141,295.99
8 133,301.03
9 82,964.00
The size of the largest single order was: 130
The sum of the quantities ordered was: 51317
SQL User-Defined Functions
Functions in programming languages are subroutines used to encapsulate frequently performed logic. Any code that must perform the logic incorporated in a function can call the function rather than having to repeat all of the function logic.
Microsoft® SQL Server™ 2000 supports two types of functions:
• Built-in functions
Operate as defined in the Transact-SQL Reference and cannot be modified. The functions can be referenced only in Transact-SQL statements using the syntax defined in the Transact-SQL Reference.
• User-defined functions
Allow you to define your own Transact-SQL functions using the CREATE FUNCTION statement.
User-defined functions take zero or more input parameters, and return a single value. Some user-defined functions return a single, scalar data value, such as an int, char, or decimal value.
For example, this statement creates a simple function that returns a decimal:
CREATE FUNCTION CubicVolume
-- Input dimensions in centimeters.
(@CubeLength decimal(4,1), @CubeWidth decimal(4,1),
@CubeHeight decimal(4,1) )
RETURNS decimal(12,3) -- Cubic Centimeters.
AS
BEGIN
RETURN ( @CubeLength * @CubeWidth * @CubeHeight )
END
This function can then be used anywhere a decimal expression is allowed, such as in a computed column for a table:
CREATE TABLE Bricks
(
BrickPartNmbr int PRIMARY KEY,
BrickColor nchar(20),
BrickHeight decimal(4,1),
BrickLength decimal(4,1),
BrickWidth decimal(4,1),
BrickVolume AS
(
dbo.CubicVolume(BrickHeight,
BrickLength, BrickWidth)
)
)
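Inserting a row then shows the computed volume, assuming the Bricks table and CubicVolume function above have been created (the values are illustrative):

```sql
INSERT INTO Bricks (BrickPartNmbr, BrickColor,
    BrickHeight, BrickLength, BrickWidth)
VALUES (1, N'Red', 5.0, 20.0, 10.0)

-- BrickVolume is computed as 5.0 * 20.0 * 10.0 = 1000.000
SELECT BrickPartNmbr, BrickVolume
FROM Bricks
```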
SQL Server 2000 also supports user-defined functions that return a table data type:
• A function can declare an internal table variable, insert rows into the variable, and then return the variable as its return value.
• A class of user-defined functions, known as in-line functions, returns the result set of a SELECT statement as a variable of type table.
These functions can be used in places where table expressions can be specified.
User-defined functions that return a table can be powerful alternatives to views. A user-defined function that returns a table can be used where table or view expressions are allowed in Transact-SQL queries. Views are limited to a single SELECT statement; however, user-defined functions can contain additional statements that allow more powerful logic than is possible in views.
A user-defined function that returns a table can also replace stored procedures that return a single result set. The table returned by a user-defined function can be referenced in the FROM clause of a Transact-SQL statement, whereas stored procedures that return result sets cannot. For example, fn_EmployeesInDept is a user-defined function that returns a table and can be invoked by a SELECT statement:
SELECT *
FROM tb_Employees AS E,
dbo.fn_EmployeesInDept('shipping') AS EID
WHERE E.EmployeeID = EID.EmployeeID
This is an example of a statement that creates a function in the Northwind database that will return a table:
CREATE FUNCTION LargeOrderShippers ( @FreightParm money )
RETURNS @OrderShipperTab TABLE
(
ShipperID int,
ShipperName nvarchar(80),
OrderID int,
ShippedDate datetime,
Freight money
)
AS
BEGIN
INSERT @OrderShipperTab
SELECT S.ShipperID, S.CompanyName,
O.OrderID, O.ShippedDate, O.Freight
FROM Shippers AS S
INNER JOIN Orders AS O ON (S.ShipperID = O.ShipVia)
WHERE O.Freight > @FreightParm
RETURN
END
In this function, the local return variable name is @OrderShipperTab. Statements in the function build the table result returned by the function by inserting rows into the variable @OrderShipperTab. External statements invoke the function to reference the table returned by the function:
SELECT *
FROM LargeOrderShippers( $500 )
Constraints, Rules, Defaults, and Triggers
Table columns have properties other than data type and size. These other properties are an important part in ensuring the integrity of data in a database:
• Data integrity refers to each occurrence of a column having a correct data value.
The data values must be of the right data type and in the correct domain.
• Referential integrity indicates that the relationships between tables have been properly maintained.
Data in one table should only point to existing rows in another table; it should not point to rows that do not exist.
Objects used to maintain both types of integrity include:
• Constraints
• Rules
• Defaults
• Triggers
Constraints
Constraints allow you to define the way Microsoft® SQL Server™ 2000 automatically enforces the integrity of a database. Constraints define rules regarding the values allowed in columns and are the standard mechanism for enforcing integrity. Using constraints is preferred to using triggers, rules, and defaults. The query optimizer also uses constraint definitions to build high-performance query execution plans.
Classes of Constraints
SQL Server 2000 supports five classes of constraints.
• NOT NULL specifies that the column does not accept NULL values.
• CHECK constraints enforce domain integrity by limiting the values that can be placed in a column.
A CHECK constraint specifies a Boolean (evaluates to TRUE or FALSE) search condition that is applied to all values entered for the column; all values that do not evaluate to TRUE are rejected. You can specify multiple CHECK constraints for each column. This sample shows the creation of a named constraint, chk_id, that further enforces the domain of the primary key by ensuring that only numbers within a specified range are entered for the key.
CREATE TABLE cust_sample
(
cust_id int PRIMARY KEY,
cust_name char(50),
cust_address char(50),
cust_credit_limit money,
CONSTRAINT chk_id CHECK (cust_id BETWEEN 0 and 10000 )
)
• UNIQUE constraints enforce the uniqueness of the values in a set of columns.
No two rows in the table are allowed to have the same non-null values for the columns in a UNIQUE constraint. Primary keys also enforce uniqueness, but primary keys do not allow null values. A UNIQUE constraint is preferred over a unique index.
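For example, a UNIQUE constraint on a column other than the primary key (the table name is hypothetical):

```sql
CREATE TABLE dept_sample
(dept_id   int PRIMARY KEY,
 dept_name char(30) UNIQUE)  -- no two departments may share a name
```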
• PRIMARY KEY constraints identify the column or set of columns whose values uniquely identify a row in a table.
No two rows in a table can have the same primary key value. You cannot enter NULL for any column in a primary key. NULL is a special value in databases that represents an unknown value, which is distinct from a blank or 0 value. Using a small integer column as a primary key is recommended. Each table should have a primary key.
A table may have more than one combination of columns that could uniquely identify the rows in a table; each combination is a candidate key. The database administrator picks one of the candidate keys to be the primary key. For example, in the part_sample table both part_nmbr and part_name could be candidate keys, but only part_nmbr is chosen as a primary key.
CREATE TABLE part_sample
(part_nmbr int PRIMARY KEY,
part_name char(30),
part_weight decimal(6,2),
part_color char(15) )
• FOREIGN KEY constraints identify the relationships between tables.
A foreign key in one table points to a candidate key in another table. Foreign keys prevent actions that would leave rows with foreign key values when there are no candidate keys with that value. In the following sample, the order_part table establishes a foreign key referencing the part_sample table defined earlier. Usually, order_part would also have a foreign key against an order table, but this is a simple example.
CREATE TABLE order_part
(order_nmbr int,
part_nmbr int
FOREIGN KEY REFERENCES part_sample(part_nmbr)
ON DELETE NO ACTION,
qty_ordered int)
GO
You cannot insert a row with a foreign key value (except NULL) if there is no candidate key with that value. The ON DELETE clause controls what actions are taken if you attempt to delete a row to which existing foreign keys point. The ON DELETE clause has two options:
• NO ACTION specifies that the deletion fails with an error.
• CASCADE specifies that all the rows with foreign keys pointing to the deleted row are also deleted.
The ON UPDATE clause defines the actions that are taken if you attempt to update a candidate key value to which existing foreign keys point. It also supports the NO ACTION and CASCADE options.
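A variant of the order_part table above sketches the CASCADE options (the table name is illustrative):

```sql
CREATE TABLE order_part_cascade
(order_nmbr int,
 part_nmbr int
    FOREIGN KEY REFERENCES part_sample(part_nmbr)
    ON DELETE CASCADE   -- deleting a part also deletes its order rows
    ON UPDATE CASCADE,  -- renumbering a part updates its order rows
 qty_ordered int)
```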
Column and Table Constraints
Constraints can be column constraints or table constraints:
• A column constraint is specified as part of a column definition and applies only to that column (the constraints in the earlier samples are column constraints).
• A table constraint is declared independently from a column definition and can apply to more than one column in a table.
Table constraints must be used when more than one column must be included in a constraint.
For example, if a table has two or more columns in the primary key, you must use a table constraint to include both columns in the primary key. Consider a table that records events happening in a computer in a factory. Assume that events of several types can happen at the same time, but that no two events happening at the same time can be of the same type. This can be enforced in the table by including both the type and time columns in a two-column primary key.
CREATE TABLE factory_process
(event_type int,
event_time datetime,
event_site char(50),
event_desc char(1024),
CONSTRAINT event_key PRIMARY KEY (event_type, event_time) )
Rules
Rules are a backward-compatibility feature that perform some of the same functions as CHECK constraints. CHECK constraints are the preferred, standard way to restrict the values in a column. CHECK constraints are also more concise than rules; there can only be one rule applied to a column, but multiple CHECK constraints can be applied. CHECK constraints are specified as part of the CREATE TABLE statement, while rules are created as separate objects and then bound to the column.
This example creates a rule that performs the same function as the CHECK constraint example in the preceding topic. The CHECK constraint is the preferred method to use in Microsoft® SQL Server™ 2000.
CREATE RULE id_chk AS @id BETWEEN 0 and 10000
GO
CREATE TABLE cust_sample
(
cust_id int
PRIMARY KEY,
cust_name char(50),
cust_address char(50),
cust_credit_limit money
)
GO
sp_bindrule id_chk, 'cust_sample.cust_id'
GO
Defaults
Defaults specify what values are used in a column if you do not specify a value for the column when inserting a row. Defaults can be anything that evaluates to a constant, such as:
• Constant
• Built-in function
• Mathematical expression
There are two ways to apply defaults:
• Create a default definition using the DEFAULT keyword in CREATE TABLE to assign a constant expression as a default on a column.
This is the preferred, standard method. It is also the more concise way to specify a default.
• Create a default object using the CREATE DEFAULT statement and bind it to columns using the sp_bindefault system stored procedure.
This is a backward compatibility feature.
This example creates a table using one of each type of default. It creates a default object to assign a default to one column, and binds the default object to the column. It then does a test insert without specifying values for the columns with defaults and retrieves the test row to verify the defaults were applied.
USE pubs
GO
CREATE TABLE test_defaults
(keycol smallint,
process_id smallint DEFAULT @@SPID, --Preferred default definition
date_ins datetime DEFAULT getdate(), --Preferred default definition
mathcol smallint DEFAULT 10 * 2, --Preferred default definition
char1 char(3),
char2 char(3) DEFAULT 'xyz') --Preferred default definition
GO
/* Illustration only, use DEFAULT definitions instead.*/
CREATE DEFAULT abc_const AS 'abc'
GO
sp_bindefault abc_const, 'test_defaults.char1'
GO
INSERT INTO test_defaults(keycol) VALUES (1)
GO
SELECT * FROM test_defaults
GO
The output of this sample is:
Default bound to column.
(1 row(s) affected)
keycol process_id date_ins mathcol char1 char2
------ ---------- --------------------------- ------- ----- -----
1 7 Oct 16 1997 8:34PM 20 abc xyz
(1 row(s) affected)
Triggers
Microsoft® SQL Server™ 2000 triggers are a special class of stored procedure defined to execute automatically when an UPDATE, INSERT, or DELETE statement is issued against a table or view. Triggers are powerful tools that sites can use to enforce their business rules automatically when data is modified. Triggers can extend the integrity checking logic of SQL Server constraints, defaults, and rules, although constraints and defaults should be used instead whenever they provide all the needed functionality.
Tables can have multiple triggers. The CREATE TRIGGER statement can be defined with the FOR UPDATE, FOR INSERT, or FOR DELETE clauses to target a trigger to a specific class of data modification actions. When FOR UPDATE is specified, the IF UPDATE (column_name) clause can be used to target a trigger to updates affecting a particular column.
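A sketch of targeting a trigger to a single column with IF UPDATE (the trigger name and message are hypothetical):

```sql
CREATE TRIGGER trg_title_price ON titles
FOR UPDATE
AS
IF UPDATE (price)
BEGIN
    -- The body runs only when the price column is changed.
    PRINT 'Price changed; review distributor pricing.'
END
```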
Triggers can automate the processing for a company. In an inventory system, update triggers can detect when a stock level reaches a reorder point and generate an order to the supplier automatically. In a database recording the processes in a factory, triggers can e-mail or page operators when a process exceeds defined safety limits.
The following trigger generates an e-mail whenever a new title is added in the pubs database:
CREATE TRIGGER reminder
ON titles
FOR INSERT
AS
EXEC master..xp_sendmail 'MaryM',
'New title, mention in the next report to distributors.'
Triggers contain Transact-SQL statements, much the same as stored procedures. Triggers, like stored procedures, return the result set generated by any SELECT statements in the trigger. Including SELECT statements in triggers, except statements that only fill parameters, is not recommended. This is because users do not expect to see any result sets returned by an UPDATE, INSERT, or DELETE statement.
You can use the FOR clause to specify when a trigger is executed:
• AFTER
The trigger executes after the statement that triggered it completes. If the statement fails with an error, such as a constraint violation or syntax error, the trigger is not executed. AFTER triggers cannot be specified for views; they can only be specified for tables. You can specify multiple AFTER triggers for each triggering action (INSERT, UPDATE, or DELETE). If you have multiple AFTER triggers for a table, you can use sp_settriggerorder to define which AFTER trigger fires first and which fires last. All other AFTER triggers for the table fire in an undefined order that you cannot control.
AFTER is the default in SQL Server 2000. You could not specify AFTER or INSTEAD OF in SQL Server version 7.0 or earlier; all triggers in those versions operated as AFTER triggers.
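For example, to make a particular AFTER trigger fire first for INSERT statements on its table (the trigger name is hypothetical):

```sql
EXEC sp_settriggerorder
    @triggername = 'trg_audit_insert',
    @order = 'First',
    @stmttype = 'INSERT'
```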
• INSTEAD OF
The trigger executes in place of the triggering action. INSTEAD OF triggers can be specified on both tables and views. You can define only one INSTEAD OF trigger for each triggering action (INSERT, UPDATE, and DELETE). INSTEAD OF triggers can be used to perform enhanced integrity checks on the data values supplied in INSERT and UPDATE statements. INSTEAD OF triggers also let you specify actions that allow views, which would normally not support updates, to be updatable.
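A minimal sketch of an INSTEAD OF trigger that makes a view updatable, using the MyCoDB Employees example from the introduction (the view and trigger names are hypothetical):

```sql
-- Hypothetical view exposing only the name columns:
CREATE VIEW EmployeeNames
AS
SELECT EmpId, LastName, FirstName
FROM Employees
GO

CREATE TRIGGER trg_EmployeeNames_insert ON EmployeeNames
INSTEAD OF INSERT
AS
BEGIN
    -- Route the insert to the base table, supplying a default
    -- department for the column the view does not expose.
    INSERT INTO Employees (EmpId, LastName, FirstName, Dept)
    SELECT EmpId, LastName, FirstName, 0
    FROM inserted
END
```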
SQL Server Enterprise Manager
SQL Server Enterprise Manager is the primary administrative tool for Microsoft® SQL Server™ 2000 and provides a Microsoft Management Console (MMC)–compliant user interface that allows users to:
- Define groups of servers running SQL Server.
- Register individual servers in a group.
- Configure all SQL Server options for each registered server.
- Create and administer all SQL Server databases, objects, logins, users, and permissions in each registered server.
- Define and execute all SQL Server administrative tasks on each registered server.
- Design and test SQL statements, batches, and scripts interactively by invoking SQL Query Analyzer.
- Invoke the various wizards defined for SQL Server.
MMC is a tool that presents a common interface for managing different server applications in a Microsoft Windows® network. Server applications provide a component called an MMC snap-in that presents MMC users with a user interface for managing the server application. SQL Server Enterprise Manager is the Microsoft SQL Server 2000 MMC snap-in.
To launch SQL Server Enterprise Manager, select the Enterprise Manager icon in the Microsoft SQL Server program group. On computers running Windows 2000, you can also launch SQL Server Enterprise Manager from Computer Management in Control Panel. MMC snap-ins launched from Computer Management do not have the ability to open child windows enabled by default. You may have to enable this option to use all the SQL Server Enterprise Manager features.
Note If you register additional SQL servers in Computer Management, and then either close Computer Management or connect to another computer, the servers will no longer appear in Computer Management. The registered servers will appear in SQL Server Enterprise Manager.
SQL Query Analyzer
SQL Query Analyzer is a graphical user interface for designing and testing Transact-SQL statements, batches, and scripts interactively. SQL Query Analyzer can be called from SQL Server Enterprise Manager.
SQL Query Analyzer offers:
- A free-form text editor for keying in Transact-SQL statements.
- Color-coding of Transact-SQL syntax to improve the readability of complex statements.
- Object browser and object search tools for easily finding the objects in a database and the structure of the objects.
- Templates that can be used to speed development of the Transact-SQL statements for creating SQL Server objects. Templates are files that include the basic structure of the Transact-SQL statements needed to create objects in a database.
- An interactive debugger for analyzing stored procedures.
- Results presented in either a grid or a free-form text window.
- Graphical diagram of the showplan information showing the logical steps built into the execution plan of a Transact-SQL statement.
This allows programmers to determine which part of a poorly performing query is consuming the most resources. Programmers can then explore changing the query in ways that minimize the resource usage while still returning the correct data.
- Index Tuning Wizard to analyze a Transact-SQL statement and the tables it references, to see whether adding indexes will improve the performance of the query.
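The showplan information described above can also be requested textually from any query tool. The sketch below assumes a hypothetical Employees table and asks the server for the estimated plan instead of executing the statement:

```sql
-- Show the estimated execution plan without running the query.
-- (Employees is a hypothetical table used for illustration.)
SET SHOWPLAN_TEXT ON
GO

SELECT LastName, FirstName
FROM Employees
WHERE LastName LIKE N'S%'
GO

SET SHOWPLAN_TEXT OFF
GO
```

The rows returned describe the logical operators in the plan; the graphical showplan in SQL Query Analyzer presents the same information as a diagram.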
Import and Export Data
The Import and Export Data item in the Microsoft® SQL Server™ program group starts the Data Transformation Services (DTS) Import/Export Wizard. The wizard walks users through the DTS functions of importing, exporting, validating, and transforming data and objects between heterogeneous OLE DB and ODBC data sources.
SQL Profiler
SQL Profiler is a tool that captures Microsoft® SQL Server™ 2000 events from a server. The events are saved in a trace file that can later be analyzed or used to replay a specific series of steps when trying to diagnose a problem. SQL Profiler is used for activities such as:
• Stepping through problem queries to find the cause of the problem.
• Finding and diagnosing slow-running queries.
• Capturing the series of SQL statements that lead to a problem. The saved trace can then be used to replicate the problem on a test server where the problem can be diagnosed.
• Monitoring the performance of SQL Server to tune workloads.
SQL Profiler also supports auditing the actions performed on instances of SQL Server. Audits record security-related actions for later review by a security administrator. SQL Server 2000 auditing meets C2 security certification requirements.
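The same event capture that SQL Profiler drives interactively can be scripted with the sp_trace_* system stored procedures. The following is a minimal sketch; the trace file path is hypothetical, and the event and column IDs (10 for RPC:Completed, 12 for SQL:BatchCompleted, 13 for Duration) are taken from the sp_trace_setevent documentation:

```sql
-- Create a server-side trace that records the duration of completed statements.
DECLARE @traceid int
EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\Traces\slow_queries'  -- 2 = file rollover

-- Capture the Duration column (13) for RPC:Completed (10) and SQL:BatchCompleted (12).
EXEC sp_trace_setevent @traceid, 10, 13, 1
EXEC sp_trace_setevent @traceid, 12, 13, 1

-- Start the trace; pass 0 to stop it, or 2 to close and delete its definition.
EXEC sp_trace_setstatus @traceid, 1
```

The resulting .trc file can be opened in SQL Profiler for analysis or replay.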
SQL Server Service Manager
SQL Server Service Manager is used to start, stop, and pause the Microsoft® SQL Server™ 2000 components on the server. These components run as services on Microsoft Windows NT® or Microsoft Windows® 2000 and as separate executable programs on Microsoft Windows 95 and Microsoft Windows 98:
- SQL Server service
Implements the SQL Server database engine. There is one SQL Server service for each instance of SQL Server running on the computer.
- SQL Server Agent service
Implements the agent that runs scheduled SQL Server administrative tasks. There is one SQL Server Agent service for each instance of SQL Server running on the computer.
- Microsoft Search service (Windows NT and Windows 2000 only)
Implements the full-text search engine. There is only one service, regardless of the number of SQL Server instances on the computer.
- MSDTC service (Windows NT and Windows 2000 only)
Manages distributed transactions. There is only one service, regardless of the number of SQL Server instances on the computer.
- MSSQLServerOLAPService service (Windows NT and Windows 2000 only)
Implements SQL Server 2000 Analysis Services. There is only one service, regardless of the number of SQL Server instances on the computer.
Operating the SQL Server Service Manager
SQL Server Service Manager is a taskbar application and follows the standard behavior of taskbar applications. When minimized, the SQL Server Service Manager icon appears in the area of the taskbar clock on the right of the taskbar. To get a menu that includes all the tasks SQL Server Service Manager supports, right-click the taskbar item.
To maximize SQL Server Service Manager, double-click the icon. When SQL Server Service Manager is maximized, clicking the close button of the SQL Server Service Manager window does not terminate the application; it only minimizes SQL Server Service Manager to the taskbar. To terminate SQL Server Service Manager, right-click the SQL Server Service Manager icon on the taskbar, and then select the File/Exit menu item.
SQL Server and ADO
Microsoft® ActiveX® Data Objects are a set of Automation objects that consume the OLE DB API and allow applications to consume data from OLE DB data sources. This includes data stored in many different formats, not only SQL databases. The ActiveX Data Object (ADO) API can be used from applications written in any automation-enabled language, such as Microsoft Visual Basic®, Microsoft Visual C++®, Microsoft Visual J++®, and Microsoft Visual FoxPro®.
ADO applications access data through OLE DB providers. Microsoft SQL Server™ 2000 includes a native Microsoft OLE DB Provider for SQL Server used by ADO applications to access the data in SQL Server. In SQL Server version 6.5 and earlier, ADO applications had to use the OLE DB Provider for ODBC layered over the Microsoft SQL Server ODBC driver. Although ADO applications can still use the OLE DB Provider for ODBC with the SQL Server ODBC driver, it is more efficient to use only the OLE DB Provider for SQL Server.
ADO is the API most recommended for general-purpose data access to SQL Server for these reasons:
• ADO is easy to learn and program.
• ADO has the feature set required by most general-purpose applications.
• ADO enables programmers to quickly produce robust applications.
The core capabilities of the OLE DB specification provide all the data access functionality needed by most applications. In addition, OLE DB allows individual providers to define provider-specific mechanisms to support additional features of the data engine accessed by the provider. ADO exposes the core capabilities of OLE DB, but does not expose provider-specific features. ADO applications cannot access a few SQL Server features exposed through provider-specific features of the OLE DB Provider for SQL Server, such as the IRowsetFastLoad bulk copy methods, SQL Server-specific extended diagnostic information, and auto-fetch cursors.
ADO also supports the XML functionality of SQL Server 2000. This provides an easy migration path for Internet applications coded to use ADO to retrieve a rowset and then convert the rowset into an XML document. The application can instead use ADO to execute an XPath query or a SELECT statement with a FOR XML clause, in which case the result set is built as an XML document on the server rather than having to be converted on the application computer.
ADO has evolved from the earlier, ODBC-based Remote Data Objects (RDO) and Data Access Objects (DAO) APIs. RDO and DAO applications can be converted to ADO, and RDO and DAO application programmers quickly learn ADO. ADO is used extensively in Active Server Pages (ASP).
SQL Server and OLE DB
OLE DB is an API that allows COM applications to consume data from OLE DB data sources. OLE DB data sources include data stored in many different formats, not only SQL databases. An application uses an OLE DB provider to access an OLE DB data source. An OLE DB provider is a COM component that accepts calls to the OLE DB API and does whatever is necessary to process that request against the data source.
Microsoft® SQL Server™ 2000 includes a native Microsoft OLE DB Provider for SQL Server used by OLE DB applications to access the data in SQL Server. The OLE DB Provider for SQL Server complies with the OLE DB 2.0 specification. Each OLE DB provider supports a command language; the OLE DB Provider for SQL Server accepts the command syntax specified as DBGUID_SQL. DBGUID_SQL syntax is primarily SQL-92 syntax with ODBC escape sequences.
In SQL Server version 6.5 and earlier, OLE DB applications had to use the OLE DB Provider for ODBC layered over the Microsoft SQL Server ODBC driver. While OLE DB applications can still use the OLE DB Provider for ODBC with the SQL Server ODBC driver, it is more efficient to use only the OLE DB Provider for SQL Server.
OLE DB is the API recommended for tools, utilities, or system level development needing either top performance or access to SQL Server features not exposed through ADO. The core capabilities of the OLE DB specification provide all the data access functionality needed by most applications. In addition, OLE DB allows individual providers to define provider-specific mechanisms to support additional features of the data engine accessed by the provider. ADO applications cannot access some SQL Server features exposed through provider-specific features of the OLE DB Provider for SQL Server, so applications needing to use the provider-specific features of the OLE DB Provider for SQL Server must use the OLE DB API. These features include:
- An IRowsetFastLoad interface to the SQL Server bulk copy component.
- An ISQLServerErrorInfo interface to get SQL Server-specific information from messages and errors.
- A LINKEDSERVERS rowset that exposes catalog information from the linked servers used in SQL Server distributed queries.
- Various provider-specific properties to control SQL Server-specific behaviors.
OLE DB also supports the XML functionality of SQL Server 2000. This provides an easy migration path for Internet applications coded to use OLE DB to retrieve a rowset and then convert it into an XML document. The application can instead use OLE DB to execute an XPath query or a SELECT statement with a FOR XML clause, in which case the result set is built as an XML document on the server rather than having to be converted on the application computer. Most applications working with XML are written with ADO or use URLs, which are less complex than OLE DB.
SQL Server and ODBC
Open Database Connectivity (ODBC) is a Call-Level Interface (CLI) that allows C and C++ applications to access data from ODBC data sources. A CLI is an API consisting of functions an application calls to obtain a set of services. ODBC data sources include data stored in different formats, not just SQL databases. An application uses an ODBC driver to access a data source. An ODBC driver is a dynamic-link library (DLL) that accepts calls to the ODBC API functions and does whatever is necessary to process that request against the data source.
ODBC is aligned with these specifications and standards defining a CLI for data access:
- The X/Open CAE Specification "Data Management: SQL Call-Level Interface (CLI)"
- ISO/IEC 9075-3:1995(E) Call-Level Interface (SQL/CLI)
ODBC has been widely accepted by database programmers, and several database vendors or third-party companies supply ODBC drivers. Several other Microsoft data access APIs were defined as simplified object models over ODBC, such as:
- Remote Data Objects (RDO)
- Data Access Objects (DAO)
- Microsoft Foundation Classes (MFC) Database Classes
Microsoft® SQL Server™ 2000 includes a native Microsoft SQL Server ODBC driver used by ODBC applications to access the data in SQL Server. The SQL Server ODBC Driver complies with Level 2 of the ODBC 3.51 specification and exposes all the features of SQL Server. In SQL Server 2000 all of the SQL Server utilities except isql use the ODBC API and the SQL Server ODBC Driver.
ODBC can be used in tools, utilities, or system level development needing either top performance or access to SQL Server features, and which are not COM applications. ODBC, like OLE DB, allows individual drivers to define driver-specific mechanisms to support additional features of the data engine accessed by the driver. These features include:
- A set of bulk copy functions based on the earlier DB-Library bulk copy functions.
- Extensions to the ODBC diagnostic functions and records to get SQL Server-specific information from messages and errors.
- A set of functions that exposes catalog information from the linked servers used in SQL Server distributed queries.
- Various driver-specific attributes and connection string keywords to control SQL Server–specific behaviors.
Data Transformation Services API
The Data Transformation Services (DTS) application programming interface (API) is a set of objects encapsulating services that assist with building a data warehouse. DTS can be used in applications written in languages that support Automation or COM:
• DTS transfers data between heterogeneous OLE DB data sources.
• DTS performs customized transformations that can convert detailed online transaction processing (OLTP) data to a summarized form for easy analysis of trend information.
SQL Server and Embedded SQL
Embedded SQL (ESQL) is a SQL-92 standard application programming interface (API) for SQL database access. ESQL requires a two-step compilation process:
1. A precompiler translates Embedded SQL statements into commands in the programming language used to write the application. The generated statements are specific to the database that supplied the precompiler, so although the original source is generic to ESQL, the generated statements and the final executable file are specific to one database vendor.
2. The source generated by the precompiler is then compiled using the compiler for the application programming language.
Embedded SQL has a simpler syntax than COM APIs such as OLE DB or call-level interfaces such as ODBC, so it is easier to learn and program. It is less flexible than OLE DB or ODBC, where well-written applications can switch from one DBMS to another by simply switching drivers or providers. OLE DB and ODBC are also better at dealing with environments where the SQL statements are not known when the application is compiled, such as when developing ad hoc query tools.
Microsoft® SQL Server™ 2000 provides an Embedded SQL precompiler for C applications. The SQL Server precompiler translates Embedded SQL statements as calls to the appropriate DB-Library API functions. The Microsoft implementation of ESQL has the same restrictions as DB-Library applications.
SQL Server is designed to support COBOL Embedded SQL applications compiled with third-party Embedded SQL precompilers that support Microsoft SQL Server.
Starting, Pausing, and Stopping SQL Server
Before you log in to an instance of Microsoft® SQL Server™, you need to know how to start, pause, and stop an instance of SQL Server. After you are logged in, you can perform tasks such as administering the server or querying a database.
Using the SQL Server Service
When you start an instance of SQL Server, you are starting the SQL Server service. After you start the SQL Server service, users can establish new connections to the server. The SQL Server service can be started and stopped as a Microsoft Windows NT® 4.0 or Windows® 2000 service, either locally or remotely. The SQL Server service is referred to as MSSQLServer if it is the default instance, or MSSQL$instancename if it is a named instance.
Using SQL Server Service Manager
If you are running Microsoft Windows 98, SQL Server Service Manager can be used to start, pause, stop, and check the state of local services, though it cannot administer services remotely.
If you have to restart your computer, SQL Server Service Manager appears automatically and the default service is displayed. It is possible to change the default service on the local computer through the SQL Server Service Manager. When you restart the computer, the default service will now be displayed in SQL Server Service Manager. For example, if you change the default service to SQL Server Agent service, and then shut down the computer, the next time you start it, SQL Server Agent service will be displayed in SQL Server Service Manager.
SQL Server Service Manager can also be used to start, pause, or stop an instance of SQL Server 2000 Analysis Services.
Running SQL Server
Running SQL Server on a Network
For SQL Server to communicate over the network, the SQL Server service must be running. By default, Microsoft Windows NT® 4.0 and Windows® 2000 automatically start the built-in SQL Server service. To find out whether the SQL Server service has been started, at the command prompt, type:
net start
If the SQL Server service has been started, MSSQLServer appears in the list of started services:
C:\> net start
These Windows NT services are started:
ClipBook Server
Computer Browser
EventLog
Messenger
MSSQLServer
Network DDE
Network DDE DSDM
Server
Workstation
The command completed successfully.
If the SQL Server service has not been started, at the command prompt, type:
net start mssqlserver
The following message indicates that the service has been started:
The MSSQLServer service was started successfully.
You can also use the Services application in Control Panel to check service status and to start and stop services. For more information, see the Windows NT 4.0 and Windows 2000 documentation.
Running SQL Server Without a Network
When running an instance of SQL Server without a network, you do not need to start the built-in SQL Server service. Because SQL Server Enterprise Manager, SQL Server Service Manager, and the net start and net stop commands are functional even without a network, the procedures for starting and stopping an instance of SQL Server are identical for a network or stand-alone operation.
When connecting to an instance of a stand-alone SQL Server from a local client such as osql, you bypass the network and connect directly to the instance of SQL Server by using a local pipe. The difference between a local pipe and a network pipe is whether you are using a network. Both local and network pipes establish a connection with an instance of SQL Server by using the standard pipe (\pipe\sql\query), unless otherwise directed.
When you connect to an instance of a local SQL Server without specifying a server name, you are using a local pipe. When you connect to an instance of a local SQL Server and specify a server name explicitly, you are using either a network pipe or another network interprocess communication (IPC) mechanism, such as Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX) (assuming you have configured SQL Server to use multiple networks). Because a stand-alone SQL Server does not support network pipes, you must omit the unnecessary /Sserver_name argument when connecting to the instance of SQL Server from a client. For example, to connect to a stand-alone instance of SQL Server from osql, type:
osql /Usa /P
Stopping SQL Server
You can stop an instance of Microsoft® SQL Server™ locally from the server or remotely from a client or another server. If you stop an instance of SQL Server without pausing it, all server processes are terminated immediately. Stopping an instance of SQL Server prevents new connections and disconnects current users.
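Besides the Windows service controls, an instance can be stopped from any query tool with the Transact-SQL SHUTDOWN statement (permission defaults to members of the sysadmin and serveradmin roles):

```sql
-- Waits for currently running statements to finish and checkpoints
-- every database before stopping the instance.
SHUTDOWN

-- Stops immediately without checkpointing; the databases are
-- recovered when the instance is next started.
-- SHUTDOWN WITH NOWAIT
```

Like stopping the service, SHUTDOWN prevents new connections and disconnects current users.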
Using the Web Assistant Wizard
You can use the Web Assistant Wizard to generate standard HTML files from Microsoft® SQL Server™ data. The Web Assistant Wizard generates HTML files by using Transact-SQL queries, stored procedures, and extended stored procedures. You can use the wizard to generate an HTML file on a one-time basis or as a regularly scheduled SQL Server task. You can also update an HTML file using a trigger.
With the Web Assistant Wizard, you can:
- Schedule a task to update a Web page automatically. For example, you can update a price list when a new item is added or a price is changed, thereby maintaining a dynamic inventory and price list for customers and sales staff.
- Publish and distribute management reports, including the latest sales statistics, resource allocations, or other SQL Server data.
- Publish server reports with information about who is accessing the server currently, and about which locks are being held by which users.
- Publish information outside of SQL Server using extended stored procedures.
- Publish server jump lists using a table of favorite Web sites.
- Use the sp_makewebtask stored procedure to generate an HTML file. This system stored procedure can be called by a Transact-SQL program. You can also call system stored procedures to run or drop the task.
The Web Assistant Wizard runs from SQL Server Enterprise Manager.
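As a sketch of calling sp_makewebtask directly from Transact-SQL, with a hypothetical output path and query:

```sql
-- Write a one-off HTML page listing products from the Northwind sample database.
-- The output path is hypothetical; the account running SQL Server must be able to write to it.
EXEC sp_makewebtask
    @outputfile   = N'C:\Inetpub\wwwroot\pricelist.htm',
    @query        = N'SELECT ProductName, UnitPrice FROM Northwind.dbo.Products',
    @resultstitle = N'Current Price List'
```

The same procedure can be scheduled as a SQL Server Agent job or invoked from a trigger to keep the page current.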
Retrieving and Writing XML Data
You can execute SQL queries to return results as XML rather than standard rowsets. These queries can be executed directly or from within stored procedures. To retrieve results directly, you use the FOR XML clause of the SELECT statement, and within the FOR XML clause you specify an XML mode: RAW, AUTO, or EXPLICIT.
For example, this SELECT statement retrieves information from the Customers and Orders tables in the Northwind database. The query specifies the AUTO mode in the FOR XML clause:
SELECT Customers.CustomerID, ContactName, CompanyName,
Orders.CustomerID, OrderDate
FROM Customers, Orders
WHERE Customers.CustomerID = Orders.CustomerID
AND (Customers.CustomerID = N'ALFKI'
OR Customers.CustomerID = N'XYZAA')
ORDER BY Customers.CustomerID
FOR XML AUTO
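For comparison, RAW mode returns each row as a generic <row> element rather than elements named after the tables, while EXPLICIT mode lets the query define the shape of the resulting tree. A minimal RAW example against the same Northwind data:

```sql
SELECT CustomerID, CompanyName
FROM Customers
WHERE CustomerID = N'ALFKI'
FOR XML RAW
-- Returns roughly: <row CustomerID="ALFKI" CompanyName="Alfreds Futterkiste"/>
```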
Whereas you can use the FOR XML clause to retrieve data as an XML document, you can use the Transact-SQL OPENXML function to insert data represented as an XML document. OPENXML is a rowset provider similar to a table or a view, providing a rowset over in-memory XML documents. OPENXML allows access to XML data as if it were a relational rowset by providing a rowset view of the internal representation of an XML document. The records in the rowset can be stored in database tables. OPENXML can be used in SELECT and SELECT INTO statements wherever a source table or view can be specified.
The following example shows the use of OPENXML in an INSERT statement and a SELECT statement. The sample XML document consists of <Customers> and <Orders> elements. First, the sp_xml_preparedocument stored procedure parses the XML document. The parsed document is a tree representation of the nodes (elements, attributes, text, comments, and so on) in the XML document. OPENXML then refers to this parsed XML document and provides a rowset view of all or parts of it. An INSERT statement using OPENXML can insert data from such a rowset into a database table. Several OPENXML calls can be used to provide rowset views of various parts of the XML document and process them, for example, by inserting them into different tables (a process also referred to as "shredding" XML into tables). In the following example, the XML document is shredded so that <Customers> elements are stored in the Customers table and <Orders> elements are stored in the Orders table, using two INSERT statements.
The example also shows a SELECT statement with OPENXML that retrieves CustomerID and OrderDate from the XML document.
DECLARE @hDoc int
EXEC sp_xml_preparedocument @hDoc OUTPUT,
N'<ROOT>
<Customers CustomerID="XYZAA" ContactName="Joe"
CompanyName="Company1">
<Orders CustomerID="XYZAA"
OrderDate="2000-08-25T00:00:00"/>
<Orders CustomerID="XYZAA"
OrderDate="2000-10-03T00:00:00"/>
</Customers>
<Customers CustomerID="XYZBB" ContactName="Steve"
CompanyName="Company2">No Orders yet!
</Customers>
</ROOT>'
-- Use OPENXML to provide rowset consisting of customer data.
INSERT Customers
SELECT *
FROM OPENXML(@hDoc, N'/ROOT/Customers')
WITH Customers
-- Use OPENXML to provide rowset consisting of order data.
INSERT Orders
SELECT *
FROM OPENXML(@hDoc, N'//Orders')
WITH Orders
-- Using OPENXML in a SELECT statement.
SELECT *
FROM OPENXML(@hDoc, N'/ROOT/Customers/Orders')
WITH (CustomerID nchar(5) '../@CustomerID',
      OrderDate  datetime)
-- Remove the internal representation of the XML document.
EXEC sp_xml_removedocument @hDoc
This illustration shows the parsed XML tree of the preceding XML document that was created by sp_xml_preparedocument.
OLE DB Provider for SQL Server Extensions for XML
The Microsoft® OLE DB Provider for SQL Server (SQLOLEDB) supports a new dialect called DBGUID_MSSQLXML to execute XML templates with embedded queries (such as SQL FOR XML and XPath queries). Templates are valid XML documents containing one or more queries. The FOR XML and XPath queries return a document fragment. The templates act as a container for the resulting document fragment.
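As an illustration, a template is simply an XML document in which queries are embedded in the urn:schemas-microsoft-com:xml-sql namespace; the element names here other than sql:query are arbitrary:

```xml
<PriceList xmlns:sql="urn:schemas-microsoft-com:xml-sql">
  <sql:query>
    SELECT ProductName, UnitPrice
    FROM Products
    FOR XML AUTO
  </sql:query>
</PriceList>
```

When the template is executed, the document fragment returned by the FOR XML query replaces the sql:query element, and the surrounding PriceList element acts as the container for the result.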
Setting an XML Command Using ICommandText
The ICommandText::SetCommandText and ICommand::Execute methods have been enhanced so that XML documents can be set as command text, and so that the command can be executed and its results retrieved as a stream for further processing, such as passing the XML document to the Document Object Model (DOM).
The XML templates can be passed to the ICommandText::SetCommandText method. When XML templates are set as command text using ICommandText::SetCommandText, the consumer must pass DBGUID_MSSQLXML as the GUID of the command syntax. This new GUID indicates that the command text is an XML template.
The consumer must call ICommand::Execute to execute XML templates. To obtain an XML document as a result set, the riid is set to IStream, in which case the provider returns the result set as a stream.
Limitations of ICommandText
The template passed to ICommandText::SetCommandText can be large. If the template being executed is stored in a file, overhead is required to read the file, buffer its contents, and then set the command text using ICommandText::SetCommandText.
In addition, ICommandText::SetCommandText expects the command string to be a Unicode string. If the actual XML file is in some other encoding, additional overhead is required to convert the file to Unicode before passing it to ICommandText::SetCommandText as a command.
Setting an XML Command Using ICommandStream
The OLE DB (version 2.6) interface ICommandStream, although similar to ICommandText, passes a command as a stream object rather than as a string.
SQLOLEDB has implemented the optional ICommandStream interface on the command object. The ICommandStream interface allows you to pass a stream to the command object.
The ICommandStream interface allows a command to be in any encoding that the XML parser understands. Thus, when ICommand::Execute is called, the command text is read out of the stream directly and no conversion is required. Executing XML commands using ICommandStream interface is more efficient.
Both the ICommandStream::GetCommandStream and ICommandStream::SetCommandStream interfaces are implemented in SQLOLEDB.
For ICommandStream, the default dialect (DBGUID_DEFAULT) is DBGUID_MSSQLXML. The dialects supported by ICommandStream::SetCommandStream are provider-specific. SQLOLEDB supports only DBGUID_MSSQLXML (DBGUID_SQL and DBGUID_XPATH are not supported).
If you read from the stream returned by GetCommandStream before EXECUTE is called, EXECUTE may fail unless EXECUTE can read from the proper position in the stream.
Support for the OLE DB (Version 2.6) DBPROPSET_STREAM Property Set
SQLOLEDB has implemented DBPROPSET_STREAM property set (in the Stream property group), which includes these properties:
DBPROP_OUTPUTSTREAM
The value passed in this property is a Variant containing a pointer to either IStream or ISequentialStream. When this property is set, ICommand::Execute returns results in the stream specified by this property. This avoids extra copies of the data because you can pass the stream to other users, such as the XML parser.
DBPROP_OUTPUTENCODING
This property specifies the requested encoding for the stream returned by the Execute method. Some commonly used encodings are UTF-8, ANSI, and Unicode. UTF-8 is the default encoding if the value of this property is NULL.
Requesting ISequentialStream on ICommand::Execute
You can request ISequentialStream on ICommand::Execute.
While reading from a stream, ISequentialStream::Read returns S_OK as long as there is data to read. After the end of the stream is reached, a subsequent read returns S_FALSE, unless there were errors during the execution of the command. If there were any errors, DB_S_ERRORSOCCURRED is returned only on the first read after the end of the stream is reached. All subsequent reads return S_FALSE.
In executing the command, if there are any errors, the errors are returned as processing instructions (PIs) in the stream. All the errors are returned after the last read. Applications that do not have access to error objects can examine the stream contents for the PI containing the errors.
ISequentialStream is supported only when the selected result is a single-column rowset.
SQLOLEDB Provider-Specific Properties
To support XML-specific behavior, SQLOLEDB has implemented the following provider-specific properties in the DBPROPSET_SQLSERVERSTREAM property set (Stream property group). These properties allow you to specify the mapping schema against which an XPath query can be specified as a command, or to specify an XSL file to process the result. Some of these properties are useful for enhancing security and performance.
SSPROP_STREAM_BASEPATH
This property is used to specify the base path. This base path is used in resolving relative paths specified for the XSL file, mapping schema, or external schema references in a template.
SSPROP_STREAM_MAPPINGSCHEMA
This property is used for specifying a mapping schema for XPath queries. The path specified can be relative or absolute. If the path specified is relative, the base path specified in SSPROP_STREAM_BASEPATH is used to resolve the relative path. If the base path is not specified, the relative path is relative to the current directory.
SSPROP_STREAM_XSL
This property is used for specifying an XSL file. The path specified can be relative or absolute. If the path specified is relative, the base path specified in SSPROP_STREAM_BASEPATH is used to resolve the relative path. If the base path is not specified, the relative path is relative to the current directory.
SSPROP_STREAM_CONTENTTYPE
If an XSL style sheet is applied to the result, the media-type property on <xsl:output> in the XSL file is returned as the value of this property.
SSPROP_STREAM_FLAGS
This property is used to specify certain security restrictions. For example, you may not want to allow URL references to files or absolute paths to files (such as external sites). You may not want to allow queries in the templates. The property can be assigned values STREAM_FLAGS_DISALLOW_URL, STREAM_FLAGS_DISALLOW_ABSOLUTE_PATH, or STREAM_FLAGS_DISALLOW_QUERY.